id
stringlengths 10
10
| title
stringlengths 7
231
| abstract
stringlengths 3
2.43k
| authors
stringlengths 5
21.5k
| published_date
stringlengths 20
20
| link
stringlengths 33
34
| markdown
stringlengths 133
1.92M
|
---|---|---|---|---|---|---|
2303.17434 | A re-investigation of debris disc halos | A significant fraction of debris discs consist of a bright ring beyond which
extends a wide halo. Such a halo should be made of small grains produced in the
ring of parent bodies (PB) and pushed on high-e orbits by radiation pressure.
It has been shown that, under several simplifying assumptions, the surface
brightness (SB) of this halo should radially decrease as $r^{-3.5}$ in
scattered light. We aim to revisit the halo phenomenon and focus on two so far
unexplored issues: 1) How the unavoidable presence of small unbound grains,
non-isotropic scattering phase functions (SPF) and finite instrument resolution
affect scattered light SB profiles, and 2) How the halo phenomenon manifests
itself at longer wavelengths. We find that unbound grains account for a
significant fraction of the halo's luminosity in scattered light, and can
significantly flatten the SB radial profile. Realistic size-dependent SPFs also
have an effect, resulting here again in shallower SB profiles. For edge-on
discs, non-resolving the vertical profile can also flatten the projected SB. We
show that roughly half of the observationally-derived halo profiles found in
the literature are compatible with our new results, and that roughly half of
the remaining systems are probably shaped by additional processes. We also
propose that, in future observational studies, the characteristics of PB belt
and halos should be fitted separately. In thermal emission, wide halos should
remain detectable up to the far-IR and, with the exception of the $\sim
8-15\mu$m domain, the halo accounts for more than half of the system's total
flux up to $\lambda\sim80-90\mu$m. The halo's contribution strongly decreases
in the sub-mm to mm but still represents a few percents of the system's
luminosity at $\lambda\sim 1$mm. For unresolved systems, the presence of a halo
can also affect the determination of the disc's radius from its SED. | Philippe Thebault, Johan Olofsson, Quentin Kral | 2023-03-30T15:00:05Z | http://arxiv.org/abs/2303.17434v1 | # A re-investigation of debris disc halos
###### Abstract
Context:Scattered-light images reveal that a significant fraction of debris discs consist of a bright ring beyond which extends a wide halo. Such a halo is expected and should be made of small grains collisionally produced in the ring of parent bodies (PB) and pushed on high-eccentricity orbits by radiation pressure. It has been shown that, under several simplifying assumptions, the surface brightness (SB) of this halo should radially decrease as \(r^{-3.5}\) in scattered light
Aims:We aim to revisit the halo phenomenon and focus on two so far unexplored issues: 1) How the unavoidable presence of small unbound grains, non-isotropic scattering phase functions (SPF) and finite instrument resolution affect scattered light SB profiles, and 2) How the halo phenomenon manifests itself at longer wavelengths in thermal emission, both on resolved images and on system-integrated SEDs
Methods:We use a collisional evolution code to estimate the size-dependent spatial distribution of grains in a belt+halo system at steady state. We use the GRaTeR radiative transfer code to derive synthetic images in scattered-light and thermal emission, as well as SEDs.
Results:We find that unbound grains account for a significant fraction of the halo's luminosity in scattered light, and can significantly flatten the SB radial profile for the densest and brightest discs. Because halos are strongly size-segregated with radial distance, realistic size-dependent SPFs also have an affect, resulting here again in shallower SB profiles. For edge-on discs, non-resolving the vertical profile can also significantly flatten the projected SB profile. We show that roughly half of the observationally-derived halo profiles found in the literature are compatible with our new results, and that roughly half of the remaining systems are probably shaped by additional processes (planets, stellar companions, etc.). We also propose that, in future observational studies, the characteristics of PB belt and halos should be fitted separately. In thermal emission, we find that wide halos should remain detectable up to the far-IR and that, with the exception of the \(\sim 8-15\mu\)m domain, halos account for more than half of the system's total flux up to \(\lambda\sim 80-90\mu\)m. The halo's contribution strongly decreases in the sub-mm to mm but still represents a few percents of the system's luminosity at \(\lambda\sim 1\)mm. For unresolved systems, the presence of a halo can also affect the determination of the disc's radius from its SED.
Conclusions:
## 1 Introduction
Circumstellar debris discs have been detected around a significant fraction (15 to 30%) of main sequence stars (Hughes et al. 2018). These discs are thought to result from the continuous collisional grinding of leftovers from the planet-formation process, of which only the tail end (dust particles) is detectable as a photometric infra-red (IR) excess (Wyatt 2008). An increasing number of these discs have also been imaged, most commonly by scattered light observations in the visible and near-infrared, but also in thermal emission in the mid-to-far IR to millimetre wavelength domain 1. These images have revealed a variety of structures, such as clumps, warps, spirals or arcs, which have been interpreted as being sculpted by gravitational perturbations of (usually unseen) planets (e.g. Augereau et al. 2001; Wyatt 2006; Thebault et al. 2012) or companion stars (Thebault et al. 2021), or to be the result of violent and transient collisional events (Jackson et al. 2014; Kral et al. 2015; Thebault & Kral 2018).
Footnote 1: See the regularly updated database of resolved discs available at [https://www.astro.uni-jena.de/index.php/theory/catalog-of-resolved-debris-disks.html](https://www.astro.uni-jena.de/index.php/theory/catalog-of-resolved-debris-disks.html)
Apart from the aforementioned spatial structures, most imaged discs share a common feature, which is that the dust surface density is not uniformly decreasing (or increasing) with stellar distance, but peaks at a given radial location, thus creating ring-like features. This belt-like configuration is so ubiquitous that it has been suggested that "debris rings" should be a more appropriate denomination for the debris discs phenomenon (Strubbe & Chiang 2006). In many cases, scattered-light images also show a dimmer but radially extended halo of dust beyond the location of this main ring or belt (Hughes et al. 2018). The analytical and numerical studies of Strubbe & Chiang (2006, hereafter STCH) and Thebault & Wu (2008, hereafter TBWU) have shown that such halos are in fact expected beyond collisionally evolving belts of parent bodies (PB). These halos should be made of small, typically 1-10\(\mu\)m sized grains that are collisionally produced within the main ring and then placed on high-eccentricity orbits by radiation pressure or stellar wind. For systems dense enough for the Poynting-Robertson (PR) effect to be negligible with respect to collisional evolution, the vertical optical depth \(\tau\) in the outer halo naturally tends towards a radial profile in \(\tau\propto r^{-1.5}\). At wave
lengths dominated by scattered light, this translates into a surface brightness (SB) profile that decreases as \(\propto r^{-3.5}\). This SB \(\propto r^{-3.5}\) slope also holds for the projected mid-plane surface brightness profile of discs seen edge-on, provided that their vertical scale height \(z\) is \(\propto r\)(Strubbe & Chiang 2006). Note, however, that these results are valid under several simplifying assumptions, notably isotropic scattering and the absence of small unbound grains that are blown out by radiation pressure or stellar wind.
Thanks to cutting-edge instruments such as SPHERE or GPI (Beuzit et al. 2019; Macintosh et al. 2014), recent years have seen an exponential increase of the number of systems for which the radial profile of halos has been constrained (Adam et al. 2021). The canonical radial profiles of STCH and TBWU can then be used as benchmarks to judge as to whether or not the outer edge of a debris ring is "natural" or, on the contrary, shaped by additional mechanisms (outer planet, interaction with gas, companion star, etc.). However, the quantity whose radial profile is constrained by observations can differ from one study to the other. In some cases, it is the surface brightness (SB) that is retrieved, which can either be the (deprojected) stellocentric SB measured along a radial cut (Schneider et al. 2018) or the projected SB in the case of discs seen edge-on (Golimowski et al. 2006). In other cases it is the underlying grain number density profile that is estimated, usually by fitting the observed resolved images with radiative transfer models, such as the GRaTer code (Augereau et al. 1999), in which several free parameters (parent belt location and width, radial slope in the halo, scattering phase function, etc.) are explored and adjusted (Choquet et al. 2018; Bhowmik et al. 2019; Perrot et al. 2019). The radial density profile that is estimated can either be the surface density \(\sigma\) or the volumic number density \(n\).
### A coherent census of observationally-derived halo profiles
The fact that, depending on the profile-fitting approach, the notion of radial profile can refer to three distinct parameters (SB, \(\sigma\) and \(n\)) sometimes lead to some confusion when comparing different discs or when comparing observations to canonical theoretical profiles. An additional problem is that the theoretical results of STCH and TBWU constrain the system's optical depth \(\tau\) (or geometrical cross section), and not the particle number densities \(\sigma\) and \(n\). If the disc was only made of identical particles, then the radial profiles of \(\tau\) and \(\sigma\) (and also \(n\times r\) for non-flared discs) should be equivalent 2, but it is not true in the present case because there is a very strong size segregation as a function of radial distance in the halo (Thebault et al. 2014). Schematically, at a given stellar distance \(r\) the cross section is indeed dominated by grains produced in the main ring that have their apoastron \(Q=r\). Since the value of \(Q\) is imposed by the size-dependent mechanism that is radiation pressure (or, for M stars, stellar wind), it follows that, at a distance \(r\), \(\tau\) is dominated by grains of size \(s\) such that
Footnote 2: because the variations of the geometrical cross-section should directly follow the variations of the number of particles
\[\beta(s)=\frac{1}{2}\left(1-\frac{r_{0}}{r}\right) \tag{1}\]
where \(r_{0}\) is the location of the main belt and \(\beta(s)\) is the ratio between the radiation pressure and stellar gravity (or stellar wind) forces for the size \(s\).
In contrast, the number density profiles derived in most GRaTeR-type fits assume that the particle size distribution is the same everywhere in the system, which clearly cannot apply to the size-segregated halo. This means that neither \(\sigma\) nor \(n\) derived this way correspond to actual number densities (neither that of the global population nor that of a given size range). Nevertheless, for fits performed on images at short scattered-light wavelengths, the radial _dependence_ of the GRaTeR-derived \(\sigma\) parameter should, to a first approximation, match that of \(\tau\) as long as grains in the halo contribute to the flux proportionally to their cross section. As will be shown in Sec.3, this assumption does not hold if small unbound grains have a significant contribution and if the scattering phase function is size-dependent, but before exploring these important issues, let us present in Tab.1 the first coherent census of all systems with observationally-constrained halo profiles for which we specify which quantity it is (SB, \(n\) or \(\sigma\)) that has had its radial profile constrained. It can be noted that the radial index \(\Gamma\) of the surface density profile has been directly estimated for only 3 systems, and that the most commonly constrained slope is \(\alpha_{out}\) for the volumic density \(n\) and, to a lesser extent, \(\gamma_{out}\) for the stellocentric SB profile.
In most studies, the comparison to the theoretical results of STCH and THWU is done by assuming that the profiles of \(\tau\) and \(\sigma\) are identical (see previous paragraph) and that the indexes \(\Gamma\), \(\alpha_{out}\) and \(\gamma_{out}\) are related through the relations
\[\Gamma=\gamma_{out}+2 \tag{2}\]
and
\[\Gamma=\alpha_{out}+\delta \tag{3}\]
where \(\delta\) is the index of the radial profile of the halo's scale height, which is usually assumed to be equal to 1 (Thalmann et al. 2013; Ren et al. 2019). For the specific case of edge-on seen discs, we have \(\Gamma=\gamma_{out}+1+\delta\), which reduces to the same \(\Gamma=\gamma_{out}+2\) relation for \(\delta=1\)(Strubbe & Chiang 2006).
Note, however, that these relations are in principle only valid in an idealized case with isotropic scattering as well as for grains that are large enough for their contribution to the flux to be proportional to their geometrical cross section, and, in the case of edge-on discs, for an infinite instrumental resolution in the vertical direction.
### halos in thermal emission
All halos listed in Tab.1 have been imaged in scattered light and, more generally, the halo phenomenon has so far mostly been investigated in the visible or near-IR and not in thermal emission. There are two main reasons for that. The first one is theoretical and is that halos are supposed to be made of small grains not much bigger than the radiation-pressure blowout size \(s_{blown}\), which are not expected to contribute at long wavelengths where they are poor emitters. The second reason is observational: high-end instruments in the visible or near-IR are by far those offering the best spatial resolution allowing to resolve the structures of halos.
However, the recent discovery of two extended halos around HD32227 and HD61005 with ALMA in the millimetre (MacGregor et al. 2018) has shed new light on the matter. Based on order-of-magnitudes flux-to-mass conversions, MacGregor et al. (2018) estimated that these halos were mostly made of large mm-sized grains, which are not supposed to populate halos according to the canonical scenario of STCH and TBWU. To explain this puzzling presence, these authors explored some additional mechanisms, such as planetary or stellar perturbations, interaction with the ISM, or the aftermath of a large
planetesimal breakup, but could not find a fully satisfying scenario. More recently Olofsson et al. (2022a) revisited the modeling of the ALMA observations of HD 32297 (jointly with SPHERE polarimetric data) and found that one cannot rule out the possibility that these mm-halos are mostly made of smaller micron-sized grains after all. They showed that an overdensity of micron-grains, as expected because of their longer collisional lifetimes, could in principle compensate for their lower emissivity. However, using the simplified STCH and TBWU relations, they found that the flux due to these grains could only account to up to \(\sim 30\) % of the measured ALMA levels, but more sophisticated grain-distribution modelling might change these preliminary results and this crucial issue thus remain an open question. Additional halo-detections in the mm should be expected in the near future, as ALMA is slowly catching up with the resolution of near-IR instruments such as SPHERE and GPI, and high angular resolution projects are currently being executed (e.g. ARKS large program, PI Marino). In addition, new ground- and spaced-based facilities (e.g., VLT/ERIS and JWST) can now observe debris disks in the mid-IR with unprecedented precision,
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Name & \(f_{d}\)=(\(L_{disc}/L_{*}\))\({}_{\rm IR}\) & \(r_{0}\)(au) & \(r_{\rm max}(au)\) & SB & \(n\) & \(\sigma\) & \(i\)(deg) & Reference \\ & & & & (-3.5) & (-2.5) & (-1.5) & & \\ \hline HD139664 & \(9\times 10^{-3}\) & 83.1 & 109.4 & & -2.5 & & 87 & Kalas et al. (2006) \\ HD141943\({}^{d}\) & \(1.2\times 10^{-4}\) & 115 & 145 & & -4 & & 87 & Boccaletti et al. (2019) \\ HD160305 & \(1.2\times 10^{-4}\) & 86 & 106 & & -7.1 & & 82.3 & Perrot et al. (2019) \\ HD202628 & \(1.4\times 10^{-4}\) & 175 & 200;250 & -4.5;-3.9/-12.9;-2.8 & & & 61 & Schneider et al. (2016) \\ HP67497 & \(1.4\times 10^{-4}\) & 60.7 & 70 & & -8.5 or -1.3\({}^{c}\) & & 80 & Bonnefoy et al. (2017) \\ HD53143 & \(2.5\times 10^{-4}\) & 82 & 110 & -3 & -1 & & 45 & Kalas et al. (2006) \\ AUIc & \(3.9\times 10^{-4}\) & 33 & 60 & -3.8 & & 90 & Fitzgerald et al. (2007) \\ HD15115 & \(5\times 10^{-4}\) & 98 & 340/580 & -4.7/-3 & -5.5/-3.5 & & 86 & Engler et al. (2019) \\ HD143675 & \(5.6\times 10^{-4}\) & 48.1 & 55.6 & & -3 & 87.2 & Esposito et al. (2020) \\ HD192758 & \(5.7\times 10^{-4}\) & 95 & 170(7) & & -2 & 58 & Choquet et al. (2018) \\ HD104860 & \(6.3\times 10^{-4}\) & 114 & 160 & & -3.9 & 54 & Choquet et al. (2018) \\ HD172555 & \(7.2\times 10^{-4}\) & 10.9 & 15 & & -9.8 & 103.5 & Engler et al. (2018) \\ HD157687 & \(7.9\times 10^{-4}\) & 78.9 & 211 & & -2.2 & & 68.3 & Millar-Blanchaer et al. (2016) \\ Fomalhaut & \(9\times 10^{-4}\) & 154 & 209 & -3.3 & & & 66 & Kalas et al. (2013) \\
49Cet & \(9\times 10^{-4}\) & 129 & 270 & & -2.1 & & 79 & Choquet et al. (2017) \\ HD107146 & \(1.2\times 10^{-3}\) & 135.5 & 185 & -4.8 & & -2.8 & 18 & Ardila et al. (2004) \\ HD145560 & \(1.2\times 10^{-3}\) & 85.3 & 108 & & -3 & & 44 & Esposito et al. (2020) \\ HD106906\({}^{b}\) & \(1.3\times 10^{-3}\) & 67.6 & 110 & & -4 & & 85 & Lagrange et al. (2016) \\ HD35841 & \(1.3\times 10^{-3}\) & 56 & 110 & -3.55 & & & 84.9 & Esposito et al. (2018) \\ HD191089\({}^{b}\) & \(1.4\times 10^{-3}\) & 45 & 60;400 & -6.1;-2.68 & & & 59 & Ren et al. (2019) \\ HD131835 & \(1.4\times 10^{-3}\) & 96 & 140 & & -2.3 & & 75.1 & Feldt et al. (2017) \\ TWA \({}^{a}\) & \(1.7\times 10^{-3}\) & 25 & 70 & & -1.5 & & 13 & Olofsson et al. (2018) \\ HD115600 & \(1.7\times 10^{-3}\) & 48 & 60 & & -7.5 & & 79.5 & Currie et al. (2015) \\ HD181327 & \(2\times 10^{-3}\) & 90 & 150;250 & & & -3.7;-1.7 & 32 & Stark et al. (2014) \\ HD15745 & \(2.2\times 10^{-3}\) & 165 & 450 & -3.5 & & & 65 & Kalas et al. (2007) \\ \(\beta\) Pic & \(2.4\times 10^{-3}\) & 127 & 193;258 & -4;-3/-4.5;-3.5 & & & 90 & Golimowski et al. (2006) \\ HD117214 & \(2.67\times 10^{-3}\) & 60.2 & 97.2 & & -4.5 & & 71 & Esposito et al. (2020) \\ HD32297\({}^{a}\) & \(2.7\times 10^{-3}\) & 130 & 400 & -3.2 or -5.3 & -4 or -6 & & 90 & Bhowmik et al. (2019) \\ HD141082 & \(3\times 10^{-3}\) & 29.6 & 7 & & -3.9 & 83 & Wahhaj et al. (2016) \\ HD61005 & \(3\times 10^{-3}\) & 60 &? & & -2.7 & & 84.1 & Olofsson et al. (2016) \\ HD36546 & \(4\times 10^{-3}\) & 82 & 110 & -2.5 & -1.5 & & 78.9 & Lawson et al. (2021) \\ HD111161 & \(4.2\times 10^{-3}\) & 72.4 & 98.1 & & -3 & 62 & Esposito et al. (2020) \\ HD156623 & \(4.3\times 10^{-3}\) & 80 & 94 & & -3.5 & & 30 & Esposito et al. (2020) \\ HD121617 & \(4.8\times 10^{-3}\) & 78.3 & 128 & & -5.6 & & 43.1 & Perrot et al. (2023) \\ HR4796 & \(5\times 10^{-3}\) & 81.4 & 330/123;330 & -5.1/-7.8;-3.8 & & & 76 & Schneider et al. (2018) \\ HD129590 & \(5\times 10^{-3}\) & 59.3 & 141 & & -1.3 & 74.6 & Matthews et al. (2017) \\ HIP79977 & \(5.21\times 10^{-3}\) & 73 & 225 & & -2.5 & & 84.6 & Engler et al. (2017) \\ \hline \end{tabular}
\end{table}
Table 1: Debris discs, ranked by increasing fractional luminosity \(f_{d}\), with resolved halos in scattered light, for which a power-law fit of the outer radial profile of either the surface brightness (SB), the volume number density \(n\) or the surface density \(\sigma\) is available in the literature. \(r_{0}\) is the location of the density peak in the main belt, \(r_{\rm max}\) the radial distance out to which the radial slope of the halo has been constrained. When two different profiles have been obtained for opposite sides of the disc, a ”\(/\)” is inserted between the two fits. When the fitted radial slope has the form of a broken power-law, two \(r_{max}\) values are given, separated by a “\(;\)”, the first value being the distance at which the slope changes. Likewise, two radial index values are also given, also separated by a “\(;\)”. The values in parenthesis given at the top of the SB, \(n\) and \(\sigma\) columns correspond to the expected radial index values in the simplified scenario of STCH and TBWU.
which might potentially allow to image halos at these wavelengths, underlining the need for a more comprehensive study of their detectability in thermal emission.
### paper outline
In the light of these pending issues and recent new developments, we undertake a thorough numerical re-investigation of the halo phenomenon. We will focus on two main problems. Firstly, how the scattered light radial profiles derived by STCH and TBWU might be affected by taking into account the effect of small unbound grains, size-dependent scattering phase functions and, for edge-on discs, instrument resolution in the vertical direction. Secondly, we explore how the belt+halo phenomenon manifests itself at longer wavelengths, both in terms of the system's radial brightness profile as well as of the halo's imprint on the integrated Spectral Energy Distribution (SED).
We briefly present in Sec.2 the collisional evolution code that will serve as a basis in our study as well as the typical belt+halo setup that we will explore. Secs.3.1 and 3.2 present the results of our numerical exploration regarding radial profiles in scattered light. Results regarding profiles at longer wavelength as well as system-integrated SEDs are presented in Sec.3.3. We discuss the implications of our results in Sec.4 and conclude in Sec.5.
## 2 Model
### Basic principle
Our numerical investigation will use the tried and tested particle-in-a-box collisional model initially created by Thebault et al. (2003), and constantly improved during the past 2 decades. As with similar codes, such as the ACE code of the Jena group (Krivov et al. 2005), particles are sorted into logarithmic size bins, whose populations are evolved following estimates of their mutual impact rates and collisional outcomes. This code has a 1D spatial resolution and is divided into radially concentric annuli. We use the latest version of the code, as described by Thebault & Kral (2019), for which collisional rates are estimated by separate deterministic \(N\)-body simulations taking into account the effect of stellar gravity and radiation pressure 3, and with a more realistic collisional prescription for small particles in the \(\lesssim 1\)mm range taken from the laboratory experiments by Nagaoka et al. (2014). We refer the reader to Thebault & Augereau (2007) and Thebault & Kral (2019) for a detailed description of the code.
Footnote 3: under the assumption that no additional perturbing body is present
To derive surface brightness profiles at different wavelengths as well as SEDs we use the (also tried and tested) GRaTeR radiative-transfer package (Augereau et al. 1999).
### Setup
For the sake of readability of our results and in order to avoid numerically-expensive parameter explorations, we consider a reference "nominal" setup, chosen as to be the most representative of observed belt+halo systems. The considered setup is that of a narrow belt of parent bodies, extending from 50 to 66 au, thus centered at \(r_{0}=58\) au and of width \(\Delta r_{0}=16\) au, where all the mass is initially confined. We consider 119 log size bins, between \(s_{max}=20\)km and \(s_{min}=0.025\mu\)m. Large parent bodies in the belt (large enough for radiation pressure effect to be negligible) are assumed to be located on orbits of average eccentricity \(<\)\(e_{0}\)\(>\)=0.05 and inclination \(<\)\(i_{0}\)\(>\)=\(<\)\(e_{0}\)\(>\)/2=0.025, which are typical values for debris-producing discs (e.g., Thebault 2009). Regarding grain composition, we consider the generic case of compact astroisilicates (Draine 2003). As for the central star, we consider an A6V stellar type, identical to the archetypal \(\beta\) Pictoris case. For this stellar type, the blow-out size \(s_{blow}\) due to radiation pressure is of the order of \(2\mu\)m, which is much larger than our \(s_{min}\), meaning that our code takes into account the potentially important effect of unbound grains.
Another important parameter is the level of collisional activity within the disc, which can be, to a first order, parameterized by \(f_{d}\), the disc's fractional luminosity in IR. We consider as a reference case a system with \(f_{d}=8\times 10^{-4}\), which is approximately the average value for the resolved-halos systems presented in Tab.1. In the spirit of Thebault & Kral (2019), we also consider a "very collisionally active" case with \(f_{d}=4\times 10^{-3}\), corresponding to the brightest discs in our census (such as HD32297 or HD129590). As explained in Thebault & Kral (2019), in practice we always start with discs whose initial masses are expected to correspond to \(f_{d}\) larger than the ones we are aiming for, and then let the systems collisionally evolve until (1) the shape of the particle size distribution (PSD) no longer changes (steady state), and (2) \(f_{d}\) has decreased to the desired value.
All main parameters for this nominal setup are summarized in Tab.2.
## 3 Results
### Radial profile in scattered light
Fig.1 presents the radial profile, at steady state, of both \(\tau\) and the SB in scattered light, for the nominal case as well as for the "bright disc" case, assuming isotropic scattering when deriving the SB. Strictly speaking, the displayed SB(r) profiles correspond to a disc seen face-on (\(i=0\)), but they should also be valid for any other viewing configuration as long as the disc is not seen edge-on, i.e., as long that, at a given position along any radial cut of the observed disc, there are only grains from a given stellocentric distance that contribute to the flux. To a first approximation, this is true if the disc's opening angle \(\psi\) is less than \(90-i\). For these non-face-on cases, the displayed SB profiles are the ones that should be obtained for the deprojected stellocentric luminosity profile along any radial cut.
The average radial slope that is asymptotically reached by the optical depth profiles is -1.48 for the nominal case (\(f_{d}=8\times 10^{-4}\)), which is very close to the theoretical value of -1.5, and the average slope of the \(SB(r)\) profile is -3.42, again very close to the canonical value of -3.5 when applying the relation \(\Gamma=\gamma_{out}+2\). The optical depth profiles are, however, shallower
\begin{table}
\begin{tabular}{l c} \hline \hline Stellar-type & A6V \\ Stellar luminosity & 8.7L\({}_{\odot}\) \\ Stellar Temperature & 8052K \\ \hline Material & Compact astrosilicate \\ Bulk density & 2.7g.cm\({}^{-3}\) \\ Blow-out size (\(s_{\rm blue}\)) & 2.5 \(\mu\)m \\ Minimum particle size & 0.025 \(\mu\)m \\ Maximum particle size & 20 km \\ Dynamical excitation in the PB belt & \(<\)\(e_{0}\)\(>\)=2\({}_{\rm c}\)\(>\)=0.05 \\ Radial extent of the PB belt & 50-66 au \\ fractional luminosity at steady state & \(f_{d}\sim 8\times 10^{-4}\) (nominal) \\ & \(f_{d}\sim 4\times 10^{-3}\) (bright disc) \\ \hline \end{tabular}
\end{table}
Table 2: Numerical setup
for the highly collisional bright disc case (\(f_{d}=4\times 10^{-3}\)), with a radial index of -1.21 for \(\tau(r)\) instead of -1.48. This is mainly due to the fact that, for such high-\(f_{d}\) cases, unbound grains (\(s<s_{blow}\)) begin to significantly contribute to the geometrical cross section, the proportion of unbound grains being directly proportionnal to the level of collisional activity in the system (see Thebault & Kral 2019). The effect of unbound grains is less pronounced on the flux, because their smaller size, especially the ones in the \(\lesssim 0.1\mu\)m range, makes them less efficient scatterers. The slope of \(SB(r)\) in the "very bright" case is still, however, slightly smaller than for the \(f_{d}=8\times 10^{-4}\) case, with a radial index of -3.25 instead of -3.42. In this \(f_{d}=4\times 10^{-3}\) run, unbound grains even dominate the flux in the halo beyond \(\sim 200\) au (Fig.2). The reason why these \(s<s_{blow}\) dust particles will tend to flatten the radial profile is that their distribution should follow \(\tau\propto r^{-1}\) instead of \(\propto r^{-1}\) for bound grains (see section 3.3 in STCH). This flattening effect is less visible for the reference \(f_{d}=8\times 10^{-4}\) case, but the contribution of unbound grains to the flux still exceeds 10% everywhere in the halo (see black solid line in Fig.2).
Note that, as already noted by Thebault et al. (2012), the asymptotic \(\tau\) and \(SB\) slopes are not reached immediately beyond the PB belt, but after a transition region where the density and flux drop more abruptly. This is because, right outside the PB belt, there is a sudden transition from a ring where all particle sizes are present to an outer region only populated by small, radiation-pressure affected grains. If we define the limiting size \(s_{PR}\) for radiation-pressure-affected grains by the criterion
\[e(s_{PR})=\frac{\beta(s_{PR})}{1-\beta(s_{PR})}=e_{0} \tag{4}\]
then, for \(e_{0}=0.05\), we get \(s_{PR}\sim 11s_{blow}\). For a standard \(dN\propto s^{-3.5}\) size distribution, the absence of all \(s>s_{PR}\) grains results in a drop of geometrical cross section of \(\sim 30\%\), which is roughly what we observe on Fig.1. The width of this transition region at the outer edge of the PB belt is \(\sim 0.15r_{0}\) for both the nominal and high-\(f_{d}\) cases.
The radial SB slopes of Fig.1 have been derived by implicitly assuming isotropic scattering. Because the viewing angle should not vary along a given radial cut, they are in principle independent of the scattering phase function (SPF) provided that the SPF does not depend on stellocentric distance. This would, for instance, be the case for a standard Henyey-Greenstein SPF prescription (Henyey & Greenstein 1941) with a constant, size-averaged \(g\) parameter. Nevertheless, this assumption might not hold for halos, where the strong grain-size segregation as a function of \(r\), coupled to the fact that the SPF is expected to vary with grain size, should result in a radial variation of the phase function. We thus explore this effect further by including a more realistic SPF in the models. The simplest option would be to use the Mie theory (Mie 1908), for which the dust particles are assumed to be compact spheres. However, several studies (e.g., Rodigas et al. 2015, Milli et al. 2019, Chen et al. 2020, Arriaga et al. 2020) have demonstrated that the assumption of spherical grains does not really hold when trying to reproduce observations of debris disks. Instead, we here use the SPF computed using the distribution of hollow spheres model (DHS, Min et al.
Figure 1: Normalized radial profiles of the vertical optical depth (\(\tau\)) and the surface brightness (SB), in scattered light (\(\lambda=0.8\mu\)m), for the nominal setup presented in Tab.2, as well as for a “very bright” disc case (fractional luminosity \(f_{d}=4\times 10^{-3}\)). The blue area marks the radial extent of the parent body belt. Scattering is here assumed to be isotropic
Figure 3: Radial profile of the normalized surface brightness obtained using the DHS scattering phase function prescription for 3 different scattering angles
Figure 2: Radial dependence of the fraction of the flux density, at 0.8\(\mu\)m in scattered light, that is due to unbound grains (\(s<s_{blow}\)). Results are shown for the nominal and “bright disc” cases, as well as for anisotropic scattering, at 2 different angles, using the distribution of hollow spheres model (DHS, Min et al. 2005) for the scattering phase function (SPF)
2005). For each grain size, the optical properties (e.g., scattering efficiencies, SPF) are obtained by averaging over a distribution of shapes. As discussed in Min et al. (2016), this model is able to reproduce the properties of irregularly shaped samples and the departure from spherical symmetry is controlled by the maximum filling factor \(0\leq f_{\rm max}<1\), which we set to 0.8 (Min et al. 2016). The SPF are computed using the optool(Dominik et al. 2021) using the "DIANA" standard opacities (Woitke et al. 2016, a mixture of pyroxene and carbon, with a mass ratio of 87 and 13% and a porosity of 25%, optical constants from Dorschner et al. 1995 and Zubko et al. 1996, respectively), at a wavelength of 0.8 \(\mu\)m.
As we can see in Fig.3, the SB profile is significantly affected by using a more realistic SPF prescription. The most striking effect is to be found in the innermost part of the halo, right beyond the PB ring, where the luminosity drop is significantly reduced and even almost completely vanishes at low values of \(\theta\). In addition, the profile also gets slightly shallower further out in the halo. For the lowest \(\theta=15\)deg run, the slope is \(\sim-3.12\) for the nominal \(f_{d}=8\times 10^{-4}\) case (instead of -3.42 for isotropic scattering) and \(\sim-2.95\) for the bright \(f_{d}=4\times 10^{-3}\) disc. This is due to two concurring effects. Firstly, since the average size of bound grains in the halo decreases with stellar distance, asymptotically tending towards \(s\sim s_{blue}\) (see Equ.1), and since, at all scattering angles \(\theta\), the considered SPF increases with decreasing grain size in the \(s>s_{blue}\) domain (Fig.4), the relative contribution of outer halo regions to the SB is increased. Secondly, for \(\theta\geq 4\)deg., the SPF actually peaks in the \(s<s_{blue}\) domain. This enhances the contribution of unbound grains to the flux (Fig.2) which, as already noted, tends to further flatten the SB profile..
### Edge-on configuration
The edge-on case introduces additional specific issues that did not affect the general configuration presented in the previous section. The first one regards the scattering phase function. The SB is indeed now measured in the disc's midplane along the projected distance \(\rho\), and is thus now the sum of contributions coming from different physical stellocentric distances \(r\), for which the scattering angle is not the same. As a result, the resulting luminosity should in principle depend on the SPF even for SPF prescriptions that do _not_ depend on grain size, and hence stellar distance. We test the importance of this effect by considering the classical Henyey-Greenstein prescription, for which the anisotropy of the scattering behavior of the dust grains is characterized by the dimensionless asymmetry parameter \(-1<g<1\):
\[f(\theta)=\frac{1-g^{2}}{4\pi\left(1+g^{2}-2g\cos(\theta)\right)^{3/2}}. \tag{5}\]
The \(g=0\) case represents isotropic scattering, in which photons are scattered in all directions with equal probabilities. For positive (negative) values of \(g\), incident photons are scattered in the forward (backward) direction, and as \(g\) increases (decreases) the asymmetry is even more pronounced. In practice, the value of the asymmetry parameter \(g\) is a way to control the size of the particles, since grains much smaller than the wavelength of observations are expected to scatter isotropically (hence \(g=0\)), while grains larger than the wavelength should display a strong forward-scattering peak (\(g>0\)).
Figure 4: Size dependence of the DHS scattering phase function, for 4 different scattering angles
Figure 5: Edge-on seen disc. Projected radial profile of the midplane surface brightness for 4 different values of the \(g\) parameter of the Henyey–Greenstein phase function, as well as for the DHS prescription for the SPF (see main body of the text)
Figure 6: Edge-on seen disc. Projected radial profile of the midplane surface brightness for 4 different values of the instrument’s resolution. \(h\) is the disc’s aspect ratio, \(r_{0}\) the centre of the parent body belt and \(dr/2\) its width (isotropic scattering is assumed)
Fig.5 presents the projected midplane profiles \(SB(\rho)\) for the isotropic (\(g=0\)) case as well as for 3 different values of \(g\), assuming that the disc is not flared and has a constant aspect ratio \(h=H/r\). For the isotropic scattering case, we get SB\((\rho)\propto r^{-3.3}\) in the outer regions, which is close to the standard -3.5 value that directly follows from the aforementioned relation \(\Gamma=\gamma_{out}+1+\delta\) for a constant \(h\) (i.e., \(\delta=1\)). We see that the projected \(SB(\rho)\) profile strongly depends on \(g\) in the innermost \(\rho<r_{0}\) regions. This is expected, because the scattering angle for the most luminous grains, i.e., those in the PB ring, can reach very small values at small \(\rho\), which leads, for strong forward scattering (i.e., high \(g\) values) to a significant increase of \(SB(\rho)\) for decreasing \(\rho\). In the outer regions beyond the projected outer edge of the PB belt, the dependence on \(g\) is weaker but still noticeable. For the most extreme \(g=0.95\) case, the radial index reaches \(\gamma_{out}\sim-3.7\) in the outer regions, which is \(\sim-0.4\) steeper than for the isotropic (\(g=0\)) case. This weaker dependence is due to the fact that, in these outer regions, the variation of scattering angles with \(\rho\) are more limited than in low \(\rho\) regions. Interestingly, taking a more realistic and size-dependent DHS prescription for the SPF tends to move the SB profile back close to the reference \(r^{-3.5}\) slope (blue line in Fig.5). This is because of the increased role of small bound and unbound grains that acts to flatten the surface brightness profile (see previous section), and thus acts in the opposite direction of the geometrical effect of integrating along the line of sight.
Another potentially important issue for the edge-on configuration is whether or not the disc is resolved in the vertical direction. The midplane profiles presented in Fig.5 indeed implicitly assume that the disc is resolved in \(z\) at all projected distances \(\rho\), so that there is a natural geometrical dilution of the flux in the vertical direction. However, vertical resolution is not necessarily achieved with current instrument facilities (see Sec.4). We explore the effect of not resolving the disk vertically in Fig.6, by considering 4 different cases, ranging from fully-resolved-everywhere (resolution \(res\ll h\times r_{0}\)) to a very poorly resolved case where only the outermost \(>4r_{0}\) regions are vertically resolved. As can clearly be seen, the \(SB(\rho)\) midplane profile flattens with decreasing vertical resolution. For the lowest resolution (\(res=4\times h\times r_{0}\)) case, the slope for the midplane profile tends towards \(\gamma_{out}=-2.3\). This value is \(\sim\gamma_{out(res=0)}+1\), which is expected because the vertical dilution term is now absent and the "midplane" luminosity does here correspond to the \(z\)-integrated flux.
### Wavelength dependence and SED
We present here, for the first time, an exploration of the halo phenomenon at over a wide range of wavelengths, notably in thermal emission up to the millimetre domain.
Fig.7 presents, for the non-edge-on configuration, how the \(SB(\rm r)\) profiles vary with \(\lambda\). At long wavelengths (160 and 800\(\mu\)m) there is a sharp drop at the outer edge of the parent body belt. This is due to the fact that, at these large \(\lambda\), the absorption coefficient \(Q_{abs}\) of the small grains that dominate the geometrical cross section is much smaller than 1. As a consequence, the flux within the PB belt is dominated by large \(s\gg s_{\rm blow}\) grains, whose number density abruptly drops beyond the PB belt because their orbits are only very weakly affected by stellar radiation pressure. Beyond \(r_{0}+\Delta r_{0}/2\), small grains take over, but their higher number density cannot compensate for their very low \(Q_{abs}\ll 1\) values. Note that, in this dimmer far-IR and mm halo, the radial profile index of the SB is \(\sim-2.2\), which is significantly shallower than in scattered light. The likely interpretation for this is that, at these long wavelengths, the temperature \(T\) of grains in the \(\sim 70-400\) au regions implies that they emit very close to the Rayleigh-Jeans approximation of the Planck function, for which the flux at a given wavelength \(\lambda\) is \(\propto T\). Since the blackbody temperature of grains is in turn proportional to \(r^{-0.5}\), it follows that the relation between the slope \(\Gamma\) of the vertical optical depth \(\tau\) and the slope \(\gamma_{out}\) of the SB(\(\rm r)\) profile is \(\Gamma=\gamma_{out}+0.5\) instead of \(\Gamma=\gamma_{out}+2\) (see Equ.2).
At \(\lambda=70\mu\)m, however, the drop at the outer edge of the PB belt is much more limited, and is only \(\sim 30\%\) more pronounced than in scattered light. This is because, at this wavelength, the poorly-emitting (i.e., with \(Q_{abs}<1\)) grains come from a narrow size range \(s_{blow}\leq s\leq\lambda/2\pi\sim 10\mu\)m that only accounts for \(\sim 60\%\) of the total cross section in the birth ring 4. Moreover, even grains in this \(s_{blow}\leq s\leq\lambda/2\pi\) domain still have non-negligible \(Q_{abs}\) values at \(\lambda=70\mu\)m (typically \(\sim 0.2-0.3\), e.g., Morales
Figure 8: Normalized system-integrated SED for the nominal setup (\(f_{d}=8\times 10^{-4}\)), displaying also the respective contributions coming from the parent body belt (between 50 and 66 au) and the halo (beyond 66 au).
Figure 7: Radial profile of the normalized surface brightness at four different wavelengths, estimated with the GRaTeR package for the nominal setup
et al. 2013). As for the slope of the profile, it is \(\sim-2.9\), slightly steeper than at longer wavelengths, which results from the fact that the Rayleigh-Jeans approximation becomes less accurate at these shorter \(\lambda\).
The SB profile at \(\lambda=15\mu\)m is very different, with an absence of luminosity drop at the edge of the PB belt, followed by a very steep decrease with radial distance in the halo. This is because, at this wavelength and at \(r\geq 60\) AU from an A6V star, we are in the Wien side of the Planck function, for which the flux increases exponentially with \(T\). As a consequence, small grains, even unbound ones, whose temperatures exceeds the almost black-body temperature of larger particles, totally dominate the flux 5, and there is no drop at the outer edge of the PB ring due to the sudden absence of large grains. However, because of the exponential dependence of the flux on T in the Wien domain, the decrease of the SB with radial distance becomes steeper and steeper as the grains get colder in the outer halo, going from a radial index of \(\sim-4.3\) just outside the PB belt to \(\sim-6\) in the \(300-400\) au region.
Footnote 5: their higher T being enough to compensate for their lower emissivity (for a detailed discussion on this issue, see Thebault & Kral 2019)
In Fig.8 we look at the wavelength dependence from the perspective of the system-integrated Spectral Energy Distribution. We see that, despite corresponding to grains that are further away from the star, the halo's SED actually peaks at a shorter wavelength than the PB belt's contribution. In addition, at all wavelengths shorter than \(\sim 90\mu\)m, the relative contribution from the halo to the total flux \(F_{halo}/F_{disc}\) exceeds \(50\%\) and thus dominates that of the PB ring (Fig.9). These results agree well with those of the radial profiles, showing that there is no sharp luminosity drop at the PB belt/halo interface for wavelengths short of \(70\mu\)m. The only exception is a narrow wavelength range around \(\lambda\sim 8-15\mu\)m. This corresponds to a "sweet spot" where the thermal flux dominates scattered light but is in the Wien regime of the Planck function, for which there is a very steep decrease of the flux with radial distance (see above) and the halo's contribution is thus much lower. Note, however, that the \(\lambda\sim 10\mu\)m domain corresponds to wavelengths at which the disc is very faint (Fig.8). At longer wavelengths (\(\geq 90\mu\)m), we logically see a decrease of the halo's contribution to the total flux, which is the direct consequence of the low emissivity of its small grain population at increasing \(\lambda\). Nevertheless, even in the mm-wavelengths domain, the halo's contribution never gets totally negligible. As an example, it is still \(5\%\) of the total flux at \(\lambda=800\mu\)m.
From Fig.9 we also see that, when considering the global \(F_{halo}/F_{disc}\) ratio, we obtain relatively similar results for both the nominal and bright-disc cases. This is because, in terms of the relative contribution of the halo, the only parameter distinguishing these two cases (once normalized) is the fraction of unbound grains in the system. The higher fraction of submicron grains for \(f_{d}=4\times 10^{-3}\) will only affect the flux at short wavelengths (\(\lambda\lesssim 1\mu\)m), where we see that \(F_{halo}/F_{disc}\) is indeed \(\sim 10\%\) higher for this bright disc case, as well as for the aforementioned \(\lambda\sim 8-15\mu\)m domain, where the system's total flux is dominated by unbound grains _within_ the PB belt (see Thebault & Kral 2019).
## 4 Discussion
### How universal are the SB \(\propto r^{-3.5}\) and \(\tau\propto r^{-1.5}\) profiles?
As mentioned in Sec.1, halo radial profiles are sometimes used as a proxy to constrain the level of "unexpected activity" in the outer regions of debris discs: perturbations by (unseen) planets, effect of companion or passing stars, gas drag, etc... This is usually done by measuring the surface brightness profiles or, more often, the extrapolated underlying dust density distribution are compared to the expected "normal" profiles for unperturbed systems derived by Strubbe & Chiang (2006) and Thebault & Wu (2008). Such direct comparisons do, however, raise several issues. The first one is that the STCH and TBWU reference radial
Figure 10: Surface brightness radial slopes taken from Tab.1. The dark blue area corresponds to the expected values in a non-perturbed system according to the present numerical investigation. The light blue area is the same, but for edge-on discs, taking into account the potential non-resolution of the disc in the vertical direction. For some systems, there are different slopes estimates depending on the radial position in the halo, as well as depending on which side of the disc has been considered. In this case, up to 4 values are displayed: diamonds and “x” stand for the radial indexes in the inner and outer-halo, respectively, for one disc side, and the squares and “+” are the equivalent indexes for the opposite side. For systems where there is only one global fitted radial index, all 4 symbols do overlap. Edge-on systems are written in italics
Figure 9: Relative contribution to the total flux, as a function of wavelength, coming from the whole halo for both the nominal and ”very bright” disc cases. The discontinuity at \(\lambda\sim 8\mu\)m corresponds to the transition between the scattered-light dominated domain and the thermal-emission dominated one.
slopes are only valid under several simplifying assumptions, the main ones being the absence of unbound grains and, for the SB profile, isotropic scattering. Conversely, most observation-based fits of the \(\sigma\) and \(n\) profiles also make some strong simplifications, notably that the grain size distribution is the same everywhere in the system, which is clearly not the case for halos that are on the contrary extremely size-segregated. While this simplification is of limited consequences as long as isotropic scattering is assumed 6, it becomes problematic when considering more realistic, and, in particular, size-dependent SPFs.
Footnote 6: note, however, that, in this case, the estimated slopes do not correspond to density profiles but to that of the vertical optical depth \(\tau\), see Sec.1
We have here, for the first time, numerically explored how these different approximations and simplifications could bias our understanding of debris disc halos. One important result regards the proportion of unbound grains, which always account for more than 10% of the scattered-light luminosity and even dominate the flux in the outer-halo regions for a bright, collisionally active disc (Fig.2). For this \(f_{d}=4\times 10^{-3}\) case, the presence of unbound grains is able to significantly flatten both the \(\tau\) and SB profiles, since these small particles have a shallower radial distribution in \(\tau\propto r^{-1}\). The effect of small grains becomes even more pronounced when considering a realistic size-dependent SPF function, which, for all scattering angles \(\theta\geq 4^{\circ}\) always peaks in the \(s<s_{blaw}\) domain. To a first approximation, the combined effect of high collisional activity and size-dependent SPF results in SB slopes that tend towards \(\sim-3\) instead of the standard -3.5 value. Conversely, we expect GRaTeR-type fits of the underlying dust density, which do not take into account these effects, to underestimate the index of the \(\sigma\) or \(n\) slopes by up to a value of \(\sim 0.5\).
For the specific case of edge-on discs, this flattening effect, due to the increased influence of unbound grains and of the smallest bound ones, is less visible for the midplane SB profile. This is mainly because it is in large parts compensated by the purely geometrical effect that comes from the fact that the flux at a projected distance \(\rho\) is now the sum of contributions integrated along the line of sight, for which the flux dependence on \(\rho\) becomes weaker with increasing projected distance. There is, however, a parameter that potentially has a much greater influence on the midplane profile of edge-on discs, which is the non-resolution of the disc in the vertical direction. Our results indeed show that the consequence of not vertically resolving a disc could lead to a flattening of up to +1 in terms of radial index (with an index of -2.5 instead of -3.5) of the SB radial profile of its halo. This result might prove important because, even if second-generation instruments, such as SPHERE or GPI, provide a pixel scale of about \(12-15\) milli-arcsec (e.g., Maire et al. 2016), and the angular resolution of ALMA observations keeps on improving, down to the au scale for the closest systems, only a handful of systems, and the closest ones, have been resolved in the vertical direction \(z\). From Table 3 and Figure 11 of Olofsson et al. (2022b), we see that, amongst the list of constrained-halo systems of Tab.1, only \(\beta\)-Pic, Au Mic, HR4796, HD115600 and HD61005 have been unambiguously resolved in \(z\) at near-IR wavelengths.
With these news results in mind, we can take a renewed look at Tab.1. Our analysis has shown that the most reliable halo slope estimates are likely to be those made directly on the observed SB profiles, because they do rely on fewer model-dependent assumptions (about the presence of unbound grains or the size-dependence of the SPF). As a consequence, we show in Fig.10 the dispersion of radial slopes for all systems for which it is the SB profile that has been fitted from observations. It can be seen that 6 out of the 13 considered systems do have halos with profiles that are fully within the boundaries of acceptable values found by our numerical investigation: Fomalhaut, AU Mic, HD15745, HD32297, HD35841 and HD53143. On the opposite side of the spectrum, three discs, HR4796, HD107146 and HD365467 fall fully outside these boundaries and should thus be systems for which additional mechanisms are sculpting the outer realms of the discs. For the remaining 4 systems, some parts of the halo do have expected radial slopes while some do not, and an assessment of these systems is not possible without undertaking system-specific investigations that go beyond the scope of the present paper. Note, however, that for 2 of these systems, HD191089 and HD202628, the steeper slopes have been measured in a narrow region just outside the PB belt (see the difference between the \(r_{0}\) and \(r_{max}\) values in Tab.1), for which our simulations have shown that a steeper SB profile is in fact expected.
Footnote 7: HD36546 falls within the light-blue area but it is not an edge-on system
As discussed in Sec.3, fits of the underlying \(n\) and \(\sigma\) profiles obtained with GRaTeR or similar codes should be less reliable than fits of the SB, as well as more challenging to interpret in terms of what the fitted quantities mean physically, especially when considering the fact that there is a strong size segregation in the halo. We can, however, consider that the radial dependence of the estimated \(\sigma\) and \(n\) profiles8 should, to a first approximation, give a rough estimate of the radial dependence of the vertical optical depth in these system's halos that can be compared to our results regarding the \(\tau\) profiles. Making this approximation of an equivalence between the \(\sigma\) (or \(n\times r\)) and \(\tau\) profiles we see in Fig.11 that a little less than half of the systems have density radial profiles that are within \(\pm 50\%\) of the reference \(\tau\) profiles obtained in Fig.1. The spread of radial indexes is larger than for the fitted SB slopes displayed in Fig.10,
Figure 11: Values of the vertical optical depth (\(\tau\)) radial profiles derived from the \(n\) and \(\sigma\) fits displayed in Tab.1, when making the simplifying assumption that the radial dependence of \(\tau\) is the same as that of \(\sigma\), or is equal to that of \(n\) plus one (constant opening angle). The blue domain corresponds to the range of \(\tau(r)\) indexes between the -1.48 value obtained for our nominal case and the -1.21 value for our bright disc (\(f_{d}=4\times 10^{-3}\)) case (see Fig.1)
with \(\sim 25\%\) of systems having radial slopes more than 2 indexes below that of our reference simulations. We note, however, that for the 3 systems with the steepest estimated slopes, HD172555, HD160305 and HD115600, the halo profiles are only derived for a relatively limited radial region. As already pointed out, for this narrow region just beyond the PB belt both the SB and \(\tau\) profiles are expected to be steeper, so that the slope indexes in this region should not be representative of the halo profile further out. In addition, for some systems where both the SB and \(n\) or \(\sigma\) radial behaviour have been fitted, we see some incoherent results, with for instance the \(\sigma\) profiles being _steeper_ than that of the SB for HD15115 and HD32297. This points towards an intrinsic potential pitfall in global ring+halo fits, which we discuss in the next subsection.
### Suggested procedure for fitting discs with halos
Studies deriving \(\sigma\) or \(n\) profiles from observations treat the ring+halo system as a whole, usually assuming that the density profiles follows
\[n(r)=\left[\left(\frac{r}{r_{0}}\right)^{-2\alpha_{in}}+\left(\frac{r}{r_{0}} \right)^{-2\alpha_{out}}\right]^{-1/2}n_{0} \tag{6}\]
and exploring different values of \(r_{0}\), \(\alpha_{in}\) and \(\alpha_{out}\), as well as of the disc inclination \(i\) and, often, the \(g\) parameter of the HG scattering phase function. The best fit is then found by comparison to the observed luminosity profile through a classical \(\chi^{2}\) minimizing procedure. A potential problem is that, because the flux in the bright PB belt is usually much higher than in the halo, the \(\chi^{2}\) fit is dominated by the narrow bright ring, so that acceptable global fits can actually be a poor match of the luminosity profile in the halo region. This could explain the fact that, in some cases, fitted \(n\) profiles do seem to disagree with the observed SB profile in the halo (see previous section). More generally, this could lead to large errors when trying to assess the specific structure of the halo.
To alleviate these potential problems we suggest that, in future observational studies, the fitting of the system's radial profile should be done in two steps, in which the main ring and the halo are fitted separately. For non edge-on cases, the PB belt alone could first be fitted by finding the best possible set of \(i\), \(r_{0}\) and \(\Delta r_{0}\) values, where \(\Delta r_{0}\) is the FWHM of the ring. Once these parameters are constrained, the halo profile beyond \(r_{0}+0.5\Delta r_{0}\) can then be investigated by fitting the \(\alpha_{out}\) index. Such a procedure should ideally take into account the varying size distribution within the halo, the potential effect of unbound grains and the size-dependence of the SPF, all parameters that could potentially change the correspondence between an observed SB profile and the radial profile of the underlying optical depth distribution. Nevertheless, thoroughly exploring these parameters would probably render the fitting procedure much too cumbersome. A possible compromise could be the "semi-dynamical" model experimented by Pawellek et al. (2019), which takes as a starting point a PB belt as constrained from observations in the sub-mm, from which smaller grains are produced with their abundances scaled up by the corrective factor found by STCH and TBWU. Synthetic images are then produced, potentially taking into account realistic SPFs, which are compared to observations in scattered-light. This is, however, not a fit _per se_, even though it provides important information on as to whether or not the observed ring+halo system behaves according to the predictions for unperturbed systems found by STCH and TBWU. We here propose to approach the problem the other way around, taking as a starting point the constrained SB(\(r\)) profile in the halo, and "reverse engineer" the \(\tau(r)\) profile from it. Of course, since the SPF in the halo depends on the grain size distribution, which in turn depends on \(r\) and is not known beforehand, there is in principle one unknown too many. However, an approximation of the SPF(\(r\)) dependence could be obtained assuming that, at any given position \(r\) in the halo, the geometrical cross section should be dominated by grains of a size \(s\), such as their apoastron is located at \(r\) when produced at \(r_{0}\) in the main PB belt (Equ.1). The SPF at position \(r\) can then be estimated for this specific grain size \(s\) and inputted into the \(\alpha_{out}\) fitting procedure.
Note that, in principle, sophisticated numerical codes such as ACE, LIDT-DD (Kral et al. 2013), or even the collisional model presented in Section 2 of the present paper, can provide self-consistent estimates of the particle size distribution as a function of radial location in the system that are more accurate than what would be obtained by the procedure suggested in the present paragraph. However, using such sophisticated models to do parameter-best-fits of observed discs would require performing a huge set of CPU-consuming simulations exploring an extended parameter space, which would imply a considerable amount of effort. Why this is undertaken in some rare cases, such as Muller et al. (2010) for Vega or Lohne et al. (2012) for HD207129, it can only be done within the frame of long and specially dedicated numerical studies. What we are aiming for here is different, that is, a relatively easy and "quick-and-not-so-dirty" way of constraining main disc parameters without the inherent flaws of only using Equ.6 coupled to a radiation transfer code. The procedure we are presenting is typically to be used at the end of observational studies presenting new resolved data.
### Halos in thermal emission
Our numerical exploration has shown that halos, despite being made of small micron-sized grains, should remain relatively bright deep into the mid-IR and even far-IR domain. As an example, for a typical debris ring located at 60au, the halo to PB belt brightness ratio at 70\(\mu\)m is still close to what it is in scattered light (Fig.7). It is only at wavelengths longer than \(\sim 100\mu\)m that the halo luminosity drops. Note also that, at these long wavelengths, the radial profile (beyond the sharp drop at the PB belt's outer edge) of the SB in the halo is significantly shallower than in scattered light (\(\propto r^{-2.2}\) instead of \(\propto r^{-3.5}\)). Another important result of the present study regards halo total fluxes. We have shown that, except for a narrow domain around \(\lambda\sim 10\mu\)m, halos contribute between 50 and 70% of the system's total flux at all wavelengths short of \(\sim 90\mu\)m. And even in the millimetre-domain the halo still has an integrated luminosity that amounts to a few percents of that of the main parent belt. Perhaps more tellingly, we find that the total, wavelength-integrated thermal emission of the halo is approximately half that of the whole system.
Our results show that, in thermal emission, the optimal wavelength window for observing halos beyond belts in the \(50-70\) au region is \(\lambda\sim 20-70\mu\)m. This domain covers two bands (24 and 70\(\mu\)m) of the MIPS instrument on the Spitzer telescope as well the 70\(\mu\)m band of the PACS instrument on the Herschel telescope. While the resolution and sensibility of Spitzer probably was too limited to explore halos, the situation was more favourable for Herschel-PACS at 70\(\mu\)m. However, most PACS-resolved discs had estimated sizes smaller than the FWHM of the instrument (5.6\({}^{\circ}\) at 70 \(\mu\)m) and were inferred from image-deconvolution (e.g., Booth et al. 2013), preventing to derive
reliable radial profiles. Such radial profiles were obtained for only a very few systems, such as Vega (Sibthorpe et al., 2010), \(\epsilon\) Eridani (Greaves et al., 2014), Fomalhaut (Acke et al., 2012) or HD207129 (Lohne et al., 2012), but without constraining the slopes of the SB profile in the outer regions. A re-investigation of these systems' Herschel data, and in particular an estimate of their SB(\(r\)) radial slopes would definitely improve our understanding of the halo phenomenon by comparison with our present results. In most cases, however, the absence of radial profiles means that, if halos were present, they could not be identified as a fading extension of a bright belt but would anyway have contributed to the estimated disc size. It is thus likely that, at 70\(\mu\)m, a significant fraction of PACS-derived disc radii correspond to a blend between the main collisional belt and the halo, and cannot be reliably used to trace the location of these systems' dust mass reservoirs. This blending effect would be less important for the 100\(\mu\)m PACS band, but the resolution is here poorer (6.8\({}^{\circ}\)) and our results show that, even at this wavelength, the halo still contributes to \(\sim 40\%\) of the system's flux (see Fig.9). New generations of far-IR instruments would here be crucially needed to untangle the belt and halo contributions by providing reliable radial surface brightness profiles in the \(20-70\mu\)m domain. Note that the JWST's longest wavelength of observation, 28.3\(\mu\)m, does overlap with the low end of our optimal window for halo observations. Given that instrument's unparalleled sensitivity, we do thus expect it to provide us with the first resolved images of debris disc halos in the mid-IR.
Our results also challenge the way disc SEDs are sometimes used to constrain the global particle size distribution (PSD) in resolved systems. The procedure to constrain the PSD is indeed usually to consider the geometrical profile constrained from image-fitting and then find the PSD's power law index \(q\) that best fits the SED, under the assumption that the \(dN\propto s^{q}ds\) PSD holds everywhere in the system (e.g., Pawellek et al., 2019). However, this procedure is not adapted to systems for which a large fraction of the SED is due to the halo, which has a PSD that is very different from that of the PB belt. In fact, the very notion of a single \(dN\propto s^{q}ds\) law for the halo makes little physical sense, since this region is strongly size-segregated. The PSD for the halo should thus in principle be derived as a function of radial location, but this would imply having reliable estimates of the SED at different locations in the outer regions, which is generally impossible to obtain. A possible intermediate solution would be, as for estimating the SPF, to assume a simplified size-distribution in the halo, where a given radial location \(r\) is only populated by monosized grains produced in the PB belt and having their apoastron at \(r\) (Equ.1). With this assumption, and for systems for which the halo's geometry and SB profile have been constrained from image-fitting by the aforementioned procedure, its contribution to the SED can be unequivocally estimated. It can then be subtracted from the total SED to allow estimating the \(q\) PSD index in the main belt through the usual procedure. Of course, our simplifying assumption for estimating the halo's SED would imply that the halo's SB is close to \(\propto r^{-3.5}\) and should in principle not be used for systems where the SB's profile strongly departs from this radial dependence. However, we believe that this procedure is in any case preferable to a global fit that assumes a uniform PSD everywhere in the system. As for the procedure outlined at the end of Sec.4.2, sophisticated codes such as ACE or LIDT-DD could be in principle used to obtain more accurate estimates of the SED. However, here again, what we are aiming for is a quick and easy enough procedure that can be used in observation-based studies without the flaws of having to assume a constant PSD everywhere in the system.
The detailed study of individual systems goes beyond the scope of the present paper, but we conclude this section by briefly discussing if the two halo detections obtained with ALMA for HD 32297 and HD 61005 (MacGregor et al., 2018) can be explained by the "natural" behaviour of halos at long wavelengths without invoking additional mechanisms. We first note that, contrary to our numerical results, the radial profiles in the millimetre of these two halos appear relatively steep, with -6.2 and -5.5 for the extrapolated surface density's index \(\Gamma\) instead of -1.5. In addition, the radially-integrated deprojected luminosities \(F_{halo}\) of these halos amount to between 20 and 30% that of parent body belt \(F_{belt}\), which is one order of magnitude more than for our synthetic halos at \(\lambda=1.3\mu\)m (see Fig.9). This points towards an "abnormal" halo and thus the need for additional mechanisms at play in the outer regions of these systems 9 Note, however, that the fitting procedure adopted by MacGregor et al. (2018) imposes a density continuity at the belt/halo interface and it is not clear to what extent this assumption, coupled to the almost edge-on orientation of the system, affects the obtained results in terms of density slopes and \(F_{halo}/F_{belt}\) ratios.
Footnote 9: Interestingly, Krivov & Booth (2018) have identified HD61005 as being potentially self-stirred, but it remains to be seen how self-stirring can alter the brightness profiles of outer regions
### Unresolved systems
Our results have also consequences for systems that have not been resolved at any wavelength and whose size and radial location \(r_{d}\) are only constrained by their SED. One of the most sophisticated method to retrieve \(r_{d}\) from the SED is the one proposed by Pawellek et al. (2014) and Pawellek & Krivov (2015). The first step of this procedure is to fit the SED with a modified black body (MBB) model (Backman & Paresce, 1993) to derive the typical dust temperature \(T_{dust}\), which is in turn used to derive a blackbody radius \(r_{BB}\). The "true" disc radius \(r_{d}\), which implicitly corresponds to that of the PB belt, is then obtained by multiplying \(r_{BB}\) by a factor \(\Gamma_{(L*)}\) that depends on stellar type. The \(\Gamma_{(L*)}\) ratio is obtained separately by empirically comparing \(r_{BB}\) to real disc radii for a sample of _resolved_ systems in the far-IR with Herschel. To their credit, Pawellek et al. (2014) were aware of the risk of incorrectly estimating \(r_{d}\) if considering wavelengths at which small, radiation-pressure grains can contribute, and thus chose disc sizes retrieved from Herschel PACS images at \(\lambda=100\mu\)m instead of 70\(\mu\)m. However, as already mentioned, our results show that, even at this relatively long wavelength, the halo of small grains still makes out \(\sim 40\%\) of the system's total flux 10 and could thus lead to overestimating \(r_{d}\). A more reliable estimate would be obtained by considering ALMA images in the mm domain, but such data was not readily available at the time the \(\Gamma_{(L*)}\)-based procedures were developed. We thus strongly recommend updating the \(\Gamma_{(L*)}\) empirical law taking as a reference \(r_{d}\) values determined from ALMA images, in the spirit of the study by Matra et al. (2018) or Pawellek et al. (2021)
Footnote 10: because the \(Q_{obs}\) of the grains dominating the cross section is still of the order of 0.2-0.3 at this wavelength, see Section 3.3
Note that our results do challenge the notion that a single MBB law can accurately fit a system that is actually made of two distinct components, PB belt and halo, which have very different spatial structures and particle size distributions, and whose SEDs peak at different wavelengths. As a consequence, the "disc temperature" \(T_{d}\) derived by the MBB procedure probably has a limited physical meaning and cannot be a reliable estimate of
the temperature in the PB belt. Since the halo's SED peaks at a shorter wavelength than that of the PB belt, \(T_{d}\) probably overestimates the actual temperature of the collisionally active region of the disc. This does not, however, invalidate the \(\Gamma\)-based procedure, as it is in principle independent of the fact that \(T_{d}\) has a physical meaning or not. What matters here is the reliability of the empirical \(\Gamma_{(ks)}\) ratio estimates, for which \(T_{d}\) (or rather \(r_{BB}\)) can be considered as an abstract proxy.
## 5 Summary and conclusion
We have carried out the most thorough investigation of the halo phenomenon to date. We focus in particular on two issues: 1) how robust the theoretical \(\tau\propto r^{-1.5}\) and \(S\,B\propto r^{-3.5}\) radial profiles are when taking into account the role of unbound grains, realistic SPF prescriptions and instrument resolution. 2) How halos behave in thermal emission, out to the millimetre domain, both on resolved images and on system-integrated SEDs.
For a typical halo produced beyond a collisional belt located at \(\sim 60\) au, our main results can be summarized as follows:
* The contribution of small unbound grains amounts to at least \(\sim 10\%\) of halo luminosities in scattered light, and can even dominate in the outer halo regions for bright discs. For these brightest discs, halo radial profiles can become significantly shallower.
* Size-dependent scattering phase functions (SPF) do also result in flatter radial profiles, which directly follows from the fact that halos are strongly size-segregated regions.
* For edge-on viewed systems, not resolving the disc in the vertical direction can lead to a flattening of the SB(\(\rho\)) radial profile index by up to one
* Comparing these new results to a complete sample of observationally-constrained halo SB profiles, we find that roughly half of them have radial profiles fully compatible with our predictions, while \(\sim 25\%\) have profiles that cannot be explained by our models (being usually too steep). For these systems, additional mechanisms should be at play to shape the outer regions. For a large fraction of the remaining \(\sim 25\%\), halo profiles have been derived in a too narrow region to allow reaching definitive conclusions.
* We obtain comparable results for systems for which it is the underlying dust density distribution whose radial profile has been observationally constrained. However, these density distribution fits should be less reliable than SB ones. They should in particular be biased by the fact that they do not discriminate between belt and halo, and that they do not take into account the effect of unbound grains and size-dependent SPFs.
* We suggest that future observational fits of the underlying density distribution in systems with halos should be made in two steps, starting with a geometrical fit of the PB belt whose parameter are then injected in a separate fit of the halo's radial slope that accounts for size-dependent SPF effects
* Radially extended halos should also be visible in thermal emission in the \(\lambda\sim 20-100\mu\)m range, where the halo-to-main-belt contrast is comparable to what it is in scattered light
* With the exception of a narrow \(8\lesssim\lambda\lesssim 15\mu\)m domain, halos always account for more than 50% of the disc's total flux up to \(\lambda\sim 90\mu\)m.
* Despite being located further out than the PB belt, the halo's SED peaks at a shorter wavelength than that of the belt and thus makes the global system appear hotter
* Beyond \(\lambda\sim 90\mu\)m, halo brightness strongly decreases with wavelength, but halos still contribute to a few percents of the flux in the millimetre domain. This seems, however, not to be enough to explain the bright halos detected with ALMA around HD32297 and HD61005
* For unresolved discs, the presence of a halo can also bias the procedure inferring their radial location from an analysis of their SED.
The study of individual systems goes beyond the scope of the present paper and is deferred to future studies, in which additional parameters, such as stellar type, PB belt radial location and dynamical context (such as known stellar or planetary companions) will be explored.
###### Acknowledgements.
J. O. acknowledges support by ANID, - Millennium Science Initiative Program - NCN19_171.
|
2309.02704 | Resistance distance in $k$-coalescence of certain graphs | Any graph can be considered as a network of resistors, each of which has a
resistance of $1 \Omega.$ The resistance distance $r_{ij}$ between a pair of
vertices $i$ and $j$ in a graph is defined as the effective resistance between
$i$ and $j$. This article deals with the resistance distance in the
$k$-coalescence of complete graphs. We also present its results in connection
with the Kemeny's constant, Kirchhoff index, additive degree-Kirchhoff index,
multiplicative degree-Kirchhoff index and mixed degree-Kirchhoff index.
Moreover, we obtain the resistance distance in the $k$-coalescence of a
complete graph with particular graphs. As an application, we provide the
resistance distance of certain graphs such as the vertex coalescence of a
complete bipartite graph with a complete graph, a complete bipartite graph with
a star graph, the windmill graph, pineapple graph, etc. | Haritha T, Chithra A V | 2023-09-06T04:31:56Z | http://arxiv.org/abs/2309.02704v1 | # Resistance distance in \(k\)-coalescence of certain graphs
###### Abstract
Any graph can be considered as a network of resistors, each of which has a resistance of \(1\Omega\). The resistance distance \(r_{ij}\) between a pair of vertices \(i\) and \(j\) in a graph is defined as the effective resistance between \(i\) and \(j\). This article deals with the resistance distance in the \(k\)-coalescence of complete graphs. We also present its results in connection with the Kemeny's constant, Kirchhoff index, additive degree-Kirchhoff index, multiplicative degree-Kirchhoff index and mixed degree-Kirchhoff index. Moreover, we obtain the resistance distance in the \(k\)-coalescence of a complete graph with particular graphs. As an application, we provide the resistance distance of certain graphs such as the vertex coalescence of a complete bipartite graph with a complete bipartite graph with a complete bipartite graph, a complete bipartite graph with a star graph, the windmill graph, pineapple graph, etc.
Keywords: Resistance distance, Laplacian matrix, Generalized inverse, Coalescence, Kirchhoff index, Kemeny's constant.
## 1 Introduction
Let \(G_{n}=(V(G_{n}),E(G_{n}))\) be a simple connected undirected graph, consisting of \(n\) vertices \(\{v_{1},v_{2},\ldots,v_{n}\}\) and \(m\) edges \(\{e_{1},e_{2},\ldots,e_{m}\}\). The _adjacency matrix_\(A(G_{n})=(a_{ij})\) of \(G_{n}\) is defined such that \(a_{ij}=1\) if vertex \(v_{i}\) is adjacent to vertex \(v_{j}\), and it is zero otherwise. Denote the all-one entry matrix by \(J_{n\times m}\), and the identity matrix by \(I_{n}\). The _complete graph_, _path_, _cycle_ and _star graph_ are denoted by \(K_{n}\), \(P_{n}\), \(C_{n}\), and \(K_{1,n}\) respectively, and \(K_{n_{1},n_{2}}\) is said to be the complete bipartite graph. Let \(d_{i}\) denote the degree of a vertex \(v_{i}\) in \(G_{n}\). Note that the _Laplacian matrix_\(L(G_{n})=D(G_{n})-A(G_{n})\), where \(D(G_{n})\) is the diagonal matrix of vertex degrees. The concept of _resistance distance_ is introduced by Klein and Randic in 1993 [16]. The authors presented a new point of view, if we assign fixed resistances to each edge of a connected graph, then the resulting effective resistance between pairs of vertices corresponds to a graphical distance. For an \(m\times n\) matrix, the matrix \(P\) of order \(n\times m\) is said to be a \(\{1\}\)-inverse of \(M\) (denoted by \(M^{(1)}\)) if \(MPM=M\). For any square matrix \(N\), its group-inverse \(N^{\#}\), refers to a distinct matrix \(X\) that satisfies three conditions: \(NXN=N\), \(XNX=X\), and \(NX=XN\). Clearly, the group inverse of \(N\) is a \(\{1\}\)-inverse of \(N\)[5].
The standard method to compute the resistance distance \(r_{ij}\)[2] between two vertices \(v_{i}\) and \(v_{j}\) is by using the \(\{1\}\)-inverse and group inverse of the Laplacian matrix \(L=(l_{ij})\) of the underlying graph \(G_{n}\) which is
\[r_{ij}=l_{ii}^{(1)}+l_{jj}^{(1)}-l_{ij}^{(1)}-l_{ji}^{(1)}=l_{ii}^{\#}+l_{jj}^ {\#}-2l_{ij}^{\#}.\]
The matrix \(R(G_{n})=(r_{ij})_{n\times n}\) is called the resistance distance matrix of \(G_{n}\). The _resistance distance energy_\(RE(G_{n})\) of \(G_{n}\) is defined as the sum of the absolute values of the eigenvalues of \(R(G_{n})\).
_Kemeny's constant_[15] is essential in the theory of random walks, and it measures the average time it takes for a random walk to reach a vertex. It is defined as
\[\kappa(G_{n})=\frac{1}{4m}\sum_{v_{i},v_{j}\in V(G_{n})}d_{i}d_{j}r_{ij}.\]
The _Kirchhoff index_ of \(G_{n}\), also known as the total resistance of a network, represented as \(\mathcal{K}f(G_{n})\)[7, 16], is defined as,
\[\mathcal{K}f(G_{n})=\sum_{i<j}r_{ij}.\]
The following are three graph parameters which are in terms of vertex degrees and resistance distance of a graph \(G_{n}\).
The _mixed degree-Kirchhoff index_ of \(G_{n}\)[6] is
\[\hat{R}(G_{n})=\sum_{i<j}\left(\frac{d_{i}}{d_{j}}+\frac{d_{j}}{d_{i}}\right)r _{ij}.\]
The _multiplicative degree-Kirchhoff index_[9] of \(G_{n}\) is
\[R^{*}(G_{n})=\sum_{i<j}d_{i}d_{j}r_{ij}.\]
The _additive degree-Kirchhoff index_[12] of \(G_{n}\) is
\[R^{+}(G_{n})=\sum_{i<j}(d_{i}+d_{j})r_{ij}.\]
Suppose we have two graphs \(G_{n}\) and \(G_{n^{\prime}}^{\prime}\) with \(v\in V(G_{n})\) and \(v^{\prime}\in V(G_{n^{\prime}}^{\prime})\), then the coalescence \(G_{n}\circ_{1}G_{n^{\prime}}^{\prime}\)[10] of \(G_{n}\) and \(G_{n^{\prime}}^{\prime}\) with respect to \(v\) and \(v^{\prime}\) is formed by identifying \(v\) and \(v^{\prime}\). Sudhir et al. introduced the concept of \(k\)-coalescence of two graphs in their work [13], and it is defined as follows:
**Definition 1.1**.: Let \(G_{n}\) and \(G_{n^{\prime}}^{\prime}\) be two connected graphs of orders \(n\) and \(n^{\prime}\) and sizes \(m\) and \(m^{\prime}\) respectively having an induced complete graph order \(k\) with \(n,n^{\prime}\geq k\). Then the \(k\)-coalescence \(G_{n}\circ_{k}G_{n^{\prime}}^{\prime}\) of \(G_{n}\) and \(G_{n^{\prime}}^{\prime}\) is the graph obtained by identifying \(k\) vertices on \({}^{k}C_{2}\) edges of induced \(K_{k}\). The order and size of \(G_{n}\circ_{k}G_{n^{\prime}}^{\prime}\) are \(n+n^{\prime}-k\) and \(m+m^{\prime}-^{k}C_{2}\) respectively.
Resistance distance is significant in combinatorial matrix theory [2, 3] and spectral graph theory [1, 4, 8, 9, 19]. For a survey of methods for finding resistance distance in graphs see [11]. Graph operations have been widely used to analyse complex networks with properties abstracted from the real world. The formulas for resistance distance and Kirchhoff index pertaining to numerous graph classes and graph operations were presented in [8, 17, 18]. The aim of this work is to give the resistance distance in the \(k\)-coalescence of complete graphs and to provide parameters such as the Kemeny's constant, Kirchhoff index, additive degree-Kirchhoff index, multiplicative degree-Kirchhoff index and mixed degree-Kirchhoff index. Moreover, we obtain these results for some class of graphs.
Preliminaries
Through this section we present some useful lemmas and theorems.
**Lemma 2.1**.: [20] Let \(C=\begin{bmatrix}C_{0}&C_{1}\\ C_{2}&C_{3}\end{bmatrix}\) be a nonsingular matrix. If \(C_{0}\) and \(C_{3}\) are nonsingular, then
\[C^{-1}=\begin{bmatrix}(C_{0}-C_{1}C_{3}^{-1}C_{2})^{-1}&-C_{0}^{-1}C_{1}P^{-1} \\ -P^{-1}C_{2}C_{0}^{-1}&P^{-1}\end{bmatrix},\]
where \(P=C_{3}-C_{2}C_{0}^{-1}C_{1}\).
**Lemma 2.2**.: [8] Let \(L=\begin{bmatrix}L_{1}&L_{2}\\ L_{2}^{T}&L_{3}\end{bmatrix}\) be the Laplacian matrix of a connected graph. If each column vector of \(L_{2}^{T}\) is \(-e\) (the all-ones column vector) or a zero vector, then \(L^{(1)}=\begin{bmatrix}L_{1}^{-1}&0\\ 0&S^{\#}\end{bmatrix}\), where \(S=L_{3}-L_{2}^{T}L_{1}^{-1}L_{2}\).
**Lemma 2.3**.: [8] Let \(L\) be the Laplacian matrix of a graph \(G_{n}\). For any \(a>0\), we have \((L+aI-\frac{a}{n}J_{n})^{\#}=(L+aI)^{-1}-\frac{1}{an}J_{n}\).
**Lemma 2.4**.: [14] For any real numbers \(r,s>0\),
\[(rI_{n}-sJ_{n})^{-1}=\frac{1}{r}I_{n}+\frac{s}{r(r-ns)}J_{n}.\]
## 3 Main results
This section provides the resistance distance in \(k\)-coalescence of certain graphs and discusses some of its graph parameters. The graph \(K_{p_{1}}\circ_{k}K_{p_{2}}\) consists of \(t=p_{1}+p_{2}-k\) vertices.
**Theorem 3.1**.: For \(p_{1},p_{2}\geq k\), let \(T\) be the collection of vertices in \(K_{p_{1}}\circ_{k}K_{p_{2}}\), which are the identified vertices of some vertices in \(K_{p_{1}}\) and \(K_{p_{2}}\). Then,
1. for any \(v_{i},v_{j}\in T\), \(r_{ij}=\frac{2}{t}\).
2. for \(v_{i}\in T,v_{j}\in V(K_{p_{1}}\setminus T)\), \(r_{ij}=\frac{(k+1)(p_{2}-k)+2p_{1}k}{kp_{1}t}\).
3. for \(v_{i}\in T,v_{j}\in V(K_{p_{2}}\setminus T)\), \(r_{ij}=\frac{(k+1)(p_{1}-k)+2p_{2}k}{kp_{2}t}\).
4. for any \(v_{i},v_{j}\in V(K_{p_{1}}\setminus T),r_{ij}=\frac{2}{p_{1}}\).
5. for \(v_{i}\in V(K_{p_{1}}\setminus T),v_{j}\in V(K_{p_{2}}\setminus T),r_{ij}= \frac{(p_{1}+p_{2})(k+1)}{kp_{1}p_{2}}\).
6. for any \(v_{i},v_{j}\in V(K_{p_{2}}\setminus T),r_{ij}=\frac{2}{p_{2}}\).
Proof.: The Laplacian matrix of \(K_{p_{1}}\circ_{k}K_{p_{2}}\) is given by,
\[L(K_{p_{1}}\circ_{k}K_{p_{2}})=\begin{bmatrix}tI_{k}-J_{k\times p_{1}-k}&-J_{k \times p_{2}-k}\\ -J_{k\times p_{1}-k}^{T}&p_{1}p_{1}p_{1}-k&0\\ -J_{k\times p_{2}-k}^{T}&0&p_{2}I_{p_{2}-k}-J_{p_{2}-k}\end{bmatrix}.\]
Let \(L_{1}=\begin{bmatrix}tI_{k}-J_{k}&-J_{k\times p_{1}-k}\\ -J_{k\times p_{1}-k}^{T}&p_{1}I_{p_{1}-k}-J_{p_{1}-k}\end{bmatrix},L_{2}= \begin{bmatrix}-J_{k\times p_{2}-k}\\ 0\end{bmatrix}\), and \(L_{3}=p_{2}I-J_{p_{2}-k}\).
Then by Lemmas 2.1 and 2.4 we get,
\[L_{1}^{-1}=\begin{bmatrix}\frac{1}{t}(I_{k}+\frac{p_{1}}{k(p_{2}-k)}J_{k})& \frac{1}{k(p_{2}-k)}J_{k\times p_{1}-k}\\ \frac{1}{k(p_{2}-k)}J_{k\times p_{1}-k}^{T}&\frac{1}{p_{1}}I_{p_{1}-k}+\frac{t} {p_{1}k(p_{2}-k)}J_{p_{1}-k}\end{bmatrix}.\]
Consider
\[S =L_{3}-L_{2}^{T}L_{1}^{-1}L_{2}\] \[=p_{2}I_{p_{2}-k}-\frac{p_{2}}{p_{2}-k}J_{p_{2}-k}\]
then, \(S^{\#}=\frac{1}{p_{2}}(I_{p_{2}-k}-\frac{1}{p_{2}-k}J_{p_{2}-k}).\)
From Lemma 2.2,
\[L^{(1)}(K_{p_{1}}\circ_{k}K_{p_{2}})=\begin{bmatrix}\frac{1}{t}(I_{k}+\frac{p_{ 1}}{k(p_{2}-k)}J_{k})&\frac{1}{k(p_{2}-k)}J_{k\times p_{1}-k}&0\\ \frac{1}{k(p_{2}-k)}J_{k\times p_{1}-k}^{T}&\frac{1}{p_{1}}I_{p_{1}-k}+\frac{t} {p_{1}k(p_{2}-k)}J_{p_{1}-k}&0\\ 0&0&\frac{1}{p_{2}}(I_{p_{2}-k}-\frac{1}{p_{2}-k}J_{p_{2}-k})\end{bmatrix}.\]
For any \(v_{i},v_{j}\in T,\)
\[r_{ij} =\frac{2}{t}\left(1+\frac{p_{1}}{k(p_{2}-k)}\right)-\frac{2p_{1}} {kt(p_{2}-k)}\] \[=\frac{2}{t}.\]
For \(v_{i}\in T,v_{j}\in V(K_{p_{1}}\setminus T),\)
\[r_{ij} =\frac{1}{t}\left(1+\frac{p_{1}}{k(p_{2}-k)}\right)-\frac{1}{p_{1 }}\left(1+\frac{t}{k(p_{2}-k)}\right)-\frac{2}{k(p_{2}-k)}\] \[=\frac{(k+1)(p_{2}-k)+2p_{1}k}{kp_{1}t}.\]
For \(v_{i}\in T,v_{j}\in V(K_{p_{2}}\setminus T),\)
\[r_{ij} =\frac{1}{t}+\frac{p_{1}}{k(p_{2}-k)t}+\frac{1}{p_{2}}-\frac{1}{p _{2}(p_{2}-k)}\] \[=\frac{(k+1)(p_{1}-k)+2p_{2}k}{kp_{2}kt}.\]
For any \(v_{i},v_{j}\in V(K_{p_{1}}\setminus T),r_{ij}=\frac{2}{p_{1}}.\)
For \(v_{i}\in V(K_{p_{1}}\setminus T),v_{j}\in V(K_{p_{2}}\setminus T),\)
\[r_{ij} =\frac{1}{p_{1}}\left(1+\frac{t}{k(p_{2}-k)}\right)+\frac{1}{p_{2 }}\left(1-\frac{1}{p_{2}-k}\right)\] \[=\frac{(p_{1}+p_{2})(k+1)}{kp_{1}p_{2}}.\]
For any \(v_{i},\;v_{j}\in V(K_{p_{2}}\setminus T),r_{ij}=\frac{2}{p_{2}}.\)
Therefore, the resistance distance matrix of \(K_{p_{1}}\circ_{k}K_{p_{2}}\) is
\[R(K_{p_{1}}\circ_{k}K_{p_{2}})=\begin{bmatrix}\frac{2}{t}(J_{k}-I_{k})&\frac{(k +1)(p_{2}-k)+2p_{1}k}{kp_{1}t}&\frac{(k+1)(p_{1}-k)+2p_{2}k}{p_{2}kt}J_{k\times p _{2}-k}\\ \frac{(k+1)(p_{2}-k)+2p_{1}k}{kp_{1}t}J_{k\times p_{1}-k}^{T}&\frac{2}{p_{1}} (J_{p_{1}-k}-I_{p_{1}-k})&\frac{(p_{1}+p_{2})(k+1)}{kp_{1}p_{2}}J_{p_{1}-k \times p_{2}-k}\\ \frac{(k+1)(p_{1}-k)+2kp_{2}}{kp_{2}t}J_{k\times p_{2}-k}^{T}&\frac{(p_{1}+p_{ 2})(k+1)}{kp_{1}p_{2}}J_{p_{1}-k\times p_{2}-k}^{T}&\frac{2}{p_{2}}(J_{p_{2}- k}-I_{p_{2}-k})\end{bmatrix}.\]
**Example 3.1**.: The _kite graph_\(Kite_{p,2}\) is obtained by identifying a vertex in \(K_{p}\) to a pendant vertex of a path graph with 2 vertices, which can be viewed as \(K_{p}\circ_{1}K_{2}.\) Let \(v^{*}\) be the identified vertex of a vertex \(v_{1}\) in \(K_{p}\) and the vertex \(u_{1}\) in \(K_{2}.\) Then by Theorem 3.1 we have
1. for \(v_{i}=v^{*},\;v_{j}\in V(K_{p}\setminus\{v^{*}\}),\;r_{ij}=\frac{2}{p}.\)
2. for \(v_{i},v_{j}\in V(K_{p}\setminus\{v^{*}\}),\;r_{ij}=\frac{2}{p}.\)
3. for \(v_{i}=v^{*}\) and \(v_{j}=u_{2}\in V(K_{2}),\;r_{ij}=1.\)
4. for \(v_{i}\in V(K_{p}\setminus\{v^{*}\})\) and \(v_{j}=u_{2}\in V(K_{2}),\;r_{ij}=\frac{p+2}{p}.\)
The windmill graph \(W_{n+1}^{t}\) is the graph obtained by taking \(t\geq 2\) copies of complete graph \(K_{n+1},\) for \(n\geq 1,\) with a vertex in common. By the definition of coalescence of graphs one can easily write \(W_{n+1}^{t}=K_{n+1}\underbrace{\circ_{1}\cdots\circ_{1}}_{t-times}K_{n+1}.\) Next theorem gives the resistance distance in \(W_{n+1}^{t}.\)
**Proposition 3.1**.: For \(n>1,\) the resistance distance of vertices in \(W_{n+1}^{t}\) is given by,
\[r_{ij}=\begin{cases}\frac{4}{n+1},&\text{if $v_{i},v_{j}$ are in different blocks},\\ \frac{2}{n+1},&\text{otherwise}.\end{cases}\]
Proof.: The Laplacian matrix of \(W_{n+1}^{t}\) is
\[L(W_{n+1}^{t})=\begin{bmatrix}tnI_{1}&-J_{1\times tn}\\ -J_{tn\times 1}&I_{t}\otimes(n+1I_{n}-J_{n})\end{bmatrix}.\]
Then its \(\{1\}\)-inverse is
\[L^{(1)}(W_{n+1}^{t}) =\begin{bmatrix}\frac{1}{tn}I_{1}&0\\ 0&(I_{t}\otimes(n+1I_{n}-J_{n})-\frac{1}{tn}J_{tn})^{\#}\end{bmatrix}\] \[=\begin{bmatrix}\frac{1}{tn}I_{1}&0\\ 0&I_{t}\otimes\frac{1}{n+1}(I_{n}+J_{n})-\frac{1}{tn}J_{tn}\end{bmatrix}.\]
Now by the definition of resistance distance we get the required result.
From Proposition 3.1, we get the following corollaries.
**Corollary 3.1**.: The Kirchhoff index of \(W_{n+1}^{t}\) is
\[Kf(W_{n+1}^{t})=\frac{2n^{2}t^{2}-n^{2}t+nt}{n+1}.\]
**Corollary 3.2**.: The Kemeny's constant of \(W_{n+1}^{t}\) is
\[\kappa(W_{n+1}^{t})=\frac{n^{2}(2t-1)}{n+1}.\]
A \(3\)-rose graph is a graph consisting of three cycles intersecting in a common vertex. Let \(\mathcal{R}(r,s,t)\) denote the \(3\)-rose graph on \(n=r+s+t-2\) vertices, that is, the graph consisting of three cycles \(C_{a},\)\(C_{b}\) and \(C_{c}\) intersecting in a common vertex. Next corollary directly follows from Proposition 3.1.
**Corollary 3.3**.: The resistance distance in \(\mathcal{R}(3,3,3)\) is given by,
\[r_{ij}=\begin{cases}\frac{4}{3},&\text{if $v_{i},v_{j}$ are in different blocks},\\ \frac{2}{3},&\text{otherwise}.\end{cases}\]
**Theorem 3.2**.: Let \(G_{n}\) be a graph of order \(n\). For \(p\geq k,\) let \(T\) be the collection of vertices in \(K_{p}\circ_{k}(G_{n}\lor K_{k}),\) which are the identified vertices of \(K_{k}\) and some vertices in \(K_{p}.\) Then,
1. for any \(v_{i},v_{j}\in T,\)\(r_{ij}=\frac{2}{p+n}.\)
2. for \(v_{i}\in T,v_{j}\in V(K_{p}\setminus T)\), \(r_{ij}=\frac{k(2p+n)+n}{kp(p+n)}\).
3. for \(v_{i}\in T,v_{j}\in V(G_{n})\), \(r_{ij}=\frac{k-1}{k(p+n)}+(L(G_{n})+kI_{n})_{jj}^{-1}\).
4. for any \(v_{i},v_{j}\in V(K_{p}\setminus T),r_{ij}=\frac{2}{p}\).
5. for \(v_{i}\in V(K_{p}\setminus T),v_{j}\in V(G_{n}),r_{ij}=\frac{k+1}{kp}+(L(G_{n} )+kI_{n})_{jj}^{-1}\).
6. for any \(v_{i},v_{j}\in V(G_{n}),r_{ij}=(L(G_{n})+kI_{n})_{ii}^{-1}+(L(G_{n})+kI_{n})_{ jj}^{-1}-2(L(G_{n})+kI_{n})_{ij}^{-1}\).
Proof.: The Laplacian matrix of \(K_{p}\circ_{k}(G_{n}\lor K_{k})\) is given by,
\[L(K_{p}\circ_{k}(G_{n}\lor K_{k}))=\begin{bmatrix}(p+n)I_{k}-J_{k}&-J_{k\times p -k}&-J_{k\times n}\\ -J_{k\times p-k}^{T}&pI_{p-k}-J_{p-k}&0\\ -J_{k\times n}^{T}&0&L(G_{n})+kI_{n}\end{bmatrix}.\]
Let \(L_{1}=\begin{bmatrix}(p+n)I_{k}-J_{k}&-J_{k\times p-k}\\ -J_{p-k\times k}&pI_{p-k}-J_{p-k}\end{bmatrix},L_{2}=\begin{bmatrix}-J_{k\times n }\\ 0\end{bmatrix}\), and \(L_{3}=L(G_{n})+kI_{n}\).
Then by Lemmas 2.1 and 2.4 we get,
\[L_{1}^{-1}=\begin{bmatrix}\frac{1}{p+n}(I_{k}+\frac{p}{nk}J_{k})&\frac{1}{nk}J _{k\times p-k}\\ \frac{1}{nk}J_{p-k\times k}&\frac{1}{p}(I_{p-k}+\frac{p+n}{nk}J_{p-k})\end{bmatrix}.\]
Now let \(S=L_{3}-L_{2}^{T}L_{1}^{-1}L_{2}\), then \(S=L(G_{n})+kI_{n}-\frac{k}{n}J_{n}\).
From Lemma 2.3, \(S^{\hat{\mu}}=(L(G_{n})+kI_{n})^{-1}-\frac{k}{n}J_{n}\).
Therefore,
\[L^{(1)}(K_{p}\circ_{k}(G_{n}\lor K_{k}))=\begin{bmatrix}\frac{1}{p+n}(I_{k}+ \frac{p}{nk}J_{k})&\frac{1}{nk}J_{k\times p-k}&0\\ \frac{1}{nk}J_{p-k\times k}&\frac{1}{p}(I_{p-k}+\frac{p+n}{nk}J_{p-k})&0\\ 0&0&(L(G_{n})+kI_{n})^{-1}-\frac{1}{kn}J_{n}\end{bmatrix}.\]
By applying the definition of resistance distance, we obtain the required result.
**Theorem 3.3**.: If \(G_{n}\) is a graph of order \(n\), then the resistance distance of the vertices in \(K_{1,p-1}\circ_{1}(G_{n}\lor K_{1})\) is given by,
1. for \(v_{i}=u^{*},v_{j}\in V(K_{1,p-1}\setminus\{u^{*}\})\), \(r_{ij}=1\),
2. for \(v_{i}=u^{*}\), \(v_{j}\in V(G_{n})\), \(r_{ij}=(L(G_{n})+I_{n})_{jj}^{-1}\),
3. for \(v_{i},v_{j}\in V(K_{1,p-1}\setminus\{u^{*}\}),r_{ij}=2\),
4. for \(v_{i}\in V(K_{1,p-1}\setminus\{u^{*}\}),v_{j}\in V(G_{n})\), \(r_{ij}=1+(L(G_{n})+I_{n})_{jj}^{-1}\),
5. for \(v_{i},v_{j}\in V(G_{n}),r_{ij}=(L(G_{n})+I_{n})_{ii}^{-1}+(L(G_{n})+I_{n})_{jj} ^{-1}-2(L(G_{n})+I_{n})_{ij}^{-1}\),
where \(u^{*}\) is the identified vertex of a vertex in \(K_{1}\) and a vertex in \(K_{1,p-1}\) (center).
Proof.: The Laplacian matrix of \(K_{1,p-1}\circ_{1}(G_{n}\lor K_{1})\) is given by,
\[L(K_{1,p-1}\circ_{1}(G_{n}\lor K_{1}))=\begin{bmatrix}L_{1}&L_{2}\\ L_{2}^{T}&L_{3}\end{bmatrix},\]
where \(L_{1}=\begin{bmatrix}(p+n-1)I_{1}&-J_{1\times p-1}\\ -J_{p-1\times 1}&I_{p-1}\end{bmatrix},L_{2}=\begin{bmatrix}-J_{1\times n}\\ 0\end{bmatrix}\), and \(L_{3}=L(G_{n})+I_{n}\).
Then by Lemmas 2.1 and 2.4 we get,
\[L_{1}^{-1}=\begin{bmatrix}\frac{1}{n}I_{1}&\frac{1}{n}J_{1\times p-1}\\ \frac{1}{n}J_{p-1\times 1}&I_{p}+\frac{1}{n}J_{p}\end{bmatrix}.\]
Now let \(S=L_{3}-L_{2}^{T}L_{1}^{-1}L_{2}\), then \(S=L(G_{n})+I_{n}-\frac{1}{n}J_{n}\).
From Lemma 2.3, \(S^{\#}=(L(G_{n})+I_{n})^{-1}-\frac{1}{n}J_{n}\).
Therefore,
\[L^{(1)}(K_{1,p-1}\circ_{1}(G_{n}\lor K_{1}))=\begin{bmatrix}\frac{1}{n}I_{1}& \frac{1}{n}J_{1\times p-1}&0\\ \frac{1}{n}J_{p-1\times 1}&I_{p-1}+\frac{1}{n}J_{p-1}&0\\ 0&0&(L(G_{n})+I_{n})^{-1}-\frac{1}{n}J_{n}\end{bmatrix}.\]
Now by the definition of resistance distance we get the required result.
**Theorem 3.4**.: The resistance distance matrix of \(K_{p,q}\circ_{1}K_{1,n}\) is given by,
\[R(K_{p,q}\circ_{1}K_{1,n})=\begin{bmatrix}0&\frac{2}{q}J_{1\times p-1}&\frac{p +q-1}{pq}J_{1\times q}&J_{1\times n}\\ \frac{2}{q}J_{p-1\times 1}&\frac{2}{q}(J_{p-1}-I_{p-1})&\frac{p+q-1}{pq}J_{p-1 \times q}&\frac{2+2}{2}J_{p-1\times n}\\ \frac{p+q-1}{pq}J_{q\times 1}&\frac{p+q-1}{pq}J_{q\times p-1}&\frac{2}{p}(J_{q}-I_ {q})&\frac{q(p+1)+(p-1)}{pq}J_{q\times n}\\ J_{n\times 1}&\frac{q+2}{q}J_{n\times p-1}&\frac{q(p+1)+(p-1)}{pq}J_{n\times q}& 2(J_{n}-I_{n})\end{bmatrix}.\]
Proof.: The Laplacian matrix of \(K_{p,q}\circ_{1}K_{1,n}\) is given by,
\[L(K_{p,q}\circ_{1}K_{1,n})=\begin{bmatrix}L_{1}&L_{2}\\ L_{2}^{T}&L_{3}\end{bmatrix},\]
where \(L_{1}=\begin{bmatrix}(q+n)I_{1}&0&-J_{1\times q}\\ 0&qI_{p-1}&-J_{p-1\times q}\\ -J_{q\times 1}&-J_{q\times p-1}&PI_{q}\end{bmatrix},L_{2}=\begin{bmatrix}-J_{1 \times n}\\ 0\\ 0\end{bmatrix}\), and \(L_{3}=I_{n}\).
Then by Lemmas 2.1 and 2.4 we get,
\[L_{1}^{-1}=\begin{bmatrix}\frac{1}{n}I_{1}&J_{1\times p-1}&\frac{1}{n}J_{1 \times q}\\ \frac{1}{n}J_{p-1\times 1}&\frac{1}{q}I_{p-1}+\frac{n+q}{nq}J_{p-1}&\frac{q+n}{ nq}J_{p-1\times q}\\ \frac{1}{n}J_{q\times 1}&\frac{q+n}{nq}J_{q\times p-1}&\frac{1}{p}I_{q}+\frac{p (q+n)-n}{npq}J_{q}\end{bmatrix}.\]
Consider
\[S =L_{3}-L_{2}^{T}L_{1}^{-1}L_{2}\] \[=I_{n}-\frac{1}{n}J_{n}\]
then, \(S^{\#}=I_{n}-\frac{1}{n}J_{n}\).
Therefore,
\[L^{(1)}(K_{p,q}\circ_{1}K_{1,n})=\begin{bmatrix}\frac{1}{n}I_{1}&J_{1\times p -1}&\frac{1}{n}J_{1\times q}&0\\ \frac{1}{n}J_{p-1\times 1}&\frac{1}{q}I_{p-1}+\frac{n+q}{nq}J_{p-1}&\frac{q+n}{ nq}J_{p-1\times q}&0\\ \frac{1}{n}J_{q\times 1}&\frac{q+n}{nq}J_{q\times p-1}&\frac{1}{p}I_{q}+\frac{p (q+n)-n}{npq}J_{q}&0\\ 0&0&0&I_{n}-\frac{1}{n}J_{n}\end{bmatrix}.\]
Now by the definition of resistance distance we get the required result.
**Theorem 3.5**.: The resistance distance matrix of \(K_{p,q}\circ_{1}K_{n}\) is given by,
\[R(K_{p,q}\circ_{1}K_{n})=\begin{bmatrix}0&\frac{2}{q}J_{1\times p-1}&\frac{p +q-1}{pq}J_{1\times q}&\frac{2}{n}J_{1\times n}\\ \frac{2}{q}J_{p-1\times 1}&\frac{2}{q}(J_{p-1}-I_{p-1})&\frac{p+q-1}{pq}J_{p-1 \times q}&\frac{2(q+n)}{pn}J_{p-1\times n}\\ \frac{p+q-1}{pq}J_{q\times 1}&\frac{p+q-1}{pq}J_{q\times p-1}&\frac{2}{p}(J_{q}-I_ {q})&\frac{q(n+2p)+n(p-1)}{npq}J_{q\times n}\\ \frac{2}{n}J_{n\times 1}&\frac{2(q+n)}{pn}J_{n\times p-1}&\frac{q(n+2p)+n(p-1)}{npq}J_{n \times q}&\frac{2}{n}(J_{n-1}-I_{n-1})\end{bmatrix}.\]
Proof.: The Laplacian matrix of \(K_{p,q}\circ_{1}K_{n}\) is given by,
\[L(K_{p,q}\circ_{1}K_{n})=\begin{bmatrix}L_{1}&L_{2}\\ L_{2}^{T}&L_{3}\end{bmatrix},\]
where \(L_{1}=\begin{bmatrix}(q+n-1)I_{1}&0&-J_{1\times q}\\ 0&qI_{p-1}&-J_{p-1\times q}\\ -J_{q\times 1}&-J_{q\times p-1}&PI_{q}\end{bmatrix},L_{2}=\begin{bmatrix}-J_{1 \times n-1}\\ 0\\ 0\end{bmatrix}\), and \(L_{3}=nI_{n-1}-J_{n-1}\).
Then by Lemmas 2.1 and 2.4 we get,
\[L_{1}^{-1}=\begin{bmatrix}\frac{1}{n-1}I_{1}&\frac{1}{n-1}J_{1\times p-1}&\frac {1}{n-1}J_{1\times q}\\ \frac{1}{n-1}&\frac{1}{q}(I_{p-1}+\frac{q+n-1}{n-1}J_{p-1})&\frac{q+n-1}{q(n-1 )}J_{p-1\times q}\\ \frac{1}{n-1}J_{q\times 1}&\frac{q+n-1}{q(n-1)}J_{q\times p-1}&\frac{1}{p}(I_{q}+ \frac{p(q+n-1)-(n-1)}{q(n-1)}J_{q})\end{bmatrix}.\]
Consider
\[S=L_{3}-L_{2}^{T}L_{1}^{-1}L_{2}\] \[=nI_{n-1}-\frac{n}{n-1}J_{n-1}\]
then, \(S^{\#}=\frac{1}{n}I_{n-1}-\frac{1}{n(n-1)}J_{n-1}\).
Therefore,
\[L^{(1)}(K_{p,q}\circ_{1}K_{n})=\begin{bmatrix}\frac{1}{n-1}I_{1}&\frac{1}{n-1 }J_{1\times p-1}&\frac{1}{n-1}J_{1\times q}&0\\ \frac{1}{n-1}&\frac{1}{q}(I_{p-1}+\frac{q+n-1}{n-1}J_{p-1})&\frac{q+n-1}{q(n-1 )}J_{p-1\times q}&0\\ \frac{1}{n-1}J_{q\times 1}&\frac{q+n-1}{q(n-1)}J_{q\times p-1}&\frac{1}{p}(I_{q}+ \frac{p(q+n-1)-(n-1)}{q(n-1)}J_{q})&0\\ 0&0&0&\frac{1}{n}(I_{n-1}-\frac{1}{n-1}J_{n-1})\end{bmatrix}.\]
Now by the definition of resistance distance we get the required result.
The _pineapple graph_\(K_{p}^{q}\) is the coalescence of the complete graph \(K_{p}\) (at any vertex) with the star \(K_{1,q}\) at the vertex of degree \(q\). It has \(n=p+q\) vertices and \({}^{p}C_{2}+q\) edges.
Using Theorem 3.5, we get the following proposition.
**Proposition 3.2**.: The resistance distance in a pineapple graph \(K_{p}^{q}\) is given by
1. for \(v_{i}=v^{**},v_{j}\in V(K_{p}\setminus\{v^{**}\})\), \(r_{ij}=\frac{2}{p}\),
2. for \(v_{i}=v^{**}\), \(v_{j}\in V(K_{1,q}\setminus\{v^{**}\})\), \(r_{ij}=1\),
3. for \(v_{i},v_{j}\in V(K_{p}\setminus\{v^{**}\}),r_{ij}=\frac{2}{p}\),
4. for \(v_{i}\in V(K_{p}\setminus\{v^{**}\}),v_{j}\in V(K_{1,q}\setminus\{v^{**}\})\), \(r_{ij}=\frac{p+2}{p}\),
5. for \(v_{i},v_{j}\in V(K_{1,q}\setminus\{v^{**}\}),r_{ij}=2\),
where \(v^{**}\) is the identified vertex of a vertex \(u\) in \(K_{p}\) and the vertex \(v\) of degree \(q\) in \(K_{1,q}\).
From Proposition 3.2, we get the following corollaries.
**Corollary 3.4**.: The Kirchhoff index of \(K_{p}^{q}\) is
\[Kf(K_{p}^{q})=q(p+q+3)+p+1-\frac{2}{p}(q+1).\]
Figure 2: \(K_{6}^{5}\).
**Corollary 3.5**.: The Kemeny's constant of \(K_{p}^{q}\) is
\[\kappa(K_{p}^{q})=\frac{p^{4}-p^{3}+p^{3}q+3p^{2}q+2pq^{2}-3p^{2}-7pq+7p+4q-2}{p( p-1)+2q}.\]
The _dandelion graph_\(D(n,l)\) on \(n\) vertices is the coalescence of the star graph \(K_{1,n-l}\) (at the center) with the path \(P_{l}\) at any pendant vertex.
The following theorem describes the resistance distance matrix of a dandelion graph \(D(n,l)\).
**Theorem 3.6**.: The resistance distance matrix of a dandelion graph \(D(n,l)\) on \(n\) vertices is given by
\[R(D(n,l))=\left[\begin{array}{cccccccc}0&1&2&\cdots&l-1&1&\cdots&1\\ 1&0&1&\cdots&l-2&2&\cdots&2\\ 2&1&0&\cdots&l-3&3&\cdots&3\\ \vdots&\vdots&\ddots&\ddots&\vdots&\vdots&\vdots&\vdots\\ l-1&l-2&\cdots&1&0&l&\cdots&l\\ 1&2&&\cdots&l&0&2&\cdots&2\\ 1&2&&\cdots&l&2&0&\cdots&2\\ \vdots&\vdots&&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 1&2&&\cdots&l&2&2&\cdots&0\end{array}\right].\]
Proof.: By a proper labelling of vertices in \(D(n,l)=K_{1,n-l}\circ_{1}P_{l}\), we can write its Laplacian matrix as
\[L(D(n,l))=\begin{bmatrix}L_{1}&L_{2}\\ L_{2}^{T}&L_{3}.\end{bmatrix},\]
where \(L_{1}=\begin{bmatrix}n+1-l&-1&0&\cdots&0\\ -1&2&-1&\cdots&0\\ \vdots&\cdots&\ddots&\vdots&\vdots\\ 0&\cdots&-1&2&-1\\ 0&\cdots&0&-1&1\end{bmatrix}_{l\times l}\), \(L_{2}=\begin{bmatrix}-J_{1\times n-l}\\ 0_{l-1\times n-l}\end{bmatrix}\) and \(L_{3}=I_{n-l}\).
Then by applying Lemma 2.2 we get,
\[L^{(1)}=\begin{bmatrix}L_{1}^{-1}&0\\ 0&I_{n-l}-J_{n-l}\end{bmatrix},\]
where \(L_{1}^{-1}=\begin{bmatrix}\frac{1}{n-l}&\frac{1}{n-l}&\cdots&\frac{1}{n-l}\\ \frac{1}{n-l}&\frac{(n-l)+1}{n-l}&\cdots&\frac{(n-l)+1}{n-l}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{1}{n-l}&\frac{(n-l)+1}{n-l}&\cdots&\frac{(l-1)(n-l)+1}{n-l}\end{bmatrix}.\)
Now by the definition of resistance distance we get the required result.
Figure 3: \(D(19,4)\).
From Theorem 3.6, we get the following corollaries.
**Corollary 3.6**.: The Kirchhoff index of \(D(n,l)\) is
\[Kf(D(n,l))=\frac{l^{2}(3n-2l+2)+l(5-9n)+6(n^{2}-1)}{6}.\]
**Corollary 3.7**.: The Kemeny's constant of \(D(n,l)\) is
\[\kappa(D(n,l))=\frac{(n+1)(2l^{2}-1)+2n(n-3l)}{2(n-1)}+\frac{l(5-2l^{2})}{3(n-1 )}.\]
The following theorems describes various graph parameters of \(K_{p_{1}}\circ_{k}K_{p_{2}}\).
**Proposition 3.3**.: The Kirchhoff index of \(K_{p_{1}}\circ_{k}K_{p_{2}}\) is given by
\[\mathcal{K}f(K_{p_{1}}\circ_{k}K_{p_{2}}) =\frac{1}{kp_{1}p_{2}t}\big{(}(p_{1}-k)(p_{2}-k)(k+1)\left(p_{1}(k +t)+p_{2}+\frac{p_{2}k}{k+1}+\frac{p_{2}k(p_{1}k+p_{1}t-t)}{(p_{2}-k)(k+1)}\right)\] \[+kp_{1}(p_{2}-k)\left(p_{2}(t+2k)-t(k+1)+kp_{2}(k-1)\right)\big{)}.\]
Proof.: By definition \(\mathcal{K}f(G_{n})=\sum_{i<j}r_{ij}(G_{n}).\) Then,
\[\mathcal{K}f(K_{p_{1}}\circ_{k}K_{p_{2}}) =\sum_{v_{i},v_{j}\in T}\frac{2}{t}+\sum_{v_{i}\in T,v_{j}\in V(K_ {p_{1}}(T)}\frac{k(p_{1}+t)+(p_{2}-k)}{p_{1}kt}\] \[\quad+\sum_{v_{i}\in T,v_{j}\in V(K_{p_{2}}\setminus T)}\frac{(k+ 1)(p_{1}-k)+2p_{2}k}{p_{2}kt}+\sum_{v_{i},v_{j}\in V(K_{p_{1}}\setminus T)} \frac{2}{p_{1}}\] \[\quad+\sum_{v_{i}\in V(K_{p_{1}}\setminus T),v_{j}\in V(K_{p_{2} }\setminus T)}\frac{(p_{1}+p_{2})(k+1)}{kp_{1}p_{2}}+\sum_{v_{i},v_{j}\in V(K _{p_{2}}\setminus T)}\frac{2}{p_{2}}.\] \[=\frac{1}{kp_{1}p_{2}t}\big{(}(p_{1}-k)(p_{2}-k)(k+1)\left(p_{1}( k+t)+p_{2}+\frac{p_{2}k}{k+1}+\frac{p_{2}k(p_{1}k+p_{1}t-t)}{(p_{2}-k)(k+1)}\right)\] \[\quad+kp_{1}(p_{2}-k)\left(p_{2}(t+2k)-t(k+1)+kp_{2}(k-1)\right) \big{)}.\]
**Proposition 3.4**.: The Kemeny's constant of \(K_{p_{1}}\circ_{k}K_{p_{2}}\) is given by
\[\kappa(K_{p_{1}}\circ_{k}K_{p_{2}})= \frac{1}{2mp_{1}p_{2}}\left(\frac{t-1}{t}(p_{1}(t-1)(p_{2}k(k-1)+ (p_{2}-1)(p_{2}-k)((k+1)(p_{1}-k)+2p_{2}k))\right.\right.\] \[\quad+p_{2}(p_{1}-1)(p_{1}-k)(2p_{1}k+(p_{2}-k)(k+1))+p_{1}(p_{2} -k)(p_{2}-1)^{2}(p_{2}-k-1)\] \[\quad\left.\left.+\frac{(p_{1}-k)(p_{1}-1)(p_{2}k(p_{1}-k-1)(p_{1 }-1)+(p_{2}-k)(p_{2}-1)(p_{1}+p_{2})(k+1))}{k}\right).\]
**Proposition 3.5**.: The additive degree-Kirchhoff index of \(K_{p_{1}}\circ_{k}K_{p_{2}}\) is given by
\[R^{+}(K_{p_{1}}\circ_{k}K_{p_{2}})= \frac{2p_{1}k(k-1)(t-1)+k(p_{1}-k)(2t-2)\left((2t)+p_{2}-k\right) }{p_{1}t}+\frac{(p_{1}-k)(p_{2}-k)(p_{1}+p_{2}-2)(p_{1}+p_{2})(k+1)}{kp_{1}p_{ 2}}\] \[\quad+\frac{(p_{2}-k)(p_{1}+2p_{2}-k-2)\left((k+1)(p_{1}-k)+2p_{2 }k\right)}{p_{2}t}\] \[\quad+\frac{2p_{2}(p_{1}-k)(p_{1}-k-1)(p_{1}-1)+2p_{1}(p_{2}-k)( p_{2}-k-1)(p_{2}-1)}{p_{1}p_{2}}.\]
**Proposition 3.6**.: The multiplicative degree-Kirchhoff index of \(K_{p_{1}}\circ_{k}K_{p_{2}}\) is given by
\[R^{*}(K_{p_{1}}\circ_{k}K_{p_{2}})= \frac{1}{p_{1}p_{2}}\left(\frac{t-1}{t}(p_{1}(t-1)(p_{2}k(k-1)+( p_{2}-1)(p_{2}-k)((k+1)(p_{1}-k)+2p_{2}k))\right.\] \[\quad+p_{2}(p_{1}-1)(p_{1}-k)(2p_{1}k+(p_{2}-k)(k+1))+p_{1}(p_{2} -k)(p_{2}-1)^{2}(p_{2}-k-1)\] \[\quad\left.+\frac{(p_{1}-k)(p_{1}-1)(p_{2}k(p_{1}-k-1)(p_{1}-1)+( p_{2}-k)(p_{2}-1)(p_{1}+p_{2})(k+1))}{k}\right).\]
**Proposition 3.7**.: The mixed degree-Kirchhoff index of \(K_{p_{1}}\circ_{k}K_{p_{2}}\) is given by
\[\hat{R}(K_{p_{1}}\circ_{k}K_{p_{2}})= \frac{2k(k-1)}{t}+\frac{(p_{1}-k)((t-1)^{2}+(p_{1}-1)^{2})(k(2t)+(p _{2}-k))}{p_{1}(p_{1}-1)t(t-1)}\] \[+\frac{(p_{2}-k)((t-1)^{2}+(p_{2}-1)^{2})((k+1)(p_{1}-k)+2p_{2}k)} {p_{2}(p_{2}-1)t(t-1)}\] \[+\frac{2(p_{1}-k)(p_{1}-k-1)}{p_{1}}+\frac{2(p_{2}-k)(p_{2}-k-1)}{ p_{2}}\] \[+\frac{(p_{1}-k)(p_{2}-k)((p_{1}-1)^{2}+(p_{2}-1)^{2})(p_{1}+p_{2} )(k+1)}{kp_{1}p_{2}(p_{1}-1)(p_{2}-1)}.\]
In general, it is difficult to find the resistance energy of graphs. The following table gives the resistance energy of \(K_{p_{1}}\circ_{k}K_{p_{2}}\), for different values of \(p_{1},p_{2}\) and \(k\).
## 4 Conclusion
This article explores the concept of resistance distance in the \(k\)-coalescence of complete graphs. These results enable us to determine several graph parameters, including Kemeny's constant, Kirchhoff index, additive degree-Kirchhoff index, multiplicative degree-Kirchhoff index, and mixed degree-Kirchhoff index of \(k\)-coalescence of complete graphs. In addition, the resistance distance in the \(k\)-coalescence of a complete graph with particular graphs are obtained. Furthermore, the article applies the findings to determine the resistance distance of specific graphs like the vertex coalescence of a complete bipartite graph with a complete graph, a complete bipartite graph with a star graph, the windmill graphs, the pineapple graph, etc.
## 5 Declarations
On behalf of all authors, the corresponding author states that there is no conflict of interest.
|
2303.01256 | Choosing Public Datasets for Private Machine Learning via Gradient
Subspace Distance | Differentially private stochastic gradient descent privatizes model training
by injecting noise into each iteration, where the noise magnitude increases
with the number of model parameters. Recent works suggest that we can reduce
the noise by leveraging public data for private machine learning, by projecting
gradients onto a subspace prescribed by the public data. However, given a
choice of public datasets, it is not a priori clear which one may be most
appropriate for the private task. We give an algorithm for selecting a public
dataset by measuring a low-dimensional subspace distance between gradients of
the public and private examples. We provide theoretical analysis demonstrating
that the excess risk scales with this subspace distance. This distance is easy
to compute and robust to modifications in the setting. Empirical evaluation
shows that trained model accuracy is monotone in this distance. | Xin Gu, Gautam Kamath, Zhiwei Steven Wu | 2023-03-02T13:36:28Z | http://arxiv.org/abs/2303.01256v1 | # Choosing Public Datasets for Private Machine Learning via Gradient Subspace Distance+
###### Abstract
Differentially private stochastic gradient descent privatizes model training by injecting noise into each iteration, where the noise magnitude increases with the number of model parameters. Recent works suggest that we can reduce the noise by leveraging public data for private machine learning, by projecting gradients onto a subspace prescribed by the public data. However, given a choice of public datasets, it is not a priori clear which one may be most appropriate for the private task. We give an algorithm for selecting a public dataset by measuring a low-dimensional subspace distance between gradients of the public and private examples. We provide theoretical analysis demonstrating that the excess risk scales with this subspace distance. This distance is easy to compute and robust to modifications in the setting. Empirical evaluation shows that trained model accuracy is monotone in this distance.
## 1 Introduction
Recent work has shown that machine learning (ML) models tend to memorize components of their training data [14], and in fact attackers can often recover training samples from published models through carefully designed attacks [13, 15]. This is a critical privacy issue when models are trained on private data. A popular approach to address this issue is to adopt _Differential Privacy_ (DP) [12] as a rigorous privacy criterion that provably limits the amount of information attackers can infer about any single training point. _Differentially private stochastic gradient descent_ (DPSGD) [1, 1, 2] is one of the most commonly used methods to train a ML model (differentially) privately. It makes two main modifications to vanilla SGD: 1) clipping per-sample gradients to ensure a bound on their \(\ell_{2}\) norms; 2) adding Gaussian noise to the gradient.
One downside of adopting DP in ML is that we need to sacrifice utility of the trained model to guarantee privacy. Specifically, DPSGD noises the gradient at each step, with noise drawn from a spherical Gaussian distribution, \(\mathcal{N}\left(\mathbf{0},\sigma^{2}\mathbb{I}_{p\times p}\right)\), where \(p\) is the model dimension (i.e., the number of model parameters) and the variance \(\sigma^{2}\) scales the noise. In order to bound the privacy leakage, the magnitude of noise introduced in each step must scale with the square root of the number of parameters \(p\). Consequently, for many large models, the noise introduced may overwhelm the signal contributed by the original gradients, significantly diminishing the utility.
Several works have proposed methods to improve the utility of private machine learning [21, 1, 1, 22, 1, 16]. One fruitful direction uses _public_ data, i.e., data that is not subject to any privacy constraint. There are primarily two types of approaches that incorporate public data in private training. The first involves _transfer learning_, where we pretrain the model on a public dataset and then (privately) finetune the model on a sensitive dataset for our target task [1, 1, 1, 22, 1]. Another approach is based on _pre-conditioning_, which exploits the empirical observation that during training, the stochastic gradients (approximately) stay in a lower-dimensional subspace of the \(p\)-dimensional gradient space. Consequently, some works find this subspace using the public data, and then project the sensitive gradients to this (public) subspace before privatization [21, 1, 22, 16]. This reduces the magnitude of the introduced noise and generally improves utility over DPSGD without supplementary data.
However, this raises a natural question: _which public dataset should one select for a particular private task?_ It may be ideal if a fraction of the private dataset is public, as using it would incur minimal penalty due to distribution shift. But otherwise, it is unclear when one should prefer one public dataset over another. Our main contribution is an algorithm that quantifies a public dataset's fitness for use in private ML.
We demonstrate its efficacy in both transfer learning and pre-conditioning settings. To summarize our contributions:
1. **We introduce Gradient Subspace Distance (GSD), an algorithm to quantify the difference between private and public datasets.** GSD is an easily computable quantity that measures the distance between two datasets.
2. **We find GSD is useful for selecting public datasets in both pre-conditioning and transfer learning settings.** As a representative example, Table 1 shows the utility of a privately trained model using a public dataset increases monotonously as GSD decreases. Our theoretical analysis demonstrates that the excess risk of Gradient Embedding Perturbation (GEP) (a private training algorithm that leverages public data for gradient pre-conditioning) scales with the GSD.
3. **We show that GSD is _transferable_. The ordering of GSD for several choices of public dataset remains fixed across architectures, both simple (e.g., 2-layer CNN) and complex. Using these simple architectures as a proxy, we can efficiently compute GSDs which are still useful for privately training large models.
\begin{table}
\begin{tabular}{l l l l} \hline \hline AUC & Private Dataset & Public Dataset & Distance \\ \hline
**69.02\%** & & ChestX-ray14 & **0.15** \\ \cline{2-4}. & 66.62\% & & KagChest & 0.36 \\ \hline
64.90\% & & - & - \\ \cline{2-4} 48.80\% & & CIFAR-100 & 0.55 \\ \hline \hline \end{tabular}
\end{table}
Table 1: GEP evaluation AUC and corresponding distance in descending order. We use the _same_ model setting for private training and distance computation. ”-” means DP-SGD training without using any public data.
Related Work
Transfer Learning.In the differentially private setting, it is now common to pre-train a model on public data, and then privately fine-tune on private data. This can result in comparable utility as in the non-private setting, evidenced for both language models [21, 22, 23] and vision tasks [16, 15, 17, 18]. In many cases, due to computational requirements, it may be challenging to pre-train a large model on a public dataset. Instead, many practitioners will turn to pre-trained weights, which obviate the computational burden, but give less flexibility to choose an appropriate training dataset. As a result, we use second-phase pre-training, in which we perform a second phase of pre-training with a modestly-sized public dataset. This has been proven to be useful in non-private setting [14].
Pre-conditioning.Empirical evidence and theoretical analysis indicate that while training deep learning models, gradients tend to live in a lower-dimensional subspace [15, 16, 17, 18, 19, 20]. This has led to methods for private ML which project the sensitive gradients onto a subspace estimated from the public gradients. By using a small amount of i.i.d. public data, [21] demonstrate that this approach can improve the accuracy of differentially private stochastic gradient descent in high-privacy regimes and achieve a dimension-independent error rate. Similarly, [21] proposed GEP, a method that utilizes public data to identify the most useful information carried by gradients, and then splits and clips them separately.
Domain Adaptation.We aim to quantify the similarity between private and public datasets. One related area of research is distribution shift, or domain adaptation [1, 14, 15, 16, 17, 18, 19, 20, 21]. At a high level, research in this area examines the problem of when the distributions of test and training data differ, which aligns with our goals. However, most work in this area focuses on reducing the gap between in- and out-of-distribution test errors, where target data is used repeatedly for accuracy improvement. Most of the work along this line assumes that the target data is also public or doesn't consider privacy, and is thus inappropriate for the private learning setting. To the best of our knowledge, the only work with a similar focus to us is Task2Vec [2], which uses the Fisher information matrix to represent a dataset as a vector, allowing for the measurement of a distance between two datasets. However, it is not suitable for private learning tasks as our empirical evaluation shows that Task2Vec fails to accurately rank the utility of public datasets.
## 3 Preliminaries
Notation.We use \(p\) to denote the model dimension, i.e., the number of parameters in the model. \(k\) is a parameter we will use to denote the dimension of the lower-dimensional space we choose. \(m\) refers to the number of examples in a batch. We use superscripts and subscripts interchangeably to denote private or public data, like \(x_{priv}\), \(V^{pub}\).
Definition 1 (Differential Privacy [16])._A randomized algorithm \(\mathcal{A}\) is \((\varepsilon,\delta)\)-differential private if for any pair of datasets D, D' that differ in exactly one data point and for all subsets \(E\) of outputs, we have:_
\[\Pr[\mathcal{A}(D)\in E]\leq e^{\varepsilon}\Pr[\mathcal{A}(D^{\prime})\in E]+\delta.\]
Definition 2 (Principal Angles [19])._Let \(V_{1}\) and \(V_{2}\) be two orthonormal matrices of \(\mathbb{R}^{p\times k}\). The _principal angles_\(0\leq\theta_{1}\leq\cdots\leq\theta_{k}\leq\pi/2\) between two subspaces span(\(V_{1}\)) and span(\(V_{2}\)), are defined recursively by_
\[\cos\theta_{k}=\max_{\mathbf{u}_{\mathbf{k}}\in span(V_{1})}\max_{\mathbf{v}_{ \mathbf{k}}\in span(V_{2})}\mathbf{u}_{\mathbf{k}}^{\prime}\mathbf{v}_{ \mathbf{k}},\ \ \text{subject to}\]
\[\mathbf{u}_{\mathbf{k}}^{\prime}\mathbf{u}_{\mathbf{k}}=1,\mathbf{v}_{ \mathbf{k}}^{\prime}\mathbf{v}_{\mathbf{k}}=1,\mathbf{u}_{\mathbf{k}}^{\prime }\mathbf{u}_{\mathbf{i}}=0,\mathbf{v}_{\mathbf{k}}^{\prime}\mathbf{v}_{ \mathbf{i}}=0,i=1,...,k-1\]
That is, the first principal angle \(\theta_{1}\) is the smallest angle between all pairs of unit vectors over two subspaces, the second \(\theta_{2}\) is the second smallest angle, and the rest are similarly defined.
Definition 3 (Projection Metric [19, 18])._The _projection metric_ between two \(k\)-dimensional subspaces \(V_{1}\), \(V_{2}\) is defined as:
\[d\left(V_{1},V_{2}\right)=\left(\sum_{i=1}^{k}\sin^{2}\theta_{i}\right)^{1/2} =\left(k-\sum_{i=1}^{k}\cos^{2}\theta_{i}\right)^{1/2}\]
where the \(\theta_{i}\)'s are the principal angles between \(V_{1}\) and \(V_{2}\).
Definition 4 ((\(\rho,\eta\))-close)._A randomized algorithm \(\mathcal{A}(.)\) that outputs an approximate distance between to subspaces span(\(V_{1}\)) and span(\(V_{2}\)), \(\hat{d}\left(V_{1},V_{2}\right)\), is an \((\rho,\eta)\)-close approximation to the true subspace distance \(d\left(V_{1},V_{2}\right)\), if they satisfy:
\[\Pr\left[\left|\hat{d}\left(V_{1},V_{2}\right)-d\left(V_{1},V_{2}\right) \right|\leq\rho\right]\geq 1-\eta.\]
Gradient Embedding Perturbation (GEP).Our theoretical analysis is based on GEP [17], a private learning algorithm that leverages public data for gradient pre-conditioning. Here we briefly introduce their algorithm. GEP involves three steps: 1) it computes an orthonormal basis for the lower-dimensional subspace; 2) it projects the private gradients to the subspace derived from step 1, thus dividing the private gradients into two parts: embedding gradients that contain most of the information carried by the gradient, and the remainder are called residual gradients; 3) it clips two parts of the gradients separately and perturbs them to achieve differential privacy. The full algorithm is in Appendix A.
## 4 Gradient Subspace Distance
Suppose we have a task that consists of a private dataset \(X^{priv}\) and a differentially private deep learning algorithm \(A\) that can leverage public data to improve model utility. We have a collection of potential choices of public dataset \([X_{1}^{pub},X_{2}^{pub},\cdots]\). We want a metric that can prescribe which public dataset to use with algorithm \(A\) on the private task \(X^{priv}\), in order to achieve the highest utility.
We present the pseudo-code of our algorithm, Gradient Subspace Distance (GSD) in Algorithm 1. At a high level, our method involves the following two steps: finding the gradient subspace of the public and private data examples, and computing their gradient subspace distance. The algorithm uses the same model \(A\) and a batch of randomly labeled data examples from private and public datasets. Following standard DPSGD, the algorithm will first compute and store per-example gradients from each data example, that is \(G_{priv},G_{pub}\in\mathbb{R}^{m\times p}\). Then it computes the
top-\(k\) singular vectors of both the private and public gradient matrix by performing singular value decomposition (SVD). Finally we use projection metric to derive the subspace distance \(\mathbf{d}\) by taking the right singular vectors \(V_{k}^{pub},V_{k}^{priv}\) from the previous step.
GSD is naturally suited to the aforementioned pre-conditioning methods. In each iteration, these methods project the private gradients to a low-dimensional subspace, which ideally contains most of the signal of the gradients.1 Since repeatedly selecting the top subspace of the gradients themselves is not a privacy-preserving operation, we instead choose a public dataset to use as a proxy. Thus intuitively, a public dataset with a "similar top subspace" should be suitable. This is what GSD tries to capture, and the best dataset should be the one with minimum GSD.
Footnote 1: In Appendix C, we empirically reconfirm that using the top subspace of the gradients themselves contains most of their signal.
However, following this intuition only gets us so far: taking it literally would measure distances between the public and private datasets at each step throughout the training process, an impractical procedure that would introduce significant overhead. Remarkably, we instead find that a simple alternative is effective: compute the distance only once at initialization (Section 4.2). This requires only a single minibatch of each dataset, and as we show in our experiments, is surprisingly robust to changes in model architecture (Section 6.3). Most importantly, we show that it is also effective for _transfer learning_ settings (Section 6.2), where subspace projections are not used at all, thus demonstrating that GSD more generally captures dataset similarity and fitness-for-use of public datasets.
Finally, we note that, as stated, Algorithm 1 is not differentially private, as it interacts with the unprotected gradients of the private data. We discuss differentially private methods for GSD computation in Section 4.3. Nonetheless, we expect non-private computation of GSD to have minimal privacy implications, comparable to the cost of non-private hyperparameter selection, which is usually disregarded in private ML and considered to be minimal [11, 12].
### Excess Risk Scales with GSD
In this section, we theoretically prove that the excess risk of the GEP algorithm [13] is bounded by the Gradient Subspace Distance (GSD) under standard statistical learning assumptions. Recall that GEP is a canonical example of a private learning algorithm that employs public data to precondition the gradients, and is described in Appendix A. We first show that the reconstruction error \(\|\mathbf{G}_{priv}-\mathbf{G}_{priv}V_{k}^{pub}V_{k}^{pub\top}\|_{2}\) is bounded by GSD. Then we show that the convergence bound
of excess risk is determined by the reconstruction error.
Lemma 1 indicates that the reconstruction error of the private gradient matrix using public examples at step \(t\) is bounded by GSD, the subspace distance between the public and private gradient subspaces. A larger GSD may yield a larger reconstruction error at each step.
**Lemma 1**.: _For GEP, let \(\mathbf{G}_{priv}\), \(V_{k}^{pub}\), \(V_{k}^{priv}\) be the gradient matrix and top-\(k\) gradient subspace from public examples at step \(t\), respectively. Then we have the spectral norm of reconstruction error_
\[\left\|\mathbf{R}\right\|_{2}\leq\sqrt{2}s_{1,t}\mathbf{GSD}(V_{k}^{priv},V_ {k}^{pub})+s_{k+1,t} \tag{1}\]
_where \(\mathbf{R}=\mathbf{G}_{priv}-\mathbf{G}_{priv}V_{k}^{pub}V_{k}^{pub\top}\) is the reconstruction error of private gradient matrix \(\mathbf{G}_{priv}\) using public examples, \(s_{1,t}\geq...\geq s_{k,t}\geq...\) are the singular values of \(\mathbf{G}_{priv}\), \(\mathbf{GSD}(V_{k}^{priv},V_{k}^{pub})\) is the gradient subspace distance given by our algorithm._
Proof.: We have
\[\mathbf{R} =\mathbf{G}_{priv}-\mathbf{G}_{priv}\Pi_{k}^{pub} \tag{2}\] \[=\mathbf{G}_{priv}-\mathbf{G}_{priv}\Pi_{k}^{priv}+\mathbf{G}_{ priv}\Pi_{k}^{priv}-\mathbf{G}_{priv}\Pi_{k}^{pub}\] \[\Rightarrow \left\|\mathbf{R}\right\|_{2}-\left\|\mathbf{G}_{priv}(\Pi_{k}^{ pub}-\Pi_{k}^{priv})\right\|_{2}\leq\left\|\mathbf{G}_{priv}\left(\mathbb{I}-\Pi_{k}^{priv} \right)\right\|_{2}\] (3) \[\Rightarrow \left\|\mathbf{R}\right\|_{2}\leq\underbrace{\left\|\mathbf{G}_{ priv}(\Pi_{k}^{pub}-\Pi_{k}^{priv})\right\|_{2}}_{D_{1}}+\underbrace{\left\| \mathbf{G}_{priv}\left(\mathbb{I}-\Pi_{k}^{priv}\right)\right\|_{2}}_{D_{2}} \tag{4}\]
where \(\Pi_{k}^{pub}=V_{k}^{pub}V_{k}^{pub\top}\) denotes the orthogal projection to the subspace of \(\mathrm{span}(V_{k}^{pub})\), \(\Pi_{k}^{priv}=V_{k}^{priv}V_{k}^{priv\top}\) denotes the orthogonal projection to the subspace of \(\mathrm{span}(V_{k}^{priv})\).
For \(D_{2}\), recall that the Eckart-Young-Mirsky theorem [1] shows that the best rank-\(k\) approximation of \(\mathbf{G}_{priv}\) is given by its top-\(k\) reconstruction using SVD. Therefore, we have
\[D_{2} =\left\|\mathbf{G}_{priv}\left(\mathbb{I}-\Pi_{k}^{priv}\right) \right\|_{2} \tag{5}\] \[=\left\|\sum_{i=1}^{p}s_{i}u_{i}v_{i}^{\top}-\sum_{i=1}^{k}s_{i}u _{i}v_{i}^{\top}\right\|_{2}\] \[=\left\|\sum_{i=k+1}^{p}s_{i}u_{i}v_{i}^{\top}\right\|_{2}\] \[=s_{k+1}\]
For \(D_{1}\), the definition of projection metric (Definition 3) shows that
\[\mathbf{GSD}^{2}(V_{k}^{priv},V_{k}^{pub}) =k-(\cos^{2}\theta_{1}+...+\cos^{2}\theta_{k}) \tag{6}\] \[\overset{(a)}{=}k-\mathrm{Tr}\left(V_{k}^{priv\top}V_{k}^{priv}V_ {k}^{pub\top}V_{k}^{pub}\right)\] \[\overset{(b)}{=}\frac{1}{2}\left\|\Pi_{k}^{pub}-\Pi_{k}^{priv} \right\|_{F}^{2}\]
(a) and (b) hold according to Equation 5.4 in [1].
Therefore, we have
\[D_{1} =\left\|\mathbf{G}_{priv}(\Pi_{k}^{pub}-\Pi_{k}^{priv})\right\|_{2} \tag{7}\] \[\leq\left\|\mathbf{G}_{priv}\right\|_{2}\left\|\Pi_{k}^{pub}-\Pi_{k }^{priv}\right\|_{2}\] \[\leq s_{1}\left\|\Pi_{k}^{pub}-\Pi_{k}^{priv}\right\|_{F}\] \[=\sqrt{2}s_{1}\mathbf{GSD}(V_{k}^{priv},V_{k}^{pub})\]
Combining \(D_{1}\) and \(D_{2}\), we have
\[\left\|\mathbf{R}\right\|_{2} \leq\left\|\mathbf{G}_{priv}(\Pi_{k}^{pub}-\Pi_{k}^{priv})\right\| _{2}+\left\|\mathbf{G}_{priv}\left(\mathbb{I}-\Pi_{k}^{priv}\right)\right\|_{2} \tag{8}\] \[\leq\sqrt{2}s_{1,t}\mathbf{GSD}(V_{k}^{priv},V_{k}^{pub})+s_{k+1,t}\]
Thus we know that GSD bounds the reconstruction error at step \(t\).
Lemma 1 shows that the excess risk is affected by the GSD at each step. A larger GSD will result in a larger excess risk, which is often evaluated by the error rate in the experiments.
**Theorem 1**.: _Assume that the loss \(L(\mathbf{w})=\frac{1}{n}\sum_{i=1}^{n}\ell\left(\mathbf{w},z_{i}\right)\) is 1-Lipschitz, convex, and \(\beta\)-smooth. Let \(\mathbf{w}^{*}=\operatorname*{argmin}_{w\in\mathcal{W}}L(\mathbf{w})\). The excess risk of GEP obeys_
\[\mathbb{E}[L(\overline{\mathbf{w}})]-L\left(\mathbf{w}^{*}\right)\leq O\left( \frac{\sqrt{k\log(1/\delta)}}{n\varepsilon}\right)+O\left(\frac{\sqrt{p\log(1 /\delta)}}{n\varepsilon}\overline{\mathbf{d}}\right) \tag{9}\]
_where GEP is \((\varepsilon,\delta)\)-DP (see Appendix A). Here we set \(\eta=\frac{1}{\beta},T=\frac{n\beta\varepsilon}{\sqrt{p}}\), \(\overline{\mathbf{w}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{w}_{t}\), \(\overline{\mathbf{d}}=\frac{1}{T}\sum_{t=1}^{T}d_{t}^{2}\), \(d_{t}=\sqrt{2}s_{1,t}\mathbf{GSD}+s_{k+1,t}\), and **GSD**, \(s\) are the gradient subspace distance and singular values of the gradient matrix at step \(t\)._
Proof.: Theorem 3.3 in [20] (see Appendix A) shows that the excess risk of GEP obeys
\[\mathbb{E}[L(\overline{\mathbf{w}})]-L\left(\mathbf{w}^{*}\right)\leq O\left( \frac{\sqrt{k\log(1/\delta)}}{n\varepsilon}\right)+O\left(\frac{\overline{r} \sqrt{p\log(1/\delta)}}{n\varepsilon}\right) \tag{10}\]
where \(\overline{r}=\frac{1}{T}\sum_{t=0}^{T-1}r_{t}^{2}\) and \(r_{t}=\|\mathbf{G}_{priv}-\mathbf{G}_{priv}V_{k}^{pub}V_{k}^{pub\top}\|_{2}\) is the reconstruction error at step \(t\).
From Lemma 1 we know that \(r_{t}\leq d_{t}\) at each step \(t\), thus completing the proof.
### Ordering of GSD is Preserved over Training
Theorem 1 shows that the excess risk, measured by the error on the test set, can be predicted by GSD, assuming that we have _fresh_ private examples at each step. However, this will cause significant privacy leakage and computational overhead if we repeatedly compute this distance using the whole private dataset.
We empirically measure the GSD for each of the public datasets throughout the training process, as shown in Figure 1. This demonstrates that the relative ordering of the distances is preserved at almost all times. As a result, it suffices to compute each of the GSDs only once at initialization, requiring only one batch of examples from the private dataset and one pass through the model, incurring minimal data exposure and computational overhead. We can then select the dataset with the smallest GSD for use as our public dataset.
### Private Distance Measurement
While Algorithm 1 has relatively low data exposure (requiring only a single batch of private examples), it is not differentially private. In this section, we give a general algorithm that computes GSD differential-privately: Differentially Private Gradient Subspace Distance (DP-GSD, Algorithm 2). As GSD needs top-\(k\) singular vectors from private examples, we derive these singular vectors in a differentially private manner, and the rest of the algorithm remains DP because of post-processing.
```
1:\(m\) private examples \(x_{priv}\), \(m\) public examples \(x_{pub}\), loss function \(\mathcal{L}\), model weights \(\mathbf{w}_{0}\), dimension \(k\), privacy parameter \(\varepsilon,\delta\), clip norm \(c\)
2:Distance between two image datasets \(\boldsymbol{d}\)
3:// Compute per-sample gradient matrix for private and public examples
4:\(G_{priv}=\nabla\mathcal{L}(\mathbf{w}_{0},x_{priv})\)
5:\(G_{pub}=\nabla\mathcal{L}(\mathbf{w}_{0},x_{pub})\)
6:// Privately compute top-\(k\) subspace of the private gradient matrix
7:Clip per-row: \(G_{priv}=\mathbf{Clip}(G_{priv},c)\)
8:Compute \(V_{k}^{priv}\leftarrow\mathbf{DPPCA}(G_{priv},k,\varepsilon,\delta)\)
9:// Compute top-\(k\) subspace of the public gradient matrix
10:\(U^{pub},S^{pub},V^{pub}\leftarrow\mathbf{SVD}(G_{pub})\)
11:// Compute the distance between two subspaces
12:\(\boldsymbol{d}=\mathbf{ProjectionMetric}(V_{k}^{priv},V_{k}^{pub})\)
```
**Algorithm 2** Differentially Private Gradient Subspace Distance (DP-GSD)
At a high level, DP-GSD makes one adaptation to GSD: we compute top-\(k\) subspace of the private per-sample gradient matrix in a differentially private manner. **DPPCA** in line 6 of Algorithm 2 can be any Differentially Private Principle Component Analysis (DPPCA), e.g., input perturbation [11], subspace perturbation [10], exponential mechanism [11] and stochastic methods [12]. We give a theoretical analysis of the privacy and utility guarantee of DP-GSD
Figure 1: The trend of distance during the process of training a ResNet20 model on CIFAR-10 using vanilla SGD. We follow a standard SGD training procedure and compute the distance between the current private batch and public examples at each iteration.
based on the techniques of [13], given in Algorithm 3.
```
0:\(m\times p\) data matrix \(X\), dimension \(k\), privacy parameter \(\varepsilon\)
0:\(\hat{V}_{k}\): Top-\(k\) subspace of \(X\)
1: Set \(A=\frac{1}{m}X^{\top}X\)
2: Sample \(\hat{V}_{k}=\mathbf{BMF}\left(\frac{m\varepsilon}{2}A\right)\)
```
**Algorithm 3** Differentially Private Principle Component Analysis (DPPCA)
To achieve DP, DPPCA randomly samples a \(k\)-dimensional distribution from the matrix Bing-ham distribution, which has the following density function:
\[f(V|A,k,p)=\frac{1}{{}_{1}F_{1}\left(\frac{1}{2}k,\frac{1}{2}p,A\right)}\exp \left(\operatorname{tr}\left(V^{T}AV\right)\right) \tag{11}\]
where \(V\) is the \(p\times k\) subspace and \({}_{1}F_{1}\left(\frac{1}{2}k,\frac{1}{2}p,A\right)\) is a normalization factor. We use \(\mathbf{BMF}(V)\) in Algorithm 3 to denote this distribution. Thus, we have the following privacy and utility guarantees (proofs in Appendix D):
**Theorem 2**.: _(Privacy Guarantee) Let Algorithm 3 be an implementation of \(\mathbf{DPPCA}\) in DP-GSD, then DP-GSD is \(\varepsilon/c^{2}\)-differentially priate._
**Theorem 3**.: _(Utility Guarantee) Let Algorithm 3 be an implementation of \(\mathbf{DPPCA}\) in DP-GSD, then for \(k=1\), the distance given by DP-GSD, \(\hat{d}(V_{k}^{priv},V_{k}^{pub})\) is \((\rho,\eta)\)-close to the distance given by GSD, \(d(V_{k}^{priv},V_{k}^{pub})\), if we have_
\[m>\frac{pc^{2}}{\varepsilon\alpha(1-\sqrt{1-\rho^{2}})}\left(4\frac{\log(1/ \eta)}{p}+2\log\frac{8\lambda_{1}}{\rho^{2}\alpha}\right) \tag{12}\]
_where \(\lambda_{1}\) is the top eigenvalue, \(\alpha=\lambda_{1}-\lambda_{2}\) is the eigen-gap, \(p\) is the model dimension, \(c\) is clip norm and \(\varepsilon\) is the privacy parameter._
## 5 Second-Phase Pre-training
The standard method of private transfer learning consists of two phases: pre-training on a public dataset and fine-tuning on a private task. However, with large training sets and models, the computational burden of pre-training is prohibitive for most practitioners. Consequently, it is common to instead use pre-trained weights (obtained through pre-training on a fixed dataset) rather than run pre-training on a public dataset of choice. While this is computationally convenient, it limits the choice of pre-training datasets, and thus limits the accuracy in downstream fine-tuning.
Figure 2: Second-phase pre-training pipeline. We first choose a large model and download its pre-trained weights. Then we use an appropriate parameter-efficient fine-tuning mechanism and get trainable parameters \(\theta\). We train \(\theta\) on a public dataset and get \(\theta_{0}\), called _second-phase pre-training_. Finally, we pass \(\theta_{0}\) as the initial weights for private fine-tuning on the target task.
To alleviate this issue, we consider _second-phase pre-training_, in which a set of pre-trained weights are pre-trained on a second public dataset. We can then (privately) fine-tune the model on a sensitive dataset for the downstream task of interest. While this paradigm has previously been considered in the non-private setting [14], to the best of our knowledge, we are the first to explore second phase pre-training in the differentially private setting. Pre-trained models may be significantly out of distribution with respect to the downstream task. Due to the noise introduced, the ability to adapt during fine-tuning may be diminished under differential privacy. Thus, the additional public data may be valuable for reducing the distribution shift. Second-phase pre-training is illustrated in Figure 2.
### Second-Phase Pre-training Step by Step
Now we formally define second-phase pre-training. Suppose \(f(\mathbf{W}_{pt};x)\) where \(\mathbf{W}_{pt}\) denotes pre-trained weights and \(x\) is input. To do second-phase pre-training, we first use a parameter-efficient fine-tuning mechanism and create new trainable parameters. Then we train these parameters on some public datasets. This step can be described by:
\[f_{2pt}\left(\mathbf{W}_{pt},\theta;x_{pub}\right)\rightarrow\theta_{0} \tag{13}\]
where \(x_{pub}\) are the public datasets and \(\theta\) are the new trainable parameters, which are of far lower dimensionality than \(\mathbf{W}\). We get the parameter vector \(\theta_{0}\) after this second-phase pre-training step. Finally, we initialize \(\theta=\theta_{0}\) and privately fine-tune it by running DPSGD on the private task:
\[f_{ft}\left(\mathbf{W}_{pt},\theta_{0};x_{priv}\right)\rightarrow\hat{\theta} \tag{14}\]
Our experiments show that second-phase pre-training can give additional accuracy improvements, even when we only have a small number of public data examples. Furthermore, our distance measurement GSD remains a good indicator for choosing good public data for the second phase pre-training.
### Parameter Efficiency in Private Fine-tuning
In both private and non-private settings, approaches frequently depart from the default of fine-tuning all model weights. For example, one can freeze parameters and fine-tune only specific layers, or introduce new parameters entirely. The resulting number of tunable parameters is almost always chosen to be smaller than during pre-training, leading to _parameter efficient_ methods. This can be beneficial in terms of portability and resource requirements, and the fine-tuned model utility frequently matches or compares favorably to full fine-tuning. Parameter efficiency may be further advantageous in the differentially private setting, as it reduces the magnitude of noise one must introduce (though findings on the downstream impact on utility remain inconclusive). In the settings we consider, we will empirically find that parameter-efficient methods result in better utility.
In general, there are two ways of parameter-efficient fine-tuning. One approach is to select a subset of layers or parameters for fine-tuning. For instance, [15] proposed fine-tuning only the bias terms of a model, which is both computationally and parameter-efficient while retaining similar accuracy compared to other methods. Another study by [13] found that fine-tuning the first and last layers of a model consistently improves its accuracy. The other approach is to freeze all existing parameters and add new trainable parameters during fine-tuning. Some examples include Adapter [11], Compacter [10] and LoRA [12]. [13, 14] demonstrated that private fine-tuning using parameter-efficient methods on large language models can be both computationally efficient and accurate.
Experiments
We explore the predictive power of GSD in both pre-conditioning and transfer learning settings. Specifically, we use GSD to choose a public dataset for GEP [21] (representative of pre-conditioning methods) and second-phase pre-training (representative of transfer learning settings). We use a variety of datasets, including Fashion MNIST [14], SVHN [13], and CIFAR-10 [15], as three canonical vision tasks. Based on the recommendations of [16], we also evaluate our methods on datasets closer to privacy-sensitive applications. In particular, we also work with two medical image dataset: ChestX-ray14 [12] and HAM10000 [17]. A variety of datasets are chosen as public data respectively. We evaluate our algorithms using both CNN-based (e.g., ResNet152 [11], DenseNet121 [10]) and Transformer-based (ViTs [14]) architectures. A variety of parameter-efficient fine-tuning mechanisms are considered, including freezing layers and LoRA [15]. Further details on our experimental setup appear in Appendix B.
We compute GSD non-privately using Algorithm 1, for two reasons. First, as discussed in Section 4, the privacy leakage due to hyperparameter selection is considered to be minimal and often disregarded in private ML. We thus treat selection via GSD similarly. Second, beyond being a tool for public dataset selection, it is interesting in its own right to understand properties of GSD, including how it determines downstream utility across a variety of settings.
Ideally, we would like our distance measure GSD to be model agnostic: it should depend only the two datasets, not on any particular model. This is not the case, since, as stated, our algorithms take gradients of the two datasets on the model of interest. However, we show that GSD is robust to changes in model architecture. We evaluate GSD on a 2-layer CNN (which we call a "probe network"), and show that relative ordering of GSDs is preserved, even though the architecture is far simpler than the models of interest.
We also compare our algorithm with Task2Vec [1], which has a similar goal as GSD. At a high level, Task2Vec represents a task (i.e., dataset) by transforming it into a vector so that the similarity between different datasets can be prescribed by the distance between two vectors. Although experiments show that Task2Vec matches taxonomic relations for datasets like iNaturalist [1], our empirical evaluation shows that it is outperformed by GSD in the differentially private setting.
### Results for Pre-conditioning
We compute GSD and evaluate using GEP for the chosen datasets. The evaluation results are in Table 2. We find that, across several different private and public datasets, final accuracy is monotone as GSD decreases. Unexpectedly, we find that GSD between CIFAR-10 and CIFAR-100 is less than between CIFAR-10 and CIFAR-10. Nonetheless, this is predictive of final performance, where we see using CIFAR-100 as a public dataset is better than CIFAR-10, despite the fact that the private dataset is also CIFAR-10.
For ChestX-ray14, we use AUC instead of prediction accuracy because of high class imbalance. The evaluation results are given in Table 1. Once again, lower GSD implies higher model utility. We see that ChestX-ray14 is the best public dataset, but the second best is another chest x-ray dataset. Furthermore, using a significantly different dataset (CIFAR-100) as the public dataset results in worse utility than using no public dataset at all. Therefore, it may be prudent for a practitioner to compute GSD in order to measure data suitability before proceeding to use it.
### Results for Second-Phase Pre-training
We compute the GSD and evaluate using second-phase pre-training for the chosen datasets. The evaluation results are given in Table 3 and Table 4. As before, we consistently find that smaller GSD leads to larger utility. Like ChestX-ray14, HAM10000 is highly imbalanced, so we again use AUC. However, unlike ChestX-ray14, which contains roughly 100,000 images, HAM10000 is relatively small (only 10000 skin lesion images). We assume that we can only collect 300 images from it and treat them as public. As shown in Table 3, even this small public dataset can boost the utility through second-phase pre-training. While even the worst public dataset does not dramatically hurt utility (in contrast to the pre-conditioning setting), GSD can still be a good indicator of the utility of public datasets. Similar results apply when we evaluate second-phase pre-training and GSD on ChestX-ray14 using ViTs, as shown in Table 4.
### Transferability: Simple Models Remain Predictive
Our empirical evaluation suggests that GSD is transferable over different architectures. In previous experiments, we used the same model architecture for both GSD and the (private) learning algorithm. We find that the relative GSD ordering of different public datasets is robust across
\begin{table}
\begin{tabular}{l l l l} \hline \hline Accuracy & Private Dataset & Public Dataset & Distance \\ \hline
**58.63\%** & & CIFAR-100 & **0.20** \\ \hline
57.64\% & & CIFAR-10 & 0.24 \\ \hline
56.75\% & & SVHN & 0.28 \\ \hline
52.16\% & & - & - \\ \hline
**91.32\%** & & SVHN & **0.25** \\ \hline
89.29\% & & CIFAR-100 & 0.31 \\ \hline
89.08\% & & MNIST-M & 0.39 \\ \hline
83.21\% & & - & - \\ \hline
**85.25\%** & & FMNIST & **0.34** \\ \hline
84.54\% & & FLOWER & 0.43 \\ \hline
83.91\% & & MNIST & 0.50 \\ \hline
79.77\% & & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: GEP evaluation accuracy and corresponding distance in descending order. We use the _same_ model for private training and GSD computation. ”-” means DP-SGD without public data.
\begin{table}
\begin{tabular}{c c c c} \hline \hline AUC & Private Dataset & Public Dataset & Distance \\ \hline
**87.06\%** & & HAM10000 & **0.50** \\ \hline
85.53\% & & KagSkin & 0.68 \\ \cline{2-3}
85.40\% & & - & - \\ \cline{2-3}
84.92\% & & CIFAR-100 & 0.73 \\ \cline{2-3}
84.88\% & & KagChest & 0.73 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Second-Phase evaluation results and corresponding distance in descending order. We use DenseNet121 and choose two convolutional layers and the last layer for second-phase pre-training and private fine-tuning. Detailed settings can be found in Appendix B. We use the _same_ model setting for private training and distance computation. ”-” means DP-SGD training.
different architectures. For example, \(\boldsymbol{GSD}\)(ChestX-ray14, KagChest) is consistently smaller than \(\boldsymbol{GSD}\)(ChestX-ray14, CIFAR-100), no matter what model architecture or parameter-efficient fine-tuning mechanism we choose. Inspired by this finding, we measure GSD with a very simple CNN, which we call a "probe network." It consists of two convolutional layers and one linear layer, with roughly 30,000 parameters. Evaluation results are given in Table 5. They demonstrate that even using a simple CNN, GSD can still derive accurate distance measurement with regard to the utility of public data for private learning tasks. The similarity described by GSD is thus robust against the choice of model architecture.
### Task2Vec May Give Wrong Prediction
Result.We evaluate the similarity between each public-private dataset pair using Task2Vec. Task2Vec gives similarity results of mixed quality: to highlight one notable failure case, we consider the ChestX-ray14 private dataset in Table 6. The closest dataset is itself. However, following this, CIFAR-100 is as close as KagChest, while it is qualitatively very different from ChestX-ray14 and provides low utility when used as the public dataset. In contrast, GSD orders the quality of these datasets in a manner consistent with their quality. We find similar discrepancies for HAM10000,
\begin{table}
\begin{tabular}{c c c c} \hline AUC & Private Dataset & Public Dataset & Distance \\ \hline
**72.99\%** & & ChestX-ray14 & **0.44** \\ \hline
71.86\% & \multirow{2}{*}{ChestX-ray14} & KagChest & 0.59 \\ \cline{2-3}
70.93\% & & - & - \\ \cline{2-3}
70.84\% & & CIFAR-100 & 0.98 \\ \hline \end{tabular}
\end{table}
Table 4: Second-Phase evaluation results and corresponding distance in descending order. We use ViT and LoRA fine-tuning. We use the _same_ model setting for private training and distance computation. Detailed settings can be found in Appendix B. ”-” means DP-SGD training.
\begin{table}
\begin{tabular}{c|c|c c c c} \hline \multicolumn{2}{c|}{Probe} & \multicolumn{2}{c}{ResNet152*} & ResNet152** & DenseNet121 & ViT \\ \hline Task & & Pre-conditioning & Second-phase & Second-phase & Second-phase \\ \hline & \multicolumn{5}{c}{Distance — Accuracy} \\ \hline (Xray, Xray) & **0.39** & **0.15 — 69.02\%** & **0.31 — 67.48\%** & **0.33 — 67.53\%** & **0.44 — 72.99\%** \\ (Xray, Chest) & 0.52 & 0.36 — 66.62\% & 0.34 — 67.27\% & 0.37 — 67.40\% & 0.59 — 71.86\% \\ (Xray, CIFAR) & 0.58 & 0.55 — 48.80\% & 0.39 — 66.57\% & 0.40 — 67.28\% & 0.98 — 70.84\% \\ \hline (HAM, HAM) & **0.42** & - & **0.48 — 86.83\%** & **0.50 — 87.06\%** & **0.50 — 84.94\%** \\ (HAM, Skin) & 0.55 & - & 0.65 — 85.95\% & 0.68 — 85.53\% & 0.76 — 81.23\% \\ (HAM, CIFAR) & 0.67 & - & 0.70 — 85.49\% & 0.73 — 84.92\% & 0.97 — 77.07\% \\ (HAM, Chest) & 0.76 & - & 0.70 — 85.41\% & 0.73 — 84.88\% & 0.93 — 78.65\% \\ \hline \end{tabular}
\end{table}
Table 5: Transferability evaluation results. The left-most column denotes each pair of private-public datasets, e.g. (Xray, Xray) means we take ChestX-ray14 as a private dataset, split part of its testset and take those images as public. Detailed settings can be found in Appendix B. The first row denotes different model architectures. ”Probe” is a simple CNN with around 30,000 parameters. ResNet152* and ResNet152** use different parameter-efficient fine-tuning settings. We use the same model for each Distance-Accuracy. The results show that this distance given by GSD is generally robust across different algorithms (pre-conditioning or second-phase pre-training) and different model architectures (from simple Probe to ViT). A smaller distance indicates that this public dataset is more similar to the private one, thus leveraging this public dataset for private learning will result in better accuracy.
results are given in the Appendix C.
## 7 Conclusion
A recent line of work explores the power of public data in private machine learning. However, evaluating the quality of the data is still a question that must be addressed. We propose a new distance GSD to predict utility of public datasets in private ML. We empirically demonstrate that lower GSD of a public dataset is strongly predictive of higher downstream utility. Our algorithms require minimal data and are computationally efficient. Additionally, transferability of GSD demonstrates that it is generally model agnostic, allowing one to decouple the public dataset selection and private learning. We further demonstrate that GSD is effective for predicting utility in settings involving both pre-conditioning and second-phase pre-training, and that GSD compares favorably to other measures of dataset distance.
|
2310.19521 | Maximum principle preserving time implicit DGSEM for linear scalar
hyperbolic conservation laws | We investigate the properties of the high-order discontinuous Galerkin
spectral element method (DGSEM) with implicit backward-Euler time stepping for
the approximation of hyperbolic linear scalar conservation equation in multiple
space dimensions. We first prove that the DGSEM scheme in one space dimension
preserves a maximum principle for the cell-averaged solution when the time step
is large enough. This property however no longer holds in multiple space
dimensions and we propose to use the flux-corrected transport limiting [Boris
and Book, J. Comput. Phys., 11 (1973)] based on a low-order approximation using
graph viscosity to impose a maximum principle on the cell-averaged solution.
These results allow to use a linear scaling limiter [Zhang and Shu, J. Comput.
Phys., 229 (2010)] in order to impose a maximum principle at nodal values
within elements. Then, we investigate the inversion of the linear systems
resulting from the time implicit discretization at each time step. We prove
that the diagonal blocks are invertible and provide efficient algorithms for
their inversion. Numerical experiments in one and two space dimensions are
presented to illustrate the conclusions of the present analyses. | Riccardo Milani, Florent Renac, Jean Ruel | 2023-10-30T13:21:46Z | http://arxiv.org/abs/2310.19521v1 | # Maximum principle preserving time implicit DGSEM for linear scalar hyperbolic conservation laws
###### Abstract
We investigate the properties of the high-order discontinuous Galerkin spectral element method (DGSEM) with implicit backward-Euler time stepping for the approximation of hyperbolic linear scalar conservation equation in multiple space dimensions. We first prove that the DGSEM scheme in one space dimension preserves a maximum principle for the cell-averaged solution when the time step is large enough. This property however no longer holds in multiple space dimensions and we propose to use the flux-corrected transport limiting [5] based on a low-order approximation using graph viscosity to impose a maximum principle on the cell-averaged solution. These results allow to use a linear scaling limiter [53] in order to impose a maximum principle at nodal values within elements. Then, we investigate the inversion of the linear systems resulting from the time implicit discretization at each time step. We prove that the diagonal blocks are invertible and provide efficient algorithms for their inversion. Numerical experiments in one and two space dimensions are presented to illustrate the conclusions of the present analyses.
keywords: _2000 MSC:_ 65M12, 65M70, 76T10 hyperbolic scalar equations, maximum principle, discontinuous Galerkin method, summation-by-parts, backward Euler
## 1 Introduction
We are here interested in the accurate and robust approximation of the following problem with an hyperbolic scalar linear conservation law in \(d\geq 1\) space dimensions:
\[\partial_{t}u+\nabla\cdot\mathbf{f}(u) =0,\quad\text{in}\;\Omega\times(0,\infty), \tag{1a}\] \[u(\cdot,0) =u_{0}(\cdot),\quad\text{in}\;\Omega, \tag{1b}\]
with \(\Omega\subset\mathbb{R}^{d}\), appropriate boundary conditions on \(\partial\Omega\) (e.g., inflow or outflow conditions, periodic conditions), a smooth flux \(\mathbf{f}\in\mathcal{C}^{1}(\mathbb{R},\mathbb{R}^{d})\) and \(u_{0}\) in \(L^{\infty}(\mathbb{R}^{d},\mathbb{R})\). We here consider linear fluxes with constant coefficients \(\mathbf{f}(u)=\mathbf{c}u\) with a given \(\mathbf{c}\) in \(\mathbb{R}^{d}\). Without loss of generality, we assume \(\mathbf{c}\) in \(\mathbb{R}^{d}_{+}\), a negative component being handled by reverting the corresponding space direction.
Problem (1) has to be understood in the sense of distributions where we look for weak solutions that are piecewise \(\mathcal{C}^{1}\) solutions. Introducing the square entropy \(\eta(u)=\frac{u^{2}}{2}\) and associated entropy flux \(\mathbf{q}(u)=\mathbf{c}\frac{u^{2}}{2}\) pair, solutions to (1a) also satisfy
\[\partial_{t}\eta(u)+\nabla\cdot\mathbf{q}(u)\leq 0,\quad\text{in}\;\Omega\times(0,\infty), \tag{2}\]
in the sense of distributions. For compactly supported solutions, this brings uniqueness and \(L^{2}\) stability:
\[\|u\|_{L^{2}(\Omega,\mathbb{R})}\leq\|u_{0}\|_{L^{2}(\Omega,\mathbb{R})}\quad \forall t\geq 0.\]
Solutions to (1) also satisfy a maximum principle:
\[m\leq u_{0}(x)\leq M\;\text{in}\;\Omega\quad\Rightarrow\quad m\leq u(x,t)\leq M \;\text{in}\;\Omega\times(0,\infty), \tag{3}\]
almost everywhere, which brings \(L^{\infty}\) stability.
We are here interested in the approximation of (1) with a high-order space discretization that satisfies the above properties at the discrete level. We here consider the discontinuous Galerkin spectral element method (DGSEM) based on collocation between interpolation and quadrature points [25] and tensor products of one-dimensional (1D) function bases and quadrature rules. The collocation property of the DGSEM associated to tensor-product evaluations and sum factorization drastically reduces the number of operations in the operators implementing the discretization and makes the DGSEM computationally efficient. Moreover, using diagonal norm summation-by-parts (SBP) operators and the entropy conservative numerical fluxes from Tadmor [43], semi-discrete entropy conservative finite-difference and spectral collocation schemes have been derived in [15; 6] and applied to a large variety of nonlinear conservation laws [18; 4; 12; 37; 36; 39; 49; 33; 53], nonconservative hyperbolic systems and balance laws [29; 38; 10; 2; 48; 47], among others.
Most of the time, these schemes are analyzed in semi-discrete form for which the time derivative is not discretized, or when coupled with explicit in time discretizations. Time explicit integration may however become prohibitive for long time simulations or when looking for stationary solutions due to the strong CFL restriction on the time step which gets smaller as the approximation order of the scheme increases to ensure either linear stability [16; 3; 26], or positivity of the approximate solution [53; 54]. The DGSEM also presents attractive features for implicit time stepping. First, the collocation property reduces the connectivity between degrees of freedom (DOFs) which makes the DGSEM well suited due to a reduced number of entries in the Jacobian matrix of the space residuals. This property has been used in [40] to rewrite the time implicit discretization of the compressible Navier-Stokes equations as a Schur complement problem at the cell level that is then efficiently solved using static condensation. Then, tensor-product bases and quadratures have motivated the derivation of tensor-product based approximations of the diagonal blocks of the Jacobian matrix by Kronecker products [46; 45] of 1D operators using singular value decomposition of a shuffled matrix [31], or a least squares alternatively in each space direction [13].
We here consider and analyze a DGSEM discretization in space associated with a first-order backward-Euler time integration which allows to circumvent the CFL condition for linear stability and makes it well adapted for approximating stationary solutions or solutions containing low characteristic frequency scales. It is however of strong importance to also evaluate to what extent other properties of the exact solution are also satisfied at the discrete level. Positivity preservation for instance is an essential property that is required to prevent the computations from crashing due to floating exceptions during the simulation of many hyperbolic systems. Little is known about the properties of time implicit DGSEM schemes, apart from the entropy stability which holds providing the semi-discrete scheme is entropy stable due to the dissipative character of the backward-Euler time integration. An analysis of a time implicit discontinuous Galerkin (DG) method with Legendre basis functions for the discretization of a 1D linear scalar hyperbolic equation has been performed in [35] and showed that a lower bound on the time step is required for the cell-averaged solution to satisfy a maximum principle at the discrete level. A linear scaling limiter of the DG solution around its cell-average [53] is then used to obtain a maximum principle preserving scheme. Numerical experiments with linear and also nonlinear hyperbolic scalar equations and systems support the conclusion of this analysis. The theoretical proof of this lower bound uses the truncated expansion of the Dirac delta function in Legendre series that is then used as a test function in the DG scheme to prove that the Jacobian matrix of the cell-averaged discrete scheme is an M-matrix. It is however difficult to use this trick in the DGSEM scheme that uses lower-order quadrature rules and whose form is directly linked to the particular choice of Lagrange interpolation polynomials as test functions. Unfortunately, this discrete preservation of the maximum principle or positivity no longer holds in general in multiple space dimensions even on Cartesian grids and solutions with negative cell-average in some cell can be generated [27]. In the case of linear hyperbolic equations and radiative transfer equations, Ling et al. [27] showed that it is possible to impose positivity of the solution providing the approximation polynomial space is enriched with additional functions. The use of reduced order quadrature rules and suitable test functions were proposed in [50] to define a
conservative scheme that preserves positivity in the case of stationary linear hyperbolic conservation laws, The work in [51] proposes limiters that allow to ensure positivity of stationary solutions of the radiative transfer equations, while keeping a particular local conservation property for stationary conservation laws. These modifications seem difficult to be directly applied to the DGSEM without loosing the collocation which is essential for the efficiency of the method. A limiter for time implicit DG schemes for nonlinear scalar equations has been proposed in [44] by reformulating the discrete problem as a constrained optimization problem and introducing Lagrange multipliers associated to the constraints. This however results in a nonlinear and nonsmooth algebraic system of equations that requires an adapted Newton method for its resolution.
In the present work, we propose an analysis of the DGSEM scheme with backward-Euler time stepping for linear hyperbolic equations on Cartesian grids. We first analyze the discrete preservation of the maximum principle property and show that it holds for the cell-averaged solution in one space dimension for sufficiently large time steps. This result is similar to the one obtained in [35] for a modal DG scheme with Legendre polynomials, though the conditions on the time step are different. The proof relies on the nilpotent property of the discrete derivative matrix evaluating the derivatives of the Lagrange interpolation polynomials at quadrature points. This property allows to easily invert the mass and stiffness matrices and derive a scheme for the cell-averaged scheme, thus allowing to derive conditions for the associated Jacobian matrix to be an M-matrix. The DOFs are then limited with the linear scaling limiter from [53] to impose a maximum principle to the whole solution. Unfortunately, this property no longer holds in multiple space dimensions similarly to the modal DG scheme [27]. We thus follow [19; 14] that propose to use the flux-corrected transport (FCT) limiter [5; 52] combining a low-order and maximum principle preserving scheme with the high-order DGSEM. The low-order scheme is obtained by adding graph viscosity [21; 20; 30] to the DGSEM scheme. The FCT limiter is here designed to preserve a maximum principle for the cell-averaged solution, not for all the DOFs. This aspect is essential to reduce the effect of the limiter when the solution is smooth. In particular, the numerical experiments highlight a strong improvement of the accuracy of the limited scheme that would be otherwise affected when limiting the DOFs as already observed in the literature [20; 22]. Here again, the linear scaling limiter is applied after the FCT limiter to ensure the maximum principle on the whole solution.
We also analyze the inversion of the linear system resulting from the time implicit discretization to be solved at each time step. The linear system is large, non symmetric, sparse with a sparsity pattern containing dense diagonal and sparse off-diagonal blocks of size the number of DOFs per cell. Efficient inversion could be achieved through the use of block sparse direct or iterative linear solvers. Many algorithms require the inversion of the diagonal blocks as in block-preconditionned Krylov solvers [31; 32; 11], block relaxation schemes [41], etc. We here prove that the diagonal blocks are invertible and propose efficient algorithms for their inversion1. We again use the nilpotency of the discrete derivative matrix to inverse the diagonal blocks of the 1D scheme. We use the inversion of the 1D diagonal blocks as building blocks for the inversion of diagonal blocks in multiple space dimensions thanks to the tensor product structure of the discretization operators.
Footnote 1: A repository of the algorithms for block inversion is available at [https://github.com/rueljean/fast_DGSEM_block_inversion](https://github.com/rueljean/fast_DGSEM_block_inversion). Consult Appendix B for a description of the repository.
The paper is organized as follows. Section 2 introduces some properties of the DGSEM function space associated to Gauss-Lobatto quadrature rules. The 1D DGSEM is introduced and analyzed in section 3, while section 4 focuses on the DGSEM in two space dimensions (see C for a summary of the results in three space dimensions). The results are assessed by numerical experiments in one and two space dimensions in section 5 and concluding remarks about this work are given in section 6.
## 2 The DGSEM discretization in space
### The DGSEM function space
The DGSEM discretization consists in defining a discrete weak formulation of problem (1). The space domain \(\Omega\) is first discretized with a Cartesian grid \(\Omega_{h}\subset\mathbb{R}^{d}\) with elements \(\kappa\) labeled as \(\kappa_{i}=[x_{i-1/2},x_{i+1/2}]\) of size \(\Delta x_{i}=x_{i+1/2}-x_{i-1/2}>0\), \(1\leq i\leq N_{x}\), for \(d=1\) (see Fig. 1); \(\kappa_{ij}=[x_{i-1/2},x_{i+1/2}]\times[y_{j-1/2},y_{j+1/2}]\) of size \(\Delta x_{i}\Delta y_{j}=(x_{i+1/2}-x_{i-1/2})(y_{j+1/2}-y_{j-1/2})>0\), \(1\leq i\leq N_{x}\), \(1\leq j\leq N_{y}\), for \(d=2\) (see Fig. 2), etc. We also set \(h\coloneqq\min_{\kappa\in\Omega_{h}}|\kappa|^{\frac{1}{2}}\).
The approximate solution to (1) is sought under the form (with some abuse in the notation for the indices and exponents that will be clarified below)
\[u_{h}(\mathbf{x},t)=\sum_{k=1}^{N_{p}}\phi_{\kappa}^{k}(\mathbf{x})U_{\kappa}^{k }(t)\quad\forall\mathbf{x}\in\kappa,\,\kappa\in\Omega_{h},\,\forall t\geq 0, \tag{4}\]
where \((U_{\kappa}^{k})_{1\leq k\leq N_{p}}\) are the DOFs in the element \(\kappa\). The subset \((\phi_{\kappa}^{k})_{1\leq k\leq N_{p}}\) constitutes a basis of \(\mathcal{V}_{h}^{p}\) restricted onto the element \(\kappa\) and \(N_{p}=(p+1)^{d}\) is its dimension. We use tensor product in each space direction of Lagrange interpolation polynomials \((\ell_{k})_{0\leq k\leq p}\) associated to the Gauss-Lobatto quadrature nodes over \(I=[-1,1],\xi_{0}=-1<\xi_{1}<\cdots<\xi_{p}=1\):
\[\ell_{k}(\xi)=\prod_{l=0}^{p}\frac{\xi-\xi_{l}}{\xi_{k}-\xi_{l}},\quad 0\leq k \leq p, \tag{5}\]
which satisfy
\[\ell_{k}(\xi_{l})=\delta_{kl},\quad 0\leq k,l\leq p, \tag{6}\]
with \(\delta_{kl}\) the Kronecker delta.
The basis functions are thus defined for \(d=1\) by \(\phi_{i}^{k}(x)=\ell_{k}(\frac{2}{\Delta x_{i}}(x-x_{i-\frac{1}{2}})-1)\) and for \(d=2\) by \(\phi_{ij}^{kl}(\mathbf{x})=\ell_{k}(\frac{2}{\Delta x_{i}}(x-x_{i-\frac{1}{2}} )-1)\ell_{l}(\frac{2}{\Delta y_{j}}(y-y_{j-\frac{1}{2}})-1)\), and so on.
The DGSEM uses Gauss-Lobatto quadrature rules to approximate the integrals over elements:
\[\int_{-1}^{1}f(\xi)d\xi\simeq\sum_{k=0}^{p}\omega_{k}f(\xi_{k}), \tag{7}\]
with \(\omega_{k}>0\) and \(\sum_{k=0}^{p}\omega_{k}=\int_{-1}^{1}ds=2\), the weights and \(\xi_{k}\) the nodes over \(I\) of the quadrature rule.
Figure 2: Mesh for \(d=2\) with positions of quadrature points (gray bullets) in \(\kappa_{j}\) for \(p=3\).
### Derivatives of the Lagrange polynomials
It is convenient to introduce the discrete derivative matrix \(\mathbf{D}\)[24] with entries
\[D_{kl}=\ell_{l}^{\prime}(\xi_{k}),\quad 0\leq k,l\leq p. \tag{8}\]
Note that we have \(\ker\mathbf{D}=\mathbb{P}^{0}(I)\) and by the rank-nullity theorem \(\mathbf{D}\) is of rank \(p\). We will also consider \(\mathbf{D}^{(\alpha)}\) the generalization to \(\alpha\)th-order derivatives:
\[D_{kl}^{(\alpha)}=\ell_{l}^{(\alpha)}(\xi_{k}),\quad 0\leq k,l\leq p,\quad \alpha\geq 0, \tag{9}\]
with the conventions \(D_{kl}^{(0)}=\ell_{l}(\xi_{k})=\delta_{kl}\) and \(D_{kl}^{(1)}=\ell_{l}^{\prime}(\xi_{k})=D_{kl}\). The matrix \(\mathbf{D}\) maps any element of \(\mathcal{V}_{h}^{p}\) to its derivative in \(\mathcal{V}_{h}^{p-1}\subset\mathcal{V}_{h}^{p}\) and a direct calculation gives \(\mathbf{D}^{(\alpha)}=\mathbf{D}^{\alpha}\), and since the \((\ell_{k})_{0\leq k\leq p}\) are polynomials of degree \(p\), the matrix \(\mathbf{D}\) is nilpotent:
\[\mathbf{D}^{(p+1)}=\mathbf{D}^{p+1}=0, \tag{10}\]
so one can easily invert the following matrices
\[(\mathbf{I}-y\mathbf{D})^{-1}=\sum_{k=0}^{p}y^{k}\mathbf{D}^{(k)}\quad\forall y \in\mathbb{R}, \tag{11}\]
which corresponds to the truncated matrix series associated to the Taylor development of the function \(x\mapsto(1-yx)^{-1}=\sum_{k\geq 0}(yx)^{k}\) for \(|xy|<1\).
Likewise \(\mathbf{D}^{(p)}\) has columns with constant coefficients since \(\ell_{l}^{(p)}\) is a constant function and its entries are easily obtained from (5):
\[\mathbf{D}_{kl}^{(p)}=\ell_{l}^{(p)}(\xi_{k})=p!\prod_{m=0,m\neq l}^{p}\frac{1 }{\xi_{l}-\xi_{m}}\quad\forall 0\leq k,l\leq p. \tag{12}\]
Integrating \(\ell_{k}^{(\alpha)}\) over \(I\) leads to the generalized integration relation
\[\sum_{l=0}^{p}\omega_{l}D_{lk}^{(\alpha)}=D_{pk}^{(\alpha-1)}-D_{0k}^{(\alpha- 1)}\quad\forall 0\leq k\leq p,\quad\alpha\geq 1, \tag{13}\]
which is the discrete counterpart to \(\int_{-1}^{1}\ell^{(\alpha)}(\xi)d\xi=\ell^{(\alpha-1)}(1)-\ell^{(\alpha-1)} (-1)\) and for \(\alpha=1\) we get
\[\sum_{l=0}^{p}\omega_{l}D_{lk}=\delta_{kp}-\delta_{k0},\quad 0\leq k\leq p. \tag{14}\]
Finally, as noticed in [17], the DGSEM satisfies the following important relation known as the summation-by-parts (SBP) property [42]
\[\omega_{k}D_{kl}+\omega_{l}D_{lk}=\delta_{kp}\delta_{lp}-\delta_{k0}\delta_{ l0},\quad 0\leq k,l\leq p, \tag{15}\]
which is the discrete counterpart to integration by parts.
## 3 Time implicit discretization in one space dimension
We here consider (1) in one space dimension, \(d=1\) and \(f(u)=c_{x}u\) with \(c_{x}>0\), over a unit domain \(\Omega=(0,1)\) and consider periodic conditions \(u(0,t)=u(1,t)\) which makes the analysis more difficult due to the existence of an upper block in the matrix. The present analysis however encompasses the case of Dirichlet boundary conditions leading to a block lower triangular system (see remark 3.3).
### Space-time discretization
The discretization in space of problem (1) is obtained by multiplying (1a) by a test function \(v_{h}\) in \(\mathcal{V}_{h}^{p}\) where \(u\) is replaced by the approximate solution (4), then integrating by parts in space over elements \(\kappa_{i}\) and replacing the physical fluxes at interfaces by two-point numerical fluxes:
\[\frac{\omega_{k}\Delta x_{i}}{2}\partial_{t}U_{i}^{k}+R_{i}^{k}(u_{h})=0,\quad 1 \leq i\leq N_{x},\ 0\leq k\leq p,\ n\geq 0,\] (16a) with \[R_{i}^{k}(u_{h})=-\sum_{l=0}^{p}\omega_{l}D_{lk}f(U_{i}^{l})+\delta_{kp}n(U_{i }^{p},U_{i+1}^{0})-\delta_{k0}n(U_{i-1}^{p},U_{i}^{0}), \tag{16b}\]
where we have used the conventions \(U_{0}^{p}=U_{N_{x}}^{p}\) and \(U_{N_{x}+1}^{0}=U_{1}^{0}\) to impose the periodic boundary condition. Since \(c_{x}>0\), we use the upwind flux \(h(u^{-},u^{+})=u^{-}\).
We now focus on a time implicit discretization with a backward Euler method in (16a) and the fully discrete scheme reads
\[\frac{\omega_{k}}{2}U_{i}^{k,n+1}+\lambda_{i}\Big{(}-\sum_{l=0}^{p}\omega_{l} D_{lk}U_{i}^{l,n+1}+\delta_{kp}U_{i}^{p,n+1}-\delta_{k0}U_{i-1}^{p,n+1}\Big{)}= \frac{\omega_{k}}{2}U_{i}^{k,n},\quad 1\leq i\leq N_{x},\ 0\leq k\leq p,\ n\geq 0, \tag{17}\]
with \(\lambda_{i}=c_{x}\frac{\Delta^{(n)}}{\Delta x_{i}}\), \(\Delta^{(n)}=t^{(n+1)}-t^{(n)}>0\), with \(t^{(0)}=0\), the time step, and using the notations \(u_{h}^{(n)}(\cdot)=u_{h}(\cdot,t^{(n)})\) and \(U_{i}^{k,n}=U_{i}^{k}(t^{(n)})\).
Summing (17) over \(0\leq k\leq p\) gives
\[\langle u_{h}^{(n+1)}\rangle_{i}+\lambda_{i}\big{(}U_{i}^{p,n+1}-U_{i-1}^{p,n+ 1}\big{)}=\langle u_{h}^{(n)}\rangle_{i}\quad\forall 1\leq i\leq N_{x},\ n\geq 0, \tag{18}\]
for the cell-averaged solution
\[\langle u_{h}^{(n)}\rangle_{i}:=\sum_{k=0}^{p}\frac{\omega_{k}}{2}U_{i}^{k,n}. \tag{19}\]
It is convenient to also consider (17) in vector form as
\[\mathbf{M}\mathbf{U}_{i}^{n+1}=\mathbf{M}\mathbf{U}_{i}^{n}+\lambda_{i} \Big{(}(2\mathbf{D}^{\top}\mathbf{M}-\mathbf{e}_{p}\mathbf{e}_{p}^{\top}) \mathbf{U}_{i}^{n+1}+\mathbf{e}_{0}\mathbf{e}_{p}^{\top}\mathbf{U}_{i-1}^{n+1 }\Big{)},\quad 1\leq i\leq N_{x},\ n\geq 0. \tag{20}\]
where \(\mathbf{M}=\frac{1}{2}\operatorname{diag}(\omega_{0},\ldots,\omega_{p})\) denotes the mass matrix, while \((\mathbf{e}_{k})_{0\leq k\leq p}\) is the canonical basis of \(\mathbb{R}^{p+1}\) and \(\mathbf{U}_{i}^{n}=(U_{i}^{k,n})_{0\leq k\leq p}\).
Finally, we derive the discrete counterpart to the inequality (2) for the square entropy. Left multiplying (20) by \(\big{(}\eta^{\prime}(U_{i}^{0\leq k\leq p,n+1})\big{)}^{\top}=\mathbf{U}_{i}^ {(n+1)}\), solutions to (17) satisfy the following inequality for the discrete square entropy \(\frac{1}{2}\langle u_{h}^{2}\rangle_{i}\)
\[\frac{1}{2}\langle(u_{h}^{(n+1)})^{2}\rangle_{i}-\frac{1}{2}\langle(u_{h}^{(n) })^{2}\rangle_{i}+\frac{\lambda_{i}}{2}\big{(}(U_{i}^{p,n+1})^{2}-(U_{i-1}^{p, n+1})^{2}\big{)}\leq 0,\]
which brings existence and uniqueness of solutions to (17) in \(L^{2}(\Omega_{h}\times\cup_{n\geq 0}(t^{(n)},t^{(n+1)}),\mathbb{R})\).
### The M-matrix framework
Before starting the analysis, we introduce the M-matrix framework that will be useful in the following. We first define the set \(\mathcal{Z}^{n\times n}\) of all the \(n\times n\) real matrices with nonpositive off-diagonal entries:
\[\mathcal{Z}^{n\times n}=\left\{\mathbf{A}=(a_{ij})\in\mathbb{R}^{n\times n}:a _{ij}\leq 0,i\neq j\right\}.\]
Different characterizations of M-matrices exist [34] and we use the following definition:
**Definition 3.1**: _A matrix \(\mathbf{A}\in\mathcal{Z}^{n\times n}\) is called an M-matrix if \(\mathbf{A}\) is inverse-positive. That is \(\mathbf{A}^{-1}\) exists and each entry of \(\mathbf{A}^{-1}\) is nonnegative._
We will use the following characterizations of an M-matrix [34]:
**Theorem 3.1**: _A matrix \(\mathbf{A}\in\mathcal{Z}^{n\times n}\) is an M-matrix if and only if \(\mathbf{A}\) is semi-positive. That is, there exists \(\mathbf{x}=(x_{1},\ldots,x_{n})^{\top}\) with \(x_{i}>0\) such that \((\mathbf{A}\mathbf{x})_{i}>0\) for all \(1\leq i\leq n\)._
**Theorem 3.2**: _A matrix \(\mathbf{A}\in\mathcal{Z}^{n\times n}\) is an M-matrix if \(\mathbf{A}\) has all positive diagonal elements and it is strictly diagonally dominant, \(a_{ii}>\sum_{j\neq i}|a_{ij}|\) for all \(1\leq i\leq n\)._
M-matrices will be used as a tool to prove positivity preservation for the DGSEM scheme which is equivalent to prove a discrete maximum principle (see lemma 3.1).
### Maximum principle for the cell average
Following [35], we here prove in theorem 3.3 a weaken discrete maximum principle for the cell average, \(m\leq\langle u_{h}^{(n+1)}\rangle_{i}\leq M\). We then use the linear scaling limiter from [53] to enforce all the DOFs at time \(t^{(n+1)}\) to be in the range \([m,M]\) (see section 5.1).
We here use the following result that shows that for the linear and conservative scheme (17), maximum-principle preservation and positivity preservation are equivalent.
**Lemma 3.1**: _To prove a discrete maximum principle for the DGSEM scheme (17), it is enough to prove that it is positivity preserving._
From (14) we obtain
\[\frac{\omega_{k}}{2}=\frac{\omega_{k}}{2}+\lambda_{i}\Big{(}\sum_{l=0}^{p} \omega_{l}D_{lk}-\delta_{kp}+\delta_{k0}\Big{)},\]
and subtracting the above equation multiplied by \(m\) defined in (3) from (17), then subtracting (17) from the above equation multiplied by \(M\) in (3), we deduce that both (\(U_{i\in\mathbb{Z}}^{0\leq k\leq p,n\leq 0}-m\)) and (\(M-U_{i\in\mathbb{Z}}^{0\leq k\leq p,n\leq 0}\)) satisfy (17). As a consequence, the positivity preserving property, \(U_{i\in\mathbb{Z}}^{0\leq k\leq p,n}\geq 0\) implies \(U_{i\in\mathbb{Z}}^{0\leq k\leq p,n+1}\geq 0\), is equivalent to the discrete maximum principle, \(m\leq U_{i\in\mathbb{Z}}^{0\leq k\leq p,n}\leq M\) implies \(m\leq U_{i\in\mathbb{Z}}^{0\leq k\leq p,n+1}\leq M\). \(\Box\)
Using (11) with \(y=2\lambda_{i}\) to invert (20), we get
\[\frac{\omega_{k}}{2}U_{i}^{k,n+1}=\sum_{l=0}^{p}\frac{\omega_{l}}{2}\mathcal{ D}_{kl}^{i}U_{i}^{l,n}-\lambda_{i}(\mathcal{D}_{kp}^{i}U_{i}^{p,n+1}-\mathcal{D}_{ k0}^{i}U_{i-1}^{p,n+1}),\]
where the \(\mathcal{D}_{kl}^{i}\) denote the entries of the matrix
\[\mathcal{D}_{l}\coloneqq(\mathbf{I}-2\lambda_{i}\mathbf{D}^{\top})^{-1} \overset{(\ref{eq:1})}{=}\sum_{l=0}^{p}(2\lambda_{i}\mathbf{D}^{\top})^{l}. \tag{21}\]
We use (18) to get \(\lambda_{i}U_{i-1}^{p,n+1}=\lambda_{i}U_{i}^{p,n+1}+\langle u_{h}^{(n+1)} \rangle_{i}-\langle u_{h}^{(n)}\rangle_{i}\) and injecting this result into the above expression for \(U_{i}^{p,n+1}\) gives
\[\sigma_{i}^{p}U_{i}^{p,n+1}=\xi_{i}^{p,n}+2\mathcal{D}_{p0}^{i} \Big{(}\langle u_{h}^{(n+1)}\rangle_{i}-\langle u_{h}^{(n)}\rangle_{i}\Big{)},\]
where
\[\sigma_{i}^{p}=\omega_{p}+2\lambda_{i}(\mathcal{D}_{pp}^{i}- \mathcal{D}_{p0}^{i}),\quad\xi_{i}^{p,n}=\sum_{l=0}^{p}\omega_{l}\mathcal{D}_{ pl}^{i}U_{i}^{l,n}. \tag{22}\]
Further using the above expression to eliminate \(U_{i}^{p,n+1}\) and \(U_{i-1}^{p,n+1}\) from the cell-averaged scheme (18), we finally obtain
\[\left(1+2\lambda_{i}\frac{\mathcal{D}_{p0}^{i}}{\sigma_{i}^{p}} \right)\langle u_{h}^{(n+1)}\rangle_{i}-2\lambda_{i}\frac{\mathcal{D}_{p0}^{i- 1}}{\sigma_{i-1}^{p}}\langle u_{h}^{(n+1)}\rangle_{i-1} =\left(1+2\lambda_{i}\frac{\mathcal{D}_{p0}^{i}}{\sigma_{i}^{p}} \right)\langle u_{h}^{(n)}\rangle_{i}-2\lambda_{i}\frac{\mathcal{D}_{p0}^{i- 1}}{\sigma_{i-1}^{p}}\langle u_{h}^{(n)}\rangle_{i-1}-\lambda_{i}\left(\frac{ \xi_{i}^{p,n}}{\sigma_{i}^{p}}-\frac{\xi_{i-1}^{p,n}}{\sigma_{i-1}^{p}}\right)\] \[=\langle u_{h}^{(n)}\rangle_{i}-\lambda_{i}\frac{\xi_{i}^{p,n}-2 \mathcal{D}_{p0}^{i}\langle u_{h}^{(n)}\rangle_{i}}{\sigma_{i}^{p}}+\lambda_ {i}\frac{\xi_{i-1}^{p,n}-2\mathcal{D}_{p0}^{i-1}\langle u_{h}^{(n)}\rangle_{i -1}}{\sigma_{i-1}^{p}}\] \[=\sum_{k=0}^{p}\frac{\omega_{k}}{2}\left(\left(1-\frac{2\lambda_ {i}\left(\mathcal{D}_{pk}^{i}-\mathcal{D}_{p0}^{i}\right)}{\sigma_{i}^{p}} \right)U_{i}^{k,n}+\left(\frac{2\lambda_{i}\left(\mathcal{D}_{pk}^{i-1}- \mathcal{D}_{p0}^{i-1}\right)}{\sigma_{i-1}^{p}}\right)U_{i-1}^{k,n}\right), \tag{23}\]
where we have used (19) and (22) in the last step.
Let us now derive conditions on \(\lambda_{i}\) for which the above relation preserves a discrete maximum principle for the cell-averaged solution. According to lemma 3.1, it is enough to prove that the scheme preserves positivity, i.e., \(U_{1\leq i\leq N_{x}}^{0\leq\xi\leq p,n}\geq 0\) imply \(\langle u_{h}^{(n+1)}\rangle_{1\leq i\leq N_{x}}\geq 0\). We will thus show that, under some conditions on \(\lambda_{i}\), the matrix stemming from the linear system (23) for the \(\langle u_{h}^{(n+1)}\rangle_{1\leq j\leq N_{x}}\) is an M-matrix and that its RHS is a nonnegative combination of the \(U_{i}^{k,n}\) and \(U_{i-1}^{k,n}\). Assuming the DOFs at time \(t^{(n)}\) are in the range \([m,M]\), so will do the cell-averaged solutions \(\langle u_{h}^{(n+1)}\rangle_{1\leq i\leq N_{x}}\).
In view of (23), conditions for the RHS to be nonnegative read
\[\sigma_{i}^{p}-2\lambda_{i}(\mathcal{D}_{pk}^{i}-\mathcal{D}_{p0}^{i})=\omega _{p}+2\lambda_{i}(\mathcal{D}_{pp}^{i}-\mathcal{D}_{pk}^{i})\geq 0,\quad\mathcal{D} _{pk}^{i}-\mathcal{D}_{p0}^{i}\geq 0\quad\forall 0\leq k\leq p,\ 1\leq i\leq N_{x}, \tag{24}\]
with \(\sigma_{i}^{p}>0\), while we impose the off-diagonal entries to be negative through
\[\sigma_{i}^{p}=\omega_{p}+2\lambda_{i}(\mathcal{D}_{pp}^{i}-\mathcal{D}_{p0}^{ i})>0,\quad\mathcal{D}_{p0}^{i}\geq 0. \tag{25}\]
The strict inequality on the \(\mathcal{D}_{p0}^{i}\) allows to satisfy theorem 3.1 by choosing the vector \(\mathbf{x}\) such that
\[x_{i}=\prod_{j=1,j\neq i}^{N_{x}}\frac{\mathcal{D}_{p0}^{j}}{\sigma_{j}^{p}}>0 \quad\forall 1\leq i\leq N_{x},\]
and we obtain \((\mathbf{A}\mathbf{x})_{i}=x_{i}>0\) from (25).
**Lemma 3.2**.: _For all \(p\geq 1\), there exists a finite \(\lambda_{min}=\lambda_{min}(p)\geq 0\) such that conditions (24) and (25) are satisfied for all \(\lambda_{i}>\lambda_{min}\), \(1\leq i\leq N_{x}\)._
Proof.: Let consider the first condition in (24), similar arguments hold for all other conditions. For a fixed \(0\leq k\leq p\), use (21) to rewrite
\[\mathcal{D}_{pp}^{i}-\mathcal{D}_{pk}^{i}=\sum_{l=0}^{p}(2\lambda_{i})^{l} \left(D_{pp}^{(l)}-D_{kp}^{(l)}\right)=\sum_{l=0}^{p-1}(2\lambda_{i})^{l} \left(D_{pp}^{(l)}-D_{kp}^{(l)}\right),\]
since by (12), we have \(D_{kp}^{(p)}=D_{pp}^{(p)}\). Hence for large \(\lambda_{i}\), we have
\[\mathcal{D}_{pp}^{i}-\mathcal{D}_{pk}^{i}\underset{\lambda_{i}}{\sim}(2\lambda_ {i})^{p-1}\left(D_{pp}^{(p-1)}-D_{kp}^{(p-1)}\right)\]
and we are going to show that this is a positive quantity. By using the linearity of \(\ell_{p}^{(p-1)}(\cdot)\), we have \(D_{kp}^{(p-1)}=\frac{1-\xi_{k}}{2}D_{0p}^{(p-1)}+\frac{1+\xi_{k}}{2}D_{pp}^{(p- 1)}\). Then, we obtain
\[D_{pp}^{(p-1)}-D_{kp}^{(p-1)}=\frac{1-\xi_{k}}{2}\left(D_{pp}^{(p-1)}-D_{0p}^{ (p-1)}\right)\overset{\eqref{eq:D_p0}}{=}\frac{1-\xi_{k}}{2}\sum_{l=0}^{p} \omega_{l}D_{lp}^{(p)}\overset{\eqref{eq:D_p0}}{=}(1-\xi_{k})D_{pp}^{(p)} \overset{\eqref{eq:D_p0}}{=}\frac{(1-\xi_{k})p!}{\prod\limits_{l=0}^{p-1}(1- \xi_{l})}>0,\]
which concludes the proof.
The following theorem immediately follows.
**Theorem 3.3**.: _Under the conditions \(\lambda_{1\leq i\leq N_{x}}>\lambda_{min}\) defined in lemma 3.2, the DGSEM scheme (17) is maximum principle preserving for the cell-averaged solution:_
\[m\leq U_{i}^{k,n}\leq M\quad\forall 1\leq i\leq N_{x},\ 0\leq k\leq p\quad \Rightarrow\quad m\leq\langle u_{h}^{(n+1)}\rangle_{i}\leq M\quad\forall 1 \leq i\leq N_{x}.\]
Tab. 1 indicates the lower bounds on the \(\lambda_{i}\) as a function of the polynomial degree \(p\) evaluated from the conditions (24) and (25). We observe that the second-order in space scheme, \(p=1\), is unconditionally maximum principle preserving, while the lower bound decreases with increasing \(p\) values for \(p\geq 2\). These bounds are different from those obtained in (35, Tab. 1) for the modal DG scheme with Legendre polynomials as function basis. In particular, the modal DG scheme with \(p=1\) is not unconditionally maximum principle preserving and is seen to require a larger CFL value for larger \(p\) values.
**Remark 3.1** (Linear hyperbolic systems).: The above results also apply to the case of linear hyperbolic systems of size \(n_{eq}\) with constant coefficients \(\partial_{i}\mathbf{u}+\mathbf{A}\partial_{x}\mathbf{u}=0\) with \(\mathbf{A}\) diagonalizable in \(\mathbb{R}\) with eigenvalues \(\psi_{k}\), normalized left and right eigenvectors \(\mathbf{l}_{k}\) and \(\mathbf{r}_{k}\) such that \(\mathbf{l}_{k}^{\top}\mathbf{r}_{l}=\delta_{kl}\). Assuming that the right eigenvectors form a basis of \(\mathbb{R}^{n_{eq}}\) and setting \(\mathbf{u}=\sum_{k}u_{k}\mathbf{r}_{k}\), each component satisfies a maximum principle: \(\partial_{t}u_{k}+\psi_{k}\partial_{x}u_{k}=0\). Using a Roe flux, \(\mathbf{h}(\mathbf{u}^{-},\mathbf{u}^{+})=\frac{1}{2}\mathbf{A}(\mathbf{u}^{ -}+\mathbf{u}^{+})+\frac{1}{2}\sum_{k}|\psi_{k}|(u_{k}^{-}-u_{k}^{+})\mathbf{ r}_{k}\), the time implicit DGSEM (17) decouples into \(n_{eq}\) independent schemes (17) for \(u_{k}\) upon left multiplication by \(\mathbf{l}_{k}\) since \(\mathbf{l}_{k}^{\top}\mathbf{h}(\mathbf{u}^{-},\mathbf{u}^{+})=\frac{\partial _{t}}{2}(u_{k}^{-}+u_{k}^{+})+\frac{|\psi_{k}|}{2}(u_{k}^{-}-u_{k}^{+})\) reduces to the upwind flux.
**Remark 3.2** (Geometric source).: Theorem 3.3 with the bounds from lemma 3.2 also applies to linear equations with a geometric source term:
\[\partial_{t}u+c_{x}\partial_{x}u=s(x)\quad\text{in }\Omega\times(0,\infty),\]
with \(s(\cdot)\geq 0\). Providing nonnegative initial and boundary data are imposed, the entropy solution remains nonnegative for all time. The DGSEM scheme (17) for the discretization of the above equation remains the same by substituting \(U_{i}^{k,n}+s(x_{i}^{k})\Delta t\) for \(U_{i}^{k,n}\) in the RHS. The conditions to obtain an M-matrix in (23) are therefore unchanged and only the RHS is modified by the above change of variable on \(U_{i}^{k,n}\). We thus conclude that the present DGSEM will be positivity-preserving for the cell-averaged solution under the same conditions as in theorem 3.3.
### Linear system solution
Equations (17) or (20) result in a banded block linear system
\[\mathbb{A}_{1d}\mathbf{U}^{(n+1)}=\mathbb{M}_{1d}\mathbf{U}^{(n)} \tag{26}\]
of size \(N_{x}(p+1)\) with blocks of size \(p+1\) to be solved for \(\mathbf{U}^{(n+1)}\) where \(\mathbf{U}_{1+k+(p+1)j}=U_{i}^{k}\) and \(\mathbb{M}_{1d}\) is the global mass matrix. Using the block structure of (26) is usually important for its efficient resolution with either direct or iterative block-based solvers. We will also propose a direct algorithm below based on the inversion of the diagonal block only. In most cases, we need to invert the diagonal blocks and we now prove that they are unconditionally invertible and give an explicit expression for their inverse. Unless necessary, we omit the cell index \(1\leq i\leq N_{x}\) in this section.
\begin{table}
\begin{tabular}{l c c c c c} \hline p & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \(\lambda_{min}\) & 0 & \(\frac{1}{4}\) & \(\frac{1+\sqrt{5}}{6(\zeta-\sqrt{5})}\) & 0.150346 & 0.147568 & 0.109977 \\ \hline \end{tabular}
\end{table}
Table 1: Lower bounds on the non-dimensional time step \(\lambda_{i}>\lambda_{min}\), \(1\leq i\leq N_{x}\), for (24) and (25) to hold, which make (23) maximum principle preserving.
**Lemma 3.3**.: _For all \(p\geq 1\), the diagonal blocks \(\mathbf{L}_{1d}\mathbf{M}\) of \(\mathbb{A}_{1d}\) in the linear system (20), with_
\[\mathbf{L}_{1d}=\mathbf{I}-2\lambda\mathcal{L},\quad\mathcal{L}=\mathbf{D}^{ \top}-\frac{1}{\omega_{p}}\mathbf{e}_{p}\mathbf{e}_{p}^{\top}, \tag{27}\]
_are invertible for any \(\lambda>0\)._
Proof.: Let us prove that \(\mathbf{L}_{1d}\) is invertible: assume that \(\mathbf{L}_{1d}\mathbf{u}=0\) for some \(\mathbf{u}=(u_{0},\ldots,u_{p})^{\top}\), then by (27) we have
\[(\mathbf{I}-2\lambda\mathbf{D}^{\top})\mathbf{u}=-\frac{2\lambda}{\omega_{p}} \mathbf{e}_{p}\mathbf{e}_{p}^{\top}\mathbf{u}=-\frac{2\lambda u_{p}}{\omega_{ p}}\mathbf{e}_{p}\quad\Rightarrow\quad\mathbf{u}=-\frac{2\lambda u_{p}}{\omega_{ p}}\mathcal{D}\mathbf{e}_{p},\]
with \(\mathcal{D}=(\mathbf{I}-2\lambda\mathbf{D}^{\top})^{-1}\) given by (21). Hence \(u_{k}=-\frac{2\lambda u_{p}}{\omega_{p}}\mathcal{D}_{kp}\) and for \(k=p\) we get \((\omega_{p}+2\lambda\mathcal{D}_{pp})u_{p}=0\), so \(u_{p}=0\) since \(\omega_{p}+2\lambda\mathcal{D}_{pp}>0\) from (25) and we conclude that \(\mathbf{u}=0\). Note that (27) is invertible for all \(\lambda>0\) since we have \(\mathcal{D}_{pp}=1+\sum_{l=1}^{p}(2\lambda)^{l}D_{pp}^{(l)}>0\). Indeed, by differentiating (5) \(l\)-times we obtain
\[D_{pp}^{(l)}=\ell_{p}^{(l)}(1)=\sum_{k_{1}=0}^{p}\sum_{k_{2}=0,k_{2}\neq k_{1 }}^{p}\ldots\sum_{k_{l}=1,k_{l}\neq\{k_{1},\ldots,k_{l-1}\}}^{p}\prod_{m=1}^{l }\frac{1}{1-\xi_{k_{m}}}>0\quad\forall 1\leq l\leq p.\]
Note that we easily deduce the explicit expression of the inverse of (27) from (21) by using the Sherman-Morisson formula which provides the inverse of the sum of an invertible matrix \(\mathbf{A}\) and a rank-one matrix \(\mathbf{u}\mathbf{v}^{\top}\):
\[\left(\mathbf{A}+\mathbf{u}\mathbf{v}^{\top}\right)^{-1}=\left(\mathbf{I}+ \mathbf{A}^{-1}\mathbf{u}\mathbf{v}^{\top}\right)^{-1}\mathbf{A}^{-1}=\left( \mathbf{I}-\mathbf{A}^{-1}\mathbf{u}\mathbf{v}^{\top}+\mathbf{A}^{-1}\mathbf{ u}\mathbf{v}^{\top}\mathbf{A}^{-1}\mathbf{u}\mathbf{v}^{\top}-\ldots\right) \mathbf{A}^{-1}=\left(\mathbf{I}-\frac{1}{1+\mathbf{v}^{\top}\mathbf{A}^{-1} \mathbf{u}}\mathbf{A}^{-1}\mathbf{u}\mathbf{v}^{\top}\right)\mathbf{A}^{-1}.\]
Using \(\mathbf{A}=\mathbf{I}-2\lambda\mathbf{D}^{\top}\) and \(\mathbf{u}=\mathbf{v}=\mathbf{e}_{p}\), we obtain
\[\mathbf{M}^{-1}\mathbf{L}_{1d}^{-1}=\mathbf{M}^{-1}\left(\mathbf{I}-2\lambda \left(\mathbf{D}^{\top}-\frac{1}{\omega_{p}}\mathbf{e}_{p}\mathbf{e}_{p}^{ \top}\right)\right)^{-1}=\mathbf{M}^{-1}\left(\mathbf{I}-\frac{2\lambda}{ \omega_{p}+2\lambda\mathcal{D}_{pp}^{\top}}\mathcal{D}\mathbf{e}_{p}\mathbf{e }_{p}^{\top}\right)\mathcal{D}\]
with \(\mathcal{D}=(\mathbf{I}-2\lambda\mathbf{D}^{\top})^{-1}\) given by (21). Again, this formula is well defined since \(\omega_{p}+2\lambda\mathcal{D}_{pp}^{i}>0\) by (25).
Let us finally propose a method to solve the global linear system (26). From (17), we observe that \(\mathbb{A}_{1d}=\mathbb{A}_{0}-\lambda_{1}\mathbf{e}_{1}^{0}(\mathbf{e}_{N_{x} }^{p})^{\top}\) in (26) with \(\mathbb{A}_{0}\) a block lower triangular matrix and a rank-one matrix defined from \((\mathbf{e}_{i}^{k})_{1\leq i\leq N_{x}}^{0\leq k\leq p}\) the canonical basis of \(\mathbb{R}^{N_{x}(p+1)}\). Using again the Sherman-Morisson formula, we easily solve (26) from algorithm 1 where steps 1 and 2 can be solved efficiently using blockwise back substitution.
**Algorithm 1** Algorithm flowchart for solving the global system (26) by using the decomposition \(\mathbb{A}_{1d}=\mathbb{A}_{0}-\lambda_{1}\mathbf{e}_{1}^{0}(\mathbf{e}_{N_{x }}^{p})^{\top}\) with \(\mathbb{A}_{0}\) a block lower triangular matrix and \(\mathbf{e}_{1}^{0}(\mathbf{e}_{N_{x}}^{p})^{\top}\) a rank-one matrix.
Applying algorithm 1 again requires \(1-\lambda_{1}\mathbf{e}_{N_{x}}^{p}\cdot\mathbf{W}=1-\lambda_{1}\mathbf{e}_{N_ {x}}^{p}\cdot(\lambda_{0}^{-1}\mathbf{e}_{1}^{0})\neq 0\). This is indeed the case and to prove it we temporarily consider a uniform mesh for the sake of clarity, so \(\lambda_{i}=\lambda\). We observe that the solution to \(\mathbb{A}_{0}\mathbf{W}=\mathbf{e}_{1}^{0}\) satisfies \(\mathbf{L}_{1d}\mathbf{M}\mathbf{W}_{1}=\mathbf{e}_{0}\) and \(\mathbf{L}_{1d}\mathbf{M}\mathbf{W}_{i}=(\lambda\mathbf{e}_{0}\mathbf{e}_{p} ^{\top})\mathbf{W}_{i-1}\) for \(j\geq 2\). We thus get \(\mathbf{W}_{i}=(\lambda\mathbf{M}^{-1}\mathbf{L}_{1d}^{-1}\mathbf{e}_{0} \mathbf{e}_{p}^{\top})^{j-1}\mathbf{M}^{-1}\mathbf{L}_{1d}^{-1}\mathbf{e}_{0}\) and \(\mathbf{e}_{N_{x}}^{p}\cdot\mathbf{W}=\lambda^{N_{x}-1}(\frac{1}{\omega_{p}}( \mathbf{L}_{1d}^{-1})_{p0})^{N_{x}}\) with \((\mathbf{L}_{1d}^{-1})_{p0}=\frac{2}{\omega_{p}}(1-\frac{2\lambda}{\omega_{p}+2 \lambda\mathcal{D}_{pp}}\mathcal{D}_{pp})\mathcal{D}_{p0}^{j}=\frac{2\lambda \mathcal{D}_{p0}}{\omega_{p}+2\lambda\mathcal{D}_{pp}}>0\) from (25). Note that \(\mathcal{D}_{p0}^{i}>0\) holds for \(\lambda>\lambda_{min}\) defined in lemma 3.2.
**Remark 3.3** (Dirichlet boundary condition): _The case of an inflow boundary condition, \(u(0,t)=g(t)\in[m,M]\), results in a similar linear system (26) with the only difference that \(U_{-1}^{p,n+1}=g(t^{(+1)})\) in (16b). As a consequence, (26) is a block lower triangular matrix with the same diagonal blocks \(\mathbf{L}_{1d}\) as in (20). The system (26) is therefore easily solved by block forward substitution since the diagonal blocks are invertible. Likewise, the cell-averaged solution is maximum principle preserving under the same conditions in lemma 3.2 as with periodic boundary conditions. \(\Box\)_
## 4 Time implicit discretization in two space dimensions
We now consider a 2D linear problem with constant coefficients:
\[\partial_{t}u+c_{x}\partial_{x}u+c_{y}\partial_{y}u =0,\quad\text{in }\Omega\times(0,\infty), \tag{28a}\] \[u(\cdot,0) =u_{0}(\cdot),\quad\text{in }\Omega, \tag{28b}\]
with boundary conditions on \(\partial\Omega\) and we again assume \(c_{x}\geq 0\) and \(c_{y}\geq 0\) without loss of generality. We again assume \(\Omega=\mathbb{R}^{2}\) for the analysis, which amounts in practice to consider a rectangular domain with periodic boundary conditions. As in the 1D case, considering inflow and outflow boundary conditions results in a block lower triangular system to be solved and hence an easier analysis. The results in this section may be easily generalized to three space dimensions and C summarizes the analysis of the three-dimensional scheme.
### Space-time discretization
We consider a Cartesian mesh with rectangular elements of measure \(|\kappa_{ij}|=\Delta x_{i}\times\Delta y_{j}\) for all \(i,j\) in \(\mathbb{Z}\). Using again a time implicit discretization with a backward Euler method and upwind numerical fluxes, the fully discrete scheme reads
\[\begin{split}\frac{\omega_{k}\omega_{l}}{4}(U_{ij}^{kl,n+1}-U_{ ij}^{kl,n})-&\frac{\omega_{l}}{2}\lambda_{x_{i}}\big{(}\sum_{m=0}^{p} \omega_{m}D_{mk}U_{ij}^{ml,n+1}-\delta_{kp}U_{ij}^{pl,n+1}+\delta_{k0}U_{(i-1) j}^{pl,n+1}\big{)}\\ &-\frac{\omega_{k}}{2}\lambda_{y_{j}}\big{(}\sum_{m=0}^{p} \omega_{m}D_{ml}U_{ij}^{km,n+1}-\delta_{lp}U_{ij}^{kp,n+1}+\delta_{l0}U_{i(j- 1)}^{kp,n+1}\big{)}=0,\end{split} \tag{29}\]
where \(\lambda_{x_{i}}=\frac{c_{x}\Delta t}{\Delta x_{i}}\) and \(\lambda_{y_{j}}=\frac{c_{y}\Delta t}{\Delta y_{j}}\). We again use the conventions \(U_{0j}^{pl}=U_{N_{x}j}^{pl}\) and \(U_{i0}^{kp}=U_{iN_{r}}^{kp}\) to take the periodic boundary conditions into account. Using a vector storage of the DOFs as \((\mathbf{U}_{ij})_{u_{kl}}=U_{ij}^{kl}\) with \(1\leq n_{kl}\coloneqq 1+k+l(p+1)\leq N_{p}\) and \(N_{p}=(p+1)^{2}\), it will be convenient to rewrite the scheme under vector form as
\[\begin{split}(\mathbf{M}\otimes\mathbf{M})(\mathbf{U}_{ij}^{n+1}- \mathbf{U}_{ij}^{n})&-\lambda_{x_{i}}\big{(}\mathbf{M}\otimes(2 \mathbf{D}^{\top}\mathbf{M}-\mathbf{e}_{p}\mathbf{e}_{p}^{\top})\big{)} \mathbf{U}_{ij}^{n+1}-\lambda_{x_{i}}(\mathbf{M}\otimes\mathbf{e}_{0}\mathbf{e }_{p}^{\top})\mathbf{U}_{(i-1)j}^{n+1}\\ &-\lambda_{y_{j}}\big{(}(2\mathbf{D}^{\top}\mathbf{M}-\mathbf{e} _{p}\mathbf{e}_{p}^{\top})\otimes\mathbf{M}\big{)}\mathbf{U}_{ij}^{n+1}- \lambda_{y_{j}}(\mathbf{e}_{0}\mathbf{e}_{p}^{\top}\otimes\mathbf{M})\mathbf{ U}_{i(j-1)}^{n+1}=0,\end{split} \tag{30}\]
where \(\mathbf{M}=\frac{1}{2}\operatorname{diag}(\omega_{0},\dots,\omega_{p})\) denotes the 1D mass matrix, \(\mathbf{M}\otimes\mathbf{M}\) the 2D mass matrix, \((\mathbf{e}_{k})_{0\leq k\leq p}\) is the canonical basis of \(\mathbb{R}^{p+1}\), and \(\otimes\) denotes the Kronecker product [46; 45]: \((\mathbf{A}\otimes\mathbf{B})_{u_{l}u_{l}v_{l}}=\mathbf{A}_{l^{\top}}\mathbf{ B}_{kk^{\prime}}\), which satisfies
\[(\mathbf{A}\otimes\mathbf{B})(\mathbf{C}\otimes\mathbf{D})=\mathbf{AC} \otimes\mathbf{BD},\quad(\mathbf{A}\otimes\mathbf{B})^{-1}=\mathbf{A}^{-1} \otimes\mathbf{B}^{-1},\quad(\mathbf{A}\otimes\mathbf{B})^{\top}=\mathbf{A}^{ \top}\otimes\mathbf{B}^{\top}.\] (31a) Likewise, for diagonalizable matrices \[\mathbf{A}=\mathbf{R}_{A}\boldsymbol{\Psi}_{A}\mathbf{R}_{A}^{-1}\] and \[\mathbf{B}=\mathbf{R}_{B}\boldsymbol{\Psi}_{B}\mathbf{R}_{B}^{-1}\], the product \[\mathbf{A}\otimes\mathbf{B}\] is also diagonalizable with eigenvalues being the product of eigenvalues of \[\mathbf{A}\] and \[\mathbf{B}\] : \[\mathbf{A}\otimes\mathbf{B}=(\mathbf{R}_{A}\otimes\mathbf{R}_{B})(\boldsymbol{ \Psi}_{A}\otimes\boldsymbol{\Psi}_{B})(\mathbf{R}_{A}\otimes\mathbf{R}_{B})^{-1}. \tag{31b}\]
Summing (29) over \(0\leq k,l\leq p\) gives:
\[\langle u_{h}^{(n+1)}\rangle_{ij}-\langle u_{h}^{(n)}\rangle_{ij}+\frac{\lambda _{x_{i}}}{2}\sum_{l=0}^{p}\omega_{l}\Big{(}U_{ij}^{pl,n+1}-U_{(i-1)j}^{pl,n+1 }\Big{)}+\frac{\lambda_{y_{j}}}{2}\sum_{k=0}^{p}\omega_{k}\Big{(}U_{ij}^{kp,n+1 }-U_{i(j-1)}^{kp,n+1}\Big{)}=0, \tag{32}\]
where the cell-average operator reads
\[\langle u_{h}\rangle_{ij}=\sum_{k=0}^{p}\sum_{l=0}^{p}\frac{\omega_{k}\omega_{l}}{ 4}U_{ij}^{kl}. \tag{33}\]
Finally, left-multiplying (30) by \(\mathbf{U}_{ij}^{(n+1)}\) brings \(L^{2}\) stability:
\[\frac{1}{2}\langle(u_{h}^{(n+1)})^{2}\rangle_{ij}-\frac{1}{2}\langle(u_{h}^{(n )})^{2}\rangle_{ij}+\frac{\lambda_{x_{i}}}{2}\sum_{l=0}^{p}\frac{\omega_{l}}{ 2}\langle(U_{ij}^{pl,n+1})^{2}-(U_{(i-1)j}^{pl,n+1})^{2}\rangle+\frac{\lambda_{ y_{j}}}{2}\sum_{k=0}^{p}\frac{\omega_{k}}{2}\langle(U_{ij}^{kp,n+1})^{2}-(U_{i (j-1)}^{kp,n+1})^{2}\rangle\leq 0.\]
The discrete derivative matrix is still nilpotent as shown in A. Unfortunately, the scheme (30) is in general not maximum principle preserving for the cell average as may be observed in the numerical experiments of section 5. We now propose to modify the scheme by adding graph viscosity to make it maximum principle preserving.
### Maximum principle through graph viscosity
We add a graph viscosity [21] term \(\mathbf{V}_{ij}^{(n+1)}\) to the LHS of (30) which becomes
\[\begin{split}(\mathbf{M}\otimes\mathbf{M})(\mathbf{U}_{ij}^{n+1} -\mathbf{U}_{ij}^{n})&-\lambda_{x}(\mathbf{M}\otimes(2\mathbf{D} ^{\top}\mathbf{M}-\mathbf{e}_{p}\mathbf{e}_{p}^{\top}))\mathbf{U}_{ij}^{n+1}- \lambda_{x_{i}}(\mathbf{M}\otimes\mathbf{e}_{0}\mathbf{e}_{p}^{\top})\mathbf{ U}_{(i-1)j}^{n+1}\\ &-\lambda_{y_{j}}((2\mathbf{D}^{\top}\mathbf{M}-\mathbf{e}_{p} \mathbf{e}_{p}^{\top})\otimes\mathbf{M})\mathbf{U}_{ij}^{n+1}-\lambda_{y_{j}} (\mathbf{e}_{0}\mathbf{e}_{p}^{\top}\otimes\mathbf{M})\mathbf{U}_{i(j-1)}^{n +1}+\mathbf{V}_{ij}^{(n+1)}=0,\end{split} \tag{34}\]
where
\[\begin{split}\mathbf{V}_{ij}^{(n+1)}&=2d_{ij}\Big{(} \lambda_{x}\mathbf{M}\otimes(\mathbf{M}-\mathbf{\omega}\mathbf{1}^{\top}\mathbf{ M})+\lambda_{y_{j}}(\mathbf{M}-\mathbf{\omega}\mathbf{1}^{\top}\mathbf{M})\otimes \mathbf{M}\Big{)}\mathbf{U}_{ij}^{(n+1)}\\ &=2d_{ij}\Big{(}(\lambda_{x}+\lambda_{y_{j}})\mathbf{I}\otimes \mathbf{I}-\lambda_{x_{i}}(\mathbf{I}\otimes\mathbf{\omega}\mathbf{1}^{\top})- \lambda_{y_{j}}(\mathbf{\omega}\mathbf{1}^{\top}\otimes\mathbf{I})\Big{)}( \mathbf{M}\otimes\mathbf{M})\mathbf{U}_{ij}^{(n+1)},\end{split} \tag{35}\]
with \(d_{ij}\geq 0\), \(\mathbf{\omega}=\frac{1}{2}(\omega_{0},\ldots,\omega_{p})^{\top}\) and \(\mathbf{1}=(1,\ldots,1)^{\top}\in\mathbb{R}^{p+1}\), which reads componentwise as
\[V_{ij}^{kl,n+1}=d_{ij}\frac{\omega_{k}\omega_{l}}{2}\Big{(}\lambda_{x}\sum_{m= 0}^{p}\frac{\omega_{m}}{2}(U_{ij}^{kl,n+1}-U_{ij}^{ml,n+1})+\lambda_{y_{j}} \sum_{m=0}^{p}\frac{\omega_{m}}{2}(U_{ij}^{kl,n+1}-U_{ij}^{lm,n+1})\Big{)}. \tag{36}\]
This term keeps conservation of the scheme: \(\sum_{k,l}V_{ij}^{kl,n+1}=0\), so the cell-averaged scheme still satisfies (32). It also enforces the \(L^{2}\) stability since
\[\begin{split}\mathbf{U}_{ij}\cdot\mathbf{V}_{ij}& \overset{\eqref{eq:L2}}{=}d_{ij}\sum_{k,l=0}^{p}\frac{\omega_{k} \omega_{l}}{2}U_{ij}^{kl}\Bigg{(}\lambda_{x}\sum_{m=0}^{p}\frac{\omega_{m}}{2} (U_{ij}^{kl}-U_{ij}^{ml})+\lambda_{y_{j}}\sum_{m=0}^{p}\frac{\omega_{m}}{2}(U_ {ij}^{kl}-U_{ij}^{km})\Bigg{)}\\ &=& d_{ij}\sum_{k,l=0}^{p}\frac{\omega_{k}\omega_{l}}{ 2}\Bigg{(}\lambda_{x_{i}}\sum_{m=0}^{p}\frac{\omega_{m}}{2}\frac{(U_{ij}^{kl} -U_{ij}^{ml})^{2}+(U_{ij}^{kl})^{2}-(U_{ij}^{ml})^{2}}{2}+\lambda_{y_{j}} \sum_{m=0}^{p}\frac{\omega_{m}}{2}\frac{(U_{ij}^{kl}-U_{ij}^{km})^{2}+(U_{ij} ^{kl})^{2}-(U_{ij}^{km})^{2}}{2}\Bigg{)}\\ &=& d_{ij}\sum_{k,l=0}^{p}\frac{\omega_{k}\omega_{l}}{ 2}\Bigg{(}\lambda_{x_{i}}\sum_{m=0}^{p}\frac{\omega_{m}}{2}\frac{(U_{ij}^{kl} -U_{ij}^{ml})^{2}}{2}+\lambda_{y_{j}}\sum_{m=0}^{p}\frac{\omega_{m}}{2} \frac{(U_{ij}^{kl}-U_{ij}^{km})^{2}}{2}\Bigg{)}\geq 0.\end{split}\]
We now look for conditions on the linear system (34) to correspond to an M-matrix, thus imposing a maximum principle for the DOFs.
**Lemma 4.1**.: _Under the condition_
\[d_{ij}\geq 2\max_{0\leq\tau\in m\leq p}\Big{(}-\frac{D_{mk}}{\omega_{k}} \Big{)}, \tag{37}\]
_the linear system (34) is maximum principle preserving._
Proof.: This is a direct application of theorem 3.2 to show that the linear system (34) is defined from an M-matrix. Positivity preservation is then enough to get maximum principle preservation. We rewrite (34) componentwise as
\[a_{kl}U_{ij}^{kl,n+1}+\sum_{m=0,m\neq k}^{p}b_{klm}U_{ij}^{ml,n+1}+\sum_{m=0,m \neq l}^{p}c_{klm}U_{ij}^{lm,n+1}-\lambda_{x_{l}}\frac{\omega_{l}}{2}\delta_{k0 }U_{(i-1)j}^{pl,n+1}-\lambda_{y_{j}}\frac{\omega_{k}}{2}\delta_{l0}U_{i(j-1)}^{ kp,n+1}=\frac{\omega_{k}\omega_{l}}{4}U_{ij}^{kl,n}\]
where the diagonal coefficients read
\[a_{kl}=\frac{\omega_{k}\omega_{l}}{4}-\lambda_{x_{l}}\frac{\omega_{l}}{2}( \omega_{k}D_{kk}-\delta_{kp})-\lambda_{y_{j}}\frac{\omega_{k}}{2}(\omega_{l}D _{ll}-\delta_{lp})+\frac{\omega_{k}\omega_{l}}{2}d_{lj}\Big{(}\lambda_{x_{l}} \sum_{m=0,m\neq k}^{p}\frac{\omega_{m}}{2}+\lambda_{y_{j}}\sum_{m=0,m\neq l}^ {p}\frac{\omega_{m}}{2}\Big{)}\]
and are positive since \(-(\omega_{k}D_{kk}-\delta_{kp})=\frac{1}{2}(\delta_{kp}+\delta_{k0})\) and \(-(\omega_{l}D_{ll}-\delta_{lp})=\frac{1}{2}(\delta_{lp}+\delta_{l0})\) from the SBP property (15). Likewise, \(b_{klm}=-\lambda_{x_{l}}\frac{\omega_{l}}{2}\omega_{m}(D_{mk}+\frac{\omega_{k }}{2}d_{ij})\) and \(c_{klm}=-\lambda_{y_{j}}\frac{\omega_{k}}{2}\omega_{m}(D_{ml}+\frac{\omega_{k }}{2}d_{ij})\) are nonpositive under (37).
Finally, strict diagonal dominance reads \(a_{kl}>-\sum_{m\neq k}b_{klm}-\sum_{m\neq l}c_{klm}+\lambda_{x_{l}}\frac{ \omega_{l}}{2}\delta_{k0}+\lambda_{y_{j}}\frac{\omega_{k}}{2}\delta_{l0}\) since \(b_{klm}\leq 0\) and \(c_{klm}\leq 0\). This reduces to
\[\frac{\omega_{k}\omega_{l}}{4}-\lambda_{x_{l}}\frac{\omega_{l}}{2}(\omega_{k} D_{kk}-\delta_{kp})-\lambda_{y_{j}}\frac{\omega_{k}}{2}(\omega_{l}D_{ll}-\delta_{ lp})>\lambda_{x_{l}}\frac{\omega_{l}}{2}\sum_{m=0,m\neq k}^{p}\omega_{m}D_{mk}+ \lambda_{y_{j}}\frac{\omega_{k}}{2}\sum_{m=0,m\neq l}^{p}\omega_{m}D_{ml}+ \lambda_{x_{l}}\frac{\omega_{k}}{2}\delta_{k0}+\lambda_{y_{j}}\frac{\omega_{ k}}{2}\delta_{k0},\]
where all the coefficients from the graph viscosity cancel each other out from (36) and have been removed. This can be rearranged into
\[\frac{\omega_{k}\omega_{l}}{4}>\lambda_{x_{l}}\frac{\omega_{l}}{2}\Big{(}\sum_ {m=0}^{p}\omega_{m}D_{mk}-\delta_{kp}+\delta_{k0}\Big{)}+\lambda_{y_{j}}\frac {\omega_{k}}{2}\Big{(}\sum_{m=0}^{p}\omega_{m}D_{ml}-\delta_{lp}+\delta_{R0} \Big{)}\stackrel{{\eqref{eq:c_klm}}}{{=}}0,\]
which is always satisfied and concludes the proof.
The modified DGSEM scheme with graph viscosity therefore satisfies the maximum principle for large enough \(d_{ij}\) values. Table 2 gives the minimum \(d_{ij}\) values (37) in lemma 4.1 guaranteeing a maximum principle. Likewise, the diagonal blocks in (34) are now strictly diagonally dominant and hence invertible.
The scheme is however first order in space when \(d_{ij}>0\) and is not used in practice. In the following, it is combined with the high-order scheme within the FCT limiter framework to keep high-order accuracy.
### Flux-corrected transport limiter
Following [19], the Flux-corrected transport (FCT) limiter [5; 52] can be applied to guarantee a maximum principle by combining the high-order (HO) DGSEM scheme (30) and the low-order (LO) modified DGSEM scheme (34) with graph viscosity. We here propose to use the FCT to guarantee a maximum principle on the cell-averaged solution (33), the maximum principle on all DOFs within the elements being ensured through the use of the linear scaling limiter (see section 5.1) as in one space dimension.
By \(u_{h,LO}^{(n+1)}\) and \(u_{h,HO}^{(n+1)}\) we denote the solutions to the LO and HO schemes, respectively. Both are solutions to the cell-averaged scheme (32). Subtracting the cell-averaged for the LO solution from the one for the HO solution gives
\begin{table}
\begin{tabular}{l l l l l l} \hline p & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \(d_{min}\) & 1 & 3 & \(3(1+\sqrt{5})\) & 24.8 & 53.6 & 102.6 \\ \hline \end{tabular}
\end{table}
Table 2: Lower bounds of the coefficient \(d_{ij}>d_{min}\) for (34) to be maximum-principle preserving
\[\langle u_{h,HO}^{(n+1)}\rangle_{ij}-\langle u_{h,LO}^{(n+1)}\rangle_{ij}= \lambda_{x_{i}}\sum_{l=0}^{p}\frac{\omega_{l}}{2}\Big{(}U_{(i-1) j,HO}^{pl,n+1}-U_{i,HO}^{pl,n+1}+U_{ij,LO}^{pl,n+1}-U_{(i-1),LO}^{pl,n+1}\Big{)}\] \[+\lambda_{y_{j}}\sum_{k=0}^{p}\frac{\omega_{k}}{2}\Big{(}U_{i(j-1 ),HO}^{kp,n+1}-U_{ij,HO}^{kp,n+1}+U_{ij,LO}^{kp,n+1}-U_{i(j-1),LO}^{kp,n+1} \Big{)}\] \[= \lambda_{x_{i}}\sum_{l=0}^{p}\frac{\omega_{l}}{2}\Big{(}U_{(i-1) j,HO}^{pl,n+1}-U_{(i-1),LO}^{pl,n+1}\Big{)}+\lambda_{x_{i}}\sum_{l=0}^{p} \frac{\omega_{l}}{2}\Big{(}-U_{i,HO}^{pl,n+1}+U_{ij,LO}^{pl,n+1}\Big{)}\] \[+\lambda_{y_{j}}\sum_{k=0}^{p}\frac{\omega_{k}}{2}\Big{(}U_{i(j-1 ),HO}^{kp,n+1}-U_{i(j-1),LO}^{kp,n+1}\Big{)}+\lambda_{y_{j}}\sum_{k=0}^{p} \frac{\omega_{k}}{2}\Big{(}-U_{ij,HO}^{kp,n+1}+U_{ij,LO}^{kp,n+1}\Big{)}\] \[= \mathcal{A}_{ij}^{(i-1)j}+A_{ij}^{(i+1)j}+A_{ij}^{i(j-1)}+A_{ij} ^{i(j+1)}\] \[= \sum_{(r,s)\in\mathcal{S}(i,j)}A_{ij}^{rs}\]
with \(\mathcal{S}(i,j)=\{(i-1,j);(i+1,j);(i,j-1);(i,j+1)\}\). Note that we have \(A_{ij}^{(i-1)j}=-A_{(i-1)j}^{ij}\) and \(A_{ij}^{i(j-1)}=-A_{i(j-1)}^{ij}\). Again following [19, Sec. 5.3], we introduce the limiter coefficients defined by
\[P_{ij}^{-}=\sum_{(r,s)\in\mathcal{S}(i,j)}\min\big{(}A_{ij}^{rs},0 \big{)}\leq 0,\qquad\qquad Q_{ij}^{-}=m-\langle u_{LO}^{(n+1)}\rangle_{ij} \leq 0,\qquad\qquad l_{ij}^{-}=\min\Big{(}1,\frac{Q_{ij}^{-}}{P_{ij}^{-}} \Big{)}\in[0,1], \tag{38a}\] \[P_{ij}^{+}=\sum_{(r,s)\in\mathcal{S}(i,j)}\max\big{(}A_{ij}^{rs},0 \big{)}\geq 0,\qquad\qquad Q_{ij}^{+}=M-\langle u_{LO}^{(n+1)}\rangle_{ij} \geq 0,\qquad\qquad l_{ij}^{+}=\min\Big{(}1,\frac{Q_{ij}^{+}}{P_{ij}^{+}} \Big{)}\in[0,1], \tag{38b}\]
where \(m\) and \(M\) are the lower and upper bounds in (3) we want to impose to \(\langle u_{h}^{(n+1)}\rangle_{ij}\). The new update of the mean value of the solution is now defined by:
\[\langle u_{h}^{(n+1)}\rangle_{ij}-\langle u_{h,LO}^{(n+1)}\rangle_{ij}=\sum_{( r,s)\in\mathcal{S}(i,j)}I_{ij}^{rs}A_{ij}^{rs},\quad I_{ij}^{rs}=\left\{ \begin{array}{ll}\min(l_{ij}^{-},l_{rs}^{+})&\text{if }A_{ij}^{rs}<0\\ \min(l_{rs}^{-},l_{ij}^{+})&\text{otherwise}.\end{array}\right. \tag{39}\]
As a consequence, the cell-averaged solution satisfies the maximum principle (see [19, Lemma 5.4]): using (39) we get
\[\langle u_{h}^{(n+1)}\rangle_{ij}-\langle u_{h,LO}^{(n+1)}\rangle_ {ij}\geq\sum_{(r,s)\in\mathcal{S}(i,j)}I_{ij}^{rs}\min(A_{ij}^{rs},0)=\min(l_{ ij}^{-},l_{rs}^{+})P_{ij}^{-}\geq l_{ij}^{-}P_{ij}^{-}\geq Q_{ij}^{-}=m- \langle u_{LO}^{(n+1)}\rangle_{ij},\] \[\langle u_{h}^{(n+1)}\rangle_{ij}-\langle u_{h,LO}^{(n+1)}\rangle_ {ij}\leq\sum_{(r,s)\in\mathcal{S}(i,j)}I_{ij}^{rs}\max(A_{ij}^{rs},0)=\min(l_{ rs}^{-},l_{ij}^{+})P_{ij}^{+}\leq l_{ij}^{+}P_{ij}^{+}\leq Q_{ij}^{+}=M-\langle u_{LO}^{ (n+1)}\rangle_{ij},\]
since \(Q_{ij}^{-}\leq l_{ij}^{-}P_{ij}^{-}\leq 0\) and \(0\leq l_{ij}^{+}P_{ij}^{+}\leq Q_{ij}^{+}\) by definition (38).
Likewise, by (39) we have \(I_{ij}^{rs}=l_{rs}^{j}\) for \((r,s)\in\mathcal{S}(i,j)\), thus ensuring conservation of the method:
\[\sum_{ij}\langle u_{h}^{(n+1)}\rangle_{ij}=\sum_{ij}\langle u_{h,HO}^{(n+1)} \rangle_{ij}=\sum_{ij}\langle u_{h,LO}^{(n+1)}\rangle_{ij}=\sum_{ij}\langle u _{h}^{(n+1)}\rangle_{ij}=\sum_{ij}\langle u_{h}^{(n)}\rangle_{ij},\]
for periodic boundary conditions or compactly supported solutions.
From (39), the limiter is only applied at the interfaces and the DOFs can be evaluated explicitly from \(u_{h,LO}^{(n+1)}\) and \(u_{h,HO}^{(n+1)}\) through
\[\frac{\omega_{k}\omega_{l}}{4}(U_{ij}^{kl,n+1}-U_{ij,HO}^{kl,n+1}) =\delta_{kp}\frac{\omega_{l}x_{v_{l}}}{2}(1-l_{ij}^{(i+1)j})\Big{(} U_{ij,HO}^{p,n+1}-U_{ij,LO}^{pl,n+1}\Big{)}-\delta_{k0}\frac{\omega_{l}x_{v_{l}}}{2} (1-l_{ij}^{(i-1)j})\Big{(}U_{(i-1)j,HO}^{pl,n+1}-U_{(i-1)j,LO}^{pl,n+1}\Big{)}\] \[+\delta_{lp}\frac{\omega_{k}x_{y_{j}}}{2}(1-l_{ij}^{(j+1)})\Big{(} U_{ij,HO}^{k,n+1}-U_{ij,LO}^{kp,n+1}\Big{)}-\delta_{k0}\frac{\omega_{k}x_{y_{j}}} {2}(1-l_{ij}^{(i-1)})\Big{(}U_{i(j-1),HO}^{kp,n+1}-U_{i(j-1),LO}^{kp,n+1}\Big{)}.\]
This limited scheme is conservative, satisfies the maximum principle for the cell-averaged solution, but requires to solve two linear systems for \(u_{h,LO}^{(n+1)}\) and \(u_{h,HO}^{(n+1)}\) at each time step. Let us stress that since we need to compute \(u_{h,HO}^{(n+1)}\), we know easily if the limiter is required, that is if the maximum principle is violated for the cell-averaged solution in some cell of the mesh. If it is not violated, we set \(u_{h}^{(n+1)}\equiv u_{h,HO}^{(n+1)}\) and do not need to compute \(u_{h,LO}^{(n+1)}\), but only to apply the linear scaling limiter (see section 5.1). The FCT limiter may hence be viewed as an a posteriori limiter which is applied when needed after the solution update in the same way as other a posteriori limiters, such as in the MOOD method [8]. Preserving the maximum principle on the cell-averaged solution is a weaker requirement than preserving it on every DOFs and should therefore be more likely to be respected. As a consequence, the present FCT limiter is expected to less modify the solution, which is supported by the numerical experiments of section 5.3.
In the next section, we also propose efficient algorithms to solve these linear systems to mitigate the extra cost induced by the additional linear solution when \(u_{h,LO}^{(n+1)}\) is required.
### Linear system solution
Both linear systems without, (30), and with graph viscosity, (34), result in a block linear system
\[\mathbb{A}_{2d}\mathbf{U}^{(n+1)}=\mathbb{M}_{2d}\mathbf{U}^{(n)} \tag{40}\]
of size \(N_{x}N_{y}N_{p}\) with blocks of size \(N_{p}=(p+1)^{2}\) to be solved for \(\mathbf{U}^{(n+1)}\) where \(\mathbf{U}_{ni_{l}+(i-1)N_{p}+(j-1)N_{x}N_{p}}=U_{ij}^{kl}\) with \(n_{kl}=1+k+l(p+1)\) and \(\mathbb{M}_{2d}\) the global mass matrix. Considering the block structure of \(\mathbb{A}_{2d}\) is important for efficiently solving (40) and usually requires the inversion of the diagonal blocks as a main step. These blocks are dense and hence require algorithms of complexity \(\mathcal{O}(N_{p}^{3})\) for their inversion. We propose below algorithms based on the properties of the 1D schemes in section 3.4 for their efficient inversion. A repository of these algorithms (equations (43), (45), (46) and algorithm 2) is available at [1] and B provides a description of the repository.
#### 4.4.1 1D diagonal blocks as building blocks of the 2D linear systems
Let us introduce the diagonalization in \(\mathbb{C}\) of the matrix \(\mathcal{L}\) in (27):
\[\mathcal{L}=\mathbf{R}\mathbf{V}\mathbf{R}^{-1}, \tag{41}\]
where the columns of \(\mathbf{R}\in\mathbb{C}^{(p+1)\times(p+1)}\) are the right eigenvectors of \(\mathcal{L}\) and \(\mathbf{\Psi}\) is the diagonal matrix of the corresponding \(p+1\) eigenvalues. We therefore have
\[\mathbf{L}_{1d}=\mathbf{R}\mathbf{\Psi}_{\lambda}\mathbf{R}^{-1},\quad\mathbf{ \Psi}_{\lambda}=\mathbf{I}-2\lambda\mathbf{\Psi}, \tag{42}\]
for the 1D diagonal blocks in (27).
From (27), eigenpairs \(\psi\) and \(\mathbf{r}=(r_{0},\ldots,r_{p})^{\top}\), such that \(\mathcal{L}\mathbf{r}=\psi\mathbf{r}\), satisfy \(\sum_{l}D_{lk}r_{l}-\delta_{kp}\frac{r_{p}}{\omega_{p}}=\psi r_{k}\) and summing this relation over \(0\leq k\leq p\) gives \(-\frac{1}{\omega_{p}}r_{p}=\psi\sum_{k}r_{k}\) and for \(\psi=0\) we would have \(r_{p}=0\), hence \(\mathbf{D}^{\top}\mathbf{r}=0\) so \(\mathbf{r}=0\) since \(\mathbf{D}^{\top}\) is of rank \(p\). So we have \(\psi\neq 0\) and we can invert the above relation with (21) to get \(\mathbf{r}=-\frac{r_{p}}{\phi\omega_{p}}\big{(}\sum_{l=0}^{p}\psi^{-l}\mathbf{ D}^{l}\big{)}^{\top}\mathbf{e}_{p}\) and the \(p\)th component with \(r_{p}\neq 0\) gives the \(\psi\) as the roots of the polynomial
\[\omega_{p}\psi^{p+1}+\sum_{l=0}^{p}\psi^{p-l}D_{pp}^{(l)}=0,\] (43a) and the eigenvector associated to any eigenvalue \[\psi\] may be explicitly computed from \[r_{k}=-\frac{1}{\omega_{p}}\sum_{l=0}^{p}\psi^{-l-1}D_{pk}^{(l)}\quad\forall 0 \leq k\leq p-1,\quad r_{p}=1. \tag{43b}\]
#### 4.4.2 Solution of the HO scheme (30)
Setting \(\lambda=\lambda_{x_{i}}+\lambda_{y_{j}}>0\), we rewrite the scheme (30) without graph viscosity as
\[\mathbf{L}_{2d}(\mathbf{M}\otimes\mathbf{M})\mathbf{U}_{ij}^{n+1}-\lambda_{x_{i }}(\mathbf{M}\otimes\mathbf{e}_{0}\mathbf{e}_{p}^{\top})\mathbf{U}_{(i-1)j}^{n +1}-\lambda_{y_{j}}(\mathbf{e}_{0}\mathbf{e}_{p}^{\top}\otimes\mathbf{M}) \mathbf{U}_{ii(j-1)}^{n+1}=(\mathbf{M}\otimes\mathbf{M})\mathbf{U}_{ij}^{n},\]
where the first matrix in the diagonal blocks may be written as follows from the definition of \(\mathbf{L}_{1d}\) in (42):
\[\mathbf{L}_{2d}\coloneqq \frac{\lambda_{x}}{\lambda}\mathbf{I}\otimes\mathbf{L}_{1d}+ \frac{\lambda_{y_{j}}}{\lambda}\mathbf{L}_{1d}\otimes\mathbf{I}=(\mathbf{R} \otimes\mathbf{R})\boldsymbol{\Psi}_{2d}(\mathbf{R}\otimes\mathbf{R})^{-1} \tag{44a}\] \[\boldsymbol{\Psi}_{2d}= \frac{\lambda_{x}}{\lambda}\mathbf{I}\otimes\boldsymbol{\Psi}_{ \lambda}+\frac{\lambda_{y_{j}}}{\lambda}\boldsymbol{\Psi}_{\lambda}\otimes \mathbf{I}, \tag{44b}\]
with \(\mathbf{I}\) the identity matrix in \(\mathbb{R}^{p+1}\) and \(\mathbf{L}_{1d}\) the 1D operator defined in (27). The diagonal matrix \(\boldsymbol{\Psi}_{2d}\) has \(1-2(\lambda_{x_{i}}\psi_{l}+\lambda_{y_{j}}\psi_{k})\neq 0\) as \(n_{kl}\)th component. Hence the inverse of the diagonal blocks in (30) has an explicit expression
\[(\mathbf{M}\otimes\mathbf{M})^{-1}\mathbf{L}_{2d}^{-1} =\left((\mathbf{M}^{-1}\mathbf{R})\otimes(\mathbf{M}^{-1}\mathbf{ R})\right)\boldsymbol{\Psi}_{2d}^{-1}\left(\mathbf{R}\otimes\mathbf{R} \right)^{-1}, \tag{45}\] \[\boldsymbol{\Psi}_{2d}^{-1} =\text{diag}\left(\frac{1}{1-2(\lambda_{x_{i}}\psi_{k}+\lambda_{ y_{j}}\psi_{l})}:\ 1\leq n_{kl}=1+k+l(p+1)\leq N_{p}\right).\]
Note that \(\mathcal{L}\) in (27) depends only on the approximation order of the scheme \(p\), not on the \(\lambda_{x_{i}}\) and \(\lambda_{y_{j}}\), so the matrices \(\mathbf{R}\), \(\mathbf{R}^{-1}\), \(\mathbf{M}^{-1}\mathbf{R}\), \(\boldsymbol{\Psi}\), \(\boldsymbol{\Psi}_{2d}\), etc. may be computed once from (43) at the beginning of the computation.
#### 4.4.3 Solution of the LO scheme (34)
Including the graph viscosity (35) into (30) modifies the diagonal blocks of the linear system and we now need to solve
\[\mathbf{L}_{2d}^{v}(\mathbf{M}\otimes\mathbf{M})\mathbf{U}_{ij}^{n+1}-\lambda _{x_{i}}(\mathbf{M}\otimes\mathbf{e}_{0}\mathbf{e}_{p}^{\top})\mathbf{U}_{(i -1)j}^{n+1}-\lambda_{y_{j}}(\mathbf{e}_{0}\mathbf{e}_{p}^{\top}\otimes \mathbf{M})\mathbf{U}_{ii(j-1)}^{n+1}=(\mathbf{M}\otimes\mathbf{M})\mathbf{U}_ {ij}^{n},\]
with
\[\mathbf{L}_{2d}^{v}=\mathbf{L}_{2d}^{0}-\mathbf{U}_{v}\mathbf{V}_{v}^{\top}, \tag{46}\]
and
\[\mathbf{L}_{2d}^{0}=\mathbf{L}_{2d}+2d_{ij}\lambda\mathbf{I}\otimes\mathbf{I} =(\mathbf{R}\otimes\mathbf{R})(\boldsymbol{\Psi}_{2d}+2d_{ij}\lambda\mathbf{I }\otimes\mathbf{I})(\mathbf{R}\otimes\mathbf{R})^{-1}, \tag{47a}\] \[\mathbf{U}_{v}=2d_{ij}(\lambda_{x}\mathbf{I}\otimes\omega,\lambda_{y} \omega\otimes\mathbf{I}),\quad\mathbf{V}_{v}=(\mathbf{I}\otimes\mathbf{I}, \mathbf{I}\otimes\mathbf{I}), \tag{47b}\]
where \(\mathbf{U}_{v}\) and \(\mathbf{V}_{v}\) are matrices in \(\mathbb{R}^{N_{p}\times(2p+2)}\). Although the diagonal blocks \(\mathbf{L}_{2d}^{v}\) may be efficiently built from the proposed method (46) (i.e., the 1D operators in \(\mathbf{L}_{2d}^{0}\) plus a low-rank product) and then inverted with a direct solver, we propose below an alternative algorithm for their inversion that is found to be more efficient for polynomial degree up to \(p\leq 6\) (see B). Indeed, the matrix \(\mathbf{L}_{2d}^{0}\) in (46) is easily inverted from (47a) since \(\boldsymbol{\Psi}_{2d}+2d_{ij}\lambda\mathbf{I}\otimes\mathbf{I}\) is diagonal. Then, we invert \(\mathbf{L}_{2d}^{v}\) by using the Woodbury identity:
\[(\mathbf{L}_{2d}^{v})^{-1}\stackrel{{\eqref{eq:2d}}}{{=}} \left(\mathbf{I}\otimes\mathbf{I}-(\mathbf{L}_{2d}^{0})^{-1} \mathbf{U}_{v}\mathbf{V}_{v}^{\top}\right)^{-1}\!\!
## 5 Numerical experiments
In this section we present numerical experiments on problems in one and two space dimensions (sections 5.2 and 5.3) in order to illustrate the properties of the DGSEM considered in this work. The FCT limiter (39) is applied in the 2D experiments only. A maximum principle holds for the cell-averaged solution, \(m\leq\langle u_{h}^{(n+1)}\rangle\leq M\), in one space dimension and in two space dimensions with the FCT limiter. We then apply the linear scaling limiter from [53] described in section 5.1 to enforce a maximum principle on all the DOFs within the cells.
### Linear scaling limiter
Assuming \(\langle u^{(n+1)}\rangle_{\kappa}\in[m,M]\) in a cell \(\kappa\) (either \(\kappa_{i}\) in 1D, or \(\kappa_{ij}\) in 2D), Zhang and Shu [53] proposed to modify \(\mathbf{U}_{\kappa}^{(n+1)}\), the vector of DOFs in \(\kappa\), as follows:
\[\widetilde{\mathbf{U}}_{\kappa}^{(n+1)}=\theta_{\kappa}\mathbf{U}_{\kappa}^{( n+1)}+(1-\theta_{\kappa})\langle u_{h}^{(n+1)}\rangle_{\kappa}\mathbf{1},\quad \theta_{\kappa}=\min\left(\left|\frac{M-\langle u_{h}^{(n+1)}\rangle_{\kappa} }{\max\mathbf{U}_{\kappa}-\langle u_{h}^{(n+1)}\rangle_{\kappa}}\right|, \left|\frac{m-\langle u_{h}^{(n+1)}\rangle_{\kappa}}{\min\mathbf{U}_{\kappa}- \langle u_{h}^{(n+1)}\rangle_{\kappa}}\right|,1\right), \tag{48}\]
with \(\mathbf{1}=(1,1,\ldots,1)^{\top}\in\mathbb{R}^{(p+1)^{\ell}}\), and \(\max\mathbf{U}_{\kappa}\) (resp., \(\min\mathbf{U}_{\kappa}\)) is the maximum (resp., minimum) value of the DOFs in the vector \(\mathbf{U}_{\kappa}^{k,n+1}\). This limiter does not affect the high-order of accuracy for smooth solutions and does not change the cell average of the solution thus keeping the method conservative [53].
### One space dimension
#### 5.2.1 Space-time accuracy
First, the accuracy in time of the scheme can be checked with a smooth initial condition on a series of uniform grids with \(\lambda_{1\leq i\leq N_{x}}=\lambda\). Tab. 3 displays the error levels and associated numerical orders of convergence with and without the linear scaling limiter (48). An accuracy of order one in time is observed and the limiter does not affect the time accuracy of the method.
Then, the spatial accuracy is checked by looking for steady-state solutions of the following problem with a geometric source term and an inflow boundary condition:
\[\partial_{t}u+\partial_{x}u=2\pi\cos(2\pi x)\quad\text{in }[0,1]\times[0,T],\quad u(0, \cdot)=0\quad\text{in }[0,T], \tag{49}\]
whose exact solution reads \(u(x)=\sin(2\pi x)\). We take \(\lambda=1\), start from \(u_{0}(x)=0\), and march in time until \(\|u_{h}^{n+1}-u_{h}^{n}\|_{2}\leq 10^{-14}\). The \(p+1\) accuracy of DGSEM is observed in Tab. 4. As expected [53; 55], the limiter does not affect the accuracy of the method.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{no limiter} & \multicolumn{5}{c}{linear scaling limiter} \\ \cline{3-10} \(p\) & \(N_{x}\) & \(L^{2}\) error & \(\mathcal{O}_{2}\) & \(L^{\infty}\) error & \(\mathcal{O}_{\infty}\) & \(L^{2}\) error & \(\mathcal{O}_{2}\) & \(L^{\infty}\) error & \(\mathcal{O}_{\infty}\) \\ \hline \multirow{4}{*}{1} & 20 & 2.092E-2 & - & 4.071E-2 & - & 1.900E-2 & - & 4.071E-2 & - \\ & 40 & 5.239E-3 & 2.00 & 1.025E-2 & 1.99 & 4.997E-3 & 1.93 & 1.025E-2 & 1.99 \\ & 80 & 1.310E-3 & 2.00 & 2.569E-3 & 2.00 & 1.280E-3 & 1.96 & 2.569E-3 & 2.00 \\ & 160 & 3.276E-4 & 2.00 & 6.424E-4 & 2.00 & 3.239E-4 & 1.98 & 6.424E-4 & 2.00 \\ \cline{1-1} & 20 & 4.164E-4 & - & 1.274E-2 & - & 4.256E-4 & - & 1.274E-2 & - \\ \cline{1-1} & 40 & 5.210E-5 & 3.00 & 1.609E-4 & 2.99 & 5.226E-5 & 3.03 & 1.609E-4 & 2.99 \\ \cline{1-1} & 80 & 6.515E-6 & 3.00 & 2.017E-5 & 3.00 & 6.517E-6 & 3.00 & 2.017E-5 & 3.00 \\ \cline{1-1} & 160 & 8.144E-7 & 3.00 & 2.523E-6 & 3.00 & 8.144E-7 & 3.00 & 2.523E-6 & 3.00 \\ \cline{1-1} & 20 & 6.978E-6 & - & 2.669E-5 & - & 6.978E-6 & - & 2.669E-5 & - \\ \cline{1-1} & 40 & 4.365E-7 & 4.00 & 1.685E-6 & 3.99 & 4.365E-7 & 4.00 & 1.685E-6 & 3.99 \\ \cline{1-1} & 80 & 2.729E-8 & 4.00 & 1.056E-7 & 4.00 & 2.729E-8 & 4.00 & 1.056E-7 & 4.00 \\ \cline{1-1} & 160 & 1.706E-9 & 4.00 & 6.604E-9 & 4.00 & 1.706E-9 & 4.00 & 6.604E-9 & 4.00 \\ \cline{1-1} & 20 & 1.008E-07 & - & 4.493E-07 & - & 1.008E-07 & - & 4.493E-07 & - \\ \cline{1-1} & 40 & 3.153E-09 & 5.00 & 1.418E-08 & 4.99 & 3.153E-09 & 5.00 & 1.418E-08 & 4.99 \\ \cline{1-1} & 80 & 9.854E-11 & 5.00 & 4.444E-10 & 5.00 & 9.854E-11 & 5.00 & 4.444E-10 & 5.00 \\ \cline{1-1} & 160 & 3.080E-12 & 5.00 & 1.392E-11 & 5.00 & 3.080E-12 & 5.00 & 1.392E-11 & 5.00 \\ \cline{1-1} & 20 & 1.253E-09 & - & 6.274E-09 & - & 2.237E-09 & - & 1.249E-08 & - \\ \cline{1-1} & 40 & 1.959E-11 & 6.00 & 9.902E-11 & 5.99 & 2.851E-11 & 6.29 & 1.978E-10 & 5.98 \\ \cline{1-1} & 80 & 3.061E-13 & 6.00 & 1.554E-12 & 5.99 & 3.826E-13 & 6.22 & 3.111E-12 & 5.99 \\ \cline{1-1} & 160 & 1.182E-14 & 4.69 & 1.125E-13 & 3.79 & 1.334E-14 & 4.84 & 1.395E-13 & 4.48 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Steady-state problem (49): \(L^{k\in\{2,\infty\}}\) error levels \(\|u_{h}-u\|_{L^{2}(\Omega_{k})}\) and associated orders of convergence \(\mathcal{O}_{k}\) obtained with \(\lambda=1\) when refining the mesh. The linear scaling limiter (48) is applied or not.
#### 5.2.2 Maximum-principle preservation
We now compare experiments with the theoretical bounds on the time to space steps ratio \(\lambda\) indicated in Tab. 1 for the DGSEM scheme to be maximum-principle preserving. We use a discontinuous initial condition composed of a Gaussian, a square pulse, a sharp triangle and a combination of semi-ellipses [28] in the range \(0\leq x\leq 1\):
\[u_{0}(x)=\left\{\begin{array}{ll}\frac{1}{6}(G(x,\beta,z-\delta)+4G(x,\beta, z)+G(x,\beta,z+\delta))&0.04\leq x\leq 0.24,\\ 1&0.28\leq x\leq 0.48,\\ 1-10|x-0.62|&0.52\leq x\leq 0.72,\\ \frac{1}{6}(F(x,a,a-\delta)+4F(x,\alpha,a)+F(x,\alpha,a+\delta))&0.76\leq x\leq 0.96,\\ 0&\text{else}.\end{array}\right. \tag{50}\]
with \(G(x,\beta,z)=e^{-\beta(x-z)^{2}}\), \(F(x,\alpha,a)=\sqrt{\max(1-\alpha^{2}(x-a)^{2},0)}\), \(a=0.86\), \(z=0.14\), \(\delta=0.005\), \(\alpha=10\) and \(\beta=\frac{\log(2)}{36\delta^{2}}\).
Table 5 displays the minimum and the maximum values of the cell average solution of (17) after a short physical time for different approximation orders and different values of \(\lambda\). The results are in good agreement with the theoretical lower bounds in Tab. 1 and for \(p\geq 2\) the maximum principle is seen to be violated on at least one mesh for the lowest value \(\lambda=0.1\).
In Tab. 6 we experimentally evaluate the lower bound on \(\lambda\) to guarantee a maximum principle on the cell-averaged solution by using a bisection method from the same configuration as in Tab. 5. We observe that the theoretical lower bound on \(\lambda\) derived in theorem 3.3 and Tab. 1 is sharp and confirmed by the experimental observations, though it seems to slightly overestimate the experimental lower bound for \(p=3\) and \(p=5\). Let us recall that the condition \(\lambda>\lambda_{min}\) in Tab. 1 is a sufficient to obtain maximum principle preservation.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multirow{2}{*}{\(p\)} & \multirow{2}{*}{\(\lambda\)} & \multicolumn{2}{c}{\(N_{x}=100\)} & \multicolumn{2}{c}{\(N_{x}=101\)} \\ \cline{3-6} & & \(\min\limits_{1\leq i\leq N_{x}}\langle u_{h}\rangle_{i}\) & \(\max\limits_{1\leq i\leq N_{x}}\langle u_{h}\rangle_{i}\) & \(\min\limits_{1\leq i\leq N_{x}}\langle u_{h}\rangle_{i}\) & \(\max\limits_{1\leq i\leq N_{x}}\langle u_{h}\rangle_{i}\) \\ \hline & 0.1 & 0.0 & 1.0 & 0.0 & 1.0 \\
1 & 0.25 & 9.88E-9 & 1.0 & 3.68E-10 & 1.0 \\ & 0.5 & 6.14E-6 & 1.0 & 9.40E-7 & 1.0 \\ & 0.1 & -3.92E-3 & 1.0005 & -5.43E-3 & 1.005 \\
2 & 0.25 & 0.0 & 1.0 & 0.0 & 1.0 \\ & 0.5 & 5.35E-7 & 1.0 & 6.29E-8 & 1.0 \\ & 0.1 & 0.0 & 1.0 & -5.52E-3 & 1.006 \\
3 & 0.195137 & 0.0 & 1.0 & 0.0 & 1.0 \\ & 0.5 & 6.29E-7 & 1.0 & 8.91E-8 & 1.0 \\ & 0.1 & -4.00E-4 & 1.0002 & -3.58E-5 & 1.000007 \\
4 & 0.151 & 0.0 & 1.0 & 0.0 & 1.0 \\ & 0.5 & 1.32E-7 & 1.0 & 8.93E-8 & 1.0 \\ & 0.1 & 0.0 & 1.0 & -2.45E-7 & 1.0005 \\
5 & 0.147568 & 0.0 & 1.0 & 0.0 & 1.0 \\ & 0.5 & 9.92E-8 & 1.0 & 9.06E-8 & 1.0 \\ & 0.10 & -1.78E-05 & 1.000013 & -1.44E-4 & 1.000144 \\
6 & 0.109977 & 0.0 & 1.0 & 0.0 & 1.0 \\ & 0.5 & 0.0 & 1.0 & 0.0 & 1.0 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Linear scalar equation with a discontinuous initial condition (50): evaluation of the maximum principle for the cell-averaged solution as proved in theorem 3.3 and Tab. 1 after a short physical time \(t=0.01\). The solution should remain in the interval \([0,1]\). The linear scaling limiter (48) is always applied.
#### 5.2.3 Linear advection-reaction with source
We finally consider a linear advection-reaction problem with a geometric source term:
\[\partial_{t}u+c_{x}\partial_{x}u+\beta u=s(x)\quad\text{in}\ \Omega\times(0, \infty),\quad u(x,0)=u_{0}(x)\quad\text{in}\ \Omega. \tag{51}\]
with \(\beta\geq 0\) and \(s(\cdot)\geq 0\). Providing nonnegative initial and boundary data are imposed, the solution remains nonnegative for all time.
We here adapt the problem representative of the radiative transfer equations from (51, Ex. 6.2) with \(c_{x}=1\), \(\beta=6000\), \(s(x)=\beta(\frac{1}{9}\cos^{4}(2\pi x)+\epsilon)-\frac{4}{9}\cos^{3}(2\pi x) \sin(2\pi x)\) on \(\Omega=[0,3]\), \(\epsilon=10^{-14}\), and an inflow boundary condition \(u(0,t)=\frac{1}{9}+\epsilon\) on \(\Omega=[0,\pi]\). This problem has a steady-state smooth solution, but with low positive values and large oscillations: \(u(x)=\frac{1}{9}\cos^{4}(2\pi x)+\epsilon\geq\epsilon\) (see Fig. 3).
We again set \(\lambda=1\) and iterate up to convergence \(\|u_{h}^{n+1}-u_{h}^{n}\|_{2}\leq 10^{-14}\). Table 7 displays the error levels obtained for different approximation orders and mesh refinements when applying the scaling limiter (48) or not, together with the evaluation of the lowest value of the DGSEM solution. The limiter keeps the high-order accuracy of the DGSEM, while it successfully preserves positivity of the solution thus confirming that the DGSEM preserves positivity of the cell-averaged solution before the application of the limiter. We however observe a suboptimal \(p\)th order of accuracy with or without the limiter which was also reported in preceding experiments [7] and can be attributed to the low accuracy of the Gauss-Lobatto quadrature rules applied to the nonlinear geometric source term compared to other quadrature rules [9] (see also [7, Remark 3.1]).
### Two space dimensions
We now focus on numerical tests in two space dimensions in the unit square using a Cartesian mesh with \(N_{x}=N_{y}\) cells in the \(x\) and \(y\) directions respectively. For all the tests, we set \(c_{x}=c_{y}=1\). We here compare results obtained with the DGSEM scheme and without or with the FCT limiter:
\begin{table}
\begin{tabular}{l l l l l l l} \hline p & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \(\lambda^{exp}_{min}\) & 0 & 0.25 & 0.17 & 0.16 & 0.13 & 0.11 \\ \(\lambda_{min}\) & 0 & 0.25 & 0.195137 & 0.150346 & 0.147568 & 0.109977 \\ \hline \end{tabular}
\end{table}
Table 6: Experimental evaluation of the lower bounds of the time to space steps ratio \(\lambda^{exp}_{min}\) such that \(\lambda_{1\leq i\leq N_{x}}=\lambda\geq\lambda^{exp}_{min}\) ensures the maximum principle preservation for the cell-averaged solution in theorem 3.3 and Tab. 1, while it doesn’t for \(\lambda\leq\lambda^{exp}_{min}-10^{-2}\) on meshes with \(N_{x}=100\) and \(N_{x}=101\) elements. We report the theoretical values from Tab. 1 in the bottom line for the sake of comparison.
Figure 3: Advection-reaction equation with source (51): steady-state DGSEM solution for \(p=5\), \(N_{x}=10\) and the linear scaling limiter (48). The solution is plotted at quadrature points and \(T\) refers to the pseudo time required to converge the solution, i.e., \(\|u_{h}^{n+1}-u_{h}^{n}\|_{2}\leq 10^{-14}\).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline & & \multicolumn{3}{c}{no limiter} & \multicolumn{6}{c}{linear scaling limiter} \\ \cline{3-13} \(p\) & \(N_{x}\) & \(u_{u_{min}}\) & \(L^{2}\) error & \(\mathcal{O}_{2}\) & \(L^{\infty}\) error & \(\mathcal{O}_{\infty}\) & \(u_{h_{min}}\) & \(L^{2}\) error & \(\mathcal{O}_{2}\) & \(L^{\infty}\) error & \(\mathcal{O}_{\infty}\) \\ \hline & 10 & 9.39E-04 & 1.89E-04 & – & 1.70E-04 & – & 9.39E-04 & 1.96E-04 & – & 1.70E-04 & – \\ & 20 & -5.28E-05 & 1.50E-04 & 0.33 & 1.71E-04 & -0.01 & 9.99E-15 & 1.49E-04 & 0.39 & 1.71E-04 & -0.01 \\
1 & 40 & -1.04E-05 & 8.90E-05 & 0.76 & 1.02E-04 & 0.74 & 9.99E-15 & 8.42E-05 & 0.83 & 1.02E-04 & 0.74 \\ & 80 & -1.46E-06 & 4.63E-05 & 0.94 & 5.35E-05 & 0.94 & 1.00E-14 & 4.43E-05 & 0.93 & 5.35E-05 & 0.94 \\ & 160 & -3.01E-07 & 2.32E-05 & 1.00 & 2.70E-05 & 0.98 & 1.00E-14 & 2.26E-05 & 0.97 & 2.68E-05 & 1.00 \\ & 10 & -4.33E-08 & 1.35E-04 & – & 1.60E-04 & – & 9.99E-15 & 1.26E-04 & – & 1.60E-04 & – \\ & 20 & -3.17E-05 & 5.52E-05 & 1.29 & 7.38E-05 & 1.12 & 1.00E-14 & 6.78E-05 & 0.89 & 1.57E-04 & 0.03 \\
2 & 40 & -7.44E-06 & 1.58E-05 & 1.80 & 2.24E-05 & 1.72 & 1.00E-14 & 1.81E-05 & 1.91 & 4.35E-05 & 1.86 \\ & 80 & -1.05E-06 & 4.11E-06 & 1.95 & 5.90E-06 & 1.93 & 1.00E-14 & 4.21E-06 & 2.10 & 6.51E-06 & 2.74 \\ & 160 & -1.31E-07 & 1.03E-06 & 1.99 & 1.53E-06 & 1.94 & 1.00E-14 & 1.04E-06 & 2.02 & 1.53E-06 & 2.09 \\ & 10 & 1.62E-04 & 7.86E-05 & – & 1.27E-04 & – & 1.62E-04 & 8.90E-05 & – & 1.27E-04 & – \\ & 20 & -5.33E-06 & 1.60E-05 & 2.29 & 2.71E-05 & 2.22 & 9.99E-15 & 1.71E-05 & 2.38 & 3.66E-05 & 1.79 \\
3 & 40 & -1.46E-06 & 2.25E-06 & 2.83 & 3.84E-06 & 2.82 & 1.00E-14 & 2.71E-06 & 2.66 & 7.75E-06 & 2.24 \\ & 80 & -2.59E-07 & 2.86E-07 & 2.98 & 4.81E-07 & 3.00 & 1.00E-14 & 3.30E-07 & 3.04 & 1.07E-06 & 2.85 \\ & 160 & -3.36E-08 & 3.52E-08 & 3.02 & 6.22E-08 & 2.95 & 1.00E-14 & 3.81E-08 & 3.11 & 1.35E-07 & 2.99 \\ & 10 & -1.97E-05 & 3.90E-05 & – & 6.46E-05 & – & 9.99E-15 & 4.35E-05 & – & 6.54E-05 & – \\ & 20 & -5.26E-06 & 3.72E-06 & 3.39 & 6.56E-06 & 3.30 & 1.00E-14 & 5.68E-06 & 2.94 & 2.05E-05 & 1.67 \\
4 & 40 & -2.65E-07 & 2.58E-07 & 3.85 & 4.55E-07 & 3.85 & 1.00E-14 & 3.08E-07 & 4.21 & 1.33E-06 & 3.94 \\ & 80 & -8.99E-09 & 1.66E-08 & 3.96 & 3.11E-08 & 3.87 & 1.00E-14 & 1.71E-08 & 4.17 & 4.76E-08 & 4.81 \\ & 160 & -2.73E-10 & 1.04E-09 & 3.99 & 2.08E-09 & 3.90 & 1.00E-14 & 1.04E-09 & 4.03 & 2.08E-09 & 4.51 \\ & 10 & 1.95E-06 & 1.56E-05 & – & 2.70E-05 & – & 1.95E-06 & 1.92E-05 & – & 3.48E-05 & – \\ & 20 & -3.30E-07 & 7.12E-07 & 4.46 & 1.32E-06 & 4.35 & 9.99E-15 & 7.78E-07 & 4.63 & 1.32E-06 & 4.72 \\
5 & 40 & -2.62E-08 & 2.41E-08 & 4.88 & 4.58E-08 & 4.8 & 1.00E-14 & 3.05E-08 & 4.67 & 7.84E-08 & 4.08 \\ & 80 & -1.06E-09 & 7.54E-10 & 5.00 & 1.38E-09 & 5.04 & 1.00E-14 & 9.22E-10 & 5.05 & 3.10E-09 & 4.66 \\ & 160 & -3.14E-11 & 2.29E-11 & 5.04 & 4.63E-11 & 4.91 & 1.00E-14 & 2.56E-11 & 5.17 & 9.06E-11 & 5.10 \\ \hline \end{tabular}
\end{table}
Table 7: Advection-reaction problem with source (51): \(L^{k\in[2,\infty]}\) error levels \(\|u_{h}-u\|_{L^{2}(\Delta_{h})}\) and associated orders of convergence \(\mathcal{O}_{k}\) obtained with \(\lambda=1\) when refining the mesh. The solution should remain in the interval \([0,\frac{1}{9}]\). Minimum value of DOFs \(u_{u_{min}}=\min(U_{1\leq|\leq N_{x}}^{0\leq k\leq p})\). The linear scaling limiter (48) is applied or not.
**no limiter**: we solve (30) without graph viscosity for \(u_{h}^{(n+1)}\). We cannot apply the linear scaling limiter (48) since the \(\langle u_{h}^{(n+1)}\rangle_{ij}\) are not guaranteed to satisfy the maximum principle;
**FCT limiter**: we first solve (30) without graph viscosity for \(u_{h,HO}^{(n+1)}\) and check if the cell-averaged solution satisfies the maximum principle, and if so, we set \(u_{h}^{(n+1)}\equiv u_{h,HO}^{(n+1)}\). If not, we solve (34) with graph viscosity for \(u_{h,LO}^{(n+1)}\) and apply the FCT limiter (39) introduced in section 4.3. Finally, we apply the linear scaling limiter (48) after the FCT limiter to preserve a maximum principle on all the DOFs in \(u_{h}^{(n+1)}\).
#### 5.3.1 Maximum-principle preservation
We first evaluate both DGSEM schemes on an unsteady problem with a discontinuous initial condition, \(u_{0}(x,y)=1\) if \(|x-\frac{1}{4}|+|y-\frac{1}{4}|\leq 0.15\) and \(0\) else, and periodic boundary conditions. Table 8 gives the minimum and maximum values of the cell-averaged solution after one time step with \(1\leq p\leq 5\) and different values of \(\lambda_{x}=\lambda_{y}\). The maximum principle is not satisfied when using the DGSEM without limiter, except for \(p=1\) and the smallest time step. In particular, the maximum principle is violated even for large \(\lambda_{x}=\lambda_{y}\) in contrast to what is observed and proved in one space dimension. As expected, the FCT limiter successfully imposes a maximum principle on the cell-averaged solution, thus enabling a maximum principle through the use of the linear scaling limiter.
#### 5.3.2 Steady smooth solution
We now consider a smooth steady-state solution of the problem \(\partial_{x}u+\partial_{y}u=0\) in \(\Omega=[0,1]^{2}\) with inlet conditions \(u(x,0)=\sin(2\pi x)\), \(u(0,y)=-\sin(2\pi y)\) and outflow conditions at boundaries \(x=1\) and \(y=1\). The exact solution is \(u(x,y)=\sin(2\pi(x-y))\). In practice, we look for a steady solution to the unsteady problem (1). We take \(\lambda_{x}=\lambda_{y}=5\), start from \(u_{h}^{(0)}\equiv 0\) and march in time until \(\|u_{h}^{n+1}-u_{h}^{n}\|_{2}\leq 10^{-14}\) with the DGSEM scheme with FCT limiter. Error levels are summarized in Tab. 9 together with the minimum and maximum values of the cell-averaged solution. The FCT limiter keeps here the \(p+1\) high-order accuracy in space of the DGSEM while it successfully preserves the maximum principle on the cell-averaged solution, and hence also on the DOFs through the linear scaling limiter.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & no limiter & & FCT limiter & \\ \cline{3-6} p & \(\lambda\) & \(\min\limits_{1\leq i,j\leq 20}\langle u_{h}\rangle_{ij}\) & \(\max\limits_{1\leq i,j\leq 20}\langle u_{h}\rangle_{ij}\) & \(\min\limits_{1\leq i,j\leq 20}\langle u_{h}\rangle_{ij}\) & \(\max\limits_{1\leq i,j\leq 20}\langle u_{h}\rangle_{ij}\) \\ \hline & 0.05 & 0.0 & 1.0 & 0.0 & 1.0 \\
1 & 1 & -9.45E-3 & 0.90 & 9.59E-08 & 0.90 \\ & 5 & -9.45E-3 & 0.47 & 1.37E-02 & 0.49 \\ & 0.05 & -6.76E-4 & 1.0002 & 0.0 & 1.0 \\
2 & 1 & -6.76E-3 & 0.93 & 1.42E-07 & 0.90 \\ & 5 & -6.60E-3 & 0.44 & 2.01E-3 & 0.41 \\ & 0.05 & -4.85E-8 & 1.0 & 0.0 & 1.0 \\
3 & 1 & -2.98E-3 & 0.92 & 5.40E-07 & 0.91 \\ & 5 & -6.24E-4 & 0.44 & 3.22E-03 & 0.38 \\ & 0.05 & -1.41E-4 & 1.0007 & 0.0 & 1.0 \\
4 & 1 & -1.33E-4 & 0.92 & 5.97E-07 & 0.92 \\ & 5 & -6.09E-5 & 0.44 & 3.25E-03 & 0.38 \\ & 0.05 & -1.31E-3 & 1.0 & 0.0 & 1.0 \\
5 & 1 & -8.80E-5 & 0.92 & 6.45E-07 & 0.92 \\ & 5 & -1.10E-4 & 0.44 & 3.33E-03 & 0.38 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Verification of the maximum principle for problem (28) after one time step on a mesh with \(N_{x}\times N_{y}=20\times 20\) elements, \(\lambda_{x_{i}}=\lambda_{y_{j}}=\lambda\), and the discontinuous initial condition \(u_{0}(x,y)=|_{x-\frac{1}{4}|y-\frac{1}{4}|0.15}\). The solution should remain in the interval \([0,1]\).
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \(p\) & \(N_{x}=N_{y}\) & \(\langle u_{h}\rangle_{\min}\) & \(\langle u_{h}\rangle_{\max}\) & \(L^{2}\) error & \(\mathcal{O}_{2}\) & \(L^{\infty}\) error & \(\mathcal{O}_{\infty}\) \\ \hline \multirow{3}{*}{1} & 5 & -0.7803 & 0.7803 & 3.260\(E-01\) & – & 6.805E-01 & – \\ & 10 & -0.9313 & 0.9313 & 9.840E-02 & 1.73 & 2.779E-01 & 1.29 \\ & 20 & -0.9955 & 0.9955 & 2.431E-02 & 2.02 & 6.341E-02 & 2.13 \\ & 40 & -0.9991 & 0.9991 & 6.589E-03 & 1.88 & 1.789E-02 & 1.83 \\ \multirow{3}{*}{2} & 5 & -0.8293 & 0.8293 & 3.808E-02 & – & 1.610E-01 & – \\ & 10 & -0.9200 & 0.9200 & 4.770E-03 & 3.00 & 1.348E-02 & 3.58 \\ & 20 & -0.9917 & 0.9917 & 6.038E-04 & 2.98 & 2.354E-03 & 2.52 \\ & 40 & -0.9979 & 0.9979 & 7.377E-05 & 3.03 & 2.084E-04 & 3.50 \\ \multirow{3}{*}{3} & 5 & -0.8322 & 0.8322 & 2.511E-03 & – & 8.746E-03 & – \\ & 10 & -0.9201 & 0.9201 & 1.569E-04 & 4.00 & 7.599E-04 & 3.52 \\ & 20 & -0.9918 & 0.9918 & 1.074E-05 & 3.87 & 7.432E-05 & 3.35 \\ & 40 & -0.9979 & 0.9979 & 6.457E-07 & 4.06 & 4.724E-06 & 3.98 \\ \multirow{3}{*}{4} & 5 & -0.8323 & 0.8323 & 1.430E-04 & – & 6.283E-04 & – \\ & 10 & -0.9201 & 0.9201 & 4.545E-06 & 4.98 & 1.880E-05 & 5.06 \\ & 20 & -0.9918 & 0.9918 & 1.431E-07 & 4.99 & 6.162E-07 & 4.93 \\ & 40 & -0.9979 & 0.9979 & 4.461E-09 & 5.00 & 1.950E-08 & 4.98 \\ \multirow{3}{*}{5} & 5 & -0.8323 & 0.8323 & 7.131E-06 & – & 3.774E-05 & – \\ & 10 & -0.9201 & 0.9201 & 1.131E-07 & 5.98 & 7.490E-07 & 5.65 \\ & 20 & -0.9918 & 0.9918 & 4.074E-09 & 4.80 & 6.652E-08 & 3.49 \\ \multirow{3}{*}{5} & 40 & -0.9979 & 0.9979 & 4.789E-11 & 6.41 & 1.058E-09 & 5.97 \\ \hline \end{tabular}
\end{table}
Table 9: Smooth steady-state problem: \(L^{k[2,\infty]}\) error levels \(\|u_{h}-u\|_{L^{k}\langle\Lambda_{y}\rangle}\) and associated orders of convergence \(\mathcal{O}_{k}\) for problem \(\partial_{x}u+\partial_{y}u=0\) with data \(u(x,0)=\sin(2\pi x)\), \(u(0,y)=-\sin(2\pi y)\) obtained with \(\lambda_{x}=\lambda_{y}=\bar{3}\) when refining the mesh and using the FCT limiter. The solution should remain in the interval \([-1,1]\). Minimum and maximum values of the cell-averaged solution over the mesh: \(\langle u_{h}\rangle_{\min/\max}=\min/\max(\langle u_{h}\rangle_{ij}:1\leq i \leq N_{x},1\leq j\leq N_{y})\).
#### 5.3.3 Steady discontinuous solution
We now consider a discontinuous steady solution and consider \(\partial_{x}u+\partial_{y}u=0\) in \(\Omega=[0,1]^{2}\), inlet conditions \(u(x,0)=\cos(\pi x)\), \(u(0,y)=-\cos(\pi y)\) and outflow conditions at boundaries \(x=1\) and \(y=1\). The exact solution is \(u(x,y)=\mathrm{sgn}(x-y)\cos(\pi(x-y))\), with \(\mathrm{sgn}\) the sign function, and is therefore discontinuous at \(x=y\). Results are reported in Tab. 10 and Fig. 4. Here again, the FCT limiter is required to guarantee the maximum principle. In particular, the DGSEM without limiter violates the maximum principle for the cell-averaged solution which prevents the use of the linear scaling limiter.
#### 5.3.4 Linear advection-reaction with source
We finally consider a linear advection-reaction problem with a geometric source term:
\[\partial_{x}u+\partial_{y}u+\beta u=s(x,y)\quad\text{in}\ \Omega,\quad u(x,0)=u_{0}(x) \quad\text{in}\ \Omega. \tag{52}\]
with \(\beta\geq 0\), \(s(\cdot)\geq 0\), and nonnegative inflow boundary data. We adapt the problem from section 5.2.3 and [51] to two space dimensions and set \(\beta=6000\) and a source term \(s(x,y)\) such that the solution is \(u(x,y)=\frac{1}{9}\cos(3\pi x)^{4}cos(3\pi y)^{4}\) (see Fig. 5). Inflow boundary conditions, \(u(x,0)=\frac{1}{9}\cos(3\pi x)^{4}\) and \(u(0,y)=\frac{1}{9}\cos(3\pi y)^{4}\), are applied to \(x=0\) and \(y=0\), while outflow conditions are imposed at \(x=1\) and \(y=1\).
Tab. 11 displays the error levels together with minimum and maximum values of the DOFs obtained without or with the FCT limiter, different approximation orders and different mesh refinements. We again use \(\lambda_{x}=\lambda_{y}=5\), start from \(u_{h}^{(0)}\equiv 0\) and march in time until \(\|u_{h}^{n+1}-u_{h}^{n}\|_{2}\leq 10^{-14}\). As in the 1D case in section 5.2.3, we observe a suboptimal convergence order of \(p\) as the mesh is refined due to the insufficient accuracy of the Gauss-Lobatto quadrature rules for integrating the highly nonlinear geometric source terms. Using the limiter or not leads to comparable error levels, while the FCT limiter is necessary for the solution to satisfy the maximum principle.
## 6 Concluding remarks
This work provides an analysis of the high-order DGSEM discretization with implicit backward-Euler time stepping for the approximation of hyperbolic linear scalar conservation equations in multiple space dimensions. Two main aspects are considered here. We first investigate the maximum principle preservation of the scheme. For the 1D scheme, we prove that the DGSEM preserves the maximum principle of the cell-averaged solution providing that the CFL number is larger than a lower bound. This result allows to use linear scaling limiters [53; 55] to impose all the DOFs to satisfy the maximum principle. This property however does not hold in general in multiple space dimensions and we propose to use the FCT limiter [19; 5; 52] to enforce the maximum principle on the cell-averaged
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \multicolumn{3}{c}{no limiter} & \multicolumn{6}{c}{FCT limiter} \\ \cline{3-10} \(N\) & \(p\) & \(\langle u_{h}\rangle_{\min}\) & \(\langle u_{h}\rangle_{\max}\) & \(u_{h_{\max}}\) & \(\langle u_{h}\rangle_{\min}\) & \(\langle u_{h}\rangle_{\max}\) & \(u_{h_{\min}}\) & \(u_{h_{\max}}\) & \(u_{h_{\max}}\) & \(u_{h_{\max}}\) \\ \hline & 1 & -0.7518 & 0.7518 & -1.1363 & 1.1363 & -0.7512 & 0.7512 & -1.0000 & 1.0000 \\ & 2 & -0.7820 & 0.7820 & -1.2634 & 1.2634 & -0.7820 & 0.7820 & -1.0000 & 1.0000 \\
5 & 3 & -0.7972 & 0.7972 & -1.3364 & 1.3364 & -0.7827 & 0.7827 & -1.0000 & 1.0000 \\ & 4 & -0.7832 & 0.7832 & -1.3633 & 1.3633 & -0.7828 & 0.7828 & -1.0000 & 1.0000 \\ & 5 & -0.7828 & 0.7828 & -1.3764 & 1.3764 & -0.7828 & 0.7828 & -1.0000 & 1.0000 \\ & 1 & -1.0121 & 1.0121 & -1.2437 & 1.2437 & -0.9967 & 0.9967 & -1.0000 & 1.0000 \\ & 2 & -1.0465 & 1.0465 & -1.2843 & 1.2843 & -0.9781 & 0.9781 & -1.0000 & 1.0000 \\
20 & 3 & -1.0042 & 1.0042 & -1.3438 & 1.3438 & -0.9857 & 0.9857 & -1.0000 & 1.0000 \\ & 4 & -0.9937 & 0.9937 & -1.3667 & 1.3667 & -0.9857 & 0.9857 & -1.0000 & 1.0000 \\ & 5 & -0.9857 & 0.9857 & -1.3781 & 1.3781 & -0.9857 & 0.9857 & -1.0000 & 1.0000 \\ \hline \end{tabular}
\end{table}
Table 10: Discontinuous steady-state problem: verification of the maximum principle for problem \(\partial_{x}u+\partial_{y}u=0\) with data \(u(x,0)=\cos(\pi x)\), \(u(0,y)=-\cos(\pi y)\) obtained with \(\lambda_{x}=\lambda_{y}=5\) without and with the FCT limiter, and with \(N_{x}=N_{y}=N\). The solution should remain in the interval \([-1,1]\). Minimum and maximum values of the cell-averaged solution and DOFs over the mesh: \(\langle u_{h}\rangle_{\min/\max}=\min/\max(\langle u_{h}\rangle_{ij}:\ 1\leq i \leq N_{x},1\leq j\leq N_{y})\) and \(u_{h_{\min/\max}}=\min/\max(U_{1\leq j,j\leq N}^{(0)\leq j\leq p})\).
Figure 4: Discontinuous steady-state problem: DGSEM solutions for a discontinuous steady-state problem \(\partial_{x}u+\partial_{y}u=0\) with data \(u(x,0)=\cos(\pi x)\), \(u(0,y)=-\cos(\pi y)\) obtained with \(\lambda_{x}=\lambda_{y}=5\), \(N_{x}=N_{y}=20\), without and with the FCT limiter. The solution is plotted at quadrature points and \(T\) refers to the pseudo time required to converge the solution, i.e., \(\|u_{h}^{w+1}-u_{h}^{n}\|_{2}\leq 10^{-14}\).
\begin{table}
\begin{tabular}{c
solution. The FCT limiter combines the DGSEM scheme with a low-order maximum-principle preserving scheme derived by adding graph viscosity to the DGSEM scheme. The linear scaling limiter is then used to impose the maximum principle to all the DOFs. Numerical experiments in one and two space dimensions are provided to illustrate the conclusions of the present analyses. Then, we investigate the inversion of the linear systems resulting from the time implicit discretization at each time step. We prove that the diagonal blocks are invertible and provide efficient algorithms for their inversion. Future work will concern the extension of this analysis to nonlinear hyperbolic scalar equations and systems of conservation laws on unstructured grids. Another direction of research may consist in using the fast inversion algorithms introduced in this work for solving preconditionning steps based on tensor product of 1D building blocks in block-preconditionned iterative solvers.
## Appendix A Multidimensional discrete difference matrix
The 2D discrete difference matrix reads
\[\mathbf{D}_{2d}^{\top}=\lambda_{x}\mathbf{I}\otimes\mathbf{D}^{\top}+\lambda_ {y_{j}}\mathbf{D}^{\top}\otimes\mathbf{I},\]
and is also nilpotent:
\[\mathbf{D}_{2d}^{2p+1}=0.\]
Indeed, using properties (31a) we have:
\[\mathbf{D}_{2d}^{2p+1} =(\lambda_{x}\mathbf{I}\otimes\mathbf{D}+\lambda_{y_{j}}\mathbf{D }\otimes\mathbf{I})^{2p+1}\] \[=\sum_{k=0}^{2p+1}\binom{2p+1}{k}k^{k}_{x_{i}}(\mathbf{I}\otimes \mathbf{D})^{k}\lambda_{y_{j}}^{2p+1-k}(\mathbf{D}\otimes\mathbf{I})^{2p+1-k}\] \[=\sum_{k=0}^{2p+1}\binom{2p+1}{k}k^{k}_{x_{i}}\lambda_{y_{j}}^{2p +1-k}(\mathbf{I}\otimes\mathbf{D}^{k})(\mathbf{D}^{2p+1-k}\otimes\mathbf{I})\] \[=\sum_{k=0}^{2p+1}\binom{2p+1}{k}\lambda_{x_{i}}^{k}\lambda_{y_{j} }^{2p+1-k}\mathbf{D}^{2p+1-k}\otimes\mathbf{D}^{k}.\]
For all \(0\leq k\leq 2p+1\), either \(2p+1-k\), or \(k\) is greater or equal to \(p+1\). Hence, nilpotency of \(\mathbf{D}\) (10) gives the desired result. The matrix is therefore easily invertible:
\[\left(\mathbf{I}-\mathbf{D}_{2d}\right)^{-1}=\sum_{k=0}^{2p}\mathbf{D}_{2d}^{k }=\sum_{k=0}^{2p}\sum_{l=0}^{k}\binom{k}{l}l^{l}_{x_{i}}\lambda_{y_{j}}^{k-l} \mathbf{D}^{l}\otimes\mathbf{D}^{k-l}.\]
Figure 5: 2D advection-reaction with source: steady-state DGSEM solution for problem (52) with \(p=5\), \(N_{x}=N_{y}=10\) and the FCT limiter. The solution is plotted at quadrature points and \(T\) refers to the pseudo time required to converge the solution, i.e., \(\|u_{h}^{n+1}-u_{h}^{n}\|_{2}\leq 10^{-14}\).
## Appendix B Inversion of diagonal blocks
The linear systems associated to the DGSEM discretization of problem (1) with an implicit time stepping have a sparse pattern with dense diagonal blocks of large size. One can take advantage of this structure in order to significantly speed up the resolution of such systems with respect to standard inversion algorithms. To support these claims, we implemented the proposed methods and compared them with standard ones. The code is freely available online [1]. python being our language of choice, we used as reference the linear algebra tools of the popular computational library numpy[23]. In what follows, we recall the main equations and operations involved in the inversion of the DGSEM systems.
### Reminder of main definitions and equations
At the very core of the DGSEM discretization there is the derivative matrix (8):
\[D_{kl}=\ell_{l}^{\prime}(\xi_{k}),\quad 0\leq k,l\leq p, \tag{109}\]
where \(\ell_{l}(x)\) are the 1D Lagrange interpolation polynomials (5) and \(\xi_{k}\) the nodes of the Gauss-Lobatto quadrature rule.
Once the time discretization has been taken into account as well, following section 3, one recover a sparse linear system whose diagonal blocks read \(\mathbf{L}_{1d}\mathbf{M}\), where, given the quadrature weights (\(\omega_{k}\))\({}_{0\leq k\leq p}\),
\[\mathbf{M}\coloneqq\tfrac{1}{2}\operatorname{diag}(\omega_{0},\dots,\omega_{p}) \tag{110}\]
denotes the 1D mass matrix and \(\mathbf{L}_{1d}\) is given by (27):
\[\mathbf{L}_{1d} \coloneqq\mathbf{I}-2\lambda\mathcal{L}, \tag{111a}\] \[\mathcal{L} \coloneqq\mathbf{D}^{\top}-\frac{1}{\omega_{p}}\mathbf{e}_{p} \mathbf{e}_{p}^{\top}. \tag{111b}\]
As discussed in section 4.4.1, the matrix \(\mathcal{L}\) can be diagonalized (41):
\[\mathcal{L}=\mathbf{R}\mathbf{V}\mathbf{R}^{-1}, \tag{112}\]
and an explicit formula for the eigenpairs is available (43):
\[\omega_{p}\psi^{p+1}+\sum_{l=0}^{p}\psi^{p-l}D_{pp}^{(l)} =0, \tag{113a}\] \[r_{k} =\ -\frac{1}{\omega_{p}}\sum_{l=0}^{p}\psi^{-l-1}D_{pk}^{(l)} \quad\forall 0\leq k\leq p-1,\quad r_{p}=1. \tag{113b}\]
The system coming from the high-order 2D discretization has diagonal blocks which are easily inverted from the quantities discussed just above. Indeed, system
\[\mathbf{L}_{2d}(\mathbf{M}\otimes\mathbf{M})\mathbf{x}=\mathbf{b}, \tag{114}\]
with (see (44)):
\[\mathbf{L}_{2d}\coloneqq \frac{\lambda_{x}}{\lambda}\mathbf{I}\otimes\mathbf{L}_{1d}+ \frac{\lambda_{y_{j}}}{\lambda}\mathbf{L}_{1d}\otimes\mathbf{I}=(\mathbf{R} \otimes\mathbf{R})\boldsymbol{\Psi}_{2d}(\mathbf{R}\otimes\mathbf{R})^{-1}, \tag{115a}\] \[\boldsymbol{\Psi}_{2d}= \frac{\lambda_{x}}{\lambda}\mathbf{I}\otimes\boldsymbol{\Psi}_{ \lambda}+\frac{\lambda_{y_{j}}}{\lambda}\boldsymbol{\Psi}_{\lambda}\otimes \mathbf{I}, \tag{115b}\]
can be solved following (45):
\[(\mathbf{M}\otimes\mathbf{M})^{-1}\mathbf{L}_{2d}^{-1} =\left((\mathbf{M}^{-1}\mathbf{R})\otimes(\mathbf{M}^{-1}\mathbf{ R})\right)\boldsymbol{\Psi}_{2d}^{-1}\left(\mathbf{R}\otimes\mathbf{R} \right)^{-1}, \tag{116a}\] \[\boldsymbol{\Psi}_{2d}^{-1} =\operatorname{diag}\left(\frac{1}{1-2(\lambda_{x}\psi_{k}+ \lambda_{y_{j}}\psi_{l})}:\ 1\leq n_{kl}=1+k+l(p+1)\leq N_{p}\right). \tag{116b}\]
Whenever the graph viscosity is considered in the 2D problem, the diagonal blocks are modified and their closed form reads (see (46) and (47)):
\[\mathbf{L}_{2d}^{v} =\mathbf{L}_{2d}^{0}-\mathbf{U}_{v}\mathbf{V}_{v}^{\top}, \tag{111a}\] \[\mathbf{L}_{2d}^{0} =\mathbf{L}_{2d}+2d_{ij}\lambda\mathbf{I}\otimes\mathbf{I}=( \mathbf{R}\otimes\mathbf{R})(\boldsymbol{\Psi}_{2d}+2d_{ij}\lambda\mathbf{I} \otimes\mathbf{I})(\mathbf{R}\otimes\mathbf{R})^{-1},\] (111b) \[\mathbf{U}_{v} =2d_{ij}(\lambda_{x}\mathbf{I}\otimes\boldsymbol{\omega},\lambda _{y},\boldsymbol{\omega}\otimes\mathbf{I}),\] (111c) \[\mathbf{V}_{v} =(\mathbf{I}\otimes\mathbf{I},\mathbf{I}\otimes\mathbf{I}). \tag{111d}\]
Hinging on this and applying the Woodbury identity, an efficient way to solve
\[\mathbf{L}_{2d}^{v}(\mathbf{M}\otimes\mathbf{M})\mathbf{x}=\mathbf{b} \tag{112}\]
has been proposed in algorithm 2:
1. Solve \(\mathbf{L}_{2d}^{0}\mathbf{y}=\mathbf{b}\), which gives: \[\mathbf{y}=(\mathbf{R}\otimes\mathbf{R})\operatorname{diag}\left(\frac{1}{1+2 \lambda d_{ij}-2(\lambda_{x_{i}}\psi_{k}+\lambda_{y_{j}}\psi_{l})}:\ 1\leq n_{kl}=1+k+l(p+1)\leq N_{p}\right)(\mathbf{R}^{-1} \otimes\mathbf{R}^{-1})\mathbf{b};\] (113a)
2. Solve \(\mathbf{L}_{2d}^{0}\mathbf{Z}=\mathbf{U}_{v}\), which gives: \[\mathbf{Z}=2d_{ij}(\mathbf{R}\otimes\mathbf{R})\operatorname{diag}\left(\frac{ 1}{1+2\lambda d_{ij}-2(\lambda_{x_{i}}\psi_{k}+\lambda_{y_{j}}\psi_{l})}:1 \leq n_{kl}\leq N_{p}\right)(\lambda_{x_{i}}\mathbf{R}^{-1}\otimes(\mathbf{R }^{-1}\boldsymbol{\omega}),\lambda_{y_{j}}(\mathbf{R}^{-1}\boldsymbol{\omega}) \otimes\mathbf{R}^{-1});\] (113b)
3. Solve \[(\mathbf{I}_{2p+2}-\mathbf{V}_{v}^{\top}\mathbf{Z})\mathbf{z}=\mathbf{V}_{v}^ {\top}\mathbf{y};\] (113c)
4. Finally, set \[\mathbf{x}=(\mathbf{M}^{-1}\otimes\mathbf{M}^{-1})(\mathbf{y}+\mathbf{Z} \mathbf{z}).\] (113d)
It is important to notice that matrices \(\mathcal{L}\), \(\mathbf{R}\), \(\mathbf{M}\), \(\boldsymbol{\Psi}\), \(\boldsymbol{\Psi}_{2d}\), and related matrices (e.g., \(\mathbf{R}^{-1}\), \(\mathbf{M}^{-1}\mathbf{R}\),...) depend only on the approximation order of the scheme \(p\), not on the \(\lambda_{x_{i}}\) and \(\lambda_{y_{j}}\), so that they may be computed only once at the beginning of the computation.
### Remarks about the GitHub repository
Repository [1] contains a python module, fast_DGSEM_block_inversion.py which implements (109) to (113) and, more importantly, assess (110) (respectively, (113) and (114)) by comparing it in terms of exactness to the result of (111) (resp., (112) and (113)) obtained with reference algebraic tools (mainly, numpy.linalg.inv).
Several other optimizations, especially regarding operations involving diagonal matrices have been considered, and, for the sake of fairness, used both in the proposed and standard ways of solving (111), (111), and (112).
The performances of such resolution methods can be computed and (visually) analyzed thanks to the notebook test_fast_dgsem.ipynb. Indeed, we give in Fig. 11 the results obtained with this notebook on a personal machine with 8 Intel Xeon(R) W-2223 CPUs and 16Gb RAM. Performance analysis has been evaluated thanks to built-in python module timeit; statistical data has been computed over 20 runs, each calling the procedure under evaluation more than 1000 times. Even though such performance measures might vary from one machine to the other, and, also, from one run to the other, we can reliably say that inversion strategies (110) and (113) show consistent and often significant performance gains with respect to their dense counterparts. Admittedly, the gains are less noticeable for (112) whenever high orders are used, see right part of Fig. 11 (indeed, simple computations show that (113), more precisely, the matrix products involved there, and the dense resolution of (112) have similar algorithmic complexity). Nonetheless, since the graph viscosity is not always necessary (see section 4.3), the overall performances of the two-staged FCT limiter highly benefits from the proposed inversion strategies. All in all, to tackle the problem with graph viscosity (112), we advise to prefer procedure (113) over a dense solve whenever the polynomial order is moderate.
Finally, the testing framework test_fast_dgsem.py allows one to check in a compact way the exactness of proposed formulae (114), (110), and (113) and other matrix-related optimizations for several settings at once.
## Appendix C The 3D DGSEM scheme
We here give details and the main properties of the time implicit DGSEM scheme for the approximation of (1) with flux \(\mathbf{f}(\mathbf{u})=u(c_{x},c_{y},c_{z})^{\top}\) and nonnegative \(c_{x},c_{y}\), and \(c_{z}\). We consider a Cartesian mesh with elements with measure \(|\kappa_{ijk}|=\Delta x_{i}\times\Delta y_{j}\times\Delta z_{k}\) and set \(\lambda_{x_{i}}=\frac{c_{x}\Delta t}{\Delta x_{i}}\), \(\lambda_{y_{j}}=\frac{c_{x}\Delta t}{\Delta y_{j}}\), \(\lambda_{z_{k}}=\frac{c_{z}\Delta t}{\Delta z_{k}}\).
### High-order and low-order schemes
Using a vector storage of the DOFs as \((\mathbf{U}_{ijk})_{n_{lmr}}=U_{ijk}^{lmr}\) with \(1\leq n_{lmr}\coloneqq 1+l+m(p+1)+r(p+1)^{2}\leq N_{p}\) and \(N_{p}=(p+1)^{3}\), the discrete scheme with graph viscosity under vector form reads
\[\begin{split}(\mathbf{M}\otimes\mathbf{M}\otimes\mathbf{M})( \mathbf{U}_{ijk}^{n+1}-\mathbf{U}_{ijk}^{n})&-\lambda_{x_{i}} \big{(}\mathbf{M}\otimes\mathbf{M}\otimes(2\mathbf{D}^{\top}\mathbf{M}- \mathbf{e}_{p}\mathbf{e}_{p}^{\top})\big{)}\mathbf{U}_{ijk}^{n+1}-\lambda_{x _{i}}(\mathbf{M}\otimes\mathbf{M}\otimes\mathbf{e}_{0}\mathbf{e}_{p}^{\top}) \mathbf{U}_{(i-1)jk}^{n+1}\\ &-\lambda_{y_{j}}\big{(}\mathbf{M}\otimes(2\mathbf{D}^{\top} \mathbf{M}-\mathbf{e}_{p}\mathbf{e}_{p}^{\top})\otimes\mathbf{M}\big{)} \mathbf{U}_{ijk}^{n+1}-\lambda_{y_{j}}(\mathbf{M}\otimes\mathbf{e}_{0}\mathbf{ e}_{p}^{\top}\otimes\mathbf{M})\mathbf{U}_{i(j-1)k}^{n+1}\\ &-\lambda_{z_{k}}\big{(}(2\mathbf{D}^{\top}\mathbf{M}-\mathbf{e} _{p}\mathbf{e}_{p}^{\top})\otimes\mathbf{M}\otimes\mathbf{M}\big{)}\mathbf{U} _{ijk}^{n+1}-\lambda_{z_{k}}(\mathbf{e}_{0}\mathbf{e}_{p}^{\top}\otimes \mathbf{M}\otimes\mathbf{M})\mathbf{U}_{ijk(k-1)}^{n+1}+\mathbf{V}_{ijk}^{(n+1) }=0,\end{split} \tag{100}\]
where \(\mathbf{M}=\frac{1}{2}\text{diag}(\omega_{0},\ldots,\omega_{p})\) denotes the 1D mass matrix, \((\mathbf{e}_{k})_{0\leq k\leq p}\) is the canonical basis of \(\mathbb{R}^{p+1}\). The graph viscosity is defined by
\[\mathbf{V}_{ijk}^{(n+1)}=2d_{ijk}\big{(}\lambda_{x_{i}}\mathbf{M}\otimes \mathbf{M}\otimes(\mathbf{M}-\boldsymbol{\omega}\mathbf{1}^{\top}\mathbf{M})+ \lambda_{y_{j}}\mathbf{M}\otimes(\mathbf{M}-\boldsymbol{\omega}\mathbf{1}^{ \top}\mathbf{M})\otimes\mathbf{M}+\lambda_{z_{k}}(\mathbf{M}-\boldsymbol{ \omega}\mathbf{1}^{\top}\mathbf{M})\otimes\mathbf{M}\otimes\mathbf{M}\big{)} \mathbf{U}_{ij}^{(n+1)} \tag{101}\]
with \(d_{ijk}\geq 0\), \(\boldsymbol{\omega}=\frac{1}{2}(\omega_{0},\ldots,\omega_{p})^{\top}\) and \(\mathbf{1}=(1,\ldots,1)^{\top}\in\mathbb{R}^{p+1}\). Scheme (100) with \(d_{ijk}=0\) (resp., \(d_{ijk}>0\)) denotes the high-order (resp., low-order) scheme. The linear system (100) is maximum principle preserving under the same condition (37) as in 2D: \(d_{ijk}\geq 2\max_{0\leq k\neq m\leq p}\big{(}-\frac{D_{lmr}}{\omega_{k}}\big{)}\).
The 3D discrete difference matrix reads
\[\mathbf{D}_{3d}^{\top}=\lambda_{x_{i}}\mathbf{I}\otimes\mathbf{I}\otimes \mathbf{D}^{\top}+\lambda_{y_{j}}\mathbf{I}\otimes\mathbf{D}^{\top}\otimes \mathbf{I}+\lambda_{z_{k}}\mathbf{D}^{\top}\otimes\mathbf{I}\otimes\mathbf{I},\]
and is also nilpotent: \(\mathbf{D}_{3d}^{3p+1}=0\).
Figure 6: Left: performance speed-up over polynomial degree for solving (100) with proposed procedure (110) with respect to standard algebraic tools (numpy). Right: similar to above, but for (110) and (111).
### FCT limiter
By \(u_{h,HO}^{(n+1)}\) we denote the high-order solution to (C.1) with \(d_{ijk}=0\) and by \(u_{h,LO}^{(n+1)}\) we denote the high-order solution to (C.1) with \(d_{ijk}=2\max_{0\leq k\neq n\leq p}\big{(}-\frac{D_{\text{m}i}}{\omega_{n}} \big{)}\). Applying the FCT limiter introduced in section 4.3, the limited DOFs are evaluated explicitly from \(u_{h,LO}^{(n+1)}\) and \(u_{h,HO}^{(n+1)}\) through
\[U_{ijk}^{lmr,n+1}-U_{ijk,HO}^{lmr,n+1} =\delta_{lp}\frac{2\lambda_{x_{i}}}{\omega_{p}}\big{(}1-\dot{l}_{ ijk}^{(i+1)jk}\big{)}\big{(}U_{ijk,HO}^{pm,n+1}-U_{ijk,LO}^{pmr,n+1}\big{)}- \delta_{0l}\frac{2\lambda_{x_{i}}}{\omega_{0}}\big{(}1-\dot{l}_{ijk}^{(i-1)jk} \big{)}\big{(}U_{(i-1)jk,HO}^{pmr,n+1}-U_{(i-1)jk,LO}^{pmr,n+1}\big{)}\] \[+\delta_{mp}\frac{2\lambda_{y_{j}}}{\omega_{p}}\big{(}1-\dot{l}_ {ijk}^{(i+1)k}\big{)}\big{(}U_{ijk,HO}^{lpr,n+1}-U_{ijk,LO}^{lpr,n+1}\big{)}- \delta_{m0}\frac{2\lambda_{y_{j}}}{\omega_{0}}\big{(}1-\dot{l}_{ijk}^{(i-1)k} \big{)}\big{(}U_{i(j-1)k,HO}^{lr,n+1}-U_{i(j-1)k,LO}^{lpr,n+1}\big{)}\] \[+\delta_{rp}\frac{2\lambda_{z_{i}}}{\omega_{p}}\big{(}1-\dot{l}_ {ijk}^{(i+1)k}\big{)}\big{(}U_{ijk,HO}^{lmr,n+1}-U_{ijk,LO}^{lmr,n+1}\big{)}- \delta_{r0}\frac{2\lambda_{z_{i}}}{\omega_{0}}\big{(}1-\dot{l}_{ijk}^{(i-1)} \big{)}\big{(}U_{ijk(i-1),HO}^{lmr,n+1}-U_{ijk(j-1),LO}^{lmr,n+1}\big{)},\]
where
\[l_{ijk}^{lmr} =\] \[l_{ijk}^{*} =\]
\[P_{ijk}^{-}=\sum_{(l,m,r)\in S(i,j,k)}\min\big{(}A_{ijk}^{lmr},0),\quad Q_{ ijk}^{-}=m-\langle u_{LO}^{(n+1)}\rangle_{ijk},\quad P_{ijk}^{+}=\sum_{(l,m,r)\in S (i,j,k)}\max\big{(}A_{ijk}^{lmr},0),\quad Q_{ijk}^{+}=M-\langle u_{LO}^{(n+1)} \rangle_{ijk}\geq 0,\]
and \(\mathcal{S}(i,j,k)=\{(i\pm 1,j,k);(i,j\pm 1,k);(i,j,k\pm 1)\}\). The limited solution is bounded by the lower and upper bounds in (3): \(m\leq\langle u_{h}^{(n+1)}\rangle_{ijk}:=\frac{1}{8}\sum_{lmr}\omega_{l}\omega _{m}\omega_{r}U_{ijk}^{lmr,n+1}\leq M\) and the limiter keeps conservation of the scheme:
\[\sum_{ijk}\langle u_{h}^{(n+1)}\rangle_{ijk}=\sum_{ijk}\langle u_{h,HO}^{(n+1)} \rangle_{ijk}=\sum_{ijk}\langle u_{h,LO}^{(n+1)}\rangle_{ijk}=\sum_{ijk} \langle u_{h}^{(n)}\rangle_{ijk}\]
for periodic boundary conditions or compactly supported solutions.
### Inversion of diagonal blocks
The global system to be solved at each time step reads
\[\mathbb{A}_{3d}\mathbf{U}^{(n+1)}=\mathbb{M}_{3d}\mathbf{U}^{(n)},\]
of size \(N_{x}N_{y}N_{z}N_{p}\) with blocks of size \(N_{p}=(p+1)^{3}\). The diagonal blocks without graph viscosity read \(\mathbf{L}_{3d}(\mathbf{M}\otimes\mathbf{M}\otimes\mathbf{M})\) with
\[\mathbf{L}_{3d}=(\mathbf{R}\otimes\mathbf{R}\otimes\mathbf{R})\mathbf{\Psi}_{3d }(\mathbf{R}\otimes\mathbf{R}\otimes\mathbf{R})^{-1},\quad\mathbf{\Psi}_{3d}= \frac{\lambda_{x}}{\lambda}\mathbf{I}\otimes\mathbf{I}\otimes\mathbf{\Psi}_{ \lambda}+\frac{\lambda_{y_{j}}}{\lambda}\mathbf{I}\otimes\mathbf{I}+\frac{ \lambda_{z_{i}}}{\lambda}\mathbf{\Psi}_{\lambda}\otimes\mathbf{I}\otimes \mathbf{I},\]
where \(\lambda\coloneqq\lambda_{x_{i}}+\lambda_{y_{j}}+\lambda_{z_{\alpha}}\), and where \(\mathbf{R}\) and \(\mathbf{\Psi}_{\lambda}\) are defined in (42). Note that \(\mathbf{\Psi}_{3d}\) is a diagonal matrix, so the diagonal blocks are thus easily inverted:
\[\big{(}\mathbf{L}_{3d}(\mathbf{M}\otimes\mathbf{M}\otimes\mathbf{M})\big{)}^{ -1}=(\mathbf{M}^{-1}\mathbf{R}\otimes\mathbf{M}^{-1}\mathbf{R}\otimes\mathbf{ M}^{-1}\mathbf{R})\mathbf{\Psi}_{3d}^{-1}(\mathbf{R}^{-1}\otimes\mathbf{R}^{-1} \otimes\mathbf{R}^{-1}).\]
Including graph viscosity, the diagonal blocks are defined by \(\mathbf{L}_{3d}^{v}(\mathbf{M}\otimes\mathbf{M}\otimes\mathbf{M})\) with
\[\mathbf{L}_{3d}^{v}=\mathbf{L}_{3d}^{0}-\mathbf{U}_{v}\mathbf{V}_{v}^{\top}, \quad\mathbf{L}_{3d}^{0}=(\mathbf{R}\otimes\mathbf{R}\otimes\mathbf{R})( \mathbf{\Psi}_{3d}+2d_{ijk}\lambda\mathbf{I}\otimes\mathbf{I}\otimes\mathbf{I} )(\mathbf{R}\otimes\mathbf{R}\otimes\mathbf{R})^{-1},\]
with
\[\mathbf{U}_{v}=2d_{ijk}(\lambda_{x_{i}}\mathbf{I}\otimes\mathbf{I} \otimes\mathbf{\omega},\lambda_{y_{j}}\mathbf{I}\otimes\mathbf{\omega}\otimes\mathbf{I},\lambda_{z_{i}}\mathbf{\omega}\otimes\mathbf{I}\otimes\mathbf{I}),\quad\mathbf{V}_ {v}=(\mathbf{I}\otimes\mathbf{I}\otimes\mathbf{I},\mathbf{I}\otimes\mathbf{I} \otimes\mathbf{I},\mathbf{I}\otimes\mathbf{I}),\]
in \(\mathbb{R}^{N_{p}\times 3(p+1)^{2}}\). Once again, the diagonal blocks with graph viscosity may be inverted with algorithm 3 using the Woodbury identity.
```
1:solve \(\mathbf{L}_{3d}^{0}\mathbf{y}=\mathbf{b}\) for \(\mathbf{y}\in\mathbb{R}^{N_{p}}\): \[\mathbf{y}=(\mathbf{R}\otimes\mathbf{R}\otimes\mathbf{R})\operatorname{diag} \left(\frac{1}{1+2\lambda d_{ijk}-2(\lambda_{x_{i}}\psi_{l}+\lambda_{y_{j}} \psi_{m}+\lambda_{z_{i}}\psi_{r})},\ 0\leq l,m,r\leq p\right)\left(\mathbf{R}^{-1} \otimes\mathbf{R}^{-1}\otimes\mathbf{R}^{-1}\right)\mathbf{b};\]
2:solve \(\mathbf{L}_{3d}^{0}\mathbf{Z}=\mathbf{U}_{v}\) for \(\mathbf{Z}\in\mathbb{R}^{N_{p}\times 3(p+1)^{2}}\): \[\mathbf{Z}=2d_{ijk}\left(\mathbf{R}\otimes\mathbf{R}\otimes \mathbf{R}\right)\operatorname{diag}\left(\frac{1}{1+2\lambda d_{ijk}-2( \lambda_{x_{i}}\psi_{l}+\lambda_{y_{j}}\psi_{m}+\lambda_{z_{i}}\psi_{r})},\ 0\leq l,m,r\leq p\right)\cdots\] \[\cdots\left(\lambda_{x_{i}}\mathbf{R}^{-1}\otimes\mathbf{R}^{-1} \otimes(\mathbf{R}^{-1}\mathbf{\omega}),\lambda_{y_{j}}\mathbf{R}^{-1}\otimes( \mathbf{R}^{-1}\mathbf{\omega})\otimes\mathbf{R}^{-1},\lambda_{z_{i}}(\mathbf{R}^ {-1}\mathbf{\omega})\otimes\mathbf{R}^{-1}\otimes\mathbf{R}^{-1}\right);\]
3:solve \((\mathbf{I}_{3(p+1)^{2}}-\mathbf{V}_{v}^{\top}\mathbf{Z})\mathbf{z}=\mathbf{V }_{v}^{\top}\mathbf{y}\) for \(\mathbf{z}\in\mathbb{R}^{3(p+1)^{2}}\);
4:set \(\mathbf{x}=(\mathbf{M}^{-1}\otimes\mathbf{M}^{-1}\otimes\mathbf{M}^{-1})( \mathbf{y}+\mathbf{Z}\mathbf{z})\).
```
**Algorithm 3** Algorithm flowchart for solving the system \(\mathbf{L}_{3d}^{v}(\mathbf{M}\otimes\mathbf{M}\otimes\mathbf{M})\mathbf{x}= \mathbf{b}\) with graph viscosity.
|
2310.16612 | Strong decays of the $φ(2170)$ as a fully-strange tetraquark state | We study strong decays of the $\phi(2170)$, along with its possible partner
$X(2436)$, as two fully-strange tetraquark states of $J^{PC} = 1^{--}$. We
consider seven decay channels: $\phi \eta$, $\phi \eta^\prime$, $\phi
f_0(980)$, $\phi f_1(1420)$, $h_1(1415) \eta$, $h_1(1415) \eta^\prime$, and
$h_1(1415) f_1(1420)$. Some of these channels are kinematically possible, and
we calculate their relative branching ratios through the Fierz rearrangement.
Future experimental measurements on these ratios can be useful in determining
the nature of the $\phi(2170)$ and $X(2436)$. The $\phi(2170)$ has been
observed in the $\phi f_0(980)$, $\phi \eta$, and $\phi \eta^\prime$ channels,
and we propose to further examine it in the $h_1(1415) \eta$ channel. Evidences
of the $X(2436)$ have been observed in the $\phi f_0(980)$ channel, and we
propose to verify whether this structure exists or not in the $\phi \eta$,
$\phi \eta^\prime$, $h_1(1415) \eta$, and $h_1(1415) \eta^\prime$ channels. | Yi-Wei Jiang, Wei-Han Tan, Hua-Xing Chen, Er-Liang Cui | 2023-10-25T13:10:42Z | http://arxiv.org/abs/2310.16612v2 | # Strong decays of the \(\phi(2170)\) as a fully-strange tetraquark state
###### Abstract
We study strong decays of the \(\phi(2170)\), along with its possible partner \(X(2436)\), as two fully-strange tetraquark states of \(J^{PC}=1^{--}\). We consider seven decay channels: \(\phi\eta\), \(\phi\eta^{\prime}\), \(\phi f_{0}(980)\), \(\phi f_{1}(1420)\), \(h_{1}(1415)\eta\), \(h_{1}(1415)\eta^{\prime}\), and \(h_{1}(1415)f_{1}(1420)\). Some of these channels are kinematically possible, and we calculate their relative branching ratios through the Fierz rearrangement. Future experimental measurements on these ratios can be useful in determining the nature of the \(\phi(2170)\) and \(X(2436)\). The \(\phi(2170)\) has been observed in the \(\phi f_{0}(980)\), \(\phi\eta\), and \(\phi\eta^{\prime}\) channels, and we propose to further examine it in the \(h_{1}(1415)\eta\) channel. Evidences of the \(X(2436)\) have been observed in the \(\phi f_{0}(980)\) channel, and we propose to verify whether this structure exists or not in the \(\phi\eta\), \(\phi\eta^{\prime}\), \(h_{1}(1415)\eta\), and \(h_{1}(1415)\eta^{\prime}\) channels.
fuly-strange tetraquark, Fierz rearrangement
## I Introduction
In the traditional quark model we can categorize hadrons into \(\bar{q}q\) mesons and \(qqq\) baryons [1]. In recent years many exotic hadrons were observed in particle experiments, which can not be easily explained in the traditional quark model [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21], such as the charmonium-like states \(X(3872)\) of \(I^{G}J^{PC}=0^{-}1^{++}\)[22], \(Y(4220)\) of \(I^{G}J^{PC}=0^{-}1^{--}\)[23; 24], and \(Z_{c}(3900)\) of \(I^{G}J^{PC}=1^{+}1^{-+}\)[25; 26]. However, there are not so many exotic hadrons in the light sector that only contain the \(up/down/strange\) quarks. The \(\phi(2170)\) of \(I^{G}J^{PC}=0^{-}1^{--}\), also denoted as \(Y(2175)\), is one of them. It is often taken as the strangeonium counterpart of the \(Y(4220)\) owing to their similarities in production mechanism and decay patterns.
The \(\phi(2170)\) was first observed in 2006 by the BaBar Collaboration via the initial state radiation process \(e^{+}e^{-}\to\gamma_{\rm ISR}\phi f_{0}(980)\)[27; 28; 29; 30]. Later it was confirmed by Belle in the \(e^{+}e^{-}\to\phi\pi^{+}\pi^{-}\) and \(e^{+}e^{-}\to\phi f_{0}(980)\) processes [31], and it was also observed by BESII/BESIII in the \(J/\psi\to\eta\phi f_{0}(980)\) process [32; 33; 34]. According to the latest version of PDG [1], its mass and width were averaged to be:
\[\phi(2170)/Y(2175) : M=2163\pm 7~{}{\rm MeV}\,,\] \[\Gamma=103^{+28}_{-21}~{}{\rm MeV}\,.\]
In recent years various experimental studies on the \(\phi(2170)\) were carried out by the BESIII Collaboration in the direct \(e^{+}e^{-}\) annihilation to the \(\phi\eta\), \(\phi\eta^{\prime}\), \(\phi K^{+}K^{-}\), \(K^{+}K^{-}\), \(\omega\eta\), \(K^{0}_{3}K^{0}_{2}\), and \(\phi\pi^{+}\pi^{-}\) final states [35; 36; 37; 38; 39; 40; 41], etc. In Ref. [42] a partial wave analysis of the \(e^{+}e^{-}\to K^{+}K^{-}\pi^{0}\pi^{0}\) process was performed by BESIII, indicating that the \(\phi(2170)\) has a sizable partial width to \(K^{+}(1460)K^{-}\),\(K^{+}_{1}(1400)K^{-}\), and \(K^{+}_{1}(1270)K^{-}\), but a much smaller partial width to \(K^{*+}(892)K^{*-}(892)\) and \(K^{*+}(1410)K^{-}\).
Since its discovery, the \(\phi(2170)\) has stimulated many theoretical methods and models to explain its nature. Possible interpretations of this interesting structure are abundant and diverse, including the traditional \(s\bar{s}\) meson as an exited state [43; 44; 45; 46; 47; 48], a strangeonium hybrid state [49; 50], a fully-strange tetraquark state [51; 52; 53; 54; 55; 56; 57; 58], a hidden-strangeness baryon-antibaryon state strongly coupling to the \(\Lambda\bar{\Lambda}\) channel [59], a bound state of \(\Lambda\bar{\Lambda}\)[60; 61; 62; 63; 64], and a dynamically generated state in the \(\phi KK\) and \(\phi\pi\pi\) systems [65; 66; 67] or in the \(\phi f_{0}(980)\) system [68; 69]. Within the lattice QCD formalism, the authors of Ref. [70] studied the \(\phi(2170)\) under the hybrid hypothesis, but their results do not favor this interpretation. Besides, productions of the \(\phi(2170)\) were studied in Refs. [71; 72] by using the Nambu-Jona-Lasinio model and the Drell-Yan mechanism, and its decay properties were studied in Refs. [48; 73; 74; 75; 76; 77] by using the initial single pion emission mechanism, the dispersion theory, the three-hadron interactions, and the \({}^{3}P_{0}\) model.
In addition, the \(\phi(2170)\) may have a partner state at around 2.4 GeV, denoted as \(X(2436)\). Its evidences have been observed in the \(\phi f_{0}(980)\) and \(\phi\pi^{+}\pi^{-}\) channels by the BaBar, Belle, BESII, and BESIII experiments [28; 31; 32; 33]. The authors of Ref. [78] performed a combined fit to the data of BaBar and Belle, where the mass and width of this structure were measured to be
\[X(2436) : M=2436\pm 34~{}{\rm MeV}\,, \tag{2}\] \[\Gamma=99\pm 105~{}{\rm MeV}\,,\]
when fitting the \(\phi f_{0}(980)\) cross section, but its statistical significance is less than \(3\sigma\). Recently, the BESIII Collaboration further studied this structure through the \(e^{+}e^{-}\to\phi\pi^{+}\pi^{-}\) process [79], but its statistical significance is no more than \(2\sigma\). Therefore, more experimental studies are necessary to clarify whether the \(X(2436)\) exists or not.
Although there are considerable efforts from both the experimental and theoretical sides, the nature of the \(\phi(2170)\) and \(X(2436)\) is still not clear. In order to clarify
their nature, it is useful to examine their decay modes and relative branching ratios. Especially, it is useful to study the \(\phi(2170)\) decays into the \(\phi\eta\) and \(\phi\eta^{\prime}\) channels, in order to investigate the ratio
\[R^{\rm exp}_{\eta/\eta^{\prime}}\equiv\frac{{\cal B}^{Y}_{\phi\eta^{\prime}} \Gamma^{Y}_{e^{+}e^{-}}}{{\cal B}^{Y}_{\phi\eta^{\prime}}\Gamma^{Y}_{e^{+}e^{- }}}\,, \tag{3}\]
where
\[{\cal B}^{Y}_{\phi\eta} \equiv {\rm Br}(\phi(2170)\to\phi\eta)\,, \tag{4}\] \[{\cal B}^{Y}_{\phi\eta^{\prime}} \equiv {\rm Br}(\phi(2170)\to\phi\eta^{\prime})\,,\] (5) \[\Gamma^{Y}_{e^{+}e^{-}} \equiv \Gamma(e^{+}e^{-}\to\phi(2170))\,. \tag{6}\]
In Refs. [35; 36] the BESIII Collaboration separately studied the \(e^{+}e^{-}\to\phi\eta/\phi\eta^{\prime}\) processes and extracted:
\[{\cal B}^{Y}_{\phi\eta}\Gamma^{Y}_{e^{+}e^{-}} = \left\{\begin{array}{ll}0.24^{+0.12}_{-0.07}\;{\rm eV}&({\rm sol \ I}),\\ 10.11^{+3.87}_{-3.13}\;{\rm eV}&({\rm sol\ II}),\end{array}\right. \tag{7}\] \[{\cal B}^{Y}_{\phi\eta^{\prime}}\Gamma^{Y}_{e^{+}e^{-}} = 7.1\pm 0.7\pm 0.7\;{\rm eV}\;\;({\rm sol\ I}/{\rm II}), \tag{8}\]
where "\({\rm sol\ I}/{\rm II}\)" denote the two possible solutions. The \(e^{+}e^{-}\to\phi\eta\) process has also been investigated by BaBar [29]:
\[{\cal B}^{Y}_{\phi\eta}\Gamma^{Y}_{e^{+}e^{-}}=1.7\pm 0.7\pm 1.3\;{\rm eV}, \tag{9}\]
and Belle [80]:
\[{\cal B}^{Y}_{\phi\eta}\Gamma^{Y}_{e^{+}e^{-}}=\left\{\begin{array}{ll}0.0 9\pm 0.05\;{\rm eV}&({\rm sol\ I}),\\ 0.06\pm 0.02\;{\rm eV}&({\rm sol\ II}),\\ 16.7\pm 1.2\;{\rm eV}&({\rm sol\ III}),\\ 17.0\pm 1.2\;{\rm eV}&({\rm sol\ IV}).\end{array}\right. \tag{10}\]
Based on Eqs. (7) and (8), we can derive
\[{\rm BESIII:}\;\;R^{\rm exp}_{\eta/\eta^{\prime}} = \left\{\begin{array}{ll}0.034^{+0.029}_{-0.014}&({\rm sol\ I} ),\\ 1.42^{+1.03}_{-0.60}&({\rm sol\ II}).\end{array}\right. \tag{11}\]
Theoretically, this ratio was calculated in Ref. [76] to be \(R_{\eta/\eta^{\prime}}=2.6\sim 5.2\), where the \(\phi(2170)\) was considered as a dynamically-generated state from the \(\phi f_{0}(980)\) interaction. More theoretical calculations on this ratio are helpful to reveal the nature of the \(\phi(2170)\).
We have applied the method of QCD sum rules to study the \(\phi(2170)\) and \(X(2436)\) in Refs. [52; 55]. In Ref. [52] we systematically constructed the fully-strange tetraquark currents and found only two independent ones. We separately used them to perform QCD sum rule analyses by calculating only the diagonal two-point correlation functions. In Ref. [55] we further calculated the off-diagonal two-point correlation functions, and the obtained results can explain both the \(\phi(2170)\) and \(X(2436)\) as two fully-strange tetraquark states. In this paper we shall utilize the Fierz rearrangement method [81; 82; 83] to study their strong decays as the fully-strange tetraquark states of \(J^{PC}=1^{--}\).
This paper is organized as follows. In Sec. II we construct the fully-strange tetraquark currents of \(J^{PC}=1^{--}\) within the diquark-antidiquark picture. We use them to further construct two mixing currents that are non-correlated, which can be used to simultaneously interpret the \(\phi(2170)\) and \(X(2436)\) as two fully-strange tetraquark states. We apply the Fierz rearrangement to transform these two mixing currents into the meson-meson currents, based on which we study the decay behaviors of the \(\phi(2170)\) and \(X(2436)\) in Sec. III. The obtained results are discussed and summarized in Sec. IV.
## II Currents and Fierz Identities
The fully-strange tetraquark currents with the quantum number \(J^{PC}=1^{--}\) have been systematically constructed and studied in Refs. [52; 55], where we consider two types of tetraquark currents, as illustrated in Fig. 1:
\[\eta(x,y) = [s_{a}^{T}(x){\mathbb{C}}\Gamma_{1}s_{b}(x)]\times[\bar{s}_{c}(y )\Gamma_{2}{\mathbb{C}}\bar{s}_{d}^{T}(y)]\,, \tag{12}\] \[\xi(x,y) = [\bar{s}_{a}(x)\Gamma_{3}s_{b}(x)]\times[\bar{s}_{c}(y)\Gamma_{4}s _{d}(y)]\,. \tag{13}\]
Here \(\Gamma_{i}\) are Dirac matrices, the subscripts \(a\cdots d\) are color indices, \({\mathbb{C}}=i\gamma_{2}\gamma_{0}\) is the charge-conjugation operator, and the superscript \(T\) represents the transpose of Dirac indices. We call the former \(\eta(x,y)\) diquark-antidiquark currents and the latter \(\xi(x,y)\) meson-meson currents, which will be separately investigated in the following subsections.
### Diquark-antidiquark currents and their mixing
There are two fully-strange diquark-antidiquark interpolating currents with the quantum number \(J^{PC}=1^{--}\):
\[\eta_{1\mu}= \tag{14}\] \[(s_{a}^{T}{\mathbb{C}}\gamma_{5}s_{b})(\bar{s}_{a}\gamma_{\mu} \gamma_{5}{\mathbb{C}}\bar{s}_{b}^{T})-(s_{a}^{T}{\mathbb{C}}\gamma_{\mu} \gamma_{5}s_{b})(\bar{s}_{a}\gamma_{5}{\mathbb{C}}\bar{s}_{b}^{T})\,,\] \[\eta_{2\mu}=\] (15) \[(s_{a}^{T}{\mathbb{C}}\gamma^{\nu}s_{b})(\bar{s}_{a}\sigma_{\mu\nu }{\mathbb{C}}\bar{s}_{b}^{T})-(s_{a}^{T}{\mathbb{C}}\sigma_{\mu\nu}s_{b})( \bar{s}_{a}\gamma^{\nu}{\mathbb{C}}\bar{s}_{b}^{T})\,.\]
Figure 1: Two types of fully-strange tetraquark currents: (a) the diquark-antidiquark currents \(\eta(x,y)\) and (b) the meson-meson currents \(\xi(x,y)\).
These two currents are independent of each other.
In Ref. [52] we separately use \(\eta_{1\mu}\) and \(\eta_{2\mu}\) to perform QCD sum rule analyses, where we calculate only the diagonal correlation functions:
\[\langle 0|\eta_{1\mu}\eta_{1\nu}^{\dagger}|0\rangle\quad\text{and}\quad\langle 0 |\eta_{2\mu}\eta_{2\nu}^{\dagger}|0\rangle\,. \tag{16}\]
However, in Ref. [55] we find that the off-diagonal correlation function
\[\langle 0|\eta_{1\mu}\eta_{2\nu}^{\dagger}|0\rangle\neq 0\,, \tag{17}\]
is also non-zero, indicating that \(\eta_{1\mu}\) and \(\eta_{2\mu}\) are correlated with each other, so they can couple to the same physical state. To deal with this, in Ref. [55] we further construct two mixing currents:
\[J_{1\mu} = \cos\theta\ \eta_{1\mu}+\sin\theta\ i\ \eta_{2\mu}\,, \tag{18}\] \[J_{2\mu} = \sin\theta\ \eta_{1\mu}+\cos\theta\ i\ \eta_{2\mu}\,. \tag{19}\]
When setting the mixing angle to be \(\theta=-5.0^{\circ}\), these two currents satisfy
\[\langle 0|J_{1\mu}J_{2\nu}^{\dagger}|0\rangle\left\{\begin{array}{l} \ll\langle 0|J_{1\mu}J_{1\nu}^{\dagger}|0\rangle\\ \ll\langle 0|J_{2\mu}J_{2\nu}^{\dagger}|0\rangle\end{array}\right.\,, \tag{20}\]
with the threshold value around \(s_{0}\approx 6.0\) GeV\({}^{2}\) and the Borel mass around \(M_{B}^{2}\approx 2.5\) GeV\({}^{2}\). This condition indicates that the two currents \(J_{1\mu}\) and \(J_{2\mu}\) are non-correlated, _i.e._, they can not mainly couple to the same state \(Y\), otherwise,
\[\langle 0|J_{1\mu}J_{2\nu}^{\dagger}|0\rangle \equiv \sum_{n}\delta(s-M_{n}^{2})\langle 0|J_{1\mu}|n\rangle\langle n|J_{2 \nu}^{\dagger}|0\rangle+\cdots \tag{21}\] \[\approx \delta(s-M_{Y}^{2})\langle 0|J_{1\mu}|Y\rangle\langle Y|J_{2 \nu}^{\dagger}|0\rangle+\cdots\] \[\neq 0\,.\]
Accordingly, we assume that \(J_{1\mu}\) and \(J_{2\mu}\) mainly couple to two different states \(Y_{1}\) and \(Y_{2}\) through
\[\langle 0|J_{1\mu}|Y_{1}\rangle = f_{Y_{1}}\ \epsilon_{\mu}\,, \tag{22}\] \[\langle 0|J_{2\mu}|Y_{2}\rangle = f_{Y_{2}}\ \epsilon_{\mu}\,, \tag{23}\]
where \(f_{Y_{1}}\) and \(f_{Y_{1}}\) are the decay constants, and \(\epsilon_{\mu}\) is the polarization vector.
In Ref. [55] we use \(J_{1\mu}\) and \(J_{2\mu}\) to perform QCD sum rule analyses. When setting the working regions to be \(5.0\) GeV\({}^{2}<s_{0}<7.0\) GeV\({}^{2}\) and \(2.0\) GeV\({}^{2}<M_{B}^{2}<4.0\) GeV\({}^{2}\), we calculate the masses of \(Y_{1}\) and \(Y_{2}\) to be
\[M_{Y_{1}} = 2.41\pm 0.25\ \text{GeV}\,, \tag{24}\] \[M_{Y_{2}} = 2.34\pm 0.17\ \text{GeV}\,,\]
with the mass splitting
\[\Delta M=71^{+172}_{-\ 48}\ \text{MeV}\,. \tag{25}\]
The mass extracted from \(J_{2\mu}\) is consistent with the experimental mass of the \(\phi(2170)\), indicating its possible explanation as the fully-strange tetraquark state \(Y_{2}\). The QCD sum rule result extracted from the non-correlated current \(J_{1\mu}\) suggests that the \(\phi(2170)\) may have a partner state whose mass is about \(2.41\pm 0.25\) GeV. This value is consistent with the experimental mass of the \(X(2436)\), indicating its possible explanation as the fully-strange tetraquark state \(Y_{1}\). We shall further study their strong decays through the two mixing currents \(J_{1\mu}\) and \(J_{2\mu}\) in Sec. III.
### Meson-meson currents and Fierz rearrangement
Besides the diquark-antidiquark currents \(\eta_{1\mu}\) and \(\eta_{2\mu}\), we can also construct the fully-strange meson-meson currents. There are four fully-strange meson-meson interpolating currents with the quantum number \(J^{PC}=1^{--}\):
\[\xi_{1\mu} = (\bar{s}_{a}s_{a})(\bar{s}_{b}\gamma_{\mu}s_{b})\,, \tag{26}\] \[\xi_{2\mu} = (\bar{s}_{a}\gamma^{\nu}\gamma_{5}s_{a})(\bar{s}_{b}\sigma_{\mu \nu}\gamma_{5}s_{b})\,,\] (27) \[\xi_{3\mu} = \lambda_{ab}\lambda_{cd}(\bar{s}_{a}s_{b})(\bar{s}_{c}\gamma_{\mu }s_{d})\,,\] (28) \[\xi_{4\mu} = \lambda_{ab}\lambda_{cd}(\bar{s}_{a}\gamma^{\nu}\gamma_{5}s_{b})( \bar{s}_{c}\sigma_{\mu\nu}\gamma_{5}s_{d})\,. \tag{29}\]
We can derive through the Fierz rearrangement that only two of them are independent, _e.g._,
\[\xi_{3\mu} = -\frac{5}{3}\xi_{1\mu}-i\xi_{2\mu}\,, \tag{30}\] \[\xi_{4\mu} = 3i\xi_{1\mu}+\frac{1}{3}\xi_{2\mu}\,.\]
Moreover, we can also derive through the Fierz rearrangement the relations between the diquark-antidiquark currents \(\eta_{i}\) and the meson-meson currents \(\xi_{i}\):
\[\eta_{1\mu} = -\xi_{1\mu}+i\xi_{2\mu}\,, \tag{31}\] \[\eta_{2\mu} = 3i\xi_{1\mu}-\xi_{2\mu}\,. \tag{32}\]
Therefore, these two constructions are equivalent with each other, but note that this equivalence is just between the local diquark-antidiquark and meson-meson currents, while the tightly-bound diquark-antidiquark tetraquark states and the weakly-bound meson-meson molecular states are significantly different. To well describe them, we need the non-local currents, but we are still not able to use them to perform QCD sum rule analyses yet.
We can use Eqs. (31) and (32) to transform the mixing currents \(J_{1\mu}\) and \(J_{2\mu}\) to be
\[J_{1\mu} = -0.74\xi_{1\mu}+1.08i\xi_{2\mu}\,, \tag{33}\] \[J_{2\mu} = -2.90\xi_{1\mu}-1.08i\xi_{2\mu}\,. \tag{34}\]
These two Fierz identities will be used in Sec. III to study the strong decays of the two states \(Y_{1}\) and \(Y_{2}\).
### Strangeonium operators and decay constants
The meson-meson currents \(\xi_{1}\) and \(\xi_{2}\) are both composed of two strangeonium operators, whose couplings
to the strangeonium states have been studied in the literature to some extent [1; 84; 85; 86; 87; 88; 89], as summarized in Table 1. Especially, we follow Refs. [90; 91; 92; 93; 94; 95; 96; 97; 98] to study the axial-vector operator \(J^{A}_{\mu}=\bar{s}\gamma_{\mu}\gamma_{5}s\), and use the two-angle mixing formalism to describe the pseudoscalar mesons \(\eta\) and \(\eta^{\prime}\) as
\[|\eta\rangle = \cos\theta_{8}|\eta_{8}\rangle-\sin\theta_{0}|\eta_{0}\rangle+ \cdots\,, \tag{35}\] \[|\eta^{\prime}\rangle = \sin\theta_{8}|\eta_{8}\rangle+\cos\theta_{0}|\eta_{0}\rangle+ \cdots\,,\]
where
\[|\eta_{8}\rangle = |u\bar{u}+d\bar{d}-2s\tilde{s}\rangle/\sqrt{6}\,, \tag{36}\] \[|\eta_{0}\rangle = |u\bar{u}+d\bar{d}+s\tilde{s}\rangle/\sqrt{3}\,,\]
and \(\cdots\) denotes the other components such as the pseudoscalar glueball and charmonium, etc.
We define the flavor octet and singlet axial-vector operators as
\[A^{8}_{\mu} = \left(\bar{u}\gamma_{\mu}\gamma_{5}u+\bar{d}\gamma_{\mu}\gamma_{ 5}d-2\bar{s}\gamma_{\mu}\gamma_{5}s\right)/\sqrt{12}\,, \tag{37}\] \[A^{0}_{\mu} = \left(\bar{u}\gamma_{\mu}\gamma_{5}u+\bar{d}\gamma_{\mu}\gamma_ {5}d+\bar{s}\gamma_{\mu}\gamma_{5}s\right)/\sqrt{6}\,,\]
which couple to \(\eta\) and \(\eta^{\prime}\) through
\[\langle 0|A^{a}_{\mu}|P(q)\rangle=iq_{\mu}f^{a}_{P}\,. \tag{38}\]
Here \(f^{a}_{P}\) is the matrix for the decay constants
\[\left(\begin{array}{cc}f^{8}_{\eta}&f^{0}_{0}\\ f^{\eta^{\prime}}_{\eta^{\prime}}&f^{0}_{\eta^{\prime}}\end{array}\right)= \left(\begin{array}{cc}f_{8}\cos\theta_{8}&-f_{0}\sin\theta_{0}\\ f_{8}\sin\theta_{8}&f_{0}\cos\theta_{0}\end{array}\right)\,, \tag{39}\]
where [99; 100]
\[\theta_{8} = -22.2^{\circ}\,,\] \[\theta_{0} = -9.1^{\circ}\,, \tag{40}\] \[f_{8} = 168\ {\rm MeV}\,,\] \[f_{0} = 157\ {\rm MeV}\,.\]
Based on the above formula, we can derive
\[\langle 0|J^{A}_{\mu}|\eta(q)\rangle = iq_{\mu}f_{\eta}\,, \tag{41}\] \[\langle 0|J^{A}_{\mu}|\eta^{\prime}(q)\rangle = iq_{\mu}f_{\eta^{\prime}}\,,\]
where
\[f_{\eta} \approx 159\ {\rm MeV}\,, \tag{42}\] \[f_{\eta^{\prime}} \approx 200\ {\rm MeV}\,.\]
We can further approximate the couplings of the pseudoscalar operator \(J^{P}=\bar{s}i\gamma_{5}s\) to \(\eta\) and \(\eta^{\prime}\) as
\[\langle 0|J^{P}|\eta(q)\rangle = \lambda_{\eta}\,, \tag{43}\] \[\langle 0|J^{P}|\eta^{\prime}(q)\rangle = \lambda_{\eta^{\prime}}\,,\]
where
\[\lambda_{\eta} \approx \frac{6f_{\eta}m_{\eta}}{m_{u}+m_{d}+4m_{s}}=218\ {\rm MeV}\,, \tag{44}\] \[\lambda_{\eta^{\prime}} \approx \frac{3f_{\eta^{\prime}}m_{\eta^{\prime}}}{m_{u}+m_{d}+m_{s}}=63 8\ {\rm MeV}\,.\]
## III Relative branching ratios
In this section we study the strong decays of the fully-strange tetraquark states with the quantum number \(J^{PC}=1^{--}\). As depicted in Fig. 2, when one quark meets one antiquark and the other quark meets the other antiquark at the same time, a fully-strange tetraquark state can fall apart to two strangeonium mesons. This process can be described by the Fierz identities given in Eqs. (33) and (34).
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline Operators & \(J^{PC}\) & Mesons & \(J^{PC}\) & Couplings & Decay Constants \\ \hline \hline \(J^{S}=\bar{s}s\) & \(0^{+}0^{++}\) & \(f_{0}(980)\) & \(0^{+}0^{++}\) & \(\langle 0|J^{S}|f_{0}\rangle=m_{f_{0}}f_{f_{0}}\) & \(f_{f_{0}}=358\ {\rm MeV}\)[84] \\ \hline \multirow{2}{*}{\(J^{P}=\bar{s}i\gamma_{5}s\)} & \multirow{2}{*}{\(0^{+}0^{-+}\)} & \(\eta\) & \(0^{+}0^{-+}\) & \(\langle 0|J^{P}|\eta\rangle=\lambda_{\eta}\) & \(\lambda_{\eta}\approx 218\ {\rm MeV}^{2}\) \\ \cline{3-6} & & \(\eta^{\prime}\) & \(0^{+}0^{-+}\) & \(\langle 0|J^{P}|\eta^{\prime}\rangle=\lambda_{\eta^{\prime}}\) & \(\lambda_{\eta^{\prime}}\approx 638\ {\rm MeV}^{2}\) \\ \hline \(J^{V}_{\mu}=\bar{s}\gamma_{\mu}s\) & \(0^{-}1^{--}\) & \(\phi\) & \(0^{-}1^{--}\) & \(\langle 0|J^{V}_{\mu}|\phi\rangle=m_{\phi}f_{\phi}\epsilon_{\mu}\) & \(f_{\phi}\approx 233\ {\rm MeV}\)[85; 86] \\ \hline \multirow{2}{*}{\(J^{A}_{\mu}=\bar{s}\gamma_{\mu}\gamma_{5}s\)} & \multirow{2}{*}{\(0^{+}1^{++}\)} & \(\eta\) & \(0^{+}0^{-+}\) & \(\langle 0|J^{A}_{\mu}|\eta\rangle=iq_{\mu}f_{\eta^{\prime}}\) & \(f_{\eta^{\prime}}\approx 159\ {\rm MeV}\) \\ \cline{3-6} & & \(f_{1}(1420)\) & \(0^{+}1^{++}\) & \(\langle 0|J^{A}_{\mu}|f_{1}\rangle=f_{f_{1}}m_{f_{1}}\epsilon_{\mu}\) & \(f_{f_{1}}=217\ {\rm MeV}\)[87] \\ \hline \multirow{2}{*}{\(J^{T}_{\mu\nu}=\bar{s}\sigma_{\mu\nu}s\)} & \multirow{2}{*}{\(0^{-}1^{\pm-}\)} & \(\phi\) & \(0^{-}1^{--}\) & \(\langle 0|J^{T}_{\mu\nu}|\phi\rangle=if^{T}_{\phi}(p_{\mu}\epsilon_{\nu}-p_{\nu} \epsilon_{\mu})\) & \(f^{T}_{\phi}\approx 175\ {\rm MeV}\)[88] \\ \cline{3-6} & & \(h_{1}(1415)\) & \(0^{-}1^{+-}\) & \(\langle 0|J^{T}_{\mu\nu}|h_{1}\rangle=if^{T}_{h_{1}}\epsilon_{\mu\nu\alpha\beta} \epsilon^{\alpha}p^{\beta}\) & \(f^{T}_{h_{1}}=f^{T}_{b_{1}}\times\frac{f_{\phi}}{f_{\mu}}=194\ {\rm MeV}\)[89] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Couplings of the strangeonium operators to the strangeonium states. Color indices are omitted for simplicity.
Let us start with Eq. (33) and perform analyses qualitatively. The strangeonium operators \(J^{S}=\bar{s}_{a}s_{a}\) and \(J^{V}_{\mu}=\bar{s}_{b}\gamma_{\mu}s_{b}\) couple to the \(f_{0}(980)\) and \(\phi(1020)\), respectively. Hence, the meson-meson current \(\xi_{1\mu}\) well couples to the \(\phi f_{0}(980)\) channel, and the mixing current \(J_{1\mu}\) also couples to this channel. Accordingly, the state \(Y_{1}\) can decay to this channel. Similarly, we can derive six other possible channels to be \(\phi\eta\), \(\phi\eta^{\prime}\), \(\phi f_{1}(1420)\), \(h_{1}(1415)\eta\), \(h_{1}(1415)\eta^{\prime}\), and \(h_{1}(1415)f_{1}(1420)\). Among them, the \(\phi\eta\), \(\phi\eta^{\prime}\), \(h_{1}(1415)\eta\), and \(h_{1}(1415)\eta^{\prime}\) channels are kinematically allowed.
In principle, we need the coupling of \(J_{1\mu}\) to \(Y_{1}\) as an input to quantitatively calculate the partial decay widths of these channels. This parameter has been defined in Eq. (22) as \(f_{Y_{1}}\). However, it is not necessary if we just want to calculate the relative branching ratios. We still take Eq. (33) as an example, from which we can extract the couplings of the mixing current \(J_{1\mu}\) to the \(\phi f_{0}(980)\) and \(\phi\eta\) channels:
\[\langle 0|J_{1\mu}|\phi(p_{1},\epsilon_{1})\ f_{0}(p_{2})\rangle \tag{45}\] \[= -0.74\times\epsilon_{1}^{\mu}\ m_{f_{0}}f_{f_{0}}\ m_{\phi}f_{ \phi}\,,\] \[\langle 0|J_{1\mu}|\phi(p_{1},\epsilon_{1})\ \eta(p_{2})\rangle\] \[= 0.54\times f_{\eta}f_{\phi}^{T}\epsilon_{\mu\nu\alpha\beta}p_{2 }^{\nu}(p_{1}^{\alpha}\epsilon_{1}^{\beta}-p_{1}^{\beta}\epsilon_{1}^{\alpha })\,.\]
Then we can extract the couplings of the state \(Y_{1}\) to the \(\phi f_{0}(980)\) and \(\phi\eta\) channels:
\[\langle Y_{1}(p,\epsilon)|\phi(p_{1},\epsilon_{1})\ f_{0}(p_{2})\rangle \tag{47}\] \[= -0.74c\times\epsilon\cdot\epsilon_{1}\ m_{f_{0}}f_{f_{0}}\ m_{ \phi}f_{\phi}\,,\] \[\langle Y_{1}(p,\epsilon)|\phi(p_{1},\epsilon_{1})\ \eta(p_{2})\rangle\] \[= 0.54c\times f_{\eta}f_{\phi}^{T}\epsilon_{\mu\nu\alpha\beta} \epsilon^{\mu}p_{2}^{\nu}(p_{1}^{\alpha}\epsilon_{1}^{\beta}-p_{1}^{\beta} \epsilon_{1}^{\alpha})\,.\]
The overall factor \(c\) is related to the decay constant \(f_{Y_{1}}\). After calculating the partial decay widths \(\Gamma_{Y_{1}\to\phi f_{0}(980)}\) and \(\Gamma_{Y_{1}\to\phi\eta}\), we can eliminate this factor and obtain
\[\frac{\mathcal{B}(Y_{1}\to\phi f_{0}(980))}{\mathcal{B}(Y_{1}\to\phi\eta)}=1. 14\,. \tag{49}\]
Similarly, we can investigate the \(\phi\eta^{\prime}\), \(h_{1}(1415)\eta\), and \(h_{1}(1415)\eta^{\prime}\) channels to obtain:
\[\mathcal{B}(\,Y_{1} \to \phi\eta\ :\ \phi\eta^{\prime}\ :\ \phi f_{0}\ :h_{1}(1415)\eta:h_{1}(1415)\eta^{\prime}\,) \tag{50}\] \[= 1.00:\ 0.71:\ 1.14:\ \ \ 0.74\ \ \ :\ 0.32\,.\]
The above calculations are done within the naive factorization scheme, so our uncertainty is significantly larger than the well-developed QCD factorization scheme [101; 102; 103]. However, our calculations are done after eliminating the ambiguous overall factor \(f_{Y_{1}}\), which largely reduces our uncertainty.
It is interesting to examine the dependence of the above ratios on the mixing angle \(\theta\), as shown in the left panel of Fig. 3. Especially, the ratio
\[R_{\eta/\eta^{\prime}}^{Y_{1}}\equiv\frac{\mathcal{B}(Y_{1}\to\phi\eta)}{ \mathcal{B}(Y_{1}\to\phi\eta^{\prime})}=1.40\,, \tag{51}\]
does not depend on this parameter. This ratio can be useful in clarifying the nature of the \(X(2436)\) as a fully-strange tetraquark state.
Following the same procedures, we study the strong decays of the state \(Y_{2}\) through the mixing current \(J_{2\mu}\). In this case we consider the \(\phi f_{0}(980)\), \(\phi\eta\), \(\phi\eta^{\prime}\), and \(h_{1}(1415)\eta\) channels, since the \(h_{1}(1415)\eta^{\prime}\) channel is kinematically forbidden. Their relative branching ratios are calculated to be
\[\mathcal{B}(\,Y_{2} \to \phi\eta\ :\ \phi\eta^{\prime}\ :\ \phi f_{0}\ :\ \ h_{1}(1415)\eta\,) \tag{52}\] \[= 1.00\ :\ 0.63\ :\ 19.52\ :\ \ \ 0.69\,.\]
We show the dependence of these ratios on the mixing angle \(\theta\) in the right panel of Fig. 3. Again, the ratio
\[R_{\eta/\eta^{\prime}}^{Y_{2}}\equiv\frac{\mathcal{B}(Y_{2}\to\phi\eta)}{ \mathcal{B}(Y_{2}\to\phi\eta^{\prime})}=1.59\,, \tag{53}\]
does not depend on the mixing angle \(\theta\), and moreover, it is almost the same as the ratio \(R_{\eta/\eta^{\prime}}^{Y_{1}}=1.40\). This ratio can be useful in clarifying the nature of the \(\phi(2170)\) as a fully-strange tetraquark state.
## IV Summary and Discussions
In this paper we systematically study the strong decays of the \(\phi(2170)\) and \(X(2436)\) as two fully-strange tetraquark states with the quantum number \(J^{PC}=1^{--}\). Their corresponding fully-strange tetraquark currents have been systematically constructed in our previous studies [52; 55], where we consider both the diquark-antidiquark and meson-meson constructions. We have also derived their relations there through the Fierz rearrangement, which are used in the present study to study their strong decay properties.
There are two independent diquark-antidiquark currents, defined in Eqs. (15) and (16) as \(\eta_{1\mu}\) and \(\eta_{2\mu}\). In Ref. [52] we calculate their diagonal correlation functions,
Figure 2: The fall-apart decay process of a fully-strange tetraquark state to two strangeonium mesons.
and in Ref. [55] we further calculate their off-diagonal correlation function. Based on the obtained results, we construct two mixing currents, defined in Eqs. (18) and (19) as \(J_{1\mu}\) and \(J_{2\mu}\) with the mixing angle \(\theta=-5.0^{\rm o}\). These two mixing currents are non-correlated with each other, so they separately couple to two different states \(Y_{1}\) and \(Y_{2}\), whose masses are calculated in Ref. [55] through the QCD sum rule method to be
\[M_{Y_{1}} = 2.41\pm 0.25\;{\rm GeV}\,,\] \[M_{Y_{2}} = 2.34\pm 0.17\;{\rm GeV}\,.\]
These two values are consistent with the experimental masses of the \(X(2436)\) and \(\phi(2170)\), indicating their possible explanations as the fully-strange tetraquark states \(Y_{1}\) and \(Y_{2}\), respectively. Accordingly, we can use the mixing currents \(J_{1\mu}\) and \(J_{2\mu}\) to further study their decay properties.
We use the Fierz rearrangement to transform the mixing currents \(J_{1\mu}\) and \(J_{2\mu}\) to be the combinations of the meson-meson currents \(\xi_{1\mu}\) and \(\xi_{2\mu}\), as defined in Eqs. (26) and (27). The obtained Fierz identities are given in Eqs. (33) and (34). Based on these results, we study the decay mechanism depicted in Fig. 2, where a fully-strange tetraquark state fall-apart decays to two strangeonium mesons. We consider altogether seven possible channels: \(\phi\eta\), \(\phi\eta^{\prime}\), \(\phi f_{0}(980)\), \(\phi f_{1}(1420)\), \(h_{1}(1415)\eta\), \(h_{1}(1415)\eta^{\prime}\), and \(h_{1}(1415)f_{1}(1420)\). Some of these channels are kinematically possible, whose relative branching ratios are calculated to be:
\[\mathcal{B}(\,X(2436) \to \phi\eta:\phi\eta^{\prime}:\phi f_{0}:h_{1}(1415)\eta:h_{1}(1415) \eta^{\prime}\,)\] (54) \[= 1\,:0.71\,:\,1.14\,:\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \, |
2305.03917 | Moving mirror-field dynamics under intrinsic decoherence | We study the decaying dynamics in the mirror-field interaction by means of
the intrinsic decoherence scheme. Factorization of the mirror-field Hamiltonian
with the use of displacement operators, allows us to calculate the explicit
solution to Milburn's equation for arbitrary initial conditions. We show
expectation values, correlations, and Husimi functions for the solutions
obtained. | Alejandro R. Urzúa, Héctor M. Moya-Cessa | 2023-05-06T03:41:45Z | http://arxiv.org/abs/2305.03917v1 | # Moving mirror-field dynamics under intrinsic decoherence
###### Abstract
We study the decaying dynamics in the mirror-field interaction by means of the intrinsic decoherence scheme. Factorization of the mirror-field Hamiltonian with the use of displacement operators, allows us to calculate the explicit solution to Milburn's equation for arbitrary initial conditions. We show expectation values, correlations, and Husimi functions for the solutions obtained.
## 1 Introduction
The presence of decoherence in quantum mechanical systems is an undesirable effect coming mainly from two sources: interaction between internal degrees of freedom, and the interaction with the environment via some coupling [1]. Intrinsic decoherence [2, 3] is a topic that has recently raised great interest, as it has become a way to describe phase decay which is one of the most damaging effects of quantum physical systems, making mandatory its study. Therefore, quantifying the rate at which an isolated system loses its phase coherence in time is needed in order to try to avoid such a harmful phenomenon. We will apply its study here to an optomechanical system. Such systems deal with the interaction of light and mechanical devices coupled by means of their internal degrees of freedom in the most simple setting [4, 5, 6]. One realization of such systems is a cavity containing a quantized electromagnetic field coupled to a mirror experiencing harmonic motion; in which interesting phenomena can occur due to the feedback action between the subsystems, like optomechanical entanglement [7], or the creation of real photons in the cavity due to the dynamical Casimir effect and how the phase damping of the intrinsic decoherence intervene in the time dynamics [8].
Since the seminal work by Milburn [2] on the possibility to model decoherence based on a simple modification of unitary Schrodinger evolution, much work has been done pointing in several directions, depending on how decoherence is tried to be diminished, or toked in advance. Moreover, intrinsic decoherence produces a Master Equation of the Lindblad type making it possible to compare with other methods that describe open systems, like master equations and quantum trajectories.
The last decade has been very prolific in this field we want to board, the community has taken the task to study a vast amount of systems that can experience intrinsic decoherence. We can see research on quantification of non-classicality [9], qubits for quantum information processing [10], bipolar spin systems [11], quantum dot correlations [12], Heisenberg XYZ spin chains, quantum-memory-assisted two-qubit, and the temporal evolution of quantum correlations [13, 14, 15], quantum-memory-assisted entropic uncertainty, mixedness, and entanglement dynamics in two-qubit system [16], symmetric spin-orbit model [17], two-qubit quantum Fisher information [18], two-qubit maximally entangled Bell states [19], trapped ions [20], isolated Heisenber and Aubry-Andre spin models [21], two coupled quantum dots [22], N-level atomic system [23], two-coupled qubit two-level cavity [24], two-level atom [25], state transfer in spin channels [26], Heisenber anisotropic interaction [27], nonlocal advantage of quantum coherence [28], qutrit teleportation [29], and quantum dense coding [30]. When we look at specific
examples related optomechanical systems, we find notable works on Jaynes-Cummings under intrinsic decoherence [31, 32, 33], bimodal multiquanta Jaynes-Cummings [34], entanglement two Tavis-Cummings (no-RWA JC) [35], and ultra-strong coupled harmonic oscillator in cavities [36].
In this manuscript, we analyze the effects of intrinsic decoherence in a mirror and a quantized field coupled by means of radiation pressure and harmonic mechanical motion. By using solutions recently proposed by us to study the one-dimensional displaced harmonic oscillator [37] and the three-coupled one-dimensional harmonic oscillators [38], we solve the complete form of the equation instead of the master equations Milburn obtained at first.
## 2 Moving mirror-field interaction
The standard setup for the moving mirror-field interaction is a cavity enclosing the radiation, such as the quintessential Fabry-Perot interferometer [39], where two perfectly reflecting mirrors, one fixed and the other experimenting harmonic motion described by the macroscopic canonical position \(\hat{q}(t)\); inside the cavity, there's a coherent driven field \(\hat{a}\left(\hat{a}^{\dagger}\right)\) pumped by a tunable laser [5]. The interaction between the mirror and the field is given by radiation pressure coupled with the (microscopical) number of modes of the field and the (macroscopical) position quadrature of the mechanical device. The Hamiltonian for the moving mirror-field interaction then reads [40],
\[\hat{H}=\omega\hat{n}+\nu\hat{N}+\chi\hat{n}\left(\hat{b}^{\dagger}+\hat{b}\right) \tag{1}\]
where we have defined the number operators: \(\hat{n}=\hat{a}^{\dagger}\hat{a}\) for the field, and \(\hat{N}=\hat{b}^{\dagger}\hat{b}\) for the mirror. The parameters \(\omega\), \(\nu\), and \(\chi\) are the frequencies of the field, the mirror, and the coupling strength, respectively. This Hamiltonian may be rewritten in a diagonal form with the help of a composite _displacement operator_ in the mirror eigenbasis (\(\hat{b}\)) as
\[\hat{H}=\hat{D}^{\dagger}\left(\frac{\chi\hat{n}}{\nu}\right)\left[\nu\hat{N} +\omega\hat{n}-\frac{\chi^{2}}{\nu^{2}}\hat{n}^{2}\right]\hat{D}\left(\frac{ \chi\hat{n}}{\nu}\right), \tag{2}\]
where \(\hat{D}^{\dagger}\left(\frac{\chi\hat{n}}{\nu}\right):=\exp\left(\left[\frac{ \chi}{\nu}\hat{b}^{\dagger}-\frac{\chi^{*}}{\nu^{*}}\hat{b}\right]\otimes\hat {n}\right)\), that is a tensor element of the two eigenbases, acting as a true displacement operator on the mirror basis, and as an exponential of the number operator in the field basis.
Analytical solutionThere exists a straightforward solution of the Schrodinger equation associated to Hamiltonian (2) when the initial condition is set to \(\ket{\alpha}_{f}\otimes\ket{\beta}_{m}\), giving the expression (see refs. [40, 41] for the detailed solution via exponential disentanglement),
\[\ket{\psi(t)}=e^{-\ket{\alpha}^{2}/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{ \sqrt{n!}}e^{\mathrm{i}(\chi/\nu)^{2}n^{2}(t-\sin t)}\ket{n}\ket{\phi_{n}} \tag{3}\]
where \(\ket{\phi_{n}}\equiv\ket{\beta e^{-\mathrm{i}t}+(\chi/\nu)n(1-e^{-\mathrm{i}t })}\); with \(\beta\) the amplitude of the initial coherent state of the mirror; \(\alpha\) the amplitude of the initial coherent state of the field and \(\ket{n}\) are number states of the quantized field. This solution shows a constant number of photon modes in the field, and periodic amplitudes in the phonon modes and position (quadrature) in the mirror, see Figure 1. We expect that the behavior in the field is preserved after the decoherence drive is applied and the amplitudes and position in the mirror might experience decaying modulation depending on the parameters involved in the intrinsic decoherence dynamic.
Intrinsic decoherence schemeAccording to the seminal work of Milburn [2], we can observe decaying dynamics in the Schrodinger evolution of quantum systems modifying the mathematical structure of the governing equation. This modification takes into account the disorder-induced stochastic phase changes in the Hamiltonian of the system at very short times. In its closed form, the density matrix evolution turns out to be
\[\dot{\rho}_{S}\mapsto\dot{\rho}_{M}=\gamma\left(e^{-\mathrm{i}\frac{B}{\gamma }}\dot{\rho}e^{\mathrm{i}\frac{B}{\gamma}}-\dot{\rho}\right), \tag{4}\]
Figure 1: Expectation values for \((a)\) the number of phonon modes \(\langle\hat{N}\rangle\) and \((b)\) the position quadrature \(\langle\hat{q}\rangle\) in the mirror, using (3).
where \(\hat{H}\) is the system's Hamiltonian, and \(\gamma\) is the decay rate parameter of the intrinsic decoherence, giving the attributes to determine the time scale of the coherence suppression. We can, of course, expand equation (4) in terms of \(\gamma^{-1}\) to obtain the \(n\)-th order of approximation, which leads to the emergence of a Lindblad master equation. But on the other side, the equation (4) can be solved analytically using the adequate _ansatz_
\[\hat{\rho}(t)=e^{-\gamma t}e^{\hat{S}t}\hat{\rho}(0), \tag{5}\]
where we have defined the _superoperator_[42]
\[\hat{S}\hat{\rho}=\gamma e^{-i\frac{\hat{H}}{\gamma}}\hat{\rho}e^{i\frac{\hat{ H}}{\gamma}},\]
such that the Taylor expansion of the action gives
\[e^{\hat{S}t}\hat{\rho}(0)=\sum_{k=0}^{\infty}\frac{\left(\gamma t\right)^{k}}{k! }\,\hat{\rho}_{k}, \tag{6}\]
with the \(k\)-th element of the density matrix \(\hat{\rho}\) defined by
\[\hat{\rho}_{k}:=\left|\psi_{k}\right\rangle\left\langle\psi_{k}\right|,\qquad \left|\psi_{k}\right\rangle=e^{-\frac{\mathrm{i}k}{\gamma}\hat{H}}\left|\psi(0 )\right\rangle. \tag{7}\]
Then, using (7), inserting in (6), and putting all together with (5), we arrive at the explicit form of the solution, given by
\[\hat{\rho}(t)=e^{-\gamma t}\sum_{k=0}^{\infty}\frac{\left(\gamma t\right)^{k} }{k!}\left|\psi_{k}\right\rangle\left\langle\psi_{k}\right|. \tag{8}\]
For the particular system we are dealing with, the \(k\)-th element of the wavefunction takes the form
\[\left|\psi_{k}\right\rangle=\hat{D}^{\dagger}\left(\frac{\chi\hat{n}}{\nu} \right)e^{-\frac{\mathrm{i}k}{\gamma}\left[\nu\hat{N}+\omega\hat{n}-\frac{ \gamma^{2}}{\nu}\hat{n}^{2}\right]}\hat{D}\left(\frac{\chi\hat{n}}{\nu}\right) \left|\psi(0)\right\rangle, \tag{9}\]
that recalls the analytical Schrodinger solution. At this point, we can apply (9) to a direct product of initial wavefunctions for the mirror and the field: \(\left|\psi(0)\right\rangle=\left|\phi(0)\right\rangle_{m}\otimes\left|\theta( 0)\right\rangle_{f}\).
This depicted scheme is a mathematical procedure relying on a _Poisson model_ for the short-time stochastic behavior of the system, as Milburn stated. On the other side, selecting suitable initial conditions \(\left|\psi(0)\right\rangle\) is dependent on the particular system we have, in the case of optomechanical devices that coupled electromagnetic fields and harmonic moving mirror, the selection of coherent states \(\left|\alpha\right\rangle\) for the field and \(\left|\beta\right\rangle\) for the mirror can be viewed as obvious --in broad sense we have, of course, a system of two-coupled harmonic oscillators.-- Finally, the existence of the limit in (8) precludes how many analytical solutions can we obtain, since for some initial wavefunctions, the limit may not exist, nevertheless it can be always solved numerically at some truncation term \(<\infty\).
## 3 Solution and observables
### Expectation values
If we have a suitable operator \(\hat{A}\) that represents an observable \(a\), the expectation value of this operator can be obtained from the density matrix \(\hat{\rho}\) as \(\mathrm{tr}\left(\hat{\rho}\hat{A}\right)\equiv\left\langle\psi\right|\hat{A} \left|\psi\right\rangle\), for some wavefunction \(\left|\psi\right\rangle\) in the eigenbasis of the system [43]. Now, for the density matrix in (8) that depends on the \(k\)-th component \(\hat{\rho}_{k}\), the calculation of the expectation value reduces to
\[\left\langle\hat{A}\right\rangle:=e^{-\gamma t}\sum_{k=0}^{\infty}\frac{\left( \gamma t\right)^{k}}{k!}\left\langle\psi_{k}\right|\hat{A}\left|\psi_{k} \right\rangle, \tag{10}\]
for \(|\psi_{k}\rangle\) defined by (9). Thus, given the form of \(|\psi_{k}\rangle\), we need to obtain the action of the displacement operators and the exponential maps onto the operator \(\hat{A}\). At this point, we are interested in the most descriptive features in the optomechanical system: the number of photon modes in the field, that we predict has to be constant; the number of mechanical modes and quadrature in the mirror, that should have a decaying dynamic depending on the decoherence rate \(\gamma\); a set of statistical estimators, like the Hong-ou-Mandel parameters, and the covariance; finally, a representation in phase-space of the dynamics via the Husimi \(\hat{Q}\)-function.
#### 3.1.1 Number of phonon modes \(\langle\hat{N}\rangle\) in the mirror
We start calculating the expectation value of the number of phonon modes in the mirror. The left-right action of the wavefunction \(|\psi_{k}\rangle\) onto the operator \(\hat{N}\) is straightforward, since we know the expansion
\[\begin{split}\langle\psi_{k}|\,\hat{N}\,|\psi_{k}\rangle& =\langle\psi(0)|\left(\hat{D}^{\dagger}\left(\frac{\chi\hat{n}}{ \nu}\right)e^{\frac{\mathrm{i}k}{\gamma}\left[\nu\hat{N}+\omega\hat{n}-\frac{ \chi^{2}}{\nu}\hat{n}^{2}\right]}\hat{D}\left(\frac{\chi\hat{n}}{\nu}\right) \right)\hat{N}\left(\hat{D}^{\dagger}\left(\frac{\chi\hat{n}}{\nu}\right)e^{- \frac{\mathrm{i}k}{\gamma}\left[\nu\hat{N}+\omega\hat{n}-\frac{\chi^{2}}{\nu }\hat{n}^{2}\right]}\hat{D}\left(\frac{\chi\hat{n}}{\nu}\right)\right)|\psi(0) \rangle\\ &=\langle\psi(0)|\left(\hat{D}^{\dagger}\left(\frac{\chi\hat{n}}{ \nu}\right)e^{\frac{\mathrm{i}k}{\gamma}\left[\nu\hat{N}+\omega\hat{n}-\frac{ \chi^{2}}{\nu}\hat{n}^{2}\right]}\right)\times\left(\hat{N}-\frac{\chi}{\nu} \hat{n}\left(\hat{b}^{\dagger}+\hat{b}\right)+\frac{\chi^{2}}{\nu^{2}}\hat{n} ^{2}\right)\left(e^{-\frac{\mathrm{i}k}{\gamma}\left[\nu\hat{N}+\omega\hat{n} -\frac{\chi^{2}}{\nu}\hat{n}^{2}\right]}\hat{D}\left(\frac{\chi\hat{n}}{\nu} \right)\right)|\psi(0)\rangle\\ &=\langle\psi(0)|\left(\hat{N}+\frac{\chi}{\nu}\hat{n}\hat{b}^{ \dagger}\left(1-e^{\frac{\mathrm{i}k}{\gamma}\nu}\right)+\frac{\chi}{\nu}\hat {n}\hat{b}\left(1-e^{-\frac{\mathrm{i}k}{\gamma}\nu}\right)+4\frac{\chi^{2}}{ \nu^{2}}\hat{n}^{2}\sin\left(\frac{k}{2\gamma}\nu\right)^{2}\right)|\psi(0) \rangle\,.\end{split} \tag{11}\]
For an initial wavefunction as the product of coherent states in the field and mirror, \(|\psi(0)\rangle=|\alpha\rangle_{f}\otimes|\beta\rangle_{m}\), we have
\[\begin{split}&\left.\left.\left.\left.\left.\left.\left.\left. \langle\beta|\right.\right.\right.\right.\right.\right\rangle\right|_{f}\left \langle\alpha|\left(\hat{N}+\frac{\chi}{\nu}\hat{n}\hat{b}^{\dagger}\left(1-e^ {\frac{\mathrm{i}k}{\gamma}\nu}\right)+\frac{\chi}{\nu}\hat{n}\hat{b}\left(1 -e^{-\frac{\mathrm{i}k}{\gamma}\nu}\right)\right.\left.4\frac{\chi^{2}}{\nu^{ 2}}\hat{n}^{2}\sin\left(\frac{k}{2\gamma}\nu\right)^{2}\right)|\alpha\rangle_{ f}\left|\beta\right\rangle_{m}=\\ &\left.\left.\left.\left.\left|\beta|^{2}+\frac{\chi}{\nu}|\alpha |^{2}\beta^{*}\left(1-e^{\frac{\mathrm{i}k\nu}{\gamma}}\right)+\frac{\chi}{ \nu}|\alpha|^{2}\beta\left(1-e^{\frac{-\mathrm{i}k\nu}{\gamma}}\right)+4\frac{ \chi^{2}}{\nu^{2}}|\alpha|^{2}\left(1+|\alpha|^{2}\right)\sin\left(\frac{1}{2} \frac{k\nu}{\gamma}\right)^{2}\right.\right.=\left.\left.\left\langle\psi_{k} |\,\hat{N}\,|\psi_{k}\right\rangle\right.\right.\right.\right.\right.\end{split} \tag{12}\]
Therefore, the expectation value for the modes in the mirror is then
\[\begin{split}\langle\hat{N}\rangle&=e^{-\gamma t} \sum_{k=0}^{\infty}\frac{(\gamma t)^{k}}{k!}\left\langle\psi_{k}|\,\hat{N}\,| \psi_{k}\right\rangle\\ &=e^{-\gamma t}\sum_{k=0}^{\infty}\frac{(\gamma t)^{k}}{k!}\left(| \beta|^{2}+\frac{\chi}{\nu}|\alpha|^{2}\beta^{*}\left(1-e^{\frac{\mathrm{i}k \nu}{\gamma}}\right)+\frac{\chi}{\nu}|\alpha|^{2}\beta\left(1-e^{\frac{- \mathrm{i}k\nu}{\gamma}}\right)+4\frac{\chi^{2}}{\nu^{2}}|\alpha|^{2}\left(1+| \alpha|^{2}\right)\sin\left(\frac{1}{2}\frac{k\nu}{\gamma}\right)^{2}\right) \\ &=e^{-\gamma t}\left[|\beta|^{2}e^{\gamma t}+\frac{\chi}{\nu}| \alpha|^{2}\beta^{*}\left(e^{\gamma t}-e^{\gamma t\epsilon\frac{\mathrm{i}k \nu}{\gamma}}\right)+\frac{\chi}{\nu}|\alpha|^{2}\beta\left(e^{\gamma t}-e^{ \gamma t\epsilon\frac{-\mathrm{i}\nu}{\gamma}}\right)+\frac{\chi^{2}}{\nu^{2}}| \alpha|^{2}\left(1+|\alpha|^{2}\right)\\ &\qquad\times\left.\left(2e^{\gamma t}-\left(e^{\gamma t\epsilon \frac{\mathrm{i}k}{\gamma}}+e^{\gamma t\epsilon^{-\mathrm{i}k\nu}{\gamma}} \right)\right)\,,\end{split} \tag{13}\]
that returns the initial value at \(t=0\), \(\langle\hat{N}(t=0)\rangle=|\beta|^{2}\). For an initial condition \(|\psi(0)\rangle=|2\mathrm{i},1+2\mathrm{i}\rangle\), in Figure 2 we show how the number of phonon modes variates, modulates, and decays over time. The parameters that change are the decaying rate \(\gamma\) (above), the coupling strength between mirror and field \(\chi\) (middle), and the frequency of the mirror \(\nu\) (below). The number of phonon modes is strictly positive, so we are seeing how different set of parameters involved leads to decaying dynamics in a more fast or slow fashion. There's a relevant feature of the dynamics when we observe the evolution over the set of decoherence parameters \(\gamma\). Despite each figure showing the creation of phonons from the initial value of \(\tilde{N}(0)=5\), when we look at the figure above, it is clear that there's a central value around \(\tilde{N}(t)\approx 22\) where other curves deviate when we increase or decrease the value of \(\gamma\).
Figure 2: Number of modes \(\langle\hat{N}\rangle\) in the mirror. We variate the parameters involved in the decoherence evolution. a) Decaying parameter \(\gamma\) between \(1\) and \(9\), maintaining \(\chi=0.5\) and \(\nu=0.9\). b) Coupling parameter \(\chi\) between \(0.1\) and \(0.9\), maintaining \(\gamma=5\) and \(\nu=0.9\). c) Mirror oscillation frequency \(\nu\) between \(0.1\) and \(0.9\), maintaining \(\gamma=5\) and \(\chi=0.5\). The initial conditions in the wavefunctions are \(|\alpha,\beta\rangle=|2\mathrm{i},1+2\mathrm{i}\rangle\).
#### 3.1.2 Position quadrature \(\langle\hat{b}^{\dagger}+\hat{b}\rangle\) in the mirror
Turning the attention to the position variation of the mirror, we calculate the expectation value of twice the quadrature operator, taking the action of the wavefunction \(\left|\psi_{k}\right\rangle\) over the operator we have
\[\left\langle\psi_{k}\right|\hat{b}^{\dagger}+\hat{b}\left|\psi_{k}\right\rangle= \left\langle\psi(0)\right|\left[\hat{b}^{\dagger}e^{\frac{\mathrm{i}k\nu}{ \gamma}}+\hat{b}e^{-\frac{\mathrm{i}k\nu}{\gamma}}-4\frac{\chi}{\nu}\hat{n} \sin\left(\frac{k\nu}{2\gamma}\right)^{2}\right]\left|\psi(0)\right\rangle, \tag{14}\]
that for an initial wavefunction as the product of coherent states in the field and mirror, \(\left|\psi(0)\right\rangle=\left|\alpha\right\rangle_{f}\otimes\left|\beta \right\rangle_{m}\), the expression reduces to
\[\begin{split}\left\langle\psi_{k}\right|\hat{b}^{\dagger}+\hat{b} \left|\psi_{k}\right\rangle=_{m}&\left\langle\left.\beta \right|_{f}\right\langle\alpha\right|\left[\hat{b}^{\dagger}e^{\frac{\mathrm{ i}k\nu}{\gamma}}+\hat{b}e^{-\frac{\mathrm{i}k\nu}{\gamma}}-4\frac{\chi}{\nu}\hat{n} \sin\left(\frac{k\nu}{2\gamma}\right)^{2}\right]\left|\alpha\right\rangle_{f} \left|\beta\right\rangle_{m}\\ &=\beta^{*}e^{\frac{\mathrm{i}k\nu}{\gamma}}+\beta e^{-\frac{ \mathrm{i}k\nu}{\gamma}}-4\frac{\chi}{\nu}\left|\alpha\right|^{2}\sin\left( \frac{k\nu}{2\gamma}\right)^{2}\end{split} \tag{15}\]
This renders the expectation value for the position quadrature of the mirror as
\[\begin{split}\langle\hat{b}^{\dagger}+\hat{b}\rangle& =e^{-\gamma t}\sum_{k=0}^{\infty}\frac{\left(\gamma t\right)^{k} }{k!}\left\langle\psi_{k}\right|\hat{b}^{\dagger}+\hat{b}\left|\psi_{k}\right\rangle \\ &=e^{-\gamma t}\left(\beta^{*}e^{\gamma t}e^{\frac{\mathrm{i}k\nu} {\gamma}}+\beta e^{\gamma te^{-\frac{\mathrm{i}k\nu}{\gamma}}}-\frac{\chi}{\nu }\left|\alpha\right|^{2}\left(2e^{\gamma t}-\left[e^{\gamma te^{\frac{\mathrm{ i}k\nu}{\gamma}}}+e^{\gamma te^{-\frac{\mathrm{i}k\nu}{\gamma}}}\right] \right)\right),\end{split} \tag{16}\]
where the initial value at \(t=0\) is \(\left\langle\left(\hat{b}^{\dagger}+\hat{b}\right)\right\rangle=\beta^{*}+ \beta\equiv 2\mathrm{Re}\left(\beta\right)\). In Figure 3 we show how the position quadrature of the mirror variates, modulates, and decays over time. The parameters that change are the decaying rate \(\gamma\) (above), the coupling strength between mirror and field \(\chi\) (middle), and the frequency of the mirror \(\nu\) (below). If we look at the figure above, where the decaying rate variates, we see again a central value \(\approx 5\) where the curves change their amplitude according to the increase or decreasing value of \(\gamma\).
#### 3.1.3 Number of photons \(\langle\hat{n}\rangle\) in the field
As we expect from the behavior of the original system (3), the expectation value for the number of photons in the field is a constant with a value of
\[\left\langle\hat{n}\right\rangle=\left|\alpha\right|^{2}, \tag{17}\]
because every displacement operator and their exponential map involved commutes with the number operator \(\hat{n}\).
### Hong-ou-Mandel parameter and covariance
We now take place to quantify and analyze a couple of statistical descriptors of the system, the Hong-ou-Mandel parameter, and the covariance between the photon and phonon modes of the field and the mirror, respectively.
The covariance of two well-behaved operators \(\hat{A}\) and \(\hat{B}\) is defined as
\[\mathrm{cov}\left(\hat{A},\hat{B}\right):=\left\langle\hat{A}\hat{B}\right\rangle -\left\langle\hat{A}\right\rangle\left\langle\hat{B}\right\rangle, \tag{18}\]
where \(\left\langle\cdot\right\rangle\) is the standard expected value of an operator on a wavefunction. Note that the variance is just the autocovariance in the sense \(\mathrm{var}\left(\hat{A}\right)\equiv\mathrm{cov}\left(\hat{A},\hat{A} \right)=\left\langle\hat{A}^{2}\right\rangle-\left\langle\hat{A}\right\rangle ^{2}\). With this, we can define the Hong-ou-Mandel parameter as
\[\hat{O}_{\mathrm{H}\mathrm{M}\mathrm{p}}:=\frac{\mathrm{var}\left(\hat{O} \right)}{\left\langle\hat{O}\right\rangle}. \tag{19}\]
Figure 3: Position quadrature \(\langle\hat{b}^{\dagger}+\hat{b}\rangle\) in the mirror. We variate the parameters involved in the decoherence evolution. a) Decaying parameter \(\gamma\) between \(1\) and \(9\), maintaining \(\chi=0.5\) and \(\nu=0.9\). b) Coupling parameter \(\chi\) between \(0.1\) and \(0.9\), maintaining \(\gamma=5\) and \(\nu=0.9\). c) Mirror oscillation frequency \(\nu\) between \(0.1\) and \(0.9\), maintaining \(\gamma=5\) and \(\chi=0.5\). The initial conditions in the wavefunctions are \(|\alpha,\beta\rangle=|2\mathrm{i},1+2\mathrm{i}\rangle\).
For our particular case, the expectation values are defined in terms of the action of the wavefunctions \(\ket{\psi_{k}}\) on the designed operator \(\hat{A}\) and \(\hat{B}\) as
\[\langle\hat{A}\hat{B}\rangle:=e^{-\gamma t}\sum_{k=0}^{\infty}\;\frac{\left( \gamma t\right)^{k}}{k!}\bra{\psi_{k}}\!\hat{A}\hat{B}\ket{\psi_{k}},\qquad \bra{\hat{A}}^{2}:=\left(e^{-\gamma t}\sum_{k=0}^{\infty}\;\frac{\left(\gamma t \right)^{k}}{k!}\bra{\psi_{k}}\!\hat{A}\ket{\psi_{k}}\right)^{2}, \tag{20}\]
Variance of \(\hat{n}\).As stated before, the expectation value of the number of photons in the field is constant in time, then the variance evolution is effectively zero. Meaning that the Hong-ou-Mandel parameter has nothing to say about the correlation between the field itself.
Variance of \(\hat{N}\).The value of the variance for the phonon modes of the mirror is defined by \(\bra{N^{2}}-\bra{N}^{2}\), with
\[\bra{\hat{N}^{2}} =e^{-\gamma t}\sum_{k=0}^{\infty}\frac{\left(\gamma t\right)^{k} }{k!}\left[|\beta|^{2}+\frac{\chi}{\nu}|\alpha|^{2}\beta^{*}\left(1-e^{\frac{ \mathrm{i}kx}{\gamma}}\right)+\frac{\chi}{\nu}|\alpha|^{2}\beta\left(1-e^{ \frac{-\mathrm{i}kx}{\gamma}}\right)+4\frac{\chi^{2}}{\nu^{2}}|\alpha|^{2} \left(1+|\alpha|^{2}\right)\sin\left(\frac{1}{2}\frac{k\nu}{\gamma}\right)^{2 }\right]^{2} \tag{21}\] \[\bra{\hat{N}}^{2} =\left[e^{-\gamma t}\sum_{k=0}^{\infty}\frac{\left(\gamma t \right)^{k}}{k!}\left(|\beta|^{2}+\frac{\chi}{\nu}|\alpha|^{2}\beta^{*}\left(1 -e^{\frac{\mathrm{i}kx}{\gamma}}\right)+\frac{\chi}{\nu}|\alpha|^{2}\beta\left( 1-e^{\frac{-\mathrm{i}kx}{\gamma}}\right)+4\frac{\chi^{2}}{\nu^{2}}|\alpha|^{2 }\left(1+|\alpha|^{2}\right)\sin\left(\frac{1}{2}\frac{k\nu}{\gamma}\right)^{2 }\right)\right]^{2}, \tag{22}\]
where the sums in (22) can be done analytically, giving cumbersome and lengthy expressions. Having these equations, we can calculate the Hong-ou-Mandel parameter for the number of phonons evolution in the mirror
\[\hat{N}_{\mathrm{HMp}}=\frac{\bra{\hat{N}^{2}}-\bra{\hat{N}}^{2}}{\bra{\hat{N }}}, \tag{23}\]
as shown in the Figure 4. We can see that the transition from classical to non-classical behavior, and vice versa, is directly proportional to the amplitude of the decoherence rate \(\gamma\).
Covariance between the photon and phonon modes.We can quantify how the evolution of the phonon modes of the mirror is correlated to the (constant) evolution of the photon modes of the field. For this, we take the covariance
\[\mathrm{cov}\left(\hat{n},\hat{N}\right)=\bra{\hat{n}\hat{N}}-\bra{\hat{n}} \bra{\hat{N}}, \tag{24}\]
where the joint expect value \(\bra{\hat{n}\hat{N}}\) is
\[\bra{\hat{n}\hat{N}} =e^{-\gamma t}\sum_{k=0}^{\infty}\frac{\left(\gamma t\right)^{k} }{k!}\bra{\psi_{k}}\!\hat{n}\hat{N}\ket{\psi_{k}} \tag{25}\] \[=|\alpha|^{2}|\beta|^{2}+\frac{\chi}{\nu}|\alpha|^{2}\left(1+| \alpha|^{2}\right)\beta^{*}\left(1-e^{\frac{\mathrm{i}kx}{\gamma}}\right)+ \frac{\chi}{\nu}|\alpha|^{2}\left(1+|\alpha|^{2}\right)\beta\left(1-e^{\frac {-\mathrm{i}kx}{\gamma}}\right)\] \[+4\frac{\chi^{2}}{\nu^{2}}|\alpha|^{2}\left(1+3|\alpha|^{2}+| \alpha|^{2}\right)\sin\left(\frac{1}{2}\frac{k\nu}{\gamma}\right)^{2},\]
that differs from \(\bra{\hat{n}}\bra{\hat{N}}\) due the emergence of a \(\hat{n}^{3}\) term. Despite this, it seems that there are no new functions involving time, just a scale in the amplitude given by the photon modes \(\hat{n}^{k}\).
### Husimi \(\hat{Q}\) function
The Husimi \(\hat{Q}\) function is defined as the expectation value of the density matrix on the coherent basis as \(\left|\hskip-1.422638pt\right|\)
\[\mathrm{Q}(\epsilon)=\frac{1}{\pi}\bra{\epsilon}\hat{\rho}\ket{\epsilon}, \tag{26}\]
where \(\left|\epsilon\right\rangle\) is the coherent state ket on a certain suitable basis. When we are dealing with multiple subsystems, the Husimi function can be taken as the pseudo-distribution on their respective phase space of the subsystem. If we have a system decomposed as \(\hat{H}=\hat{H}_{f}\otimes\hat{H}_{m}\), the Husimi function is then
\[\mathrm{Q}(\epsilon,\zeta)=\frac{1}{\pi}\left.{}_{m}\left\langle\zeta\right|_{f }\left\langle\epsilon\right|\hat{\rho}\left|\epsilon\right\rangle_{f}\left| \zeta\right\rangle_{m},\right. \tag{26}\]
where \(f\) and \(m\) are the subscripts for _field_ and _mirror_ subsystem, respectively.
We know from (7) and (9) that the density matrix of the field-mirror system is defined as
\[\hat{\rho}=e^{-\gamma t}\sum_{k=0}^{\infty}\frac{\left(\gamma t\right)^{k}}{k! }\left|\psi_{k}\right\rangle\left\langle\psi_{k}\right|, \tag{27}\]
where the \(k\)-th component of the wavefunction \(\psi_{k}\) is
\[\left|\psi_{k}\right\rangle=\hat{D}^{\dagger}\left(\frac{\chi}{\nu}\hat{n} \right)e^{-\frac{\mathrm{i}k}{\gamma}\left(\omega\hat{n}+\nu\hat{N}-\frac{ \chi^{2}}{\nu^{2}}\hat{n}^{2}\right)}\hat{D}^{\dagger}\left(\frac{\chi}{\nu} \hat{n}\right)\left|\psi(0)\right\rangle, \tag{28}\]
for some initial wavefunction \(\left|\psi(0)\right\rangle\). When \(\left|\psi(0)\right\rangle=\left|\alpha\right\rangle_{f}\left|\beta\right\rangle _{m}\), the Husimi function (26) can be obtained in the coherent basis \(\left|a\right\rangle_{f}\left|b\right\rangle_{m}\) as
\[\mathrm{Q}\left(a,\alpha;b,\beta\right):=\frac{1}{\pi}\left\langle b\right| \left\langle a\right|\hat{\rho}\left|a\right\rangle\left|b\right\rangle, \tag{29}\]
Figure 4: Hong-ou-Mandel parameter of the phonon mode number \(\hat{N}\), eq. (4). According to the definition, when the parameter goes above the unitary amplitude, we expect classical behavior in the evolution of the system. It is clear that this feature is directly proportional to the value of the decoherence rate \(\gamma\). We set the values \(\nu=0.5\), \(\chi=0.9\), \(\left|\psi_{0}\right\rangle=\left|2\mathrm{i},1+2\mathrm{i}\right\rangle\).
that renders explicitly the terms
\[\left\langle b\right|\left\langle a\right|\hat{\rho}\left|a\right\rangle\left|b \right\rangle\equiv\left\langle b\right|\left\langle a\right|\psi_{k}\right\rangle \left\langle\psi_{k}\left|a\right\rangle\left|b\right\rangle. \tag{30}\]
The explicit calculation shows that
\[\left\langle b\right|\left\langle a\right|\psi_{k}\right\rangle =\left\langle b\right|\left\langle a\right|\hat{D}^{\dagger} \left(\frac{\chi}{\nu}\hat{n}\right)e^{-\frac{\mathrm{i}k}{\gamma}\left( \omega\hat{n}+\nu\hat{N}-\frac{\chi^{2}}{\nu^{2}}\hat{n}^{2}\right)}\hat{D} \left(\frac{\chi}{\nu}\hat{n}\right)\left|\alpha\right\rangle\left|\beta\right\rangle \tag{31}\] \[=\left\langle b\right|\hat{D}^{\dagger}\left(\frac{\chi}{\nu}|a|^ {2}\right)e^{-\frac{\mathrm{i}k}{\gamma}\left(\omega a^{*}\alpha+\nu\hat{N}- \frac{\chi^{2}}{\nu^{2}}(a^{*})^{2}(\alpha)^{2}\right)}\hat{D}\left(\frac{ \chi}{\nu}|\alpha|^{2}\right)\left|\beta\right\rangle\] \[=\left\langle b+\frac{\chi}{\nu}|a|^{2}|e^{-\frac{\mathrm{i}k}{ \gamma}\left(\omega a^{*}\alpha+\nu\hat{N}-\frac{\chi^{2}}{\nu^{2}}(a^{*})^{2} (\alpha)^{2}\right)}\left|\beta+\frac{\chi}{\nu}|\alpha|^{2}\right\rangle\] \[=e^{-\frac{\mathrm{i}k}{\gamma}\left[\omega a^{*}\alpha+\nu\left( b+\frac{\chi}{\nu}|a|^{2}\right)^{*}\left(\beta+\frac{\chi}{\nu}|\alpha|^{2} \right)-\frac{\chi^{2}}{\nu^{2}}(a^{*})^{2}(\alpha)^{2}\right]}\left\langle a \right|\alpha\rangle\left\langle b+\frac{\chi}{\nu}|a|^{2}|\beta+\frac{\chi} {\nu}|\alpha|^{2}\right\rangle,\]
for \(a,\alpha,b,\beta\in\mathbb{C}\). From (31) we can obtain the dagger \(\left\langle\psi_{k}\left|a\right\rangle\left|b\right\rangle\), that finally gives expression
\[\left\langle b\right|\left\langle a\right|\psi_{k}\right\rangle\langle\psi_{ k}\left|a\right\rangle\left|b\right\rangle=\frac{1}{\pi}e^{-\frac{\mathrm{i}k}{ \gamma}\left[\omega f(a,\alpha)+\nu g(a,\alpha;b,\beta)-\frac{\chi^{2}}{\nu^{2 }}\hat{n}(a,\alpha)\right]}e^{-|a-\alpha|^{2}}e^{-\left|(b-\beta)+\frac{\chi} {\nu}\left(|a|^{2}-|\alpha|^{2}\right)\right|^{2}}, \tag{32}\]
where the functions inside the first exponential are defined as
\[f(a,\alpha)=a^{*}\alpha-a\alpha^{*} \tag{33}\] \[g(a,\alpha;b,\beta)=\left(b+\frac{\chi}{\nu}|a|^{2}\right)^{*} \left(\beta+\frac{\chi}{\nu}|\alpha|^{2}\right)-\left(b+\frac{\chi}{\nu}|a|^{ 2}\right)\left(\beta+\frac{\chi}{\nu}|\alpha|^{2}\right)^{*}\] \[h(a,\alpha)=(a^{*})^{2}(\alpha)^{2}-(a)^{2}(\alpha^{*})^{2}.\]
Finally, with the equations (8) and (32), the Husimi function is explicitly given by
\[\mathrm{Q}\left(a,\alpha;b,\beta\right)=\frac{1}{\pi}e^{-\gamma t}e^{\gamma te ^{-\frac{\mathrm{i}}{\gamma}\left[\omega f(a,\alpha)+\nu g(a,\alpha;b,\beta)- \frac{\chi^{2}}{\nu^{2}}\hat{n}(a,\alpha)\right]}}e^{-|a-\alpha|^{2}}e^{- \left|(b-\beta)+\frac{\chi}{\nu}\left(|a|^{2}-|\alpha|^{2}\right)\right|^{2}}, \tag{34}\]
that resembles Gaussians in phase space displaced by \(\alpha\) and \(\beta\), in which amplitude is modulated by the first terms that are time-dependent.
In Figures 6 and 5 we show the evolution of the initial states of the field and the mirror, respectively. We set \(\left|\psi_{0}\right\rangle=|2\mathrm{i},1+2\mathrm{i}\rangle\), \(\nu=0.5\), \(\chi=0.9\) and \(\gamma=20\). According to the expected value for the number of photons in the field, we see that the \(\hat{Q}\)-function evolves as a harmonic oscillator; on the other side, the \(\hat{Q}\)-function for the mirror evolves according from what the Hong-ou-Mandel parameters suggest: at some point, the initial coherent state transit from a non-classical to classical description, when \(t>4\) we see that the decoherence stall the movement around phase-space.
## 4 Analysis and conclusions
We solve the mirror-field interaction using the intrinsic decoherence scheme from the complete Milburn equation (8). For this particular driven system, a _decoherence parameter_\(\gamma\) defines the strength in the decaying dynamics. We obtain the significant expected values for the number of mechanical phonon modes, and the position quadrature; also we observe the constant nature of the number of photon modes inside the cavity. Although the radiation pressure is parametrically coupled to the harmonic motion of the mirror, the number of quanta remains the same. For a better understanding of the feedback between the quantized field and the mechanical mirror, we calculate the covariance between the number of photons and phonons, giving just an amplitude scaling proportional to \(\hat{n}^{k}\), but no new time-dependent dynamics. The Hong-ou-Mandel parameter is obtained for the photon and phonon modes. The photon modes have nothing new to say, but the phonon modes give insights into how the decoherence made the initial coherent state in the mirror transit from a non-classical to a classical description. This last can be seen also in the Husimi \(\hat{Q}\)-functions we show for the field and the mirror, where the behavior on phase-space enables us to talk about the quantized nature of the photon modes and the macroscopic nature of the moving mirror.
## Declarations
### Funding
Consejo Nacional de Ciencia y Tecnologia (Postdoctoral Grant 2021 and 2022)
### Acknowledgements
A.R. Urzua thanks Dr. Francisco Recamier (ICF UNAM) for the help in understanding the Hong-ou-Mandel parameter. A.R. also thanks ICF UNAM for the logistical support during the postdoctoral stay.
|
2306.05836 | Can Large Language Models Infer Causation from Correlation? | Causal inference is one of the hallmarks of human intelligence. While the
field of CausalNLP has attracted much interest in the recent years, existing
causal inference datasets in NLP primarily rely on discovering causality from
empirical knowledge (e.g., commonsense knowledge). In this work, we propose the
first benchmark dataset to test the pure causal inference skills of large
language models (LLMs). Specifically, we formulate a novel task Corr2Cause,
which takes a set of correlational statements and determines the causal
relationship between the variables. We curate a large-scale dataset of more
than 200K samples, on which we evaluate seventeen existing LLMs. Through our
experiments, we identify a key shortcoming of LLMs in terms of their causal
inference skills, and show that these models achieve almost close to random
performance on the task. This shortcoming is somewhat mitigated when we try to
re-purpose LLMs for this skill via finetuning, but we find that these models
still fail to generalize -- they can only perform causal inference in
in-distribution settings when variable names and textual expressions used in
the queries are similar to those in the training set, but fail in
out-of-distribution settings generated by perturbing these queries. Corr2Cause
is a challenging task for LLMs, and would be helpful in guiding future research
on improving LLMs' pure reasoning skills and generalizability. Our data is at
https://huggingface.co/datasets/causalnlp/corr2cause. Our code is at
https://github.com/causalNLP/corr2cause. | Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona Diab, Bernhard Schölkopf | 2023-06-09T12:09:15Z | http://arxiv.org/abs/2306.05836v3 | # Can Large Language Models Infer
###### Abstract
Causal inference is one of the hallmarks of human intelligence. While the field of CausalNLP has attracted much interest in the recent years, existing causal inference datasets in NLP primarily rely on discovering causality from empirical knowledge (e.g. commonsense knowledge). In this work, we propose the first benchmark dataset to test the pure causal inference skills of large language models (LLMs). Specifically, we formulate a novel task Corr2Cause, which takes a (set of) correlational statements and determines the causal relationship between the variables. We curate a large-scale dataset of more than 400K samples, on which we evaluate seventeen existing LLMs. Through our experiments, we identify a key shortcoming of LLMs in terms of their causal inference skills, and show that these models achieve almost close to random performance on the task. This shortcoming is somewhat mitigated when we try to re-purpose LLMs for this skill via finetuning, but we find that these models still fail to generalize - they can only perform causal inference in in-distribution settings when variable names and textual expressions used in the queries are similar to those in the training set, but fail in out-of-distribution settings generated by perturbing these queries. Corr2Cause is a challenging task for LLMs, and would be helpful in guiding future research on improving LLMs' pure reasoning skills and generalizability.1
Footnote 1: Our data is at [https://huggingface.co/datasets/causalnlp/corr2cause](https://huggingface.co/datasets/causalnlp/corr2cause). Our code is at [https://github.com/causalNLP/corr2cause](https://github.com/causalNLP/corr2cause).
## 1 Introduction
Causal inference is a crucial reasoning ability of human intelligence. It is a fundamental aspect of reasoning that involves establishing the correct causal relationships between variables or events. Roughly, there are two distinct ways to obtain causality: one through empirical knowledge, e.g., we know from common sense that preparing a birthday party for a friend will make them happy; the other through _pure causal reasoning_, as causality can be formally argued and reasoned about using known procedures and rules from causal inference (Spirtes et al., 2000; Pearl, 2009; Peters et al., 2017). For example, we know that only knowing that A correlates with B does not mean that A causes B. We also know another property from pure causal inference, specifically the study of causal discovery (Spirtes et al., 2000; Spirtes and Zhang, 2016; Glymour et al., 2019), that if A and B are originally independent of each other, but become correlated given C, then we can infer that, in this closed system, C is a common effect of A and B, as illustrated in Figure 1. This collider phenomenon can be used to deny the causation between A and B, regardless of what realizations the variables A, B, and C take.
We formulate this task as a new task for NLP, namely _correlation-to-causation inference_ (Corr2Cause), and argue that this is a must-have skill for large language models (LLMs). Imagine
the scenario in Figure 1, where in the training corpus there are a large number of correlations, such as the word _vaccine_ correlating with an increased number of disease cases. If we take the position that the success of LLMs (Radford et al., 2019; Devlin et al., 2019; Ouyang et al., 2022; Zhang et al., 2022; OpenAI, 2023, _inter alia_) lies in capturing a vast set of statistical correlations among terms (Bender et al., 2021), then the crucial yet missing step is _how to process such correlations_ and infer causal relationships, for which a fundamental building block is this Corr2Cause inference skill.
To this end, we collect the first dataset, Corr2Cause, to test the pure causal reasoning abilities of large language models. All the questions in this dataset are centered around testing when it is valid or invalid to infer causation from correlation. To systematically compose this dataset, we ground our generalization process in the formal framework of causal discovery (Spirtes et al., 1993, 2000; Glymour et al., 2016; Spirtes and Zhang, 2016; Glymour et al., 2019), which provides rules about how to deduce causal relations among variables given their statistical correlation in the observational data. We generate more than 400K data points, and label a correlation-causation statement pair as valid if and only if there is a bijective mapping between the statistical correlation and the underlying causality.
Based on our Corr2Cause dataset with 400K samples, we investigate two main research questions: (1) How well do existing LLMs perform on this task? (2) Can existing LLMs be re-trained or re-purposed on this task and obtain robust causal inference skills? Through extensive experiments, we show empirically that none of the seventeen existing LLMs we investigate perform well on this pure causal inference task. We also show that although LLMs can demonstrate better performance after being finetuned on the data, the causal inference skills attained by them are not robust. In summary, our contributions are as follows:
1. We propose the novel task of Corr2Cause, to probe an aspect of LLMs reasoning ability, _pure causal inference_;
2. We compose a dataset of over 400K samples, using insights from causal discovery;
3. We evaluate the performance of seventeen LLMs on our dataset, finding that all of them perform poorly, close to the random baseline.
4. We further explored whether LLMs can learn the skill through finetuning, and find that but LLMs fail to robustly manage the skill with out-of-distribution perturbations, and suggest future work to explore more ways to enhance the pure causal inference skill in LLMs.
## 2 Preliminaries: Causal Inference
### Directed Graphical Causal Models (DGCMs)
A directed graphical causal model (DGCM) is a commonly used representation to express the causal relations among a set of variables. Given a set of \(N\) variables \(\mathbf{X}=\{X_{1},\dots,X_{N}\}\), we can encode the causal relations among them using a directed graph \(\mathcal{G}:=(\mathbf{X},\mathbf{E})\), where \(\mathbf{E}\) is the set of directed edges. Each edge \(e_{i,j}\in\mathbf{E}\) represents a causal link \(X_{i}\to X_{j}\), meaning that \(X_{i}\) is a direct cause of \(X_{j}\). In the context of this work, we take the common assumption of directed acyclic graphs (DAGs), which most causal discovery methods use (Glymour et al., 2019), as graphs with cycles can make the causal discovery process arbitrarily hard.
Figure 1: Illustration of the motivation behind our task and dataset.
Following the graph-theoretic terminology, we use an analogy of the ancestry tree to denote the relations between two variables. For example, we call \(X_{i}\) as a _parent_ of \(X_{j}\) if there is a directed edge \(X_{i}\to X_{j}\) in the graph, and, thus, \(X_{j}\) is a _child_ of \(X_{i}\). Similarly, we denote \(X_{i}\) as an _ancestor_ of \(X_{j}\) if there exists a directed path from \(X_{i}\) to \(X_{j}\), and, thus, \(X_{j}\) is a _descendent_ of \(X_{i}\). Note that a parent is a special case of an ancestor where the directed path has a length of 1.
For convenience, we also introduce the notions for some special three-variable relations. Given two variables \(X_{i}\) and \(X_{j}\), we call a third variable \(X_{k}\) a _confounder_ (i.e., _common cause_) if \(X_{k}\) is a parent of both \(X_{i}\) and \(X_{j}\); a _collider_ (i.e., _common effect_) if \(X_{k}\) is a child of both \(X_{i}\) and \(X_{j}\); and a _mediator_ if \(X_{k}\) is both a child of \(X_{i}\), and a parent of \(X_{j}\).
### D-Separation and Markov Property
D-SeparationD-separation (Pearl, 1988) is a fundamental concept in graphical models used to determine whether two sets of nodes \(\mathbf{X}\) and \(\mathbf{Y}\) in a DAG \(\mathcal{G}\) are conditionally independent given a third set of nodes \(\mathbf{Z}\), where the three sets are disjoint. We say that \(\mathbf{X}\) and \(\mathbf{Y}\) are d-separated by \(\mathbf{Z}\) if all paths between any node in \(\mathbf{X}\) and any node in \(\mathbf{Y}\) are _blocked_ by the conditioning set \(\mathbf{Z}\). A path between \(\mathbf{X}\) and \(\mathbf{Y}\) is blocked by \(\mathbf{Z}\) if there exists a node \(A\in\mathbf{Z}\) which satisfies one of the following conditions: \(A\) is the parent node in a fork structure on the path (i.e., \(\cdot\gets A\rightarrow\cdot\)); \(A\) is the mediator node in a chain structure on the path (i.e., \(\cdot\to A\rightarrow\cdot\)); or in any collider structure on the path (i.e., \(\cdot\to A\leftarrow\cdot\)), \(\mathbf{Z}\) does not contain \(A\) or its descendants.
Markov PropertyThe Markov property in a DAG \(\mathcal{G}\) states that each node \(X_{i}\) is conditionally independent of its non-descendants given its parents, namely \(X_{i}\perp\!\!\!\perp\mathbf{NonDe}(X_{i})|\,\mathbf{Pa}(X_{i})\), where \(\mathbf{NonDe}(X_{i})\) denotes the non-descendants of \(X_{i}\) excluding itself, and \(\mathbf{Pa}(X_{i})\) denotes the parents of \(X_{i}\). Using the Markov property, we can factorize the joint distribution of all the nodes in the graph into \(P(X_{1},\ldots,X_{N})=\prod_{i=1}^{N}P(X_{i}|\mathbf{PA}(X_{i}))\). To infer the causal graph from probability distributions, a common assumption is faithfulness, namely the validity to infer all the d-separation sets in the graph from the independence relations in the probability distribution. In our work, we also take this broadly taken assumption which holds for most real-world scenarios.
Markov Equivalence of GraphsWe denote two DAGs as Markov equivalent if they induce the same joint distribution \(P(\mathbf{X})\). The set of DAGs that are Markov equivalent to each other is called a Markov equivalence class (MEC). Causal graphs in the same MEC can be easily identified since they have the same skeleton (i.e., undirected edges) and V-structures (i.e., structures in the form of \(A\to B\gets C\) where \(A\) and \(C\) are not connected).
Obviously, there is a one-to-many mapping (i.e., surjection) between the causal graph and statistical distribution. Namely, each causal graph sufficiently determines a statistical distribution, but from a statistical distribution, we cannot necessarily induce a unique causal graph. This is why we say "correlation does not necessarily mean causation".
### Causal Discovery
Causal discovery aims to learn the causal relations by analyzing statistical properties in the observational data (Spirtes et al., 1993, 2000; Glymour et al., 2016; Spirtes and Zhang, 2016; Glymour et al., 2019). It can be achieved through constraint-based methods (Spirtes et al., 2000), score-based methods (Chickering, 2002), or other methods taking advantage of the functional causal models (Shimizu et al., 2006; Hoyer et al., 2008; Zhang and Hyvarinen, 2009).
To fit for the spirit of this paper to infer from correlation (expressed in natural language) to causation, we base our dataset design on the widely-used Peter-Clark (PC) algorithm (Spirtes et al., 2000). The PC algorithm is based on the principles of conditional independence and the causal Markov assumption, which allows it to efficiently identify causal relationships among variables in a given dataset. The algorithm first starts with a fully connected undirected graph among all the variables. Then it removes the edge between two variables if there is an unconditional or conditional independence relationship between them. Afterwards, it orients the directed edges whenever there is a V-structure. And finally, it iteratively checks the direction of the other edges until the entire causal graph is consistent with all the statistical correlations.
## 3 Dataset Construction
We introduce the construction of our dataset in this section. We start with our task formulation for Corr2Cause, and then briefly give an overview of the data generation process, followed by detailed descriptions of each step. We conclude the section with the overall statistics of the dataset.
### Task Formulation
Given a set of \(N\) variables \(\mathbf{X}=\{X_{1},\dots,X_{N}\}\), we have a statement \(\mathbf{s}\) about all the correlations among the variables, and a hypothesis \(\mathbf{h}\) describing the causal relation \(r\) between the pair of variables \(X_{i}\) and \(X_{j}\). The task is to learn a function \(f:(\mathbf{s},\mathbf{h})\mapsto v\) which maps the correlation statement \(\mathbf{s}\) and the causal relation hypothesis \(\mathbf{h}\) to their validity \(v\in\{0,1\}\), which takes the value 0 if this inference is invalid, and the value 1 if this inference is valid.
### Overview of the Data Generation Process
We base the construction our dataset on several concepts of causal inference, including the DGCM, d-separation, and MECs, as introduced in Section 2.
As in the overview of our data generation process in Figure 2, we first choose the number \(N\) of variables (Step 1) and generate all the unique DGCMs with \(N\) nodes (Step 2), which we will introduce in the Section 3.3. Then we collect all the d-separation sets from these graphs to identify MECs (Step 3) in Section 3.4. Then, in Step 4, we create the formal form of data in Section 3.5. For each correspondence of the MEC to causal graphs, we compose the correlation statement based on the statistical relations in the MEC, and hypothesize a causal relation between two variables, and produce the validity \(v=1\) if the hypothesis is a shared property of all causal graphs in the MEC, and \(v=0\) if the hypothesis is not necessarily true for all the MEC graphs. Finally, we introduce the verbalization process in Section 3.6.
### Constructing the Graphs with Isomorphism Checks
The first step of the data generation is to compose the causal graphs, as in Step 1 and 2 of Figure 2. For a set of \(N\) variables \(\mathbf{X}=\{X_{1},\dots,X_{N}\}\), there are \(N(N-1)\) possible directed edges, since each node can link to any node other than itself. To remove cycles in the graph, we make the nodes in topological order, which only allows edges \(X_{i}\to X_{j}\), where \(i<j\). We achieve this by limiting the adjacency matrix of the graph to only having non-zero values above the diagonal, resulting in \(N(N-1)/2\) possible directed edges for the DAGs.
Figure 2: Pipeline of the data construction process.
At the first glance, for \(N\) nodes, there should be \(2^{N(N-1)/2}\) possible DAGs (i.e., the power set of all edges). However, there could be isomorphic graphs in this set. To avoid this, we perform a graph isomorphism check (McKay and Piperno, 2014), and reduce the set so that only unique DAGs are retained, and we show their statistics in Table 1. Although we can handle large graphs, we mostly focus on smaller graphs that can still lead to a reasonably sized dataset, so we empirically set \(N=6\), but future work can use our open-sourced codes to extend to more nodes.
### Programmatically Generating the D-Separation Sets
Based on the set of unique DAGs, we then programmatically generate the d-separation sets by graph theoretical conditions, as in Step 3 of Figure 2. To realize this step, we code an efficient graph-theoretic algorithm to check for all the chain, fork, and collider structures to automatically identify the set of nodes that d-separate each pair of nodes. Using the d-separation sets and the faithfulness assumption, we form the statistical correlations as follows. For each pair of nodes, they are conditionally independent given the variables in the d-separation set. If the d-separation set is empty, then the two nodes are unconditionally independent. If no d-separation set can be found for the two nodes, then they are directly correlated.
Moreover, using the d-separation sets, we are able to cluster causal graphs to MECs. We achieve it by tracing the mapping between the causal graphs and the set of statistical correlations, and backtracking the graphs with the same d-separation sets to group them in the same MEC. We show in Table 1 that each MEC contains on average 2.66 DAGs.
### Composing the Hypotheses and Label
After generating the set of correlations based on the d-separation sets, we now generate the causal hypotheses. For the causal relation \(r\), we focus on six common causal relations between two nodes introduced in Section 2.1: Is-Parent, Is-Child, Is-Ancestor (excluding the parents), Is-Descendant (excluding the children), Has-Confounder (i.e., there exists a confounder, or common cause, of the two nodes), and Has-Collider (i.e., there exists a collider, or common effect, of the two nodes). In this way, the set of hypotheses contains all six meaningful causal relations between every pair of variables, resulting in a total size of \(6\cdot N(N-1)/2=3N(N-1)\) hypotheses for a graph with \(N\) variables.
To generate the ground-truth validity label, we start from the correlation sets in Step 3, then look up all the causal graphs in the same MEC corresponding to the given set of correlations, and check the necessity of the hypothesized causal relation. If the causal relationship proposed in the hypothesis is valid for all causal graphs within the MEC, then we generate the validity \(v=1\); otherwise, we generate \(v=0\). A special case of valid samples is that when the size of the MEC is 1, then there is a bijective mapping between the causal graph and the d-separation sets, so any hypothesis stating the causal properties of that unique causal graph is valid.
### Verbalizing into Natural Language
Finally, as in the last step of Figure 2, we convert all the information above to text data for our Corr2Cause task. For the correlation statement, we verbalize the set of correlations in Step 3 into a natural language statement \(\mathbf{s}\). When two variables can not be d-separated, i.e., \(A\not\perp B\), then we describe them as "\(A\) correlates with \(B\)" since they are directly correlated and cannot be independent by any condition. And if two variables have a valid d-separation set \(\mathbf{C}\), then we describe them as "\(A\) is independent of \(B\) given \(\mathbf{C}\)." In the special case when the d-separation set is empty, we directly say "\(A\) is independent of \(B\)." In addition, we disambiguate the setting by starting the correlation statement with the setup of a closed system of the given variables, and no hidden variables: "Suppose
\begin{table}
\begin{tabular}{c l c c c} \hline \hline \# Nodes & \# Unique DAGs & \# Edges/DAG & \# MECs & \# DAGs/MEC \\ \hline
2 & 2 out of \(2\) & 0.50 & 2 & 1.0 \\
3 & 6 out of \(2^{3}\) & 1.67 & 5 & 1.2 \\
4 & 31 out of \(2^{6}\) & 3.48 & 20 & 1.55 \\
5 & 302 out of \(2^{10}\) & 5.89 & 142 & 2.13 \\
6 & 5,984 out of \(2^{15}\) & 8.77 & 2,207 & 2.71 \\ Total & 6,325 & 8.60 & 2,376 & 2.66 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics about the source causal graphs in our dataset. Given the number of nodes, we report the number of unique DAGs, average number of edges per DAG, number of MECs, and average number of DAGs per MEC.
there is a closed system of \(N\) variables, A, B,... All the statistical relations among these \(N\) variables are as follows:". Finally, to verbalize the hypothesis, we feed the causal relation triplet (\(X_{i}\), \(r\), \(X_{j}\)) into their hypothesis templates in Table 2. For example, we turn the triplet (\(A\), Is-Parent, \(B\)) into "\(A\) directly causes \(B\)", as in the example of Figure 2.
### Statistics of the Resulting Data
We show the statistics of our Corr2Cause dataset in Table 3. Overall, our dataset contains 415,944 samples, with 18.57% in valid samples. The average length of the premise is 424.11 tokens, and hypothesis 10.83 tokens. We split the data into 411,452 training samples, 2,246 development and test samples, respectively. Since the main purpose of the dataset is to benchmark the performance of LLMs, we prioritize the test and development sets to have a comprehensive coverage over all sizes of graphs. Specifically, we iterate through the subset of our data for each \(N\), and split it entirely for only the test and development sets if the data is less than 1K, which is the case for \(N=2\) and \(3\). For the other subsets that are larger, we randomly sample up to 1K or 10% of the data, whichever is smaller, to the test and development sets. We set the cap to be 1K in order to form a reasonable computation budget, since many LLMs are expensive to query in the inference mode. Aside from the test and valid sets, all the rest of the data goes into the training set.
## 4 Experiments
### Experimental Setup
We set up a diverse list of LLMs for the experiments on our Corr2Cause dataset. To _test existing LLMs_, we first include six commonly used BERT-based NLI models in the transformers library Wolf et al. (2020) with the most number of downloads: BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), BART Lewis et al. (2020), DeBERTa He et al. (2021), DistilBERT Sanh et al. (2019), and DistilBERT Shieifer and Rush (2020). Apart from these BERT-based NLI models, we also evaluate the general-purpose autoregressive LLMs based on GPT Radford et al. (2019): GPT-3 Ada, Babbage, Curie, Davinci Brown et al. (2020); its instruction-tuned versions Ouyang et al. (2022), text-davinci-001, text-davinci-002, and text-davinci-003; and GPT-3.5 (i.e., ChatGPT), and the latest GPT-4 (OpenAI, 2023), using the OpenAI API2 with temperature 0. We also evaluate the recent, more efficient models LLaMa Touvron et al. (2023) and Alpaca Taori et al. (2023).
\begin{table}
\begin{tabular}{l l} \hline \hline Causal Relation & Hypothesis Template \\ \hline Is-Parent & \{Var i\} directly causes \{Var j\}. \\ Is-Ancestor & \{Var i\} causes something else which causes \{Var j\}. \\ Is-Child & \{Var j\} directly causes \{Var i\}. \\ Is-Descendant & \{Var j\} is a cause for \{Var i\}, but not a direct one. \\ Has-Collider & There exists at least one collider (i.e., common effect) of \{Var i\} and \{Var j\}. \\ Has-Confounder & There exists at least one confounder (i.e., common cause) of \{Var i\} and \{Var j\}. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Templates for each causal relation in the hypothesis. We use \{Var i\} and \{Var j\} as placeholders for the two variables.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & Overall & \multicolumn{6}{c}{Statistics by the Number of Nodes \(N\)} \\ \cline{3-7} & & \(N=2\) & \(N=3\) & \(N=4\) & \(N=5\) & \(N=6\) \\ \hline \# Samples & 415,944 & 24 & 180 & 1,440 & 17,040 & 397,260 \\ \# Test & 2,246 & 12 & 90 & 144 & 1,000 & 1,000 \\ \# Dev & 2,246 & 12 & 90 & 144 & 1,000 & 1,000 \\ \# Train & 411,452 & 0 & 0 & 1,152 & 15,040 & 395,260 \\ \# Tokens/Premise & 424.11 & 31.5 & 52.0 & 104.0 & 212.61 & 434.54 \\ \# Tokens/Hypothesis & 10.83 & 10.83 & 10.83 & 10.83 & 10.83 & 10.83 \\ \% Valid Labels & 18.57 & 0.00 & 3.33 & 7.50 & 13.01 & 18.85 \\ Vocab Size & 65 & 49 & 53 & 55 & 57 & 61 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Statistics of our Corr2Cause dataset, and by subsets. We report the total number of samples; splits of the test, developement and training sets; number of tokens per premise and hypothesis; percentage of the entailment labels, and vocabulary size. Note that the number of unique graphs and MECs are in Table 1.
When inspecting the behavior of _finetuned models_, we adopt a large set of models, including GPT-based models (GPT-3 Ada, Babbage, Curie, and Davinci) using the OpenAI finetuning API for classification,3 BERT-based models from scratch (BERT-Base, BERT-Large, RoBERTa-Base, and RoBERTa-Large), and BERT-Based NLI models (BERT-Base MNLI, BERT-Large MNLI, RoBERTa-Base MNLI, and RoBERTa-Large MNLI) using the transformers library (Wolf et al., 2020). Our training details are available in Appendix A.
Footnote 3: [https://platform.openai.com/docs/guides/fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
For the _random baselines_, we provide "always majority" to predict the majority class 100% of the time, "random (uniform)" which randomly samples a label with 50% chance for each, and "random (proportional)" which samples a label from a Bernouli distribution proportional to the development set label distribution.
### The Corr2Cause Skill in Existing LLMs
We show the performance of LLMs in Table 4. We can see that pure causal inference is a very challenging task across all existing LLMs. Among all the LLMs, the best performance is 33.38% F1 by BART MNLI, which is even higher than latest GPT-based model, GPT-4. Notably, many models are worse than random guess, which means that they totally fail at this pure causal inference task.
### Finetuned Performance
Next, we address the question: Can we re-purpose LLMs to learn this task?
The experimental results in Table 5a of 12 models finetuned on our Corr2Cause seem very strong at first sight. Most models see a substantial increase, among which the finetuned BERT-based NLI models demonstrate the strongest performance. The best-performing one, RoBERTa-Large MNLI, achieves 94.74% F1 score on this task, as well as very high precision, recall and accuracy scores.
### Fine-Grained Performance by Causal Relation
In addition to the overall results mentioned above, we also conduct a fine-grained analyze to check the performance of the strongest model, RoBERTa-Large MNLI, by our six causal relation types.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & F1 & Precision & Recall & Accuracy \\ \hline _Random Baselines_ & & & & \\ Always Majority & 0.0 & 0.0 & 0.0 & 84.77 \\ Random (Proportional) & 13.5 & 12.53 & 14.62 & 71.46 \\ Random (Uniform) & 20.38 & 15.11 & 31.29 & 62.78 \\ \hline _BERT-Based Models_ & & & & \\ BERT MNLI & 2.82 & 7.23 & 1.75 & 81.61 \\ RoBERTa MNLI & 22.79 & 34.73 & 16.96 & 82.50 \\ DeBERTa MNLI & 14.52 & 14.71 & 14.33 & 74.31 \\ DistilBERT MNLI & 20.70 & 24.12 & 18.13 & 78.85 \\ DistilBERT MNLI & 26.74 & 15.92 & 83.63 & 30.23 \\ BART MNLI & **33.38** & 31.59 & 35.38 & 78.50 \\ \hline _LLaMa-Based Models_ & & & & \\ LLaMa-6.7B & 26.81 & 15.50 & 99.42 & 17.36 \\ Alpaca-6.7B & 27.37 & 15.93 & 97.37 & 21.33 \\ \hline _GPT-Based Models_ & & & & \\ GPT-3 Ada & 0.00 & 0.00 & 0.00 & 84.77 \\ GPT-3 Babbage & 27.45 & 15.96 & 97.95 & 21.15 \\ GPT-3 Curie & 26.43 & 15.23 & 100.00 & 15.23 \\ GPT-3 Davinci & 27.82 & 16.57 & 86.55 & 31.61 \\ GPT-3 Instruct (text-davinci-001) & 17.99 & 11.84 & 37.43 & 48.04 \\ GPT-3 Instruct (text-davinci-002) & 21.87 & 13.46 & 58.19 & 36.69 \\ GPT-3 Instruct (text-davinci-003) & 15.72 & 13.4 & 19.01 & 68.97 \\ GPT-3.5 & 21.69 & 17.79 & 27.78 & 69.46 \\ GPT-4 & 29.08 & 20.92 & 47.66 & 64.60 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Overall performance. We report F1 (main metric), precision, recall and accuracy. For the main metric, F1 score, we use the **bold** font to highlight the overall best performance, and underline to highlight the best performance within each category of models.
As in Table 5(a), the model is very good at judging relations such as Is-Parent, Is-Descendant and Has-Confounder, all with more than 96% F1 scores, whereas it is several points weaker on the Has-Collider relations. This could be due to that the collider relation is the most special type, requiring identification of the V-structure based on both the unconditional independence based on the two variables only and correlations whenever conditioned on a common descendant.
### Robustness Analysis
Looking at the very high performance of the finetuned models, we raise the next question: Did the models really robustly learn the causal inference skills?
Two Robustness TestsWe design two simple robustness tests: (1) paraphrasing, and (2) variable refactorization. For (1) paraphrasing, we simply paraphrase the hypothesis by changing the text template for each causal relation to some semantically-equivalent alternatives in Appendix B. For (2) variable refactorization, we reverse the alphabet of the variable names, namely flipping A, B, C, to Z, Y, X and so on. The inspiration behind the two robustness tests comes from the spurious correlation analysis described in Appendix C.
Specifically, we adopt the common setup of text adversarial attack (Morris et al., 2020; Jin et al., 2020) to preserve the training set and keep the same saved models, but run the inference n the perturbed test set. In this way, we separate the possibilities of the models only overfitting on the training data vs. mastering the reasoning skills.
Results After PerturbationWe can see from Table 4(b) that all the models drop drastically, by up to 39.29 when we paraphrase the test set, and they decrease substantially by up to 58.38 when we refactor the variable names. The best-performing model, RoBERTa-Large MNLI, is especially sensitive towards paraphrasing, demonstrating the most drop among all models; however, it is the most robust against the variable refactorization, maintaining a high F1 score of 67.87. We conduct fine-grained analysis for RoBERTa-Large MNLI under perturbation in Table 5(b). We can see the the main source of the performance drop of the model comes from the two classes, Is-Ancestor (decreasing to 45.45%) and Is-Descendant (decreasing to 29.41%), while the other classes stay relatively robust, keeping their F1 scores over 70%.
From this analysis, we make the following suggestions to future studies testing this Corr2Cause skill of LLMs. First, it is safe to use it as a test set to benchmark existing LLMs' performance, since the data we generate is out-of-distribution from the training data of the current LLMs. Then, when testing finetuned models, it is very important to accompany adversarial attack together with the i.i.d. test set. We also provide our perturbed versions of the test set in our data for future work to test the generalizability skill.
\begin{table}
\end{table}
Table 5: Performance of finetuned models on the original test set and perturbed test sets.
## 5 Related Work
Existing Causal Reasoning TasksA large body of existing research of causal reasoning in NLP focuses on leveraging empirical knowledge to do tasks such as inferring the cause and effect of why an agent perform certain tasks (Sap et al., 2019), the motivation and emotional reaction in a social context (Sap et al., 2019), how people achieve a given goal with a set of concrete steps (Zhang et al., 2020), the development of a story given a different beginning (Qin et al., 2019), and how in general LLMs serve as a knowledge base of cause and effect (Willig et al., 2023; Kiciman et al., 2023). In contrast, our Corr2Cause task focuses on the pure causal inference skill of models, which is a knowledge-dependent reasoning skill based on formally correct rules from causal inference.
Existing Logical and Inference TasksAnother related area of literature is logical and inference tasks. A well-established task is natural language inference (NLI), which identifies the semantic relationship between a pair of sentences (MacCartney and Manning, 2008; Bowman et al., 2015). NLI datasets mainly focus on the set and paraphrase relations, such as "a group of boys are playing football" can entail "some guys are playing football," where "boys" are a sub-concept of "guys" and "a group of" and "some" are paraphrases. Existing datasets cover entailment in news articles (Dagan et al., 2006), image captions (Bowman et al., 2015), and across multiple genres (Williams et al., 2018). Recently, there has been increasing efforts to extend the inference task to various logical inference skills such as deductive logic and propaganda techniques (Jin et al., 2022; Alhindi et al., 2022). Our Corr2Cause dataset is the first dataset testing the correlation-to-causation inference skill, which is unique of its type.
## 6 Limitations and Future Work
We identify several limitations of this work and open future directions: First, in the context of this work, we limit the causal graphs to two to six nodes, but future work can feel free to explore larger graphs. Another aspect is that we do not assume hidden confounders in this inference problem, so we welcome future work to generate an even more challenging dataset to infer the existence of hidden confounders, analogous to the causal discovery algorithm of fast causal inference (FCI) (Spirtes et al., 2000). Finally, a lot of motivation behind proposing this task is inspired by the problem of invalid reasoning patterns in our daily reasoning (Jin et al., 2022), which could fertilize the ground for more pervasive spread of fake news. We believe false causal inference is a prevalent type of fallacious beliefs, and welcome future work to connect the idea of this benchmark to more real-world false beliefs based on confusing correlation with causation.
## 7 Conclusion
In this work, we introduced a novel task, Corr2Cause, to infer causation from correlation, and collected a large-scale dataset of more than 400K samples. We evaluated an extensive list of LLMs on this new task, and showed that off-the-shelf LLMs perform poorly on this task. We also show that it is possible to re-purpose LLMs on this task by finetuning, but future work needs to be aware of the out-of-distribution generalization problem. To avoid the Goodhart's law, we recommend using this dataset to benchmark the pure causal inference skills for LLMs that have not seen this dataset. Given the limited reasoning abilities of current LLMs, and the difficulty of separating actual reasoning from training-corpus-derived knowledge, it is imperative that our community focus on work aiming to accurately disentangle and measure both abilities. We believe that the present work is a first such step.
\begin{table}
\end{table}
Table 6: Fine-grained analysis of the best-performing model, RoBERTa-Large MNLI.
## Acknowledgment
We thank Riley Goodside for valuable suggestions to improve the our prompts to LLMs. We thank Luigi Gresele and Amir Hossein Karimi for their suggestions to help us improve the formulation of our causal discovery questions.
This material is based in part upon works supported by the German Federal Ministry of Education and Research (BMBF): Tubingen AI Center, FKZ: 01IS18039B; by the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645; by the Precision Health Initiative at the University of Michigan; by the John Templeton Foundation (grant #61156); by a Responsible AI grant by the Haslertsifung; and an ETH Grant (ETH-19 21-1). Zhijing Jin is supported by PhD fellowships from the Future of Life Institute and Open Philanthropy. We also thank OpenAI for granting Zhijing quota to their API of GPT series through the Researcher Access Program.
|
2301.10522 | Robot Subset Selection for Swarm Lifetime Maximization in Computation
Offloading with Correlated Data Sources | Consider robot swarm wireless networks where mobile robots offload their
computing tasks to a computing server located at the mobile edge. Our aim is to
maximize the swarm lifetime through efficient exploitation of the correlation
between distributed data sources. The optimization problem is handled by
selecting appropriate robot subsets to send their sensed data to the server. In
this work, the data correlation between distributed robot subsets is modelled
as an undirected graph. A least-degree iterative partitioning (LDIP) algorithm
is proposed to partition the graph into a set of subgraphs. Each subgraph has
at least one vertex (i.e., subset), termed representative vertex (R-Vertex),
which shares edges with and only with all other vertices within the subgraph;
only R-Vertices are selected for data transmissions. When the number of
subgraphs is maximized, the proposed subset selection approach is shown to be
optimum in the AWGN channel. For independent fading channels, the max-min
principle can be incorporated into the proposed approach to achieve the best
performance. | Siqi Zhang, Na Yi, Yi Ma | 2023-01-25T11:03:54Z | http://arxiv.org/abs/2301.10522v1 | Robot Subset Selection for Swarm Lifetime Maximization in Computation Offloading with Correlated Data Sources
###### Abstract
Consider robot swarm wireless networks where mobile robots offload their computing tasks to a computing server located at the mobile edge. Our aim is to maximize the swarm lifetime through efficient exploitation of the correlation between distributed data sources. The optimization problem is handled by selecting appropriate robot subsets to send their sensed data to the server. In this work, the data correlation between distributed robot subsets is modelled as an undirected graph. A least-degree iterative partitioning (LDIP) algorithm is proposed to partition the graph into a set of subgraphs. Each subgraph has at least one vertex (i.e., subset), termed representative vertex (R-Vertex), which shares edges with and only with all other vertices within the subgraph; only R-Vertices are selected for data transmissions. When the number of subgraphs is maximized, the proposed subset selection approach is shown to be optimum in the AWGN channel. For independent fading channels, the max-min principle can be incorporated into the proposed approach to achieve the best performance.
## I Introduction
By robot swarm, we refer to the definition provided by the US Army Academy [1, Sec. II]:
**Definition 1** (Robot Swarm): _A robot swarm is a group of (three or more) robots that perform tasks cooperatively while receiving limited or no control from human operators. The term 'cooperatively' is defined in the way that involves mutual assistance in working towards a common goal. It does not necessarily imply or require communication or explicit coordination between entities._
This technology has found rich applications including disaster recovery, defense, reconnaissance, inspection, mapping, farming, food management, and space systems.
In light of _Definition 1_, we consider a robot swarm where a set of mobile robots cooperatively perform a single task. Every robot can conduct environment sensing and if required send their sensed data to a computing server through a connected access point (AP). The server conducts processing on the received data and sends computing outcomes to the robots. This work falls into the scope of mobile edge computing (MEC) in the wireless communication domain. The motivation of offloading computing tasks to the edge server lies in (see [2]): _1)_ data processing (computing) tasks are often energy hungry and thus not suitable for battery-powered mobile robots; _2)_ edge servers are often equipped with powerful computing units (such as GPUs, APUs or NPUs), which can provide parallel and low-latency computing services.
Our prior arts analysis shows that most of current MEC computation-task offloading works are focused on resource allocation problems in single-user equivalent computing scenarios (e.g. [3, 4, 5, 6]), i.e., despite multi-user communications in reliability, computing tasks (or data) from different devices are processed individually at the edge server. Recently, multi-user edge computing has attracted increasing interests, where data from different devices must be jointly processed to yield the computing outcome. In this case, users are branch and bound in the procedure of computation-task offloading [7].
This work falls into the scope of multi-user edge computing, with the aim of maximizing the lifetime of robot swarm exploiting the data source correlation between mobile robots. Major contributions of this work include: _1)_ the development of a novel robot-subset model using an undirected graph, describing the data source correlation between spatially distributed robots; _2)_ the establishment of a novel cost function, formulating the optimization problem for the swarm-lifetime maximization; _3)_ the development of a novel graph partitioning and subset selection approach to tackle the swarm-lifetime maximization problem. It is shown that, when the number of subgraphs is maximized, the proposed subset selection approach is optimum in the AWGN channel. As far as independent flat-fading channels are concerned, the max-min principle can be incorporated into the proposed approach to maximize the swarm lifetime. The above findings are elaborated through well-designed computer simulations.
## II System Model and Problem Formulation
In this section, we introduce the basic concept of robot subset, data correlation, communication model, as well as the formulation of optimization problem.
### _Robot, Subset, and Data Correlation Model_
Consider a robot swarm where a set of mobile robots (\(N\)) collaboratively perform a single task. Mathematically, a robot is defined as a tuple, \(s\triangleq(x,\mathcal{E},\varepsilon)\), where \(x\) stands for the sensed data, \(\mathcal{E}\) for the remaining energy of the robot, and \(\varepsilon\) for the energy consumption per job 1. Then, the robot swarm is described as a set: \(\mathcal{S}\triangleq\{s_{n}|n=1,...,N\}\). Provided that all robots send their data to the MEC server, the multi-user edge computing process can be described as
Footnote 1: For the sake of concise presentation, \(\varepsilon\) is assumed to be a constant.
\[y=f(x_{n}|n=1,...,N),\ y\in\mathcal{A}\triangleq\{a_{i}|l=1,...,L\}, \tag{1}\]
where \(x_{n}\), \({}_{n=1,...,N}\), denotes the data from the \(n^{th}\) robot, and \(y\) denotes the computing outcome. Following _Definition 1_, we
assume there is no human intervention in the computing and decision making. Then, the outcome \((y)\) must belong to a finite-alphabet set defined by \(\mathcal{A}\), which specifies all possible actions (i.e., \(a_{l,l=1,...,L}\)) of the robot swarm; \(L\) is the total number of actions. Since \((\mathcal{E},\varepsilon)\) has no impact on \(y\), (1) can have the following notation-simplified form
\[y=f(\mathcal{S}),\ y\in\mathcal{A}. \tag{2}\]
The above description is for the generic case. However, in many practical applications, the MEC does not need the whole set (\(\mathcal{S}\)) to form the computing outcome. This is because robots' data are correlated and carry considerable redundancy. This insight motivates us to propose the concept of robot subset in the multi-user edge computing.
**Definition 2** (Robot Subset): _Consider a subset of robots within the swarm, denoted by \(\Omega_{m}\subset\mathcal{S}\) with \(|\Omega_{m}|=K_{m}\), \({}_{m=1,...,M}\), where \(|\cdot|\) stands for the cardinality of the set, and \(M\) for the number of subsets. The MEC outcome corresponding to the subset \(\Omega_{m}\) is described by_
\[y_{m}=f(\Omega_{m}),\ y_{m}\in\mathcal{A}. \tag{3}\]
_It is assumed that there exists a \(K(<N)\), for \(K_{m}\geq K\), the result \((y_{m}=y)\) always holds 2. Using the terminology of data science [9], the robot subset (\(\Omega_{m}\)) is said to have the same value of information as the whole set (\(\mathcal{S}\))._
Footnote 2: This assumption turns out to be true for robot (or sensor) networks where robots (or sensors) are closely located in a geographical area (see [8]).
**Definition 2** implies that the whole set (\(\mathcal{S}\)) can be divided into \(M\) robot subsets, with each having the same value of information. Then, for each computing task, only one of subsets is required to send their sensed data to the MEC. This will result in significant saving in communication energy and longer swarm lifetime. The issue of how to divide \(\mathcal{S}\) into subsets will be addressed in Section IV-A.
### _Communication Model_
For every single task, the MEC chooses one of robot subsets (denoted by \(\Omega_{m}\)), requesting them send their data to the MEC through \(|\Omega_{m}|\) orthogonal sub-channels. Without loss of generality, we assume \(|\Omega_{m}|=K,\forall m\). For a robot \(s_{k}\in\Omega_{m}\), the channel capacity between this robot and the AP is denoted as \(C(\alpha_{k},p_{k})\), where \(\alpha_{k}\) is the channel quality, and \(p_{k}\) is the transmit power which is capped by a physical constraint (\(p_{k}\leq P_{k}\)). Then, the robot subset (\(\Omega_{m}\)) must fulfill the following criteria:
**Criterion 1**: \(\forall s_{k}\in\Omega_{m}\)_, their transmission rate (denoted by \(R_{k}\)) must fulfill_
\[R_{k}<C(\alpha_{k},p_{k})\leq C(\alpha_{k},P_{k}); \tag{4}\]
_or otherwise the robot (\(s_{k}\)) or the subset (\(\Omega_{m}\)) must not be chosen due to the capacity outage._
**Criterion 2**: _The data transmission must be completed within a time duration (\(T_{\rm c}\)) for the sake of latency._
**Criterion 3**: _All robots must have enough remaining energy to execute the allocated job._
After the computing process (3), the outcome (\(y\)) is broadcasted to all robots. It is assumed that the broadcast is reliable and efficient. This is the commonly used assumption in the literature (e.g., [3, 10, 11]), which allows us focusing on the key problem of interest.
### _Optimization Problem_
This work aims to maximize the swarm lifetime through optimum selection of robot subsets for the data transmission. The swarm lifetime is defined as:
**Definition 3** (Swarm Lifetime): _A task assigned to the robot swarm requires collaborative work amongst all the robots. Should any robot stop working due to the short of device energy results in task failure of the whole robot swarm. Therefore, the swarm lifetime counts as the maximum number of tasks a robot swarm can complete all the way from the beginning to the end._
Consider the life cycle of the \(i^{th}\) task conducted by the robot swarm. Robots have the following energy consumption model:
\[\mathcal{E}_{k}(i)=\mathcal{E}_{k}(i-1)-p_{k}(i)T_{\rm c}-\epsilon-\varepsilon, \ \forall s_{k}\in\Omega(i), \tag{5a}\] \[\mathcal{E}_{\ell}(i)=\mathcal{E}_{\ell}(i-1)-\varepsilon,\ \forall s_{\ell}\in \overline{\Omega(i)}, \tag{5b}\]
where \(\epsilon\) is the energy consumption for communication-related computation such as the source and channel coding, and the subscript \((\cdot)_{\ell}\) is the index for unselected robots. Note that we shall have the condition:
\[\mathcal{E}_{k}(i)\geq 0,\mathcal{E}_{\ell}(i)\geq 0,\forall k,\ell; \tag{6}\]
or otherwise, the \(i^{th}\) task won't happen, and the \((i-1)^{th}\) task would be the last task assigned to the robot swarm. Therefore, whether or not we will have the \(i^{th}\) task, this depends on the strategy of robot selection in the previous tasks.
Define \(\Phi(i-1)\triangleq(\Omega(1),\Omega(2),...,\Omega(i-1))\) with \(\Phi(0)=\emptyset\). We further define \(\Gamma(\Phi(i-1))\) the strategy of robot selection which yields
\[\Gamma(\Phi(i-1))=\left\{\begin{array}{ll}i,&(\ref{eq:1})\ \rm{is\ true}\\ 0,&(\ref{eq:2})\ \rm{is\ false}\end{array}\right. \tag{7}\]
According to _Definition 3_, the swarm-lifetime maximization problem has the following objective function
\[\Phi^{\star}(i-1)=\mathop{\arg\max}_{\Phi(i-1)}\ \Gamma(\Phi(i-1)). \tag{8}\]
This is the optimization problem we strive to address in Section III.
In the scope of wireless sensor networks (WSNs), the problem of system lifetime maximization has already been extensively studied; see the survey paper [12] and the references therein. However, the lifetime of WSN is very different from that of robot swarm in manifold: _1)_ WSNs have no task to execute, i.e., the term \(\varepsilon\) won't appear in the energy cost function (5); _2)_ the life of WSN continues when one of sensors dies, i.e., the condition (6) does not necessarily hold; _3)_ sensors often have their data highly or fully correlated, and thus the lifetime maximization problem can be handled
through a sensor-level selection protocol; there is no multi-user edge computing problem involved. All of these differences render our robot swarm lifetime maximization a novel problem of optimization.
## III The Method for Robot Swarm Lifetime Maximization
### _Analysis of The Optimization Problem_
The implication of (8) is that the lifetime maximization is equivalent to the convergence problem of an iterative algorithm. The only difference is that we want the convergence to be as slow as possible. Specifically, (5b) can be written into
\[\mathcal{E}_{\ell}(i)=\mathcal{E}_{\ell}(0)-i\varepsilon,\ \forall s_{\ell} \in\overline{\Omega}. \tag{9}\]
It means that \(\mathcal{E}_{\ell}(i)\) features linear convergence with the rate of \(\varepsilon\). A bit complicated case is (5a), where \(E_{k}(i)\) does not converge linearly. Nevertheless, for every single robot, we can present their energy consumption in an universal model
\[\mathcal{E}_{n}(i)=\mathcal{E}_{n}(0)-\zeta_{n}(i)-i\varepsilon, \tag{10}\]
and \(\zeta_{n}(i)\) is defined by
\[\zeta_{n}(i)\triangleq\sum_{i^{\prime}=0}^{i}\sum_{j\in\Psi_{n}}(p_{k}(i^{ \prime})T_{\mathrm{c}}+\epsilon)\delta(i^{\prime}-j), \tag{11}\]
where \(\Psi_{n}\) is the set collecting all iteration indexes when the \(n^{th}\) robot is chosen to send their data, and \(\delta(\cdot)\) is the Dirac Delta function [13].
Define \(e_{n}(i)\triangleq\mathcal{E}_{n}(0)-\zeta_{n}(i)\). A robot with smaller \(e_{n}(i)\) has a shorter lifetime. According to _Definition 3_, the swarm lifetime is determined by the robot with the shortest lifetime. Therefore, the problem of swarm lifetime maximization is equivalent to maximizing the minimum of \(e_{n}(i),\forall n\), i.e.,
\[\Phi^{*}(i-1)=\operatorname*{arg\,max}_{\Phi(i-1)}\Bigl{(}\min(e_{1}(i),e_{2} (i),...,e_{N}(i))\Bigr{)}. \tag{12}\]
The max-min optimization problem in (12) is well-known as NP-hard (see [14]). However, it is possible to find good sub-optimum solutions based on some relaxed conditions. We will tackle this problem step by step all the way from the AWGN channel to the fading channel.
### _Optimization in The AWGN Channel_
In the context of AWGN channel, the channel quality measure, \(\alpha_{n}\), is not robot dependent. Instead, it can be simply set as: \(\alpha_{n}=1,\forall n\). Then, the channel capacity is not a criterion for selecting a robot subset. The key parameters are mainly the remaining energy \(\mathcal{E}_{n}\) and the subsets \(\Omega_{m},\forall n,m\). Next, we will discuss about two cases when robots have or do not have identical initial state of the remaining energy.
#### Iii-B1 The case with identical state of the remaining energy
**Lemma 1**: _Suppose: c1) \(\mathcal{E}_{n}(0),\forall n\), are identical; c2) \(\Omega_{c}\triangleq\Omega_{1}\cap\Omega_{2}\cdots\cap\Omega_{M}\neq\emptyset\). The swarm lifetime (\(i\)) is given by_
\[i=\left\lfloor\frac{\mathcal{E}(0)}{pT_{\mathrm{c}}+\epsilon+\varepsilon} \right\rfloor, \tag{13}\]
_where \(\mathcal{E}\) and \(p\) have their index omitted since they are identical for all robots, and \(\lfloor.\rfloor\) stands for the integer floor._
Given the condition c1), robot subsets are the only influencing factors for the lifetime maximization. Given the condition c2), no matter which subset is chosen, the common part of all subsets (\(\Omega_{c}\)) is always chosen. Therefore, robots within \(\Omega_{c}\) have the shortest lifetime, and their energy consumption model (10) becomes
\[\mathcal{E}(i)=\mathcal{E}(0)-i(pT_{\mathrm{c}}+\epsilon+\varepsilon). \tag{14}\]
Given \(\mathcal{E}(i)\geq 0\), solving this inequality leads to (13).
**Lemma 2**: _Given the conditions c1) and c3) \(\Omega_{m_{1}}\cap\Omega_{m_{2}}=\emptyset,\forall m_{1}\neq m_{2}\), the swarm lifetime (\(i\)) is upper-bounded by_
\[i\leq\frac{\mathcal{E}(0)-\lceil(i)/(M)\rceil\left(pT_{\mathrm{c}}+\epsilon \right)}{\varepsilon}, \tag{15}\]
_where \(\lceil\cdot\rceil\) stands for the integer ceiling._
With the conditions c1) and c3), the best strategy for subset selection is round robin because all subsets are equal. In this case, the energy consumption model generally stays the same as (10), but the term \(\zeta_{n}(i)\) becomes a robot-index independent term
\[\zeta(i)=\left\lceil\frac{i}{M}\right\rceil(pT_{\mathrm{c}}+\epsilon). \tag{16}\]
This expression describes the subsets which have been chosen for the most of times. In other words, robots within those subsets have the shortest lifetime. Due to \(\mathcal{E}(i)\geq 0\), we immediately have the inequality (15). The upper bound (15) does not offer a closed-form solution for \(i\). Nevertheless, it is very easy to determine \(i\) through line searching.
**Lemma 1** & 2 exhibits the maximum swarm lifetime only for two special cases of robot subsets. Nevertheless, they lay the foundation for us to address the lifetime maximization in a generic case, for which we model the subset selection as a graph partitioning problem.
**Definition 4** (Graphical Model of Subsets): _Define an undirected graph \(G(V,E)\), where \(V\) denotes the set of vertices and \(E\) the set of edges. There are \(M\) vertices in the graph (i.e., \(|V|=M\)) which are corresponding to \(M\) robot subsets. The edge between two vertices reflects the presence of common part between two corresponding subsets (i.e., the edge between the vertices \(m_{1}\) and \(m_{2}\) exists when \(\Omega_{m_{1}}\cap\Omega_{m_{2}}\neq\emptyset\)). A degree matrix \(\mathbf{D}\) is formed to record the number of edges linked to vertices (see the definition of \(\mathbf{D}\) in the graph theory [15])._
**Definition 5** (Graph Partitioning Problem): _Our graph partitioning problem is stated by: partitioning an undirected graph into a maximal number of subgraphs (\(\overline{M}\)) fulfilling the criterion: c4) each subgraph has at least one vertex that shares edges with and only with all other vertices within the subgraph. We call this vertex representative vertex (R-Vertex)._
According to _Definition 4_, the condition c2) can now be described by a complete graph \(G(M,\frac{(M)(M-1)}{2})\), and the condition c3) can be described by a null graph \(G(M,0)\). For the complete graph, we cannot further partition the graph, and thus have \(\overline{M}=1\). For the null graph, it is naturally partitioned into \(\overline{M}=M\) subgraphs. For both cases, any vertex is a R-Vertex. When conducting subset selection at the subgraph level, it is trivial to find that the swarm lifetime is determined by (14) and (15), respectively. Applying the graph partitioning concept onto the generic case, the following result can be obtained.
**Theorem 1**: _For the generic case when the conditions c1) and c4) hold, the swarm lifetime is upper-bounded by_
\[i\leq\frac{\mathcal{E}(0)-\left\lceil(i)/(\overline{M})\right\rceil(pT_{ \mathrm{c}}+\epsilon)}{\varepsilon}. \tag{17}\]
Proof:: Suppose that a graph can be maximally partitioned into \(\overline{M}\) subgraphs following the criterion specified in c4). Subset selection is applied on the subgraph level in a round robin manner. For each subgraph, only (one of) the R-Vertex (R-Vertices) is chosen for the data transmission. Since R-Vertices from different subgraphs are not directly connected to each other, this is the equivalent case to that in _Lemma 2_. Following the discussion in the proof of _Lemma 2_, the swarm lifetime is determined by (17).
The reason of choosing R-Vertices is to ensure that there is no overlap between selected subsets; or otherwise, some robots would be chosen multiple times in one round, and such will shorten the swarm lifetime; as have already been discussed in _Lemma 2_. Given \(\overline{M}\) being the maximum number of subgraphs, the right-hand side of (17) is maximized. _Theorem 1_ is therefore proved.
**Theorem 1** shows that the key for lifetime maximization is to find \(\overline{M}\) through an optimum graph-partitioning algorithm. However, graph partitioning is usually a NP-hard problem, and we can hardly claim an optimum solution [16]. In this paper, we propose a least-degree iterative partitioning (LDIP) algorithm that can offer a good sub-optimum solution for the graph partitioning.
Fig. 1 illustrates the concept of the LDIP algorithm. Basically, the algorithm consists of three steps:
_Step 1 (Least degree finding):_ Sort the diagonal of the degree matrix \(\mathbf{D}\) in an ascent order; the first diagonal entry of \(\mathbf{D}\) (denoted by \(D_{(0,0)}\)) has the least degree.
_Step 2 (Subgraph forming):_ The vertex (subset) corresponding to \(D_{(0,0)}\) is chosen to be the R-Vertex of the subgraph under construction. All vertices that are directly connected to the R-Vertex are included into this subgraph.
_Step 3 (Degree matrix updating):_ Update the degree matrix \(\mathbf{D}\) (dimensional reduction) by eliminating all vertices that have been chosen at _Step 2_. Repeat _Step 1_ until \(\mathbf{D}\) reduces to a scalar equaling to 0, i.e., there is no vertex left.
The sub-optimality of LDIP lies in the fact that there may exist multiple vertices sharing the least degree at each iteration. An optimum solution requires to visit all possible cases for each iteration, resulting in exponential searching complexity. Instead, LDIP randomly chooses a vertex at each iteration, offering linear searching complexity at the price of optimality.
#### Iii-B2 The case with non-identical state of the remaining energy
Consider the condition: c5) \(\mathcal{E}_{n}(0),\forall n\), can be different, the lifetime maximization problem becomes much more complicated because it involves multiple parameters (i.e., \(\mathcal{E}_{n}(0)\) and \(\Omega_{m}\)) in the optimization. The round-robin algorithm is no longer optimum since subsets should not be equally treated. Again, we start our study from the two special cases described in _Lemma 1 & 2_.
**Theorem 2**: _Provided the conditions c2) and c5), the swarm lifetime (\(i\)) is upper-bounded by_
\[i\leq\left\lfloor\frac{\min(\mathcal{E}_{\Omega_{c}}(0))}{pT_{\mathrm{c}}+ \epsilon+\varepsilon}\right\rfloor, \tag{18}\]
_where \(\min(\mathcal{E}_{\Omega_{c}}(0))\) is corresponding to the robot in the set \(\Omega_{c}\) who has the minimum \(\mathcal{E}(0)\). A necessary condition for this upper bound to be achievable is_
\[\left\lfloor\frac{\min(\mathcal{E}_{\Omega_{c}}(0))}{pT_{\mathrm{c}}+\epsilon+ \varepsilon}\right\rfloor\leq\sum_{m=1}^{M}\left\lfloor\frac{\min(\mathcal{E}_{ \Omega_{m}\setminus\Omega_{c}}(0))-i\varepsilon}{pT_{\mathrm{c}}+\epsilon} \right\rfloor. \tag{19}\]
Proof:: We start by proving the upper bound (18). Following the proof of _Lemma 1_, no matter which robot subset is chosen, robots within the set \(\Omega_{c}\) will always be selected, and they have the energy consumption model like (14), i.e.,
\[\mathcal{E}_{\ell}(i)=\mathcal{E}_{\ell}(0)-i(pT_{\mathrm{c}}+\epsilon+ \varepsilon),\ell\in\Omega_{c}. \tag{20}\]
Then, the lifetime of \(\Omega_{c}\) is limited by the robot with the minimum \(\mathcal{E}(0)\), which has the form (18). According to _Definition 3_, the swarm lifetime cannot go beyond this upper limit.
We further consider robots in the set \(\Omega_{m}\setminus\Omega_{c},\forall m\). Their energy consumption model is given by
\[\mathcal{E}_{k}(i)=\mathcal{E}_{k}(0)-w_{m}(pT_{\mathrm{c}}+\epsilon)-i \varepsilon,\ k\in\Omega_{m}\setminus\Omega_{c}, \tag{21}\]
where \(w_{m}\) denotes how many times the robot subset is chosen. Then, the lifetime of the robot set \(\Omega_{m}\setminus\Omega_{c}\) is given by
\[w_{m}=\left\lfloor\frac{\min(\mathcal{E}_{\Omega_{m}\setminus\Omega_{c}}(0))-i \varepsilon}{pT_{\mathrm{c}}+\epsilon}\right\rfloor. \tag{22}\]
Fig. 1: The concept of the LDIP algorithm for graph partitioning. Red ellipses are subgraphs; blue-faced circles are R-Vertices of their subgraphs.
Consider an optimistic case when 3
Footnote 3: When (23) does not hold, the common part of two sets will be called whenever one of them are called. This can reduce the lifetime of the union of the two sets.
\[(\Omega_{m_{1}}\setminus\Omega_{c})\cap(\Omega_{m_{2}}\setminus\Omega_{c})= \emptyset,\forall m_{1}\neq m_{2}, \tag{23}\]
the lifetime for the union set \((\Omega_{1}\setminus\Omega_{c})\cup(\Omega_{2}\setminus\Omega_{c})\cup \cdots(\Omega_{M}\setminus\Omega_{c})\) is \(\sum_{m=1}^{M}w_{m}\). This lifetime must be no less than the upper bound (18); or otherwise, the upper bound is not achievable. _Theorem 2_ is therefore proved.
Note that the proof of _Theorem 2_ has already incorporated the condition c3), which is equivalent to (23) with \(\Omega_{c}=\emptyset\). Hence, for the condition c3), we have
\[w_{m}=\left\lfloor\frac{\min(\mathcal{E}_{\Omega_{m}}(0))-i\varepsilon}{pT_{ \mathrm{c}}+\epsilon}\right\rfloor \tag{24}\]
and reaches the following conclusion:
**Corollary 2.1**: _Given the conditions c3) and c5), the swarm lifetime (\(i\)) is upper-bounded by_
\[i\leq\sum_{m=1}^{M}\left\lfloor\frac{\min(\mathcal{E}_{\Omega_{m}}(0))-i \varepsilon}{pT_{\mathrm{c}}+\epsilon}\right\rfloor. \tag{25}\]
The technical insight from _Theorem 2_ and _Corollary 2.1_ is that robot subsets must be weighted in the procedure of subset selection. Their weights can be quantified by \(w_{m}\) (see (22) and (24)), which are related to the terms \(\min(\mathcal{E}_{\Omega_{m}}(0))\) and/or \(\min(\mathcal{E}_{\Omega_{m}\setminus\Omega_{c}}(0))\). This insight is also applicable to the generic case, where R-Vertices must be weighted. After the graph partitioning, we denote \(\Omega_{\overline{m}}\) to be the robot subset corresponding to the R-Vertex of the \(\overline{m}^{th}\) subgraph. Since R-Vertices are disconnected from each other, we can follow _Corollary 2.1_ to determine the swarm lifetime as
\[i\leq\sum_{\overline{m}=1}^{\overline{M}}\omega_{\overline{m}},\ \omega_{\overline{m}}=\left\lfloor\frac{\min(\mathcal{E}_{\Omega_{\overline{m} }}(0))-i\varepsilon}{pT_{\mathrm{c}}+\epsilon}\right\rfloor. \tag{26}\]
In addition, (26) tells us, when the graph partitioning yields multiple results with the same \(\overline{M}\), the one maximizing the upper bound (26) should be chosen for the sake of lifetime maximization.
### _Optimization in Independent Flat-Fading Channels_
Consider the channel quality \((\alpha_{n})\) to be a random variable that is independent but not necessarily identical with respect to the robot index \((n)\) and the task index \((i^{\prime})\). Then, the transmit power (\(p\)) also varies independently. In this case, the energy consumption model for the \(n^{th}\) robot goes back to the original version (10), and the term \(\zeta_{n}(i)\) can be written into
\[\zeta_{n}(i) =T_{\mathrm{c}}\sum_{i^{\prime}=0}^{i}\sum_{j\in\Psi_{n}}\left(p _{n}(i^{\prime})\right)+|\Psi_{n}|\epsilon, \tag{27}\] \[=w_{n}T_{\mathrm{c}}\underbrace{\left(\frac{1}{w_{n}}\sum_{i^{ \prime}=0}^{i}\sum_{j\in\Psi_{n}}(p_{n}(i^{\prime}))\right)}_{\triangleq \overline{p}_{n}}+w_{n}\epsilon. \tag{28}\]
Following the discussion leading to _Theorem 2_ and _Corollary 2.1_, it is straightforward to obtain
\[w_{n}=\left\lfloor\frac{\mathcal{E}_{n}(0)-i\varepsilon}{\overline{p}_{n}T_{ \mathrm{c}}+\epsilon}\right\rfloor. \tag{29}\]
In general, the averaged power \(\overline{p}_{n}\) is robot dependent. The lifetime of R-Vertices is limited by the robot with the smallest \(w_{m}\). Hence, the swarm lifetime is given by
\[i\leq\sum_{\overline{m}=1}^{\overline{M}}\omega_{\overline{m}},\ \omega_{\overline{m}}=\min\left\{\left\lfloor\frac{\mathcal{E}_{n}(0)-i \varepsilon}{\overline{p}_{n}T_{\mathrm{c}}+\epsilon}\right\rfloor,n\in\Omega _{\overline{m}}\right\}. \tag{30}\]
Then, the lifetime maximization is equivalent to maximizing the upper bound of (30), i.e., following the max-min principle as stated in (12)
\[i \leq\max\left(\sum_{\overline{m}=1}^{\overline{M}}\omega_{ \overline{m}}\right) \tag{31}\] \[\leq\sum_{\overline{m}=1}^{\overline{M}}\max_{p_{n}(i^{\prime})} \left(\min\left\{\left\lfloor\frac{\mathcal{E}_{n}(0)-i\varepsilon}{\overline {p}_{n}T_{\mathrm{c}}+\epsilon}\right\rfloor,n\in\Omega_{\overline{m}}\right\} \right). \tag{32}\]
Again, we emphasize that the max-min problem in (32) is NP-hard. Nevertheless, we can employ the max-min algorithm proposed in [6] to handle the optimization problem in (32), while the following criteria must be followed in the subset selection:
#### Iii-C1 Criterion 1
in Section II-B specifies the cap of transmit power. Any subset that does not fulfill _Criterion 1_ is not chosen for data transmission.
#### Iii-C2
When none of R-Vertices satisfy _Criterion 1_, we choose a subset from those which are not R-Vertices.
By this means, we incorporate the max-min principle into the LDIP algorithm to combine the merits of both.
It is worth noting that optimization in fading channels can lead to many interesting research problems. For instance, current LDIP algorithm randomly picks up a vertex to be the R-Vertex when there exist multiple candidates of R-Vertices. This is however too sub-optimum in fading channels where the max-min principle should be incorporated in the graph partitioning. Moreover, communication channels can be frequency selective. Therefore, sub-channel selection mechanism should be incorporated to take the advantage of channel frequency diversity-gain. Our solution to those research problems will be presented in the journal paper version of this work.
## IV Experiment Design and Simulation Results
### _Model for Generating Robot Subsets_
Computer simulations are used to evaluate the proposed subset selection approach in terms of the lifetime of robot swarm. The major challenge set to computer simulations is forming robot subsets in Monte Carlo trials. This is because neither stochastic subset models nor deterministic models are available in the literature. Deterministic models are relatively easier to develop; however they are not generic and representative. This is not a big issue for practical robot swarms since subsets
can be formed based on robots' locations. More specifically, closely located robots have their data strongly correlated, and they should belong to different subsets. On the other hand, those very distanced robots have their data weakly correlated, and they can together form a subset. The MEC server can form subsets based on robots' location information as well as their empirical data of subset forming. However, this idea is not readily applicable to computer simulations because it requires a meaningful and well verified stochastic-geometry model of robot geo-distribution and location-related data correlations.
As the early root of the multi-robot edge computing research, we propose a novel and simple stochastic method to generate the robot subsets. Our aim is to divide \(N\) robots into \(M\) subsets, with each having at least \(K\) robots. The inequality (\(N\leq MK\)) must hold as it is a necessary condition for not excluding any robot in subsets forming. Our subset forming algorithm includes the following three steps:
_Step 1 (Initialization):_ Set \(K\) bins; each bin is given one robot which is randomly drawn from \(N\) robots with an equal probability.
_Step 2 (Forming observation groups):_ For the leftover \((N-K)\) robots, we randomly throw them into \(K\) bins with an equal probability. It is assumed that robots in the same bin have very correlated data (i.e., correlated observations); any \(K\) robots that are drawn from different bins carry sufficient information for the edge computing; as specified in _Definition 2_.
_Step 3 (Forming robot subsets):_ Denote \(M_{k}\) the number of robots in the \(k^{th}\) bin. There are three possible cases: _a)_\(M_{k}=M\); _b)_\(M_{k}>M\); _c)_\(M_{k}<M\).
There is no problem with the case _a)_ since all \(M_{k}\) robots can be evenly allocated to \(M\) robot subsets. For the case _b)_, each subset should take at least \(J_{k}\triangleq[(M_{k})/(M)]\) robots from the \(k^{th}\) bin, and then we randomly choose \((M_{k}-J_{k}M)\) subsets and allocate one more robot to each of them. For the case _c)_, we randomly duplicate robots in the \(k^{th}\) bin. For instance, there is a bin accommodating two robots labelled by \(\{1,2\}\). Assuming \(M=4\), we can randomly generate a set \(\{1,2,1,1\}\), \(\{1,2,2,2\}\), or \(\{1,2,1,2\}\) for simulation uses (i.e., making the case \(M_{k}=M\)).
This stochastic model can cover almost all possible cases of robot subset forming, including those illustrated in Fig. 1.
### _Parameter Setting for Simulations_
In addition to the forming of robot subsets, there are many other parameters requiring appropriate setting. Those parameters include: \(\mathcal{E}_{n},\varepsilon,\epsilon,P_{n}\). One of challenging issues is to determine the relationship between the energy costs for signal transmission, for modulation and encoding, and for robot task execution, respectively. This would give us a unified energy consumption model for computer simulations. Unfortunately, we are not aware of any explicit description of their relationships in the literature despite a very extensive survey. Therefore, in our simulations, we assume that a robot's energy is divided into two fixed and separated parts, one for communication and the other for task execution. In order to focus our study in the communication domain, it is assumed that the bottleneck of robot swarm is the communication energy. In other words, when a robot has its communication energy used up, this robot counts as 'dead'.
To simplify the computer simulations, we set \(\mathcal{E}_{n}\) and \(P_{n}\) to be identical for all robots and omit the subscript \((\cdot)_{n}\). Moreover, we ignore the energy cost for modulation and coding (i.e., \(\epsilon\)) since they are often negligibly small when comparing to the radio transmission energy. The term \(\mathcal{E}\) is now only referred to the total energy for communications. In our experiments, it is set to allow a robot conducting \(200\) data transmissions in the AWGN channel with the SNR of \(10\) dB. This essentially sets a robot's lifetime in the AWGN channel (i.e., \(i=200\)).
### _Simulation Results and Discussion_
Our computer simulations mainly include two experiments, concerning communication channels to be AWGN and independent and identically distributed (i.i.d.) Rayleigh, respectively. We consider two baselines:
* Conventional single-user equivalent edge computing, where all robots send their data to the MEC server [17]. In the AWGN channel, this baseline approach straightforwardly gives the swarm lifetime of \(i=200\) according to our setting, and we aim to demonstrate the gain of exploiting source correlation in the multi-user edge computing.
* Max-min algorithm for subset selection. This algorithm is extended from the max-min user selection approach in sensor networks [6], but we apply it on the subset level. Strictly speaking, this is a novel algorithm, and we use it as a baseline to demonstrate advantages of the proposed LDIP algorithm.
The approach under evaluation is the LDIP-enabled subset selection algorithm (see (32) for the generic form). The one with only R-Vertices involved in the subset selection is named LDIP-R-Vertices, and the one with all vertices involved in the subset selection is named LDIP-All.
**Experiment 1**: _The objective of this experiment is to evaluate the proposed approach in the AWGN channel. We simulate a robot swarm consisting of \((N=30)\) robots. Each robot subset has at least (\(K=8\)) robots. Fig. 2 illustrates the swarm lifetime (averaged over \(500\) Monte Carlo trials) as a function of the number of subsets (\(M\)). It is shown that subset-selection approaches can improve the swarm lifetime by \(25\%\sim 75\%\). Such a significant gain is due to the exploitation of data source correlation._
It can also be observed that LDIP approaches outperform the max-min approach in the subset selection. This is because, in the AWGN channel, (32) reduces to (17), where the round-robin subset selection strategy is optimum (see _Theorem 1_), and the max-min principle does not offer any additional gain. For the same reason, LDIP-R-Vertices and LDIP-All show identical performances.
**Experiment 2**: _With the same system setup as in Experiment 1, this experiment is interested in the performance in i.i.d. Rayleigh channels. Experiment 2 is different from Experiment
_1_ mainly in two folds: _1)_ (32) does not reduce to (17), and thus the max-min principle plays a vital role; _2)_ power adaptation is needed in fading channels (see _Criterion 1_). In our simulations, the transmit power (\(p_{n}\)) has a power cap, which is set three times of the transmit power in the AWGN channel. The simulation results are plotted in Fig. 3. It is observed that the conventional approach has the swarm lifetime significantly reduced. This is because robots have to pay much more transmit power to combat channel fades. Compared to the results in AWGN channels, subset-selection approaches also have the swarm lifetime largely reduced. However, their performances are significantly better than the conventional approach. Another remarkable phenomenon is that LDIP-All offers the best performance, and LDIP-R-Vertices shows worse performance than the max-min approach. This implies that the max-min principle is vital in fading channels, and all subsets must be considered in the subset selection. LDIP-All combines the merits of LDIP and max-min principles, and thus achieves the best performance.
## V Conclusion and Outlook
In this paper, a novel subset selection concept has been presented for multi-user edge computing, with the aim to maximize the lifetime of robot swarm through efficient exploitation of the data source correlation. Major contribution of this work includes a subset model describing the data source correlation between robots, an objective function for lifetime optimization, as well as a graph partitioning-based optimization approach. All of the proposed models and approaches are novel, and they have demonstrated remarkable gains both in the AWGN and Rayleigh fading channels.
Moreover, we have highlighted the future direction towards this topic, which includes the LDIP optimization and user scheduling in frequency-selective fading channels.
|
2306.11843 | Retrieval-Based Transformer for Table Augmentation | Data preparation, also called data wrangling, is considered one of the most
expensive and time-consuming steps when performing analytics or building
machine learning models. Preparing data typically involves collecting and
merging data from complex heterogeneous, and often large-scale data sources,
such as data lakes. In this paper, we introduce a novel approach toward
automatic data wrangling in an attempt to alleviate the effort of end-users,
e.g. data analysts, in structuring dynamic views from data lakes in the form of
tabular data. We aim to address table augmentation tasks, including row/column
population and data imputation. Given a corpus of tables, we propose a
retrieval augmented self-trained transformer model. Our self-learning strategy
consists in randomly ablating tables from the corpus and training the
retrieval-based model to reconstruct the original values or headers given the
partial tables as input. We adopt this strategy to first train the dense neural
retrieval model encoding table-parts to vectors, and then the end-to-end model
trained to perform table augmentation tasks. We test on EntiTables, the
standard benchmark for table augmentation, as well as introduce a new benchmark
to advance further research: WebTables. Our model consistently and
substantially outperforms both supervised statistical methods and the current
state-of-the-art transformer-based models. | Michael Glass, Xueqing Wu, Ankita Rajaram Naik, Gaetano Rossiello, Alfio Gliozzo | 2023-06-20T18:51:21Z | http://arxiv.org/abs/2306.11843v1 | # Retrieval-Based Transformer for Table Augmentation
###### Abstract
Data preparation, also called data wrangling, is considered one of the most expensive and time-consuming steps when performing analytics or building machine learning models. Preparing data typically involves collecting and merging data from complex heterogeneous, and often large-scale data sources, such as data lakes. In this paper, we introduce a novel approach toward automatic data wrangling in an attempt to alleviate the effort of end-users, e.g. data analysts, in structuring dynamic views from data lakes in the form of tabular data. We aim to address _table augmentation_ tasks, including row/column population and data imputation. Given a corpus of tables, we propose a retrieval augmented self-trained transformer model. Our self-learning strategy consists in randomly ablating tables from the corpus and training the retrieval-based model to reconstruct the original values or headers given the partial tables as input. We adopt this strategy to first train the dense neural retrieval model encoding table-parts to vectors, and then the end-to-end model trained to perform table augmentation tasks. We test on EntiTables, the standard benchmark for table augmentation, as well as introduce a new benchmark to advance further research: WebTables. Our model consistently and substantially outperforms both supervised statistical methods and the current state-of-the-art transformer-based models.
## 1 Introduction
The way organizations store and manage data is rapidly evolving from using strict transactional databases to data lakes that typically consist of large collections of heterogeneous data formats, such as tabular data, spreadsheets, and NoSQL databases. The absence of a unified schema in data lakes does not allow the usage of declarative query languages, e.g. SQL, making the process of data preparation1 dramatically expensive Terrizzano et al. (2015).
Footnote 1: Also referred as data wrangling or data munning.
Data preparation involves several phases, such as data discovery, structuring, cleansing, enrichment and validation, with the purpose of producing views commonly organized in a tabular format used to create reports Koehler et al. (2021) or to gather feature sets to build machine learning models He et al. (2021). The schemaless nature of data lakes makes data discovery and structuring even more challenging since the tasks of joinability and unionability among tables become non-deterministic Fernandez et al. (2018); Zhu et al. (2019); Bogatu et al. (2020).
In this work, we propose a novel end-to-end solution based on a retrieval augmented transformer architecture with the aim to support end-users, such as data analysts, in the process of constructing dynamic views from data lakes. To this end, we address three table augmentation tasks Zhang and Balog (2017, 2019): automatic row and column population and cell filling (or data imputation).
Figure 1 illustrates the three core tasks in table augmentation. All tasks proceed from a query or seed table. In the case of self-supervised training, this seed table is formed by ablating rows, columns or cell values from an existing table in the data lake. The task of column header population, also simply called column population, is to extend the table with additional possible column names or headers. This is a way of suggesting additional data that could be joined into this table. In the task of cell filling there is a specific unknown cell, for which the model predicts a specific value. The task of row population is only populating the _key column_ for a row. This is the column that contains the primary entity that the remainder of the row contains data for, sometimes referred to as a row header. Typically this is the first column in a table.
Approaches to table augmentation can be purely parametric Iida et al. (2021); Deng et al. (2022), in which case the data lake is used to train the param
eters of the model, but not used during inference. In this setting, the table augmentation model must draw the possible augmentations for rows, columns and cells from its trained parameters. Alternatively, with retrieval-based models Lewis et al. (2020); Glass et al. (2021); Glass et al. (2022), the data lake can also be used at inference to provide evidence for proposed augmentations. This has two key advantages: 1) the model need not memorize the data lake - or even a significant fraction of it, and 2) the model can provide justification for its predicted augmentations in the form of a provenance table or tables.
The key contributions of this paper are: (1) We introduce the first end-to-end, retrieval-based model for table augmentation. Our Retrieval Augmented Table Augmentation (RATA) model uses a biconcoder retrieval model for neural indexing and searching tables from data lake, and a reader transformer to identify augmentations from retrieved tables. (2) Our model establishes a new state-of-the-art across all three tasks in table augmentation, while also providing additional value with its provenance information. (3) We create and release a new dataset for table augmentation, expanding the scope of evaluation beyond Wikipedia. This dataset, based on Cafarella et al. (2008), is also larger and more diverse than the standard Wikipedia-based dataset Zhang and Balog (2017).
## 2 Related Work
Table augmentationcan be divided into three sub-tasks: row population, column population, and cell filling. For row and column population, Zhang and Balog (2017) identifies and ranks candidate values from both the table corpus and knowledge base. Table2Vec Zhang et al. (2019) trains header and entity embeddings from a table corpus in a skip-gram manner and uses the embeddings for the task. Although TaBERT Yin et al. (2020) was developed as a foundational model primarily for question answering, its embeddings have also been applied for row and column population. Recent work formulates the task as multi-label classification and fine-tunes large-scale pre-trained models such as TABBIE Iida et al. (2021) and TURL Deng et al. (2022).
TABBIE consists of three transformers for converting cells, columns and rows to vector representations. A corrupt cell detection task is the pre-training task used to learn these embeddings on the table corpus. To fine-tune a trained TABBIE model for the column header population task, a concatenated [CLSCOL] embedding of the columns is passed through a single linear and softmax layer and trained with a multi-label classification objective. Similarly, for the row population task a multi-class classification is carried out on the first column's [CLSCOL] representation.
For cell filling, InfoGather Yakout et al. (2012) retrieves tables from the table corpus and selects values from retrieved tables. Zhang and Balog (2019) extends the system to retrieve from both the table corpus and knowledge base. Their system that uses only the table corpus as the source is called TMatch, which we compare to in Section 6. Ahmadov et al. (2015) combines predictions both from table retrieval and from a machine learning-based value imputation system. Deng et al. (2022)
Figure 1: Given a partially completed table as a query (i.e. a few album releases from the Pink Floyd discography), the three table augmentation tasks consist of retrieving from the data lake: 1) a list of possible next column headers, such as the “Label” or “Format”, 2) the missing value “1979” for the release date of the row “The Wall”, 3) a list of other album releases as possible next rows, such as “Atom Heart Mother” and “The Division Bell”.
directly applies pre-trained TURL model to the task since cell filling is similar with its pre-training objective. Cell filling is also related to the task of value imputation, i.e., to provide an assumed value when the actual value is unknown, usually using machine learning methods (Biesmann et al., 2019). In addition to augmenting individual entities, column headers or cells, some other work aims to join tables over entire rows or columns with retrieved tables (Sarma et al., 2012; Bhagavatula et al., 2013; Lehmberg et al., 2015).
Retrieval-augmented modelshave been successfully applied to many tasks. For open-domain question answering (ODQA), DPR learns dense representation to retrieve evidence and trains a separate reader to select answer from retrieved evidence (Karpukhin et al., 2020). RAG uses a generator to generate outputs conditioned on retrieved evidence and jointly trains DPR with a generator on the downstream task (Lewis et al., 2020). RAG is shown to achieve good performance on knowledge-intensive NLP tasks such as ODQA, fact verification, slot filling, etc (Lewis et al., 2020; Petroni et al., 2021). Re\({}^{2}\)G further introduces a reranker to boost performance (Glass et al., 2022). Retrieval-augmented models are also shown to be effective on zero-shot slot filling (Glass et al., 2021), and multilingual keyphrase generation (Gao et al., 2022). Similar models have also been applied to table-related tasks such as open-domain table question answering (Herzig et al., 2021). In our work, we apply the architecture to table augmentation.
## 3 Approach
While the row, column, and cell predictions of purely parametric table augmentation methods may be useful on their own, they can be much more effective for a human-in-the-loop use case if they are supported by provenance. A user of a data preparation application may be unwilling to simply accept the prediction of a model, but when paired with evidence from the data lake, that prediction can be better assessed. Furthermore, the retrieval model itself may be useful for exploration and general search in a data lake. In this view, table augmentation can be seen as self-supervised pretraining for table retrieval.
Fortunately, there is now considerable work on _retrieval augmented_ transformer models (Glass et al., 2022; Lewis et al., 2020). These models augment the parametric knowledge of the transformer, with non-parametric knowledge in the form of an indexed corpus. To do so, they use a neural retrieval model based on DPR (Dense Passage Retrieval) (Karpukhin et al., 2020) that is trained end-to-end to assist in generation.
We build on this line of research to introduce a general model for all table augmentation tasks: row population, column header population and cell filling. Our model, Retrieval Augmented Table Augmentation (RATA), comprises of an index of tables, a retrieval component, and a reader or selection component. The table index is built from the tables in the training set, which are first decomposed into table-parts, then transformed into sequences for use with standard retrieval approaches. The retrieval component is a biencoder architecture similar to DPR (Karpukhin et al., 2020), but trained without ground truth on correct provenance. We call this _Dense Table Retrieval_ or DTR. The reader component is an extractive approach. An extractive rather than generative approach ensures that the model's predictions are always grounded in actual data, rather than speculative guesses. The extractive approach is also a more natural fit for row and column population tasks, where there is no required order to the answers. Finally, the extractive approach permits an initial training phase for the retrieval component where the _answer-bearing_ tables are considered as a bag of positives.
Figure 1 illustrates the tasks of table augmentation by example. Formally, the input \(I\) is a table
Figure 2: Index building and inference system overviews
with \(r\) rows and \(c\) columns comprising a caption \(\mathcal{C}\), headers \(\mathbf{H}\), and matrix of cell values, \(\mathbf{V}\). One of the columns, usually the first, is indicated as the key column \(key\).
\[I =\langle\mathcal{C},\mathbf{H},\mathbf{V},key\rangle,1\leq key\leq c\] \[\mathbf{H} =[h_{1},h_{2},...,h_{c}]\] \[\mathbf{V} =\begin{bmatrix}v_{1,1},v_{1,2},...,v_{1,c}\\...\\ v_{r,1},v_{r,2},...,v_{r,c}\end{bmatrix}\]
The input table is ablated in a task specific way to produce a query table and gold answers, \(\langle Q,\mathbf{G}\rangle\), described as follows:
\[Q_{rp} =\langle\mathcal{C},\mathbf{H},\mathbf{V_{1.n_{seed}}},key\rangle\] \[\mathbf{G}_{rp} =\{\mathbf{V}_{i,key}:i>n_{seed}\}\] \[Q_{cp} =\langle\mathcal{C},\mathbf{H_{1.n_{seed}}},\mathbf{V_{\neg,1.n_{ seed}}},key\rangle\] \[\mathbf{G}_{cp} =\{\mathbf{H}_{i}:i>n_{seed}\}\] \[Q_{cf} =\langle\mathcal{C},\mathbf{H},\mathbf{V}\setminus\mathbf{v_{i,j}},key\rangle\] \[\mathbf{G}_{cf} =\{v_{i,j}\}\]
where _rp_, _cp_ and _cf_ refer to the row population, column header population and cell filling tasks, respectively.
### End-to-End Model
Figure 1(a) shows how tables in a data lake are first indexed to provide a non-parametric knowledge store. Each table is first split into chunks of up to three rows plus the header, which we refer to as _table-parts_. We form sequence representations of these table-parts following work in other transformer-based approaches to tables (Glass et al., 2021). The table-part sequence representations (\(S^{t}\)) are formed from the row sequence representations (\(S^{r}_{i}\)) and the table caption:
\[S^{r}_{i} =\bigoplus_{j=1}^{c}h_{j}\oplus\text{```
}\oplus\text{``*^{}}\] \[S^{t} =\mathcal{C}\oplus\text{[SEP]}\oplus\bigoplus_{i=start}^{end}S^{r}_ {i}\oplus\text{`l''}\]
Here \(\oplus\) indicates concatenation and the strings ':', '*', and 'l' delimit the header, cell value contents, and each row respectively. Any distinctive tokens can work as delimiters since the transformer will learn an appropriate embedding representation.
These sequences are then projected to vectors using the context encoder by taking the [CLS]. We index the dense representations for all table-parts in the data lake using FAISS (Johnson et al., 2017) with Hierarchical Navigable Small World (Malkov and Yashunin, 2018).
Figure 1(b) shows the architecture of our approach, Retrieval Augmented Table Augmentation (RATA). The input query is encoded to a vector for retrieving related table-parts from the indexed data lake. Similar to table-part representation, we form sequence representation for the query, use a query encoder to encode it, and take the [CLS] vector as query representation. Both the context encoder and the query encoder use the BERTBASE architecture. We use unnormalized dot product to score a pair of query \(q\) and table-part \(d\). Top-k table-parts with highest scores will be retrieved.
\[score(q,d)=\text{BERT}_{qe}(q)_{[CLS]}\cdot\text{BERT}_{ce}(d)_{[CLS]}\]
After the top-k most relevant table-parts are retrieved, the reader component selects the most likely augmentations for the query table. In the case of column population, the candidate augmentations are all headers from retrieved table-parts; for cell filling it is all cells; and for row population it is only those cell values that are entities.
The sequence representation of the query table is paired with each table-part representation, using the standard [CLS] and [SEP] token to demarcate the bounds of each sequence. In the table-part representation, the candidates are marked by special begin and end tokens: '\(\langle\)' and '\(\rangle\)'. This combined sequence is then the input to a transformer encoder (initialized from BERTLARGE (Devlin et al., 2019)). For each pair of candidate answer marks ('\(\langle\)' and '\(\rangle\)'), the final token embeddings are concatenated to produce a single vector. Then a linear layer is applied to predict the likelihood that the candidate is a correct answer to the query.
\[\alpha =[i:t_{i}=\text{``''}]\] \[\omega =[i:t_{i}=\text{
````''}]\] \[ans_{n} =t_{\alpha_{n}+1},t_{\alpha_{n}+2},...,t_{\alpha_{n}-1}\] \[C =\begin{bmatrix}E_{\alpha_{0}}\oplus E_{\omega_{0}}\\ E_{\alpha_{1}}\oplus E_{\omega_{1}}\\ E_{\alpha_{2}}\oplus E_{\omega_{2}}\\...\end{bmatrix}\] \[\rho =softmax(C\cdot\mathbf{w_{candidate}})\]
Formally, the input is a sequence of tokens \(T=[t_{0},t_{1},...]\). The transformer encoder produces a sequence of embeddings
\(E=[e_{0},e_{1},...]\). The candidate representation vectors, \(C\), are then multiplied by the learned parameter vector \(\mathbf{w_{candidate}}\) and a softmax is applied to produce the reader scores, \(\rho\), for the retrieved table-part.
Note that the likelihood for a given answer occurrence \(ans_{n}\) is \(\rho_{n}\). The candidate likelihood vectors for each of the top-k retrieved table-parts, \(\rho^{1},\rho^{2},...,\rho^{k}\), are then combined with the softmax normalized retrieval scores, \(\mathbf{r}=[r_{1},r_{2},...,r_{k}]\), to provide a probability distribution over all candidates in all retrieved table-parts. Since these scores are for each occurrence of a candidate string, we aggregate over each distinct normalized candidate string by summing the likelihoods for all occurrences. This produces the final score, \(s(a)\) for each answer string \(a\). The loss is the negative log-likelihood of all gold answer strings, \(\mathbf{G}\). Because of this formulation, during training any instance with no correct candidates in any retrieved table-part is skipped.
\[\mathbf{p}^{j} =softmax(\mathbf{r})_{j}\cdot\rho^{j}\] \[s(a) =\sum_{j=1}^{k}\sum_{n:ans_{n}^{j}=a}\mathbf{p}_{n}^{j}\] \[loss =-\sum_{a\in\mathbf{G}}log\left(s(a)\right)\]
We use answer normalization to determine if a candidate matches a gold answer, as described in Appendix B. For row population and cell filling in EntiTables, the cell values are already linked to entities so normalization is not necessary.
For RATA training, we iterate through the tables in the training set. To construct input query from a table, we ablate either all rows after the first \(n_{seed}\) (row population), or all columns after the first \(n_{seed}\) (column population), or a particular cell (cell filling). We ensure that table-parts from the query table are not retrieved by filtering the retrieved results. Like most previous approaches to end-to-end training of neural retrieval, we train only the query encoder in the end-to-end training phase. This avoids expensive re-indexing of the entire data lake either each time the context encoder is updated, or periodically as in ANCE Xiong et al. (2020).
### Retrieval Training
While it is possible in theory to train neural retrieval entirely through impact in the end-to-end table augmentation tasks, a good initialization is important for learning. Without an initial effective retrieval model, there is no answer-bearing evidence to train the reader model, and therefore a high fraction of training examples will be skipped Lee et al. (2019).
One possible approach is to use a pretraining task for retrieval, such as the Inverse Cloze Task Lee et al. (2019) or a retrieval-based masked language model Guu et al. (2020). In the table augmentation task, there is the option of training with answer-bearing evidence as positives. Since the reader is purely extractive, any evidence that does not contain a correct augmentation string is necessarily a negative. However, not every table-part that contains an answer is a positive. We use a multiple instance learning setup for the positives: we train under the assumption that at least one of the table-parts containing a correct answer is a positive.
To gather the training data for retrieval we build an initial keyword index using Anserini2. We use BM25 Robertson and Zaragoza (2009) to retrieve potentially relevant table-parts for each table query.
Footnote 2: [https://github.com/castorini/anserini](https://github.com/castorini/anserini)
From each training table we construct a query for row population, column population or cell filling. Since these queries are constructed from ablated tables, we know a (potentially incomplete) set of correct augmentations or answers. Note that there may be other equally correct augmentations. But since this is a self-supervised task, we consider only the headers or cell values that actually occurred in the table to be correct.
Formally, the query constructed from a training table is a pair of the ablated table, \(Q\) and the set of gold answers \(\mathbf{G}\). The set of table-parts retrieved by the initial retrieval method, for example BM25, is given as \(\mathbf{R}\). A retrieved table-part is in the positive set, \(\mathbf{R}^{+}\), if it contains any gold answer, otherwise it is a hard negative, \(\mathbf{R}^{-}\).
\[\mathbf{R}^{+} =\{d:d\in\mathbf{R}\wedge\exists a\in\mathbf{G},a\in d\}\] \[\mathbf{R}^{-} =\mathbf{R}-\mathbf{R}^{+}\]
Following Karpukhin et al. (2020), we use batch negatives along with the retrieved "hard negatives". The batch \(B=[\langle q_{1},\mathbf{R}_{1}\rangle,\langle q_{2},\mathbf{R}_{2}\rangle,...,\langle q_{bz},\mathbf{R}_{bz}\rangle]\) is processed to produce vectors for all queries and retrieved table-parts. All query vectors are multiplied with all table-part vectors to produce scores between all pairs. A softmax is applied per-query to give the normalized scores. Finally, the loss is
the negative log-likelihood for the positive scores.
\[\mathcal{R} =\bigcup_{i=1}^{bz}\mathbf{R}_{i}\] \[\rho_{i} =softmax([score(q_{i},d):d\in\mathcal{R}])\] \[loss =-\sum_{i=1}^{bz}log\left(\sum_{d\in\mathbf{R}_{i}^{+}}\rho_{i,d}\right)\]
Note that since we are summing over the probability of all table-parts in the positive set, \(\mathbf{R}^{+}\), it is not necessary for _all_ answer-bearing retrieved table-parts to be high scoring. Instead, it follows the multiple instance learning framework. All instances marked negative are negative, while at least one instance in the positive set is positive.
## 4 WebTables Dataset
Prior work on table augmentation has focused on tables derived from Wikipedia (Zhang and Balog, 2017; Iida et al., 2021; Deng et al., 2022; Zhang and Balog, 2019; Zhang et al., 2019). In order to better assess the proposed methods and provide the research community with a new benchmark, we introduce a new dataset for table augmentation: WebTables.
We construct this dataset using the tables crawled and extracted by Cafarella et al. (2008). We start from the English relational tables of WDC Web Table Corpus 2015. We further filter the dataset to remove the most common types of noisy tables: calendars formatted as tables, lists of forum posts and torrent links, tables with less than four rows or columns, and tables that format large blocks of text. Because previous work on table augmentation focused so heavily on Wikipedia tables, we exclude from this dataset any tables crawled from any "wikipedia" domain. We also deduplicate the corpus, ensuring that there are no two tables with the same content in their cells.
Following filtering and deduplication we sample 10 thousand tables each for the development and test sets and one million tables for training. However, in our experiments we use only 300 thousand training examples to limit the computational cost.
To parallel the setting of EntiTables we use the "key column" identified by Cafarella et al. (2008) as the target column for row population and we consider entities to be those strings that occur at least three times in the key column for any table in the train set.
## 5 Experiments
We experiment on two datasets of tables across three tasks. Table 1 gives statistics on these datasets.
**EntiTables**(Zhang and Balog, 2017) contains 1.6M tables collected from Wikipedia where entity mentions are normalized into its name in DBPedia. For row and column population, we use the development and test sets released by Zhang and Balog (2017) each containing 1,000 randomly sampled queries. For cell filling, we use the test set released by Zhang and Balog (2019). The test set contains 1,000 queries uniformly sampled from four main column data types: entity, quantity, string, and date-time. Though Zhang and Balog (2019) use human annotations as gold labels, we notice that the human annotations are of low quality, so we use the original values in the table cells as gold labels.
**WebTables** is based on Cafarella et al. (2008) - 154M relational tables extracted from HTML tables in Common Crawl. We process the corpus as described in Section 4. For column population we use the original development and test sets of 10,000 tables each. While for row population we necessarily exclude any tables without any entities in the key column after the first \(n_{seed}\) rows. For cell filling, we use heuristic rules to classify cell values into three types: quantity, string and date-time. Then, we sample 3,000 queries uniformly from the three types as test set and sample another 3,000 queries as development set.
We compare our method with two deep learning-based baselines, TABBIE (Iida et al., 2021) and BART (Lewis et al., 2020). Both TABBIE and BART have no retrieval component involved.
TABBIE, described in Section 2, uses three transformers: one for cell values, one for rows, and one for columns. It produces vector embeddings for each cell and each row and column of a table. We follow Iida et al. (2021) for the row and column population and base our experiments on the par
\begin{table}
\begin{tabular}{l l|r r r} \hline \hline
**Dataset** & **Task** & **Train** & **Dev** & **Test** \\ \hline EntiTables & row pop. & 187k & 1k & 1k \\ EntiTables & column pop. & 602k & 937 & 950 \\ EntiTables & cell filling & 100k & - & 972 \\ \hline WebTables & row pop. & 563k & 6.6k & 6.8k \\ WebTables & column pop. & 1M & 10k & 10k \\ WebTables & cell filling & 1M & 3k & 3k \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics.
tial released code and pretrained model3. To apply TABBIE to cell filling, we formulate it as classification on the concatenation of the row and column embedding vectors, similar to row and column population. The classification vocabulary is collected from the training corpus: all cell values that occur at least ten times. We also report the published results for TABBIE on the EntiTables dataset, although we were unable to reproduce these results for row population.
Footnote 3: [https://github.com/SFIG611/tabbie](https://github.com/SFIG611/tabbie)
BART is a sequence-to-sequence model that takes the linearized table as the source text and generates the row entities, cell headers, or cell value as the target text. We use a beam search in decoding (beam size = 35) to produce a ranked list of predictions. We use the FAIRSEQ toolkit Ott et al. (2019) for these experiments. For RAG we use the implementation in Hugging Face transformers Wolf et al. (2019). For both BART and RAG, the sequence representation of the query tables is the same as in RATA.
On the EntiTables dataset, we also compare against probabilistic methods that first retrieve tables from the table corpus and next select values for table augmentation. We compare against the published results of Zhang and Balog (2017) for row and column population, and against TMatch Zhang and Balog (2019) for cell filling.
For evaluation, we report Mean Reciprocal Rank (MRR) and Normalized Discounted Cumulative Gain over the top ten outputs (NDCG@10) for the final prediction performance of row population, column population, and cell filling. To evaluate the performance of DTR retrieval, we also report answer-bearing MRR, where a retrieved table-part is considered correct if it contains one of the correct answers. To determine the significance of these results we use a 95% confidence interval on the t-distribution. We also applied a sampling permutation test, but this did not change any conclusions regarding significance.
## 6 Results
Table 2 contains our results for the row population task. Our model, RATA, is able to greatly outperform all other methods on both datasets. Using the non-parametric knowledge of the table corpus is very advantageous for the large and specific vocabulary of entities in key columns.
Table 3 contains our results for the column population task. RATA is again substantially better than the other methods, although not by as wide a margin as the row population task. The BART baseline is the best performing of the alternatives with an MRR lower by 6% to 15%.
Results on cell filling task are in Table 4. Our method outperforms all baselines on both datasets. TABBIE performs the worst due to the large classification vocabulary and out-of-vocabulary issue. On EntiTables dataset, retrieval-based methods including TMatch and RATA significantly outperform non-retrieval methods including TABBIE and BART. Figure 3 shows an example output from RATA. On WebTables, however, BART outper
\begin{table}
\begin{tabular}{l|r r r r} \hline \hline & \multicolumn{2}{c}{**EntiTables**} & \multicolumn{2}{c}{**WebTables**} \\ & MRR & NDCG & MRR & NDCG \\ \hline TABBIE & 10.62 & 11.56 & 24.79 & 26.17 \\ BART & 21.25 & 22.48 & **37.06** & **39.19** \\ TMatch & 30.54 & 32.23 & - & - \\ RAG & 18.65 & 19.71 & 34.80 & 36.34 \\ RATA & **34.32** & **36.25** & 33.58 & 35.33 \\ & \(\pm\)2.80 & \(\pm\)2.82 & \(\pm\)1.60 & \(\pm\)1.61 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test results for cell filling.
\begin{table}
\begin{tabular}{l|r r r r} \hline \hline & \multicolumn{2}{c}{**EntiTables**} & \multicolumn{2}{c}{**WebTables**} \\ & MRR & NDCG & MRR & NDCG \\ \hline TaBERT* & 56.0 & 46.4 & - & - \\ TABBIE* & 57.2 & 47.1 & - & - \\ TABBIE\(\dagger\) & 25.18 & 15.2 & 12.44 & 11.93 \\ BART & 45.30 & 32.76 & 29.25 & 19.30 \\ RAG & 56.95 & 43.48 & 33.20 & 22.23 \\ RATA & **77.15** & **60.34** & **45.13** & **26.70** \\ & \(\pm\)2.32 & \(\pm\)2.18 & \(\pm\)1.10 & \(\pm\)0.73 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test results for row population, \(n_{seed}=2\).
* As reported in Iida et al. (2021) \(\dagger\)Our results
\begin{table}
\begin{tabular}{l|r r r r} \hline \hline & \multicolumn{2}{c}{**EntiTables**} & \multicolumn{2}{c}{**WebTables**} \\ & MRR & NDCG & MRR & NDCG \\ \hline TaBERT* & 60.1 & 54.7 & - & - \\ TABBIE* & 62.8 & 55.8 & - & - \\ TABBIE\(\dagger\) & 63.9 & 55.8 & 84.1 & 78.96 \\ BART & 73.36 & 65.37 & 87.40 & 85.05 \\ RAG & 78.64 & 72.81 & 89.39 & 87.58 \\ RATA & **88.12** & **81.01** & **94.07** & **89.94** \\ & \(\pm\)1.91 & \(\pm\)1.97 & \(\pm\)0.44 & \(\pm\)0.49 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test results for column population, \(n_{seed}=2\).
* As reported in Iida et al. (2021) \(\dagger\)Our results
forms RATA. We notice that BART can achieve high scores by either copying values from other rows (as in Figure 5 and Figure 5(a)), or producing values similar with in other rows (as in Figure 5(b) and Figure 5(c)). As shown in the examples, this strategy is able to achieve good performance.
Effect of RetrievalTo analyze the effectiveness of the DTR component, we report answer bearing MRR in Table 5. We notice that DTR is well trained after the initial retrieval training phase and achieves higher answer bearing MRR compared to BM25. End-to-end training provides meaningful supervision for retrieval and further improves MRR on most tasks. By comparing Table 2, 3, 4 with Table 5, we notice that the final task MRR is close to answer bearing MRR. When the correct answer is present in the retrieved table, the reader can select the correct answer at high accuracy. This indicates that the bottleneck of our system is retrieval.
Number of Retrieved Table-PartsRATA was trained with 5 retrieved table-parts for all tasks. This relatively small number for the retrieval size provides good efficiency during training, since train time scales roughly linearly with the number of query / table-part pairs that must be processed by the reader transformer component. But during inference, we are able to adjust the number of retrieved table-parts more freely. Figure 4 shows that table augmentation performance monotonically increases as more evidence is retrieved for row population and cell filling, but column population performance does not improve past 5.
## 7 Conclusion
Our retrieval-based transformer architecture for table augmentation, RATA, is able to greatly advance the state-of-the-art in three table augmentation tasks: row population, column population, and cell filling. The non-parametric knowledge in the table corpus is able to substantially enhance the table augmentation capabilities. Furthermore, by training an effective table-to-table retrieval model we are able to provide provenance for the system's proposed augmentations. We also introduce a new benchmark dataset for table augmentation: WebTables and evaluate our model and two recent transformer baselines. Our code for RATA and the newly introduced dataset are available as open source4.
Figure 4: MRR gain as number of retrieved table-parts increases on the EntiTables dataset
Figure 3: RATA example output on EntiTables dataset. The output answer is correct, and the retrieved table provides sufficient evidence for the answer.
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Row Population**} & \multicolumn{2}{c|}{**Column Population**} & \multicolumn{2}{c}{**Cell Filling**} \\ & EntiTables & WebTables & EntiTables & WebTables & EntiTables & WebTables \\ \hline BM25 & 54.44\(\pm\)2.72 & 41.16\(\pm\)1.06 & 62.93\(\pm\)2.73 & 84.17\(\pm\)0.65 & 28.98\(\pm\)2.59 & 38.48\(\pm\)1.62 \\ DTR (initial) & 74.34\(\pm\)2.39 & 47.88\(\pm\)1.10 & **90.07\(\pm\)**1.79 & **94.91\(\pm\)**0.41 & **34.78\(\pm\)**2.72 & **40.80\(\pm\)**1.64 \\ DTR (post-RATA) & **80.98\(\pm\)**2.17 & **49.62\(\pm\)**1.11 & **90.97\(\pm\)**1.72 & **94.94\(\pm\)**0.41 & **37.48\(\pm\)**2.81 & **40.26\(\pm\)**1.66 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Retrieval answer-bearing MRR (%).
### Limitations
A limitation of RATA is always assuming the answer is included in the retrieval corpus, which is not always true. When the corpus does not contain the correct answer, the desired behavior is to inform the user that the answer cannot be obtained, but RATA will provide a poorly supported answer. This also encourages RATA to learn spurious correlations when the retrieved tables coincidentally contain the same value, but does not really support the answer. This problem is especially serious when the answer is very generic (for example, numbers like "0") and same values by coincidence are common. This is related to the answerable question issue Rajpurkar et al. (2018) or evidentiality issue Lee et al. (2021); Asai et al. (2022) for question answering.
For cell-filling on WebTables, BART outperforms RATA often by either copying values from other rows of the query table or producing values similar to those in other rows. However, as shown in Figure 5, RATA's retrieval is often not helpful. Usually, the information required to fill the query table is not repeated in the corpus, so the retrieved table cannot support the query. As a result, RATA is simply retrieving some similar table, and selecting similar values in the tables.
|
2304.07858 | Cold-Start based Multi-Scenario Ranking Model for Click-Through Rate
Prediction | Online travel platforms (OTPs), e.g., Ctrip.com or Fliggy.com, can
effectively provide travel-related products or services to users. In this
paper, we focus on the multi-scenario click-through rate (CTR) prediction,
i.e., training a unified model to serve all scenarios. Existing multi-scenario
based CTR methods struggle in the context of OTP setting due to the ignorance
of the cold-start users who have very limited data. To fill this gap, we
propose a novel method named Cold-Start based Multi-scenario Network (CSMN).
Specifically, it consists of two basic components including: 1) User Interest
Projection Network (UIPN), which firstly purifies users' behaviors by
eliminating the scenario-irrelevant information in behaviors with respect to
the visiting scenario, followed by obtaining users' scenario-specific interests
by summarizing the purified behaviors with respect to the target item via an
attention mechanism; and 2) User Representation Memory Network (URMN), which
benefits cold-start users from users with rich behaviors through a memory read
and write mechanism. CSMN seamlessly integrates both components in an
end-to-end learning framework. Extensive experiments on real-world offline
dataset and online A/B test demonstrate the superiority of CSMN over
state-of-the-art methods. | Peilin Chen, Hong Wen, Jing Zhang, Fuyu Lv, Zhao Li, Qijie Shen, Wanjie Tao, Ying Zhou, Chao Zhang | 2023-04-16T18:45:20Z | http://arxiv.org/abs/2304.07858v1 | # Cold-Start based Multi-Scenario Ranking Model for Click-Through Rate Prediction
###### Abstract
Online travel platforms (OTPs), _e.g._, Ctrip.com or Fliggy.com, can effectively provide travel-related products or services to users. In this paper, we focus on the multi-scenario click-through rate (CTR) prediction, _i.e._, training a unified model to serve all scenarios. Existing multi-scenario based CTR methods struggle in the context of OTP setting due to the ignorance of the cold-start users who have very limited data. To fill this gap, we propose a novel method named Cold-Start based Multi-scenario Network (CSMN). Specifically, it consists of two basic components including: 1) User Interest Projection Network (UIPN), which firstly purifies users' behaviors by eliminating the scenario-irrelevant information in behaviors with respect to the visiting scenario, followed by obtaining users' scenario-specific interests by summarizing the purified behaviors with respect to the target item via an attention mechanism; and 2) User Representation Memory Network (URMN), which benefits cold-start users from users with rich behaviors through a memory read and write mechanism. CSMN seamlessly integrates both components in an end-to-end learning framework. Extensive experiments on real-world offline dataset and online A/B test demonstrate the superiority of CSMN over state-of-the-art methods.
Keywords:Click-Through Rate Prediction Multi-Scenario Cold-Start Recommendation User's Scenario-Specific Interest.
## 1 Introduction
With large-scale travel-related products and services available on the Online Travel Platforms (OTPs), _e.g._, Ctrip.com or Fliggy.com, Click-Through Rate (CTR) prediction [30, 14, 27], which aims at predicting the probability of users clicking items, has been playing an increasing role for delivering high-quality recommendation results and boosting the final platform revenues. Nowadays,
the majority of CTR models are mainly built for single-scenario problems, \(i.e.\), providing online service exactly for a scenario after trained only with data from the scenario. Here, a scenario refers to a specific spot where items are displayed, \(e.g.\), _Guess You Like_ module in our app homepage. However, in many industrial applications of travelling recommendation, a user may be engaged with multiple travel scenarios, \(e.g.\), _Hot Spring_, _Skiing_, \(etc.\), where each scenario has its corresponding relevant candidate items to display, such as _ski spots nearby_ could be displayed to users who are visiting the _Skiing_ scenarios. In addition, the users in different scenarios also have different travel intentions such as _leisure_ for _Hot Spring_, and _adventure_ for _Skiing_. Consequently, multi-scenario CTR prediction, which is of great practical significance but has been largely under-explored, deserves more research efforts from academia and industry.
A straightforward strategy is to build an individual model for each business scenario using the single-scenario CTR prediction methods, \(i.e.\), DIN [30], DIEN [29], DIHN [17]. However, it has two apparent shortcomings: 1) training models for small-scale scenarios may suffer from severe data sparsity problem, and 2) maintaining different models for all business scenarios will be costly, which motivates us to devise a unified CTR prediction model to serve multiple scenarios simultaneously. Another possible strategy is to employ Multi-Task Learning (MTL) methods [2, 13], \(i.e.\), one task for the corresponding scenario. However, we argue that MTL methods have significant difference from multi-scenario methods, where MTL methods simultaneously address various types of tasks in the same scenario, such as jointly predicting the CTR and conversion rate (CVR) tasks [24, 26, 25], while multi-scenario methods always focus on the same task, \(i.e.\), CTR task, across multiple scenarios. What's more, it is very necessary for multi-scenario CTR methods to capture the scenario-shared information of various scenarios and explore scenario-specific characteristics simultaneously, where scenario-shared information means the overlapping users and candidate items among multiple scenarios, and scenario-specific characteristics indicates that users' interests with respect to different scenarios would be significantly different due to the data scale or topic-specific preferences among different scenarios. For facilitating following narration, we regard scenario-shared information and scenario-specific characteristics across various scenarios as the _commonality_ and _discrimination_ property, respectively. In fact, how to devise a unified and elaborate CTR prediction model for multiple scenarios is very challenging, especially exploiting the _commonality_ and _discrimination_ properties simultaneously.
To achieve this goal, several representative works towards the multi-scenario CTR prediction have been proposed. For example, STAR [19] trains a single model to serve all scenarios simultaneously by leveraging data from all scenarios, capturing users' interests effectively by employing shared centered parameters and scenario-specific parameters to exploit the _commonality_ and _discrimination_ property, respectively. SAR-Net [16] learns users' scenario-specific interests by harnessing the abundant data from different scenarios via two specific attention modules, leveraging the scenario features and item features to modulate users'
behaviors for exploring the _discrimination_ property effectively. Meanwhile, it utilizes the bottom-shared embedding parameters to exploit the _commonality_ property. DADNN [7] devises a unified model to serve multiple scenarios, where shared bottom block among all scenarios is employed to exploit the _commonality_ property, while scenario-specific head captures the characteristics of every scenario, \(i.e.\), exploiting the _discrimination_ property. Although aforementioned methods have achieved remarkable performance for multi-scenario CTR prediction task with the consideration of the _commonality_ and _discrimination_ properties, they all neglect the cold-start issue which are high-frequently encountered in the OTPs setting. In practice, users' behaviors on OTPs are quite sparse or even absent compared with other e-commerce platforms since travel is a low-frequency demand, resulting in the cold-start issue and making it ineffective to learn the cold-start users' personalized preferences. How to tackle the cold-start issue in the context of multiple scenarios, especially considering the _commonality_ and _discrimination_ properties, has been unexplored and remains challenging.
To fully tackle the cold-start issue while exploiting the _commonality_ and _discrimination_ properties, in this paper, we propose a novel method named Cold-Start based Multi-scenario Network (CSMN). It consists of two fundamental components including a User Interest Projection Network (UIPN) and a User Representation Memory Network (URMN). Specifically, UIPN firstly purifies users' behavior representations by eliminating the scenario-irrelevant information in behaviors with respect to the visiting scenario, then summarizes the purified behaviors with respect to the target item via an attention mechanism. In other words, even though a user has same and fixed behaviors, each behavior will obtain varied yet purified representation across different scenarios, followed by extracting user scenario-specific interest via attention mechanism, thus exploiting the _discrimination_ property. In addition, the resultant representation from UIPN component would be delivered to the URMN component, further being as a supplement cue to infer the interests of cold-start users. Specifically, URMN can make users with sparse behaviors benefit from users with rich behaviors through a memory read and write mechanism, where each slot in the memory can be regarded as a cluster of users who share similar profiles given target item at specific scenario, resulting in cold-start users can absorb well-purified interest representations from users with similar profiles yet rich behaviors, mitigating the cold-start issue effectively. Meanwhile, CSMN utilizes shared bottom block among all scenarios to address the _commonality_ property. The contributions of this paper is three-fold:
* We propose a novel method named Cold-Start based Multi-scenario Network (CSMN) for multi-scenario CTR prediction task, which facilitates learning users' distinct interest representations across multiple scenarios.
* We devise two key components, including URMN and UIPN, to jointly address the aforementioned cold-start issue in the multi-scenario setting in an end-to-end manner, where the _commonality_ and _discrimination_ properties are effectively exploited for multi-scenario modelling.
* We conduct extensive experiments on both real-world offline dataset and online A/B test. The results demonstrate the superiority of the proposed CSMN over representative methods. CSMN now serves millions of users in our online travel platform, achieving 3.85% CTR improvement.
## 2 Related Work
**CTR Prediction:** Recently, academia and industry communities have paid a lot of attention on CTR prediction not only from the perspective of feature interactions, \(e.g.\), DCN [23], NCF [8], DeepFM [27], AutoInt [20], but also from the perspective of users' sequential modelling, \(e.g.\), DIN [30], DIEN [29], MIMN [14], DIHN [17]. Apart from these methods for single-scenario CTR prediction, multi-scenario CTR predictions also have drawn increasing attention, \(i.e.\), training a unified model to serve all scenarios simultaneously. For example, STAR [19] employs shared centered parameters and scenario-specific parameters to learn users' interest representation across multiple scenarios. SAR-Net [16] learns users' scenario-specific interests by harnessing the abundant data from different scenarios via specific attention modules. However, both methods ignore the cold-start issue in the OTPs setting, thereby struggling in effectively discovering cold users' real interests across multiple scenarios. By contrast, our proposed CSMN model leverages a User Representation Memory Network to interpret the interest of users with sparse behaviors from those of users with rich behaviors.
**Cold Start Recommendation:** Cold start issue has been widely recognized in representative recommender systems. Typically, there are three kinds of solutions to address it including: 1) resorting to more generalized auxiliary and contextual information [1, 12, 4]; 2) cross-domain transfer (CDT) [28, 9], \(i.e.\), users may have interactions with items in one domain while not in the other relevant domain. The goal of CDT is to effectively infer cold-start users' preferences based on their interactions from one domain to the other relevant domain; and 3) meta-learning approaches [11, 5], which argue that users with similar inherent profiles should be recommended with similar items by leveraging users' few behaviors. Despite effective, the SOTA methods for cold-start issue still struggle in the OTPs setting, since they do not address the cold-start issue in the context of multi-scenario CTR prediction. By contrast, our proposed CSMN can benefit users with sparse or even absent behaviors from users with rich behaviors through external memory mechanism, where each slot can be regarded as a cluster of users who share similar profiles, resulting in cold-start users can obtain interest representations from users with similar profiles yet rich behaviors.
**Multi-Task Learning:** Multi-Task Learning (MTL) [15, 13, 21, 2] has been widely used in recommender systems, which benefits from the multi-objective optimization. For example, MMoE [13] extends the efficient Mixture-of-Experts (MoE) shared-bottom structure to exploit a light-weight gating network to model the relationship of various tasks, which has been demonstrated to handle the task-specific information in a highly efficient manner. Going one step further, to
address the seesaw phenomenon, PLE [21] adopts a progressive routing mechanism to gradually extract and separate deeper semantic knowledge. In the context of the multi-scenario prediction task, it makes prediction for multiple scenarios towards the same task, \(i.e.\), the CTR task, where the label spaces are same. Although we can build individual network for corresponding scenario on top of a shared-bottom structure, followed by employing classical MTL approaches for multi-objective optimization. However, the consistency and discrepancy of various scenarios are coupled with each other tightly, resulting in the sophisticated relationships of multiple scenarios are difficult to disentangle. By contrast, we propose a User Interest Projection Network to disentangle scenario-specific interests from users' historical behaviors.
## 3 The Proposed Approach
In this paper, we propose a novel model named Cold-Start based Multi-scenario Network (CSMN) for multi-scenario CTR prediction. As depicted in Fig. 1, it consists of three basic components including Embedding Layer, User Interest Projection Network (UIPN), and User Representation Memory Network (URMN). We will introduce them in detail.
Figure 1: The overview architecture of the proposed CSMN model, which consists of Embedding Layer, User Representation Memory Network (URMN), and User Interest Projection Network (UIPN). Symbol \(SK_{uis}\), \(A_{uis}\), \(R_{uis}\) denotes the comprehensive representation of the users’ profiles, target item and current visiting scenario, user’s augmented interest for the target item at specific scenario, users’ purified scenario-specific interests by leveraging the target items and current visiting scenarios, respectively.
### Problem Definition
In this section, we formally define the problem of multi-scenario CTR prediction task. Let \(\mathcal{U}=\{u_{1},u_{2},...,u_{N}\}\), \(\mathcal{I}=\{i_{1},i_{2},...,i_{M}\}\), \(\mathcal{S}=\{s_{1},s_{2},...,s_{K}\}\), \(\mathcal{C}=\{c_{1},c_{2},...,c_{L}\}\) be a set of \(N\) users, a set of \(M\) items, a set of \(K\) scenarios, a set of \(L\) contexts, respectively. For facilitating the following narration, we omit the subscript and use symbols \(u\), \(i\), \(s\), \(c\) to denote user \(u_{n}\), item \(i_{m}\), scenario \(s_{k}\), context \(c_{l}\), respectively. And the user-item interaction at specific scenario is typically formulated as a matrix \(Y=\{y_{uis}\}_{N\times M\times K}\). Specifically, \(y_{uis}=1\) means user \(u\) has clicked item \(i\) at the scenario \(s\), otherwise \(y_{uis}=0\). In this paper, we mainly employ five types of input features namely _User Profiles_\(u^{P}\), _User Behaviors_\(u^{B}\), _Target item_\(i\), _Context_\(c\) and _scenario_\(s\) for each sample, where \(u^{P}\) contains _age_, _sex_, _purchase power_, _etc._, \(u^{B}\) denotes the sequential list of users visiting the set of items, \(i\) contains _item ID_, _item's category ID_, _etc._, \(c\) contains _weather_, _time_, _etc._, \(s\) contains _scenario ID_, _scenario's accumulated CTR_, _etc._, and \(u\) can be defined as \(\left\{u^{B};u^{P}\right\}\). Now, our learning goal is to train a unified CTR model to predict the probability \(\hat{y}_{uis}\) of user \(u\) clicks the target item \(i\) at scenario \(s\) given \(u\), \(i\), \(s\), \(c\), formulated as: \(\hat{y}_{uis}=\mathcal{F}(u,i,s,c;\theta)\), where \(\mathcal{F}\) and \(\theta\) denote the learning objective and model parameters for the multi-scenario CTR prediction task, respectively.
### Embedding Layer
Most data from industrial recommender systems are presented in a multi-field manner, where the fine-grained feature in each field is normally transformed into high-dimensional sparse one-hot features. For example, the one-hot vector representation of _male_ from the _user sex_ field can be decoded as \([1,0]^{T}\). Without loss of generality, we divide the raw data into five groups: target item \(i\), user' historical behaviors \(u^{B}\), user-specific profiles \(u^{P}\), scenario information \(s\) and context information \(c\). Assuming the concatenation results of different fields' one-hot vectors from these five groups as \(X_{i}\), \(X_{u^{B}}\), \(X_{u^{P}}\), \(X_{s}\) and \(X_{c}\), respectively, they can be further transformed into low dimensional dense representations by multiplying corresponding embeddings matrices, denoted as \(E_{i}\), \(E_{u^{B}}\), \(E_{u^{P}}\), \(E_{s}\) and \(E_{c}\), respectively, \(e.g.\), \(E_{u^{B}}=[e_{1};e_{2};...;e_{T}]\), where \(T\) and \(e_{t}\) represent the length of users' behaviors and the embedding feature of the \(t\)-th behavior, respectively. In this paper, we also employ bottom-shared embedding parameters to exploit the _commonality_ property. Since the bottom-shared embedding parameters make up the majority of the trainable parameters, they can be learned sufficiently by information sharing among overlapping users and candidate items, thereby avoiding the overfitting issue.
### User Interest Projection Network
Generally, users' interests can be effectively extracted from users' historical behaviors. For example, as an excellent representative method for users' interest extraction, DIN [30] firstly employs the attention mechanism to dynamically
compute the weight of users' historical behaviors with respect to different target items, followed by utilizing a weighted-sum pooling operation to adaptively generate users' interest representation. Despite effective, we figure out it is not directly suitable for the multi-scenario CTR task in the OTPs setting, where not only the target item but also the scenario-specific information can affect users' interests. For example, when a user comes into two scenarios with varied topics, \(e.g.\), _Hot Springs_ and _Skiing_ scenarios, user's interest can be significantly different since the user may be concerned about the _leisure_ in the _Hot Springs_ scenario while preferring _outdoor adventure_ in the _Skiing_ scenario.
Specifically, we argue that even though given the same target item and fixed user's behaviors, the representations of users' behaviors with respect to different scenarios will be inevitably varied. Therefore, a straightforward strategy towards the multi-scenario CTR prediction is to disentangle the scenario-specific characteristics from users' behaviors with respect to the current visiting scenario, which motivates us to propose the User Interest Projection Network (UIPN) module. Specifically, the representation of each element in \(E_{u^{B}}\) will be projected into the orthogonal space of the scenario embedding \(E_{s}\) to eliminate the scenario-irrelevant information. Formally, without loss of generality, we illustrate the orthogonal mapping process with a randomly selected element \(e_{i}\) from \(E_{u^{B}}\) and the dense feature \(E_{s}\) of scenario \(s\). Firstly, we embed \(e_{i}\), \(E_{s}\) into the same space by multiplying individual mapping matrix \(W_{o}\), \(W_{s}\), respectively, \(i.e.\), \(f_{i}=W_{o}e_{i}\), \(f_{s}=W_{s}E_{s}\). Then, the refined preference representation vector \(f_{i}^{p}\) with respect to scenario \(s\) can be obtained by projecting the vector \(f_{i}\) onto the direction of vector \(f_{s}\), defined as \(f_{i}^{p}=project(f_{i},f_{s})\), where \(project(.)\) denotes the scenario aware projection operator, \(i.e.\), \(project(a,b)=\frac{ab}{|b|}\frac{b}{|b|}\cdot|\cdot|\) denotes the norm of a vector. In this manner, the original \(E_{u^{B}}\) can be formulated as \(f_{u^{B}}=\{f_{1}^{p};f_{2}^{p};...;f_{T}^{p}\}\). Inspired by the Multi-Head Self-Attention (MHSA) mechanism [22], which can effectively capture the dependency between any pair within the sequence despite their distance, we use it to further enhance the representation of users' preference. Specifically, given \(f_{u^{B}}=\{f_{1}^{p};f_{2}^{p};...;f_{T}^{p}\}\), we can obtain the enhanced representation \(f_{u^{B}}^{{}^{\prime}}=\left\{f_{1}^{{}^{\prime}p};f_{2}^{{}^{\prime}p};...; f_{T}^{{}^{\prime}p}\right\}\) after applying MHSA on \(f_{u^{B}}\).
Next, to obtain users' purified interests from \(f_{u^{B}}^{{}^{\prime}}\), we need to calculate the similarity between the target item \(i\) and each element of \(f_{u^{B}}^{{}^{\prime}}\), which can be formulated as \(\alpha_{t}=Relu(z^{T}tanh(W_{i}E_{i}+W_{f}f_{t}^{{}^{\prime}p}+b))\), where, \(z\), \(W_{i}\), \(W_{f}\), and \(b\) are all learnable parameters. After normalization, \(i.e.\), \(\alpha_{t}=\frac{exp(\alpha_{t})}{\sum_{i=1}^{T}exp(\alpha_{i})}\), denoting the weight for the \(t\)-th behavior with respect to the target item \(i\). Therefore, the final user' purified interest representation \(R_{u}\) from UIPN can be calculated as \(R_{uis}=\sum_{i=1}^{T}\alpha_{i}f_{i}^{{}^{\prime}p}\) via the weighted-sum pooling operation. In this way, UIPN can effectively achieve the interests of users who have rich historical behaviors with respect to current visiting scenario and the target item within it, which can be further exploited as a supplement cue to infer the interests of cold-start users who have similar profiles with rich behavior users.
### User Representation Memory Network
In the OTP settings, users' behaviors are quite sparse compared with other typical e-commerce platforms, \(i.e.\), resulting in the cold-start issue and making it difficult to extract users' interests from their behaviors. However, we argue that users' interests not only can be reflected from their behaviors but also from their inherent profiles. In other word, users' behaviors can be regarded as the embodiment of their inherent profiles. For example, when providing online service for a user with _adventure spirit_, we can probably infer the user prefers _Skiing_ more than _Hot Springs_, even though the user has no any online behaviors before. Therefore, when users' historical behaviors are sparse or even absent, users' inherent profiles can act as a kind of supplementary cues to discover users' interests. However, how to effectively extract users' interests from their profiles within the unified framework is nontrivial. To this end, we propose a User Representation Memory Network (URMN).
Specifically, URMN customizes an augmented vector \(A_{uis}\) for each sample to represent user's augmented interest for the target item at specific scenario, followed by concatenating it with other representation features, together for model training. To obtain \(A_{uis}\), we borrow the idea from Neural Turing Machine [6] which can store information in a fixed size of external memory. Specifically, we firstly generate a specific key \(SK_{uis}\), which can be regarded as the comprehensive representation of the users' profiles, target item, and current visiting scenario. Then, we traverse all the slots of the external memory in URMN and generate each slot's weight with respect to the specific key \(SK_{uis}\). Finally, we achieve the augmented vector \(A_{uis}\) by a weighted-sum memory summarization. We will detail them as follows.
First, the specific key \(SK_{uis}\) defined as \(SK_{uis}=F(\sum_{j=1}^{P}w_{j}e_{p_{j}};E_{i};E_{s})\), where \(P\) denotes the number of user' profiles, \(F(.)\) represents three MLP layers with ReLU activation function, \(w_{j}\) is the weight of user's profile \(e_{p_{j}}\) with respect to the target item \(E_{i}\) via attention mechanism. Intuitively, similar keys \(SK_{uis}\) cluster together, implying that given specific scenario and target item, users having similar profiles probably share the similar interests, thus cold start users can benefit from users with similar profiles yet rich behaviors. From another perspective, all the learning parameters in \(F(.)\) are shared, \(e.g.\), shared MLP parameters, which also implies the representation of current key can be affected by the representations of other keys.
Next, we detail the structure of the memory in URMN with its parameters denoted as \(Mem\). It consists of \(q\) memory slots \(\left\{Mem_{i}\right\}|_{i=1}^{q}\) with each slot containing corresponding key \(Mem_{i}^{Key}\) and value \(Mem_{i}^{Value}\), \(i.e.\), \(Mem_{i}\triangleq\left\{Mem_{i}^{Key},Mem_{i}^{Value}\right\}\). Each slot can be regarded as a cluster, where \(Mem_{i}^{Key}\) (resp. \(Mem_{i}^{Value}\)) is updated by itself and \(SK_{uis}\) (resp. \(R_{uis}\)). Moreover, \(R_{uis}\) from the resultant representation of UIPN component denotes users' purified interests, depicted in Fig. 1. Specifically, two basic operations of URMN are _Memory Read_ and _Memory Write_, which interact with memory through a controller.
**Memory Read**: During the _Memory Read_ process, the controller generates a read key \(SK_{uis}\) as mentioned above to access the memory. Formally, it can be formulated as follows: \(w_{uis}^{j}=\frac{exp(F_{xy}(SK_{uis},Mem_{j}^{Key}))}{\sum_{j=1}^{q}exp(F_{xy}(SK _{uis},Mem_{j}^{Key}))},j=1,...,q\), where \(F_{xy}(x,y)=\frac{x^{T}y}{\|x\|\|y\|}\), \(w_{uis}^{j}\) is the weight of \(SK_{uis}\) with respect to the key of slot \(j\). Then, we obtain user's augmented interest vector \(A_{uis}\) by weighted-sum pooling, defined as: \(A_{uis}=\sum_{j=1}^{q}w_{uis}^{j}Mem_{j}^{Value}\).
**Memory Write**: Using the same weight \(w_{uis}^{j}\), the update process of memory key and value is defined as follows:
\[Mem_{j}^{Key}=\alpha_{k}w_{uis}^{j}SK_{uis}+(1-\alpha_{k})Mem_{j}^{Key}, \tag{1}\]
\[Mem_{j}^{Value}=\alpha_{v}w_{uis}^{j}R_{uis}+(1-\alpha_{v})Mem_{j}^{Value}, \tag{2}\]
where \(\alpha_{k}\) and \(\alpha_{v}\) are the hyper-parameters valued in \([0,1]\), controlling the update rate of each memory slot's key and value, respectively. \(Mem_{j}^{Key}\) and \(Mem_{j}^{Value}\) are randomly initialized. In this way, URMN can distribute the interest of users with rich behaviors to the users with sparse or absent behaviors in the same cluster, alleviating the cold-start issue effectively.
Finally, all the representation vectors including \(E_{i}\), \(E_{u^{P}}\), \(E_{s}\), \(E_{c}\), \(A_{uis}\) and \(R_{uis}\) are concatenated, followed by feeding them to multiple MLP layers to generate the final predicted probability \(\hat{y}_{uis}\). Now, given the predicted \(\hat{y}_{uis}\) and the ground truth \(y_{uis}\in\{0,1\}\), we define the objective function as the negative log-likelihood function, formulated as:
\[Loss=-\frac{1}{Num}\sum(y_{uis}log\hat{y}_{uis}+(1-y_{uis})log(1-\hat{y}_{uis})), \tag{3}\]
where, \(Loss\) is the total loss and \(Num\) denotes the number of training data collected from all the scenarios.
## 4 Experiments
In this section, we conduct extensive offline and online experiments to comprehensively evaluate the effectiveness of the proposed CSMN, and try to answer following questions:
* **Q1**: How about the overall performance of the proposed CSMN compared with state-of-art methods?
* **Q2**: How about the impact of each component of the proposed CSMN?
* **Q3**: How about the influence of key hyper-parameters, \(e.g.\), the number of URMN slots?
* **Q4**: How about the online performance of the proposed CSMN compared with other methods?
### Experiments Settings
#### 4.1.1 Dataset Description
To the extent of our knowledge, there are no public datasets suited for the multi-scenario CTR prediction task in the OTP settings. We make the offline dataset by collecting users' traffic logs from our OTP platform, which contain 29 million users and 0.61 million travel items from 20 scenarios in consecutive 30 days, \(i.e.\), from 2022-05-22 to 2022-06-20. They are further divided into the disjoint training set and testing set, where the training set is from 2022-05-22 to 2022-06-19, while the testing set is from the left days. The statistics of this offline dataset are listed in Table 1, where _CSU Ratio_ representing cold-start users' ratio. We can find that the data scales and distributions among these travel scenarios are significantly different. In addition, we observe that over 28% (resp. 40%) of users do not have any behaviors in recent 180 (resp. 90) days from the training data, which indeed implies the cold-start issue.
#### 4.1.2 Competitors
To verify the effectiveness of the proposed CSMN, we compare it with following methods:
* **WDL**[3]: It consists of wide linear and deep neural parts, which combines the benefits of memorization and generalization for CTR prediction.
* **DeepFM**[7]: It imposes a factorization machine as a "wide" part in WDL to eliminate feature engineering.
* **DIN**[30]: It extracts users' dynamic interest from their historical behavior via attention mechanism.
* **MMOE**[13]: It models the relationship among different tasks by employing gating networks and multi-task learning framework. We also adapt MMoE for multi-scenario task by assigning each output for corresponding scenario.
* **PLE**[21]: It contains shared components and task-specific components, and adopts a progressive routing mechanism to extract and separate deeper knowledge, enabling the efficiency of representation across multiple tasks.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Scenario & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7 & \#8 & \#9 & \#10 \\ \hline \#User & 2.3M & 2.1M & 8.2M & 0.6M & 3.4M & 0.4M & 0.3M & 0.3M & 2.5M & 0.2M \\ \#Item & 53K & 37K & 122K & 23K & 37K & 51K & 33K & 18K & 86K & 63K \\ CTR & 2.57\% & 1.22\% & 1.64\% & 5.54\% & 8.10\% & 1.27\% & 6.75\% & 2.26\% & 11.61\% & 6.51\% \\ CSU Ratio & 27.65\% & 23.92\% & 11.49\% & 18.66\% & 16.49\% & 31.28\% & 14.54\% & 22.93\% & 28.73\% & 38.52\% \\ \hline \hline Scenario & \#11 & \#12 & \#13 & \#14 & \#15 & \#16 & \#17 & \#18 & \#19 & \#20 \\ \hline \#User & 1.1M & 0.6M & 0.6M & 1.7M & 9.1M & 0.17M & 0.16M & 0.14M & 0.13M & 0.11M \\ \#Item & 135K & 41K & 57K & 27K & 11K & 5K & 66K & 23K & 31K & 16K \\ CTR & 18.70\% & 9.87\% & 4.05\% & 14.38\% & 5.80\% & 1.72\% & 8.02\% & 4.23\% & 3.03\% & 1.62\% \\ CSU Ratio & 32.69\% & 16.77\% & 28.46\% & 19.55\% & 27.44\% & 26.22\% & 25.37\% & 10.45\% & 30.07\% & 34.76\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: The statistics of the offline dataset.
* **STAR**[19]: It trains a unified model to serve all scenarios simultaneously, containing shared centered parameters and scenario-specific parameters.
* **SAR-Net**[16]: It predicts users' scenario-specific interests from scenario/target item features and adaptively extracts scenario-specific information across multiple scenarios.
#### 4.1.3 Metrics and Implementation Details
To comprehensively evaluate the performance of different methods, we adopt two widely used metrics in recommender systems, \(i.e.\), Area Under Curve (AUC) [30, 18] and Relative Improvement (RI) [16], where, the larger AUC means better ranking performance, and RI provides an intuitive comparison measure by calculating the relative improvement of a target model over the baseline model. In addition, the proposed CSMN and other competitors are implemented by distributed Tensorflow 1.4, where learning rate, mini-batch, and optimizer, are set as 0.001, 1024, Adam [10], respectively. In addition, there are 4 layers in the MLP. Logistic loss is used as the loss function for all the competitors, as summarized in Table 2.
### Experimental Results (Q1)
In this subsection, we report the AUC results of all the competitors on the offline test set. As illustrated in Table 3, the consistent improvement of the proposed CSMN over other competitors validates its effectiveness. It achieves the best AUC results in each single scenario. Note that compared with WDL, DeepFM, DIN, MMOE and PLE, the multi-scenario CTR methods, \(e.g.\), STAR, SAR-Net and the proposed CSMN, consistently achieves better performance, demonstrating that ignoring the scenario difference during the extraction of users' scenario-specific interests will seriously degenerate the performance of multi-scenario CTR prediction models. Nevertheless, SAR-Net still struggles in extracting users' real interests across multiple different scenarios, since it cannot address the cold-start issue in the OTPs setting. By contrast, our CSMN leverages the URMN to specifically mitigate the adverse effect of them, respectively. Consequently, it achieves an improvement of 2.41% RI over SAR-Net. Moreover, STAR also neglects the cold-start problem in the OTPs setting. Therefore, it has worse performance
\begin{table}
\begin{tabular}{c|c} \hline Hyper-parameters & Choice \\ \hline Loss function & Logistic Loss \\ Optimizer & Adam \\ Number of layers in MLP & 4 \\ Dimensions of layers in MLP & [512, 256, 128, 32] \\ Batch size & 1024 \\ Learning rate & 0.001 \\ Dropout ratio & 0.5 \\ \hline \end{tabular}
\end{table}
Table 2: Hyper-parameters of all competitors.
than our CSMN, especially for users with sparse (or even absent) behaviors. For example, for scenario #10 and scenario #20, which have very large portion of cold-start users, \(i.e.\), over 38.52% and 34.76% respectively, depicted in Table 1, CSMN achieves a larger AUC improvement of 3.11% and 2.93% over SAR-Net,respectively. These results demonstrate the effectiveness of the proposed URMN in dealing with the cold-start issue, which clusters users according to the representation of their profiles and obtains the interest of users with sparse behaviors from neighboring users with rich behaviors.
### Ablation Study (Q2)
To investigate the effectiveness of each component in the proposed CSMN, we conduct several ablation experiments.
#### 4.3.1 The effectiveness of UIPN
UIPN is devised to extract users' scenario-specific interests from their historical behaviors with respect to the target item and the visiting scenario simultaneously. Here, we devise two variant models including:
* **CSMN w/o UIPN + T attention**: It removes UIPN while employing attention mechanism to extract users' interests from their behaviors only with respect to the Target item (T), ignoring the scenario information.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline scenario & WDL & DeepFM & DIN & MMOE & PLE & STAR & SAR-Net & **CSMN** & RI \\ \hline \#1 & 0.6317 & 0.6342 & 0.6368 & 0.6504 & 0.6506 & 0.6513 & 0.6527 & **0.6546** & 1.24\% \\ \#2 & 0.6695 & 0.6710 & 0.6718 & 0.6897 & 0.6901 & 0.6925 & 0.6933 & **0.6959** & 1.35\% \\ \#3 & 0.7186 & 0.7228 & 0.7231 & 0.7306 & 0.7307 & 0.7328 & 0.7321 & **0.7331** & 0.13\% \\ \#4 & 0.6500 & 0.6511 & 0.6559 & 0.6678 & 0.6681 & 0.6709 & 0.6714 & **0.6738** & 1.40\% \\ \#5 & 0.6822 & 0.6861 & 0.6882 & 0.7018 & 0.7016 & 0.7057 & 0.7063 & **0.7079** & 0.78\% \\ \#6 & 0.6257 & 0.6304 & 0.6322 & 0.6482 & 0.6485 & 0.6501 & 0.6496 & **0.6541** & 2.66\% \\ \#7 & 0.6388 & 0.6419 & 0.6447 & 0.6501 & 0.6522 & 0.6593 & 0.6602 & **0.6609** & 0.44\% \\ \#8 & 0.6535 & 0.6533 & 0.6575 & 0.6624 & 0.6626 & 0.6631 & 0.6628 & **0.6654** & 1.41\% \\ \#9 & 0.7032 & 0.7037 & 0.7058 & 0.7126 & 0.7131 & 0.7138 & 0.7165 & **0.7186** & 0.97\% \\ \#10 & 0.6867 & 0.6921 & 0.6913 & 0.7003 & 0.7001 & 0.7012 & 0.7024 & **0.7087** & 3.11\% \\ \#11 & 0.6701 & 0.6757 & 0.6772 & 0.6795 & 0.6802 & 0.6815 & 0.6822 & **0.6864** & 2.31\% \\ \#12 & 0.7025 & 0.7042 & 0.7016 & 0.7163 & 0.7169 & 0.7176 & 0.7189 & **0.7205** & 0.73\% \\ \#13 & 0.7258 & 0.7291 & 0.7294 & 0.7431 & 0.7438 & 0.7460 & 0.7468 & **0.7504** & 1.46\% \\ \#14 & 0.6662 & 0.6708 & 0.6720 & 0.6789 & 0.6800 & 0.6813 & 0.6862 & **0.6884** & 1.18\% \\ \#15 & 0.7135 & 0.7138 & 0.7131 & 0.7319 & 0.7323 & 0.7342 & 0.7347 & **0.7399** & 2.22\% \\ \#16 & 0.6411 & 0.6532 & 0.6560 & 0.6698 & 0.7005 & 0.7019 & 0.7025 & **0.7064** & 1.93\% \\ \#17 & 0.6739 & 0.6728 & 0.6775 & 0.6947 & 0.6953 & 0.6981 & 0.6998 & **0.7027** & 1.45\% \\ \#18 & 0.6242 & 0.6280 & 0.6300 & 0.6394 & 0.6395 & 0.6412 & 0.6417 & **0.6443** & 1.83\% \\ \#19 & 0.6193 & 0.6236 & 0.6219 & 0.6465 & 0.6462 & 0.6498 & 0.6505 & **0.6542** & 2.46\% \\ \#20 & 0.6287 & 0.6271 & 0.6346 & 0.6437 & 0.6434 & 0.6487 & 0.6503 & **0.6547** & 2.93\% \\ \hline Overall & 0.6729 & 0.6763 & 0.6788 & 0.6864 & 0.6883 & 0.6922 & 0.6954 & **0.7001** & 2.41\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: The results of all methods on the offline dataset.
* **CSMN w/o UIPN + TS attention**: It removes UIPN while leveraging two specific attention modules to re-weigh users' historical behaviors with respect to Target item and the visiting Scenario (TS), respectively, followed by summarizing users' behaviors by weighted-sum to obtain users' interests.
As shown in Table 4, CSMN achieves the best performance compared with the other two variants. For example, compared with CSMN, **CSMN w/o UIPN + T attention** observes a performance drop of 2.51% RI, which demonstrates the importance of extraction of users' interests with respect to different scenarios. **CSMN w/o UIPN + TS attention** observes a performance drop of 1.11% RI, which demonstrates the effectiveness of eliminating the scenario-irrelevant information in users' behaviors with respect to the visiting scenario to further refine the representations of users' scenario-specific interests.
#### 4.3.2 The effectiveness of URMN
To demonstrate the effectiveness of URMN, we remove it from the model, resulting in a variant model named **CSMN w/o URMN**. As shown in Table 5, **CSMN w/o URMN** observes a performance drop of 1.63% RI compared with CSMN, which demonstrates CSMN can effectively alleviate the cold-start issue attributing to URMN. Furthermore, we argue that the more sparse users' behaviors are, the greater relative improvement CSMN achieves. As shown in Table 3, CSMN gets the largest RI improvement over other competitors in scenario #10 and scenario #20, which have the largest portion of cold-start users.
### Parameter Sensitivity (Q3)
To further understand the adverse effect of the cold-start issue in OTPs, we investigate the influence of the key hyper-parameters related to the issue, \(i.e.\), the number of memory slots \(q\), \(\alpha_{k}\) (resp. \(\alpha_{v}\)) controlling the update rate of each memory slot's key (resp. value). First, as shown in Table 6, CSMN achieves the best performance when \(q\) takes the value of 1,000. Intuitively, the smaller \(q\) is, the fewer the numbers of formed users' clusters are, and vice versa. Taking two extreme cases as example, one is \(q=1\), where all the users have the same augmented interest vector, resulting in the difficulty of distinguishing different users' interests, especially for those with sparse behaviors. The other is an extremely large \(q\), \(i.e.\), \(q=10,000\), where each slot of the memory will be too spare to
\begin{table}
\begin{tabular}{c|c c} \hline Model & AUC & RI \\ \hline CSMN & **0.7001** & 0.00 \\ CSMN w/o UIPN + T attention & 0.6952 & -2.51\% \\ CSMN w/o UIPN + TS attention & 0.6979 & -1.11 \% \\ \hline \end{tabular}
\end{table}
Table 4: The effectiveness of UIPN.
update sufficiently. Next, we conduct seven groups of experiments, where each experiment setting the value of \(\alpha_{k}\) and \(\alpha_{v}\) as the same. As depicted in Table 7, CSMN obtains the best performance when \(\alpha_{k}\) and \(\alpha_{v}\) take the value of 0.3. Intuitively, when taking 0.0 as the update rate, \(Mem_{j}^{Key}\) and \(Mem_{j}^{Value}\) always follow the initialized values, neglecting the fact that parameters are continuously updating during model training, while taking as the update rate 1.0, \(Mem_{j}^{Key}\) and \(Mem_{j}^{Value}\) always taking the latest representation, abandoning the accumulated representation before. Obviously, both situations could not achieve the best performance, confirming that suitable updating rate is promising.
### Online A/B Test (Q4)
To further demonstrate the effectiveness of the proposed CSMN, we deploy it on our travel platform for A/B test, where the **Base** model is SAR-Net [16] and the evaluation metric is online CTR, \(i.e.\), the number of clicks over the number of impression items. To make the online evaluation fair, confident, and comparable, both methods includes same number of users, \(e.g.\), millions of users. We find the proposed CSMN achieves consistent improvement over SAR-Net model in consecutive seven days, \(e.g.\), achieving an average improvement of 3.85% CTR. To go a step forward, we find a more significant improvement is observed as was expected, \(e.g.\), achieving an average improvement of 4.62% CTR, for cold-start users, further demonstrating the effectiveness of CSMN dealing with cold-start issue. In a nutshell, the online A/B test results again demonstrate the effectiveness and practicability of our CSMN model in the industrial setting. Now, CSMN has been deployed on our platform and is serving all the traffic of twenty travel scenarios simultaneously.
## 5 Conclusions
In this paper, we propose a novel method named CSMN to deliver the unified click-through rate prediction task among the multiple scenarios on the online travel platforms. Specifically, it consists of two basic components including a User Interest Projection Network (UIPN) and a User Representation Memory
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Memory Size & 10 & 100 & 1,000 & 10,000 \\ \hline AUC & 0.6962 & 0.6979 & **0.7001** & 0.6956 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The effectiveness of memory size \(q\).
\begin{table}
\begin{tabular}{c|c c} \hline \hline Model & AUC & RI \\ \hline CSMN & **0.7001** & 0.00 \\ CSMN w/o URMN & 0.6969 & -1.63\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: The effectiveness of URMN.
Network (URMN), which can mitigate the cold-start issue effectively by exploiting scenario-shared and scenario-specific information of various scenarios simultaneously. Extensive experiments on both real-world offline dataset and online A/B test demonstrate the superiority of CSMN over state-of-the-art methods. How to employ the principle of meta-learning framework to further exploit the _commonality_ and _discrimination_ properties in multi-scenario CTR prediction task is an interesting topic and deserves more research efforts.
#### 5.0.1 Acknowledgments.
This work is supported by National Key Research and Development Program of China under Grant 2020AAA0107400 and National Natural Science Foundation of China (Grant No: 62206248).
|
2308.08666 | BREATHE: Second-Order Gradients and Heteroscedastic Emulation based
Design Space Exploration | Researchers constantly strive to explore larger and more complex search
spaces in various scientific studies and physical experiments. However, such
investigations often involve sophisticated simulators or time-consuming
experiments that make exploring and observing new design samples challenging.
Previous works that target such applications are typically sample-inefficient
and restricted to vector search spaces. To address these limitations, this work
proposes a constrained multi-objective optimization (MOO) framework, called
BREATHE, that searches not only traditional vector-based design spaces but also
graph-based design spaces to obtain best-performing graphs. It leverages
second-order gradients and actively trains a heteroscedastic surrogate model
for sample-efficient optimization. In a single-objective vector optimization
application, it leads to 64.1% higher performance than the next-best baseline,
random forest regression. In graph-based search, BREATHE outperforms the
next-best baseline, i.e., a graphical version of Gaussian-process-based
Bayesian optimization, with up to 64.9% higher performance. In a MOO task, it
achieves up to 21.9$\times$ higher hypervolume than the state-of-the-art
method, multi-objective Bayesian optimization (MOBOpt). BREATHE also
outperforms the baseline methods on most standard MOO benchmark applications. | Shikhar Tuli, Niraj K. Jha | 2023-08-16T20:33:57Z | http://arxiv.org/abs/2308.08666v1 | # Breathe: Second-Order Gradients and Heteroscedastic Emulation based Design Space Exploration
###### Abstract.
Researchers constantly strive to explore larger and more complex search spaces in various scientific studies and physical experiments. However, such investigations often involve sophisticated simulators or time-consuming experiments that make exploring and observing new design samples challenging. Previous works that target such applications are typically sample-inefficient and restricted to vector search spaces. To address these limitations, this work proposes a constrained multi-objective optimization (MOO) framework, called BreatHE, that searches not only traditional vector-based design spaces but also graph-based design spaces to obtain best-performing graphs. It leverages second-order gradients and actively trains a heteroscedastic surrogate model for sample-efficient optimization. In a single-objective vector optimization application, it leads to 64.1% higher performance than the next-best baseline, random forest regression. In graph-based search, BreatHE outperforms the next-best baseline, i.e., a graphical version of Gaussian-process-based Bayesian optimization, with up to 64.9% higher performance. In a MOO task, it achieves up to 21.9% higher hypervolume than the state-of-the-art method, multi-objective Bayesian optimization (MOBOpt). BreatHE also outperforms the baseline methods on most standard MOO benchmark applications.
active learning, constrained multi-objective optimization, neural networks, surrogate modeling +
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: ACM
+
Footnote †: journal: ACM
+
Footnote †: journal: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
+
Footnote †: ACM
Several _black-box_ optimization methods target efficient search of a design space [(7)]. These include random search, regression trees, Gaussian-process-based Bayesian optimization (GP-BO), random forest regression, etc. However, gradient descent typically outperforms these traditional _gradient-free_ approaches. For instance, gradient-based optimization outperforms traditional methods in the domain of NAS [(6; 33; 49)]. However, leveraging this approach requires a differentiable surrogate of the _black-box_ one wishes to optimize. Moreover, there is often a lack of knowledge and skill in machine learning (ML) among domain experts (device physicists, astronomers, etc.) in order to develop and optimize such surrogate models. Hence, there is a need for a plug-and-play sample-efficient gradient-based optimization pipeline that is applicable to diverse domains with variegated input/output constraints.
To tackle the abovementioned challenges, we propose a novel optimization method, Bayesian optimization using second-order gradients on an actively trained heteroscedastic emulator (BREATHE). It is an easy-to-use approach for efficient search of diverse design spaces where input simulation, experimentation, or annotation is computationally expensive or time-consuming. BREATHE is applicable to both _vector_ and _graph_ optimization. We call the corresponding versions V-BREATHE and G-BREATHE, respectively. G-BREATHE is a novel graph-optimization approach that optimizes both the graph architecture and its components (node and edge weights) to maximize output performance while honoring user-defined constraints. Rigorous experiments demonstrate the benefits of our proposed approach over baseline methods for diverse applications.
The main contributions of the article are as follows.
* We propose V-BREATHE, an efficient _vector optimization_ method, that is widely applicable to diverse domains. It leverages gradient-based optimization using backpropagation to the input (GOBI) [(52)] implemented on a heteroscedastic surrogate model [(53)]. It executes output optimization and supports constraints on the input or output. We propose the concept of _legality-forcing_ on gradients to support constrained optimization and leverage gradients in discrete search spaces. To handle output constraint violations, we use _penalization_ on the output. V-BREATHE requires minimal user expertise in ML.
* We propose G-BREATHE to apply BREATHE to graphical problems. It is a _graph optimization_ framework that searches for the best-performing graph architecture while optimizing the node and edge weights as well. It supports multi-dimensional node and edge weights, thus targeting a much larger set of applications than vector optimization.
* We further enhance V-BREATHE and G-BREATHE to support multi-objective optimization where the desired output is a set of _non-dominated solutions_ that constitute the Pareto front for a given problem. Using multiple random cold restarts, when implementing GOBI, our optimization pipeline can even tackle non-convex Pareto fronts. Our proposed approach achieves a considerably higher hypervolume with fewer queried samples than baseline methods.
The rest of the paper is organized as follows. Section 2 presents background material on vector and graph optimization methods. Section 3 illustrates the BREATHE algorithm in detail. Section 4 describes the experimental setup and the baselines that we compare against. Section 5 explains the results. Section 6 discusses the results in more detail and points out the limitations of the proposed approach. Finally, Section 7 concludes the article.
## 2. Background and related work
In this section, we provide background and related work on optimization using an actively-trained surrogate model.
### Vector Optimization
We refer to the optimization of a multi-dimensional vector (say, \(x\in\mathbb{R}^{d}\)) as _vector optimization_. Its application to a black-box function falls under the domain of _black-box_ optimization. Mathematically, one can represent this problem as follows.
\[\begin{split}\min&\ F(x)\\ \text{s.t.}&\ x_{i}^{L}\leq x_{i}\leq x_{i}^{U}\ \ \ \ \ i=1, \ldots,d\end{split} \tag{1}\]
Here, \(F\) is the black-box function we need to optimize; \(x_{i}\) is the \(i\)-th variable in the design space; \(x_{i}^{L}\) and \(x_{i}^{U}\) are its lower and upper bounds. We may not have a closed-form expression for \(F\) (a black-box); thus, finding a solution \(x^{*}\) may not be easy.
Many works target vector optimization. Random search uniformly samples inputs within the given bounds. Gradient-boosted regression trees (GBRTs) model the output using a set of decision trees [28]. GP-BO [45] approximates performance through Gaussian process regression and optimizes an acquisition function through the L-BFGS method [32]. Other optimization methods leveraging a GP-based surrogate suffer from the bottlenecking **argmax** operation over the entire design space [15]. Random forests fit various randomized decision trees over sub-samples of the dataset. BOSHNAS [49] searches for the best neural network (NN) architecture for an ML task. It outperforms other optimization techniques in the application of NAS to convolutional NNs [51] and transformers [49]. These approaches rely on active learning [41], in which the surrogate model, which can be a regression tree or an NN, interactively queries the simulator (or the experimental setup) to label new data. We use the new data to update the model at each iteration. This updated model forms new queries that lead to higher predicted performance. We iterate through this process until it meets a convergence criterion. Finally, this yields the input with the best output performance.
The abovementioned approaches do not consider optimization under constraints. Optimizing the objective function while adhering to user-defined constraints falls under the domain of constrained optimization. Mathematically,
\[\begin{split}\min&\ F(x)\\ \text{s.t.}&\ x_{i}^{L}\leq x_{i}\leq x_{i}^{U}\ \ \ \ \ i=1, \ldots,d\\ &\ C_{j}^{I}(x)\leq 0\ \ \ \ \ \ \ \ \ \ \ j=1, \ldots,J^{\prime}\\ &\ C_{k}^{E}(x)=0\ \ \ \ \ \ \ \ \ \ k=1,\ldots,K^{\prime}\end{split} \tag{2}\]
for \(J^{\prime}\) inequality constraints and \(K^{\prime}\) equality constraints. One can convert each equality constraint to two inequality constraints. Thus, we can simplify the above problem as follows:
\[\begin{split}\min&\ F(x)\\ \text{s.t.}&\ x_{i}^{L}\leq x_{i}\leq x_{i}^{U}\ \ \ \ \ i=1, \ldots,d\\ &\ C_{j}(x)\leq 0\ \ \ \ \ \ \ \ \ \ j=1,\ldots,J\end{split} \tag{3}\]
where \(J=J^{\prime}+2K^{\prime}\).
This problem belongs to the class of _constrained_ single-objective optimization (SOO) problems. One could also search the input space to optimize multiple objectives simultaneously while honoring the input constraints. We refer to this
class of problems as _constrained_ multi-objective optimization (MOO) problems [16]. Mathematically,
\[\begin{split}\min&\ F_{m}(x)\qquad\qquad\quad m=1, \ldots,M\\ \text{s.t.}&\ x_{i}^{L}\leq x_{i}\leq x_{i}^{U}\quad \quad i=1,\ldots,d\\ &\ C_{j}(x)\leq 0\qquad\quad j=1,\ldots,J\end{split} \tag{4}\]
for \(M\) objective functions.
Previous works propose various methods to solve MOO problems. Non-dominated sorting genetic algorithm-\(2\) (NSGA-\(2\)) [17] is a seminal evolutionary algorithm (EA)-based optimization method that evolves a set of candidates across generations into better-performing solutions. Multi-objective evolutionary algorithm based on decomposition (MOEA/D) [55] is another popular EA-based method that decomposes a MOO problem into multiple SOO problems and optimizes them simultaneously. Many state-of-the-art search techniques are based on evolutionary methods [13; 39]. Since the proposed approach is a surrogate-based method, we only compare it against the _representative_ EA-based methods mentioned above. Multi-objective regionalized Bayesian optimization (MORBO) [14] is a MOO framework based on Bayesian optimization (BO). It performs BO in multiple local regions of the design space to identify the global optimum. MOBOpt [23] is yet another surrogate-based Bayesian optimization framework for MOO problems. The proposed V-BREATHE algorithm solves both SOO and MOO problems.
### Graph Optimization
In the above scenario, input \(x\in\mathbb{R}^{d}\) is a vector. However, in many applications, \(x\) may be a graph, i.e., \(x\in\mathcal{G}\), where \(\mathcal{G}\) is the space of _legal_ graphs (see Section 3.1.4). In this scenario, _graph optimization_ refers to searching for an input graph that optimizes (single or) multiple output objectives under given constraints. Mathematically,
\[\begin{split}\min&\ F_{m}(x)\qquad\quad m=1,\ldots,M\\ \text{s.t.}&\ x\in\mathcal{G}\\ &\ C_{j}(x)\leq 0\quad\quad j=1,\ldots,J\end{split} \tag{5}\]
where we define \(\mathcal{G}\) based on a set of legal node connections (edges) along with node and edge weight bounds.
Traditional works on _graph optimization_ target specific problems, such as max-flow/min-cut [31], graph partitioning [10], graph coloring [27], routing [5], etc. However, these optimization problems aim to either find a subset of a given graph or annotate a given graph. In this work, we target an orthogonal problem: find the best-performing graph for the given objective function(s). This involves searching for the set of nodes (or vertices) and edges along with their weights. Existing works solve this problem with limited scope, i.e., they may not consider all graph constraints [48] (when converting the problem into vector optimization) or are only applicable to NN models [24]. Moreover, graph optimization involves searching for not only the node/edge values (which could be represented as multi-dimensional vectors) but also the connections. Directly _flattening_ a graph and implementing vector optimization methods does not perform well, as we show in this work, as such methods would not be able to look for new connections that modify the graph architecture. This calls for novel search techniques that directly implement optimization on graphical input. The proposed G-BREATHE algorithm solves both SOO and MOO graph problems.
## 3. Methodology
In this section, we discuss the BREATHE framework that leverages GOBI on a heteroscedastic surrogate model. Fig. 1 summarizes the BREATHE framework.
### V-Breathe
V-BREATHE is a widely-applicable vector-input-based optimizer that runs second-order gradients on a _lightweight_ NN-based surrogate model to predict not only the output objective value but also its epistemic and aleatoric uncertainties. It leverages an active-learning framework to optimize the upper confidence bound (UCB) estimate of the output objective. In this context, we freeze the model weights and backpropagate the gradients to update the _input_ (not the model weights) to optimize the output objective. We then query the simulator to obtain the output of the new queried sample and retrain the surrogate model. This iterative search process continues until convergence. We describe the V-BREATHE optimizer in detail next.
#### 3.1.1. Uncertainty Types
Prediction uncertainty may arise from not only the approximations made in the surrogate modeling process or limited data and knowledge of the design space but also the natural stochasticity in observations. The former is termed _epistemic_ uncertainty and the latter _aleatoric_ uncertainty. The epistemic uncertainty, also called reducible uncertainty, arises from a lack of knowledge or information, and the aleatoric uncertainty, also called irreducible uncertainty, refers to the inherent variation in the system to be modeled.
In addition, uncertainty in output observations may also be data-dependent (known as _heteroscedastic_ uncertainty). Accounting for such uncertainties in the optimization objective requires a surrogate that also models them.
Figure 1. Overview of the BREATHE framework. (a) The simulator takes an input \(x\), which can be a vector (in \(\mathbb{R}^{d}\)) or a graph (in \(\mathcal{G}\)). It outputs the performance value corresponding to each objective (\(P_{i}\)). \(P\) is a convex combination of individual performance measures for SOO. (b) The surrogate model is a _lightweight_ (i.e., computationally inexpensive relative to the simulator) and _differentiable_ emulator that mimics the simulator; its predicted performance is \(P^{\prime}\). (c) GOBI is applied to the surrogate model to obtain \(x\) with a higher predicted performance \(P^{\prime}\). The dataset is updated with the simulated value \(P\) to enable retraining of the surrogate model.
#### 3.1.2. Surrogate Model
Following the surrogate modeling approach used in BOSHNAS [49], a state-of-the-art NAS framework, we model the output objective and aleatoric uncertainty using a natural parameter network (NPN) [53]\(f(x;\theta)\). We model the epistemic uncertainty using a teacher network \(g(x;\theta^{\prime})\) and its student network \(h(x;\theta^{\prime\prime})\). Here, \(\theta\), \(\theta^{\prime}\), and \(\theta^{\prime\prime}\) refer to the trainable parameters of the respective models. We leverage GOBI on \(h\) to avoid numerical gradients due to their poor performance [49]. We have \((\mu,\sigma)\gets f(x;\theta)\), where \(\mu\) is the predicted output objective (i.e., a surrogate of \(F\)) and \(\sigma\) is the aleatoric uncertainty. Moreover, \(h\) predicts a surrogate (\(\hat{\xi}\)) of the epistemic uncertainty (\(\hat{\xi}\)). The teacher network \(g\) models the epistemic uncertainty via Monte Carlo dropout [49].
We model the output objective in the \([0,1]\) interval for easier convergence. We implement this in the surrogate model by feeding the output to a sigmoid activation. To implement this, we normalize the output objective \(F\) with respect to its maximum permissible value and maximize the performance measure:
\[P=1-\frac{F}{F^{MAX}},\hskip 14.226378ptF^{MAX}=M_{O}\max_{x,\forall(x,P) \in\Delta}F \tag{6}\]
where \(\Delta\) is the set of currently observed samples in the design space and \(M_{O}\geq 1\) is a multiplicative overhead factor. If we observe a larger value during the search process, we re-annotate the observed data with the updated value of \(F^{MAX}\) and retrain the _lightweight_ surrogate model.
#### 3.1.3. Active Learning and Optimization
To use GOBI and obtain queries that perform well, we initialize the surrogate model by training it on a randomly sampled set of points in the design space. We call this set the seed dataset. To effectively explore globally optimal design points, the seed dataset should be as representative of the design space as possible. For this, we use low-discrepancy sequence sampling strategies [34]. Specifically, V-BREATHE uses Latin hypercube sampling to obtain divergent points in its sampled set (parallel works show that this indeed performs better than other low-discrepancy sampling methods in maximizing the diversity of the sampled points [50]). We evaluate these \(N_{\Delta_{0}}\) initial samples using the (albeit expensive) simulator and train the surrogate model on this seed dataset \(\Delta_{0}\). Then, we run second-order optimization on
\[\text{UCB}=\mu+k_{1}\cdot\sigma+k_{2}\cdot\hat{\xi} \tag{7}\]
where \(k_{1}\) and \(k_{2}\) are hyperparameters. We employ the UCB estimate instead of other acquisition functions as it results in the fastest convergence as per previous works that leverage gradient-based optimization [49; 51]. Nevertheless, we leave the application of other acquisition functions to future work.
#### 3.1.4. Incorporating Constraints
Since one cannot directly add symbolic constraints to an NN, we train the surrogate model with a sample with very low performance value that does not satisfy the output constraints (also called _penalization_). For instance, if an output \(F_{m}\) does not meet a constraint (say, \(C_{j^{\prime}}>0\)), we set the corresponding performance to \(P=-100\), which should otherwise be in the \([0,1]\) interval. This forces the surrogate model to learn the distribution of input samples that do not satisfy the desired output constraints.
While running GOBI, if the updated input at an epoch does not satisfy the constraints (i.e., is an _illegal_ input), we set it to the nearest _legal_ input (based on Euclidean distance) that satisfies the constraints. One can consider this as adding an additional _force_ while running gradient descent in the input space that iteratively makes the updated input _legal_. We call this approach _legality-forcing_. Fig. 2 explains this using a working schematic.
#### 3.1.5. Simultaneous Optimization of Multiple Objectives
To support MOO with V-BREATHE, we need to run GOBI to obtain new input queries that optimize all \(F_{m}\) in Eq. (4). To tackle this problem, one could try to optimize a sum of all
objectives. However, each point on the Pareto front weights each objective differently. Hence, we optimize multiple convex combinations of \(F_{m}\)'s. More concretely,
\[P=\sum_{m}\alpha_{m}P_{m},\quad m=1,\ldots,M \tag{8}\]
where \(P_{m}\) is a function of \(F_{m}\) as per Eq. (6) and \(\alpha_{m}\)'s, where \(\sum_{m}\alpha_{m}=1\), are hyperparameters that determine the weight assigned to each objective. Thus, different samples of these weights would result in different _non-dominated solutions_, i.e., points on the Pareto front. In this context, V-BREATHE maximizes the performance measure \(P\) for every set of weight samples using GOBI.
### G-Breathe
G-BREATHE implements the V-BREATHE algorithm on graphical input. Instead of using a fully-connected NN, which assumes a vector input, we use a graph transformer (Zhu et al., 2017) network as our surrogate model. We use a graph transformer as it is a state-of-the-art model for graphical input. We characterize the input graph by its nodes, edges, and multi-dimensional weight values.
While running GOBI, we backpropagate the gradients to the node and edge weights. If all the edge weights fall below a threshold (\(\epsilon_{E}=10^{-5}\) for non-binary edge weights and \(\epsilon_{E}=0.5\) for binary edge weights), we remove that edge. To explore diverse graphs with varied node/edge combinations and weights, we sample randomly generated graphs at each iteration of the search process and run GOBI on the trained surrogate to look for better versions of those graphs (i.e., with the same connections but different node/edge weights that maximize performance). The rest of the setup is identical to that of V-BREATHE. From now on, we will use the term BREATHE to refer to either V-BREATHE or G-BREATHE based on the input type unless otherwise specified.
Algorithm 1 summarizes the BREATHE algorithm (for both SOO and MOO settings). Starting from an initial seed dataset \(\Delta_{0}\), we run the following steps until convergence. To trade off between exploration and exploitation, we consider two probabilities: uncertainty-based exploration (\(\alpha\)) and diversity-based exploration (\(\beta\)). With probability \(1-\alpha-\beta\), we run second-order GOBI using the surrogate model to maximize the UCB in Eq. (7). To achieve this, we first train the surrogate model (a combination of \(f\), \(g\), and \(h\)) on the current dataset (line 4). For SOO, we initialize only one surrogate model, however, for MOO, we initialize separate surrogate models for each set of randomly initialized \(\alpha_{m}\)'s. We then
Figure 2. Schematic illustration of _legality-forcing_. GOBI results in the gradient step \(\Delta x_{\text{grad}}\) that is outside the set of _legal_ inputs (which may be a subset of the design space). \(\Delta x_{\text{LF}}\) implements a step forcing the resultant input to be _legal_.
generate a new query point by freezing the weights of \(f\) and \(h\) and run GOBI along with _legality-forcing_ to obtain an \(x\) (adhering to constraints) with better-predicted performance (line 5). We then simulate the obtained query to obtain the performance measure \(P\) (line 6). For SOO, this corresponds to a single \(\alpha_{1}=1\), but for MOO, this corresponds to a random selection of \(\alpha_{m}\)'s, and we obtain \(P\) as per Eq. (8). With \(\alpha\) probability, we sample the search space using the combination of aleatoric and epistemic uncertainties, \(k_{1}\cdot\sigma+k_{2}\cdot\hat{\xi}\), to find a point where the performance estimate is the most uncertain (line 9). We also choose a random point with probability \(\beta\) (line 11) to avoid getting stuck in a localized search subset. The exploration steps also aid in reducing the inaccuracies in surrogate modeling for unexplored regions of the design space. Finally, we run the simulation only if the obtained input adheres to user-defined constraints (\(C_{j}\)'s in Eqs. (3), (4), and (5)), and penalize the output performance by setting \(P=-100\) otherwise (line 20).
## 4. Experimental Setup
In this section, we present the setup behind various experiments, including the applications for vector and graph optimization, baselines for comparison, and details of the surrogate models.
### Evaluation Applications
The applications with which we test our optimizers include two in the domain of vector optimization: operational amplifier (op-amp) and waste-water treatment plant (WWTP), and two in the domain of graph optimization: smart
home and network. We also test the V-BREATHE algorithm against previously-proposed MOO baselines on standard benchmarks. Finally, we use the problem of synchronous optimal pulse-width modulation (SO-PWM) of three-level inverters for scalability analysis. We describe each application in detail next.
#### 4.1.1. Op-amp
This application involves the optimization of a three-stage op-amp featuring a positive-feedback frequency compensation technique [(20)]. It has 24 input variables (supply/bias voltages and currents, load resistance, load capacitance, transistor widths and lengths, etc.), three constraints (lower/upper limits on DC gain, phase margin, and unity gain frequency), and one optimization objective (minimization of power consumption). We implement the simulations using the Cadence Spectre circuit simulator [(2)].
Fig. 3 shows a circuit diagram of the op-amp along with the variables involved in the optimization process. \(M_{00}\) is the bias metal-oxide semiconductor field-effect transistor (MOSFET). \(M_{10}\), \(M_{11}\), and \(M_{12}\) are part of the differential pair. \(M_{13}\)-\(M_{18}\) are part of the folded cascode. MOSFETs \(M_{20}\)-\(M_{23}\) constitute the second stage, while \(M_{30}\) and \(M_{31}\) constitute the third stage of the op-amp. We use two compensation capacitors: \(C_{M1}\) and \(C_{M2}\). The input optimization space comprises 24 variables. These include 10 MOSFET widths (W1-W10), seven MOSFET lengths (L1-L7), capacitor values (C1 and C2), bias voltage (VBIAS), bias current (IBIAS), load resistance (RLOAD), load capacitance (CLOAD), and supply voltage (VSUPPLY).
We constrain the DC gain to be greater than 68, the phase margin to be between 31.8'and 130.0', and the unity gain frequency to be greater than 1.15 MHz [(1)].
#### 4.1.2. Wwtp
Waste-water treatment removes contaminants and converts waste water into an effluent that one can return to the water cycle. Specifically, we consider the four-stage Bardenpho process [(21)]. Optimizing the WWTP involves 14 input variables (flow split percentages along with reactor volumes, temperatures, and dissolved oxygen levels) and four output objectives [fractions of chemical oxygen demand (COD), namely \(S_{I}\): inert soluble, \(S_{S}\): readily biodegradable, \(X_{I}\): inert particulate, and \(X_{S}\): slowly biodegradable compounds]. For SOO, we optimize the net COD as a
Figure 3. Circuit diagram of the three-stage op-amp. Variables that constitute the input design space are presented in typewriter font.
sum, i.e., \(S_{I}+S_{S}+X_{I}+X_{S}\). However, for MOO, we optimize all objectives simultaneously to obtain commonly-studied Pareto fronts: \(S_{I}\) vs. \(S_{S}\) and \(X_{I}\) vs. \(X_{S}\). We use a publicly available simulation software1 for evaluating different WWTPs.
Footnote 1: The WWTP simulator is available at [https://github.com/toogad/PooPyLab_Project](https://github.com/toogad/PooPyLab_Project)
Fig. 4 shows a simplified schematic of the four-stage Bardenpho process. It consists of four reactors and a secondary clarifier. There are three design parameters for each reactor, namely, the maintained temperature (in "C), the dissolved oxygen (DO, in mg/L), and the reactor volume (in m\({}^{3}\)). The first split (S\(1\in[1,8]\)) determines the amount of sludge for internal re-circulation (IR). The second split (S\(2\in[0,1]\)) determines the ratio of return-activated sludge (RAS) and waste-activated sludge (WAS). This results in 14 input variables for optimization. We limit the temperatures in the range 5-15 "C, volumes in 100-15000 m\({}^{3}\), and DO in 0-5 mg/L. These ranges are typically used in many plants.
#### 4.1.3. Smart Home
A smart home consists of multiple Internet-of-Things (IoT) devices. These include smart doorbells, smart locks, entry/exit sensors, smart door cameras, home assistants, smart thermostats, etc. We support different network types, namely WiFi, Zigbee [4], and Z-Wave [3]. Further, we need to connect different room types (hall, master bedroom, bedrooms, entry hall, living room, kitchen/dining area, and outside porch). Each room can also have multiple IoT devices. The task is to design a smart home, with constraints on the total number of windows and the total area, that minimizes the number of cyber-physical attacks (one optimization objective). For instance, an attacker could use a light command to maliciously manipulate devices in rooms with windows (given that the devices use Zigbee or Z-Wave and have a clear line of sight through the windows in the room). We used a Python-based simulator2 for testing our proposed optimizer.
Footnote 2: The smart home graphical simulator is available at [https://github.com/rahulaVT/SIM_app](https://github.com/rahulaVT/SIM_app)
We now describe the graph formulation of a smart home in G-BREATHE. There can be a total of 9 nodes, one for each room type, except bedrooms, which can be three in number. We represent each node (or room) by a 12-dimensional weight vector consisting of categorical and continuous features. The weight vector includes the room type, room area, number of windows, number of each IoT device (we support a total of eight devices), and the type of network used. We restrict all the devices in a smart home to only one network type (although the WiFi router always connects to the gateway via a WiFi connection). An edge weight is a Boolean value, i.e., whether the two nodes are connected or not. There are three constraints: the total number of windows should be more than or equal to three, the total area of the
Figure 4. Simplified schematic of the four-stage Bardenpho process. Variables that constitute the input design space are presented in typewriter font.
smart home should be greater than 300 m\({}^{2}\) and less than 600 m\({}^{2}\), and the network connection for all IoT devices should be the same. Table 1 summarizes the design parameters for the smart home application.
#### 4.1.4. Network
Higher bandwidth and lower latency connections available in modern networks (Han et al., 2017) have enabled the network edge to execute substantially more computations. However, the simulation of urban mobility (SUMO) domain (Kirshman et al., 2017) demands computationally-expensive simulations that must run frequently to ensure stable connections. In this application, we optimize the number of switches and their connections with data sources and mobile/edge data sinks (thus, forming a network graph) to maximize bandwidth and minimize network operation costs.
Each graph may have up to 25 nodes (five data sources, five data sinks, and up to 15 switches). The node weight represents the type of node: data source/sink or switch. Edges between nodes represent the bandwidth of the corresponding network connection (restricted from 128 MB/s to 1024 MB/s). As explained above, the two optimization objectives are network bandwidth and operation cost.
We set the cost of a data source (typically a cloud server) to $5000 while that of sink to $1000 (an edge device). The cost of a switch is $1000. Adding a connection to the switch incurs an additional base cost of $200, which increases with bandwidth [at the rate of 0.05 $/(MB/s)]. These costs are typical of common devices employed in networking applications.
#### 4.1.5. SO-PWM of Three-level Inverters
Multi-level inverters reduce the total harmonic distortion (THD) in their alternating current (AC) output. SO-PWM control permits setting the maximum switching frequency to a low value without compromising on the THD (Shi et al., 2017) of the AC output. This application has 25 inputs, 24 constraints, and two optimization objectives. We perform scalability tests in Section 5.5 where we study the effect on best-achieved performance and the number of evaluation queries as we increase the number of tunable inputs or constraints. We implement an adapted version of the MATLAB-based simulator, available from a benchmarking suite (Shi et al., 2017), for scalability analysis.
We represent the SO-PWM problem (Shi et al., 2017) in the form of Eq. (4) as follows:
(9) \[\begin{split}\min\ F_{1}(x)&=\frac{\sqrt{\sum_{k}k^{ -4}\sum_{i=1}^{N}S(i)\cos^{2}\left(kx_{i}\right)}}{\sqrt{\sum_{k}k^{-4}}}\\ \min\ F_{2}(x)&=\left(0.32-\sum_{i=1}^{N}S(i)\cos \left(x_{i}\right)\right)^{2}\\ \text{s.t.}\ \ 0<x_{i}<\frac{\pi}{2},\ \
where, \(k=5,7,11,13,\ldots,97\), \(N=25\), and \(S(i)=(-1)^{i+1}\).
#### 4.1.6. Benchmark Applications
We compare V-BREATHE against baseline methods on standard benchmarks. These include the ZDT problem suite (Shi et al., 2017), the Binh and Korn (BNH) benchmark (Bih and Korn, 2017), the Osyczka and Kundu (OSY) benchmark (Osyczka and Kundu, 2017), and the Tanaka (TNK) benchmark (Shi et al., 2017). Although these benchmarks are implemented with mathematical formulas that are easy to evaluate, to test the efficacy of various methods in low-data regimes (in the context of a computationally expensive simulator), we start with a low number of randomly sampled points (i.e., 64 in our experiments) in the seed dataset \(\Delta_{0}\).
Table 2 summarizes the dimensions involved in each optimization application.
### Surrogate Models
We now present details of the architectural decisions for the surrogate models along with the hyperparameters used in the BREATHE algorithm. For vector optimization, in all three surrogate models \(f\), \(g\), and \(h\), we pass the input through two fully-connected hidden layers with 64 and 32 neurons, respectively. For graph optimization, we pass the input through a graph transformer layer (Shi et al., 2017) with four attention heads, each with a hidden dimension of 16. We then pass the output of the transformer layer to a set of fully-connected layers as above. We show other hyperparameter choices for Algorithm 1, which we obtained through grid search, in Table 3.
Training the surrogate model on the initial dataset, \(\Delta_{0}\), for five epochs takes about 300-400 ms on an NVIDIA A100 GPU with a batch size of 64. This is negligible compared to the time taken by the simulator on a single query, e.g., hundreds of seconds (or more) for some applications.
The execution time of the proposed algorithm does not increase with the number of inputs. As the number of inputs increases, the input neurons in the surrogate model increase. However, since the neural network computation is performed in parallel, the execution time remains constant.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline \multicolumn{4}{c}{**Vector Optimization**} \\ \hline
**Application** & **Inputs** & **Constraints** & **Outputs** \\ \hline Op-amp & 24 & 3 & 1 \\ WWTP & 14 & 0 & 4 \\ SO-PWM & 25 & 24 & 2 \\ \hline ZDT1 & 30 & 0 & 2 \\ ZDT2 & 30 & 0 & 2 \\ ZDT3 & 30 & 0 & 2 \\ ZDT4 & 10 & 0 & 2 \\ ZDT5 & 80 & 0 & 2 \\ ZDT6 & 10 & 0 & 2 \\ BNH & 2 & 2 & 2 \\ OSY & 6 & 6 & 2 \\ TNK & 2 & 2 & 2 \\ \hline \hline \multicolumn{4}{c}{**Graph Optimization**} \\ \hline
**Application** & **Nodes** & **Node dim.** & **Edge dim.** & **Constraints** & **Outputs** \\ \hline Smart Home & 9 & 12 & 1 & 3 & 1 \\ Network & 25 & 1 & 1 & 0 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Dimensionality of inputs, constraints, and outputs for targeted applications.
### Baselines
For SOO with vector input, we compare BREATHE against random sampling (Random), random forest regression (Forest), GBRT, and GP-BO. We implement these baselines using the scikit-learn library (Krizhevsky et al., 2017) with default parameters. This implies that for random forest regression, we use 100 trees, the Gini index to measure the quality of splits, and the minimum number of samples per split set to two. The GBRT optimizer uses the squared-error loss with a learning rate of 0.1, 100 boosting stages, and the minimum number of samples per split set to two. For MOO, we compare the MOO version of the proposed algorithm with state-of-the-art and popular baselines: NSGA-2, MOEA/D, and MOBOpt. We use the implementation from PyMOO (Brandes et al., 2017) to run NSGA-2 and MOEA/D in Python. We use the default hyperparameters (in the PyMOO library) for these methods. For MOBOpt, we use the source code3 with default parameters.
Footnote 3: Source code for MOBOpt is available at: [https://github.com/ppgalurio/MOBOpt](https://github.com/ppgalurio/MOBOpt).
For graph optimization, we compare G-BREATHE against graphical adaptations of the above baselines. To implement this, we randomly generate graphs and only optimize the node/edge weights of the graph by flattening it into a vector and feeding the vector into the vector optimization baseline. We let the baseline search the node/edge weight space for 32 iterations before generating new graphs. This enables these baselines to search in both spaces: graph architecture and node/edge weight values.
## 5. Results
In this section, we present experimental results and comparisons of the proposed BREATHE optimizer with relevant baselines.
### Single-objective Vector Optimization using BREATHE
For SOO with BREATHE and baselines approaches, we maximize the performance measure \(P\) defined in Eq. (8) even if the application has multiple objectives. Unlike MOO, here we choose a fixed combination of \(\alpha_{m}\). Only one output objective is associated with op-amp optimization, while WWTP comprises four objectives (\(S_{I}\), \(S_{S}\), \(X_{I}\), and \(X_{S}\)) that we maximize. We use V-BREATHE for _vector_ input. For SOO, we set \(\alpha_{m}=0.25,\ \forall m\in[1,2,3,4]\), for this application, maximizing a simple sum of the COD fractions.
Figs. 5 and 6 show the convergence of output performance \(P\) for the op-amp and WWTP objectives, respectively. BREATHE achieves the highest performance among all methods. For the op-amp application, none of the methods can initially find input parameters that satisfy all constraints (resulting in the convergence plot to start at \(P=-100\)). BREATHE, Forest, and GBRT find legal inputs and optimize the output performance using these legal data points. However, GP-BO and random search are not able to find legal inputs that result in \(P>-100\), i.e., \(P\in[0,1]\). When there are no constraints, GP-BO achieves the second-highest performance in the WWTP application.
\begin{table}
\begin{tabular}{l|c} \hline \hline
**Hyperparameters** & **Value** \\ \hline \(M_{O}\) & 1.2 \\ \(N_{\Delta_{0}}\) & 64 \\ \(k_{1}\), \(k_{2}\) & 0.5, 0.5 \\ \(\alpha\), \(\beta\) & 0.1, 0.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Selected hyperparameters for the BREATHE algorithm.
Figure 5: Performance convergence of BREATHE and various baselines on the op-amp application.
Figure 6: Performance convergence of BREATHE and various baselines on the WWTP application.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{10}{c}{**Op-amp**} \\ \hline VSUPPLY & VBIAS & IBIAS & CLOAD & RLQAD & C1 & C2 & L1 & L2 & L3 & L4 & L5 \\
2.4 V & 2.5 V & 7.0 \(\mu\)A & 0.12 nF & 10.0 k\(\Omega\) & 2.0 pF & 14.0 pF & 4.6 \(\mu\)m & 6.3 \(\mu\)m & 2.8 \(\mu\)m & 1.8 \(\mu\)m & 3.8 \(\mu\)m \\ \hline L6 & L7 & WI & W2 & W3 & W4 & W5 & W6 & W7 & WB & W9 & WI0 \\
2.4 \(\mu\)m & 5.2 \(\mu\)m & 4.4 \(\mu\)m & 18.4 \(\mu\)m & 26.3 \(\mu\)m & 10.4 \(\mu\)m & 34.3 \(\mu\)m & 48.4 \(\mu\)m & 32.4 \(\mu\)m & 22.3 \(\mu\)m & 32.3 \(\mu\)m & 44.4 \(\mu\)m \\ \hline \multicolumn{10}{c}{**WWTP**} \\ \hline
51 & S2 & RPAX\_T & RPAX\_DO & RPAX\_V & RPOX\_T & RPOX\_DO & RPOX\_V & RAXP\_T & RAXP\_DO & RAXP\_V & ROXP\_T & ROXP\_DO & ROXP\_V \\
1.5 & 0.9 & 10.3 C & 0.0 mg/L & 3067.0 m\({}^{3}\) & 6.2 °C & 5.0 mg/L & 102.1 m\({}^{3}\) & 5.2 °C & 1.5 mg/L & 100.0 m\({}^{3}\) & 13.1 °C & 3.6 mg/L & 100.0 m\({}^{3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Selected design parameters of the best-performing points achieved by BREATHE on the op-amp and WWTP applications.
Table 4 summarizes the design parameter values for the op-amp and WWTP applications selected by BREATHE. It chooses the input parameters for resistor and capacitor values along with transistor lengths and widths that maximize the output performance. The DC gain of the op-amp is 90.8, its unity gain frequency is 1.58 MHz, and its phase margin is 95.8", while incurring 780.2 mW of power. For WWTP, BREATHE chooses a large pre-anoxic reactor (with volume = 3067.0 m\({}^{3}\)) but much smaller subsequent reactors. Table 4 shows other design decisions as well. This design leads to the maximum net COD.
### Multi-objective Optimization using BREATHE
We now show the MOO performance of BREATHE on the WWTP application with four output objectives. Capturing _non-dominated solutions_ on different parts of the Pareto front would require contrasting weights for each objective. Thus, we take random samples of the objective weights \(\alpha_{m}\). Fig. 7 shows the Pareto front while trading off \(S_{I}\) for \(S_{S}\) and \(X_{I}\) for \(X_{S}\). These trade-offs are typically studied by domain experts. Different colors correspond to distinct sets of \(\alpha_{m}\)'s among 16 random samples. We observe that different colors (and thus, weights for the objectives) indeed contribute to unique _non-dominated solutions_ on the Pareto front (shown by dashed line).
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c|}{**Point Name**} & \multicolumn{11}{c}{**WWTP**} \\ & \multicolumn{1}{c}{S1} & \multicolumn{1}{c}{S2} & \multicolumn{1}{c}{RPAX\_T} & \multicolumn{1}{c}{RPAX\_DO} & \multicolumn{1}{c}{RPAX\_V} & \multicolumn{1}{c}{RPAX\_T} & \multicolumn{1}{c}{RPAX\_DO} & \multicolumn{1}{c}{RPAX\_V} & \multicolumn{1}{c}{RPAX\_P\_T} & \multicolumn{1}{c}{RPAX\_DO} & \multicolumn{1}{c}{RPAX\_P\_T} & \multicolumn{1}{c}{RROXP\_DO} & \multicolumn{1}{c}{RROXP\_V} \\ \hline P1 & 8.0 & 1.0 & 15.0 \(\,\)C & 5.0 mg/L & 1458.9 m\({}^{3}\) & 15.0 \(\,\)C & 0.0 mg/L & 8683.8 m\({}^{3}\) & 15.0 \(\,\)C & 5.0mg/L & 2450.3 m\({}^{3}\) & 15.0 \(\,\)C & 0.0 mg/L & 2488.9 m\({}^{3}\) \\ P2 & 8.0 & 1.0 & 5.0 \(\,\)C & 5.0 mg/L & 11299.7 m\({}^{3}\) & 15.0 \(\,\)C & 0.0 mg/L & 6899.8 m\({}^{3}\) & 15.0 \(\,\)C & 0.0 mg/L & 13400.1 m\({}^{3}\) & 15.0 \(\,\)C & 0.0 mg/L & 14900.0 m\({}^{3}\) \\ \hline P3 & 1.0 & 1.0 & 15.0 \(\,\)C & 0.0 mg/L & 100.0 m\({}^{3}\) & 5.0 \(\,\)C & 0.0 mg/L & 1518.9 m\({}^{3}\) & 5.0 \(\,\)C & 0.0 mg/L & 152.9 m\({}^{3}\) & 5.0 \(\,\)C & 0.0 mg/L & 2779.3 m\({}^{3}\) \\ P4 & 8.0 & 1.0 & 5.0 \(\,\)C & 3.7 mg/L & 479.9 m\({}^{3}\) & 8.8 \(\,\)C & 5.0 mg/L & 3504.1 m\({}^{3}\) & 14.1 \(\,\)C & 5.0 mg/L & 311.1 m\({}^{3}\) & 15.0 \(\,\)C & 5.0 mg/L & 991.5 m\({}^{3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5. Selected design parameter values of the four extreme points of the Pareto fronts in Fig. 7 for the WWTP application.
Figure 7. Pareto fronts of (a) \(S_{I}\) vs. \(S_{S}\) and (b) \(X_{I}\) vs. \(X_{S}\) using the BREATHE optimizer on the WWTP task. All objectives are plotted in mg/L units.
Table 5 summarizes the design choices of the four plants that correspond to the four extrema of the two Pareto fronts (P1-P4) in Fig. 7. P1 corresponds to the plant with the highest achieved \(S_{I}\) in the effluent, while P4 corresponds to the plant with the highest achieved \(X_{S}\) in the effluent. Table 6 shows the output objective values for the four design points. When we maximize one objective, other objective values are much lower. This implies that there is a trade-off when maximizing all objectives that BREATHE considers by presenting a Pareto front.
We train independent surrogate models for each selection of weights and run the BREATHE optimization pipeline in parallel. We observe that parallel runs outperform sequential operations of the BREATHE algorithm (where we iteratively update the surrogate model on each new dataset with the new set of objective weights). We hypothesize that the independent parallel runs result in a higher variance in the internal representations of the meta-model as it covers a larger fraction of the design space.
Hypervolume is a measure of the solution quality in MOO problems. We derive it by measuring the size of the _dominated_ portion of the design space [19], i.e., the area under the Pareto front above zero. In Fig. 7, we shade the area that we use to compute the hypervolume in grey. Fig. 8 shows the convergence of hypervolume with the number of algorithm iterations for BREATHE and baseline methods. Numerous iterations of BREATHE correspond to multiple runs (till convergence) using different randomly sampled weights. Since we need to maximize all objectives, we must
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \multirow{2}{*}{**Point Name**} & \multicolumn{4}{c}{**Output Objectives**} \\ \cline{2-5} & \(S_{I}\) & \(S_{S}\) & \(X_{I}\) & \(X_{S}\) \\ \hline P1 & **204.01** & 28.00 & 36.34 & 0.01 \\ P2 & 193.93 & **28.45** & 13.37 & 0.27 \\ \hline P3 & 204.00 & 27.94 & **117.64** & 0.01 \\ P4 & 1.60 & 19.52 & 0.58 & **104.17** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Output objective values (in mg/L units) for the four extreme points of the Pareto fronts in Fig. 7 for the WWTP application.
Figure 8: Convergence of hypervolume (in mg\({}^{2}\)/L\({}^{2}\) units) of (a) \(S_{I}\) vs. \(S_{S}\) and (b) \(X_{I}\) vs. \(X_{S}\) Pareto fronts with the number of iterations when running BREATHE and other MOO baselines on the WWTP task.
also maximize the resultant hypervolume. BREATHE achieves a considerably higher hypervolume relative to baselines with fewer queried samples. More concretely, BREATHE achieves 21.9\(\times\) higher hypervolume relative to MOBOpt in the \(S_{I}\) vs. \(S_{S}\) trade-off and 20.1\(\times\) higher hypervolume in the \(X_{I}\) vs. \(X_{S}\) trade-off (more details in Section 6).
Fig. 9 shows the convergence of hypervolume with the number of algorithm iterations for BREATHE and baseline methods on benchmark applications. Here, we show the hypervolume that is dominated by the provided set of solutions with respect to a reference point (set by the maximum possible value of each output objective) [22]. We observe that BREATHE outperforms baselines on most benchmarks (except ZDT2). MOEA/D does not support constraints and is therefore not plotted for BNH, OSY, and TNK tasks. Further, for the OSY task, NSGA-2 and MOBOpt could not find any legal inputs (that satisfy all constraints), resulting in the hypervolume being zero.
Fig. 10 shows the obtained Pareto fronts for the ZDT3 and BNH tasks. ZDT3 has a disjoint and non-convex Pareto front. Even though BREATHE uses a convex combination scalarization function [see Eq. (8)], which has been shown to not perform well for non-convex Pareto fronts [12], it is able to obtain non-dominated solutions close to ZDT3's Pareto
Fig. 9: Convergence of hypervolume, when comparing BREATHE against baselines, on the (a-f) ZDT1-6 problem suite, (g) BNH, (h) OSY, and (i) TNK benchmarks. All methods were executed until convergence. For all plots, we show the mean and one standard error of the mean over five replications.
front due to multiple random cold restarts in the optimization loop. In Fig. 10(b), we show that BREATHE is able to achieve a denser set of non-dominated solutions on the Pareto front relative to baselines.
### Searching for Optimal Graphs using Breathe
We now run the G-BREATHE algorithm, as described in Section 3.2, for graph optimization. This implies searching for novel graphs along with node and edge weights that maximize the output performance measure \(P\). Fig. 11 shows the convergence of \(P\) with the number of iterations on the smart home task. This task has only one objective: minimization of the number of attacks. BREATHE outperforms all baselines. Even though randomly generated graphs may be legal in terms of the graph architecture, they may not honor all the constraints (for example, the number of windows should be greater than three). BREATHE and the baselines quickly find legal graphs (with \(P\in[0,1]\)). However, not being able to
Figure 11. Performance convergence of BREATHE on the smart home application.
Figure 10. Pareto fronts obtained by various MOO methods on the (a) ZDT3 and (b) BNH tasks.
smartly search the graph architecture space limits the baselines from reaching the highest-achieved performance by BREATHE.
Fig. 12 shows the performance convergence for network optimization. This task has two objectives: maximization of the average bandwidth and minimization of overall network operation cost. The weights for the two objectives for the calculation of \(P\) are 0.4 and 0.6, respectively. We choose these weights to attribute more importance to the network operation cost. A user can choose any set of weights that form a convex combination (as in Eq. (5)). Again, BREATHE outperforms baselines by achieving 64.9% higher performance than the next-best baselines, i.e., GP-BO (more details in Section 6).
Fig. 12: Performance convergence of BREATHE on the network application.
Fig. 13: Best-performing smart home design that minimizes the number of cyber-physical attacks as per G-BREATHE.
Fig. 13 shows the obtained smart home design after running the BREATHE algorithm. All bedrooms have a window (shown by a blue box). Bedroom 1 has a WiFi modem and a smart thermostat. The entry hall contains an entry sensor, a smart door camera, and a smart lock at the door connecting to the outside porch. The kitchen/dining area has a home assistant and the master bedroom has a network gateway. Placing the gateway in a different room than the WiFi modem reduces the risk of attacks. Nevertheless, this design is prone to 65 physical (break-ins from the outside door or windows) and 92 cyber attacks (DDoS attacks affecting the gateway and, subsequently, the IoT devices). These correspond to different permutations of attacks one can perform. However, this is the least number of cyber-physical attacks achieved by the BREATHE algorithm.
Multiple network configurations lead to the same net performance value. For example, Fig. 14 shows one such best-performing network architecture. The cost of operation for the network is $35,457.37 and the average bandwidth is 914.74 MB/s. The network only uses three switches to minimize the base cost of setting them up. Two switches connect to two data source/sink pairs, while one switch connects to only one pair. We label these pairs and their corresponding connections in unique colors.
### Ablation Analysis
We now present an ablation analysis of our proposed optimizer. Fig. 15 compares the performance convergence of BREATHE with that of its ablated versions. First, we remove second-order gradients (\(\nabla_{x}^{2}\)UCB) and implement GOBI with first-order gradients (\(\nabla_{x}\)UCB) instead. Second, we remove the NPN model, which models the aleatoric uncertainty (see Section 3.1). We can see that these changes result in poorer converged performance relative to the proposed BREATHE algorithm.
We now ablate the effect of the proposed _legality-forcing_ method on benchmark applications with constraints. We present the results in Table 7. We observe that legality-forcing is crucial for constrained optimization. Without it, GOBI could result in inputs that are not legal.
Figure 14. Best-performing network architecture that maximizes performance as per G-BREATHE. An annotation on a connection represents the bandwidth of that connection in MB/s.
Figure 16: Effect of the number of (a) inputs and (b) constraints on the highest achieved performance value and the number of samples required to achieve it. BREATHE was executed on the SO-PWM task. Plotted with 95% confidence intervals.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Application**} \\ \cline{2-4} & BNH & OSY & TNK \\ \hline V-BREATHE & 7894.4 & 301429.9 & 14.2 \\ w/o legality-forcing & 6933.5 & - & 7.8 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Effect of legality-forcing on constrained optimization using BREATHE.
Figure 15: Ablation analysis of BREATHE on the network application.
### Scalability Tests
We now test how scalable BREATHE is with the dimensionality of the optimization problem. Hence, we plot how the maximum achieved performance and the number of samples required to achieve that performance scale with an increasing number of inputs and constraints. Here, the number of samples corresponds to the total number of queries to the simulator. These include the seed samples required to initialize and train the surrogate model. To calculate the performance (\(P\)) on the SO-PWM task, we use \(\alpha_{1}=\alpha_{2}=0.5\) to give equal weight to each objective. We show these plots in Fig. 16. First, we note that the maximum achieved performance scales linearly with the number of inputs and constraints. However, with increasing constraints, the maximum achieved performance value slowly decreases. Second, sample complexity scales sublinearly with the number of inputs and linearly with constraints.
## 6. Discussions and Limitations
Table 8 summarizes the results presented in Section 5. The proposed framework outperforms baselines on various applications. In SOO, BREATHE achieves 64.1% higher performance than the next-best baseline, i.e., Forest on the WWTP application. G-BREATHE achieves 64.9% higher performance than the next-best baseline, i.e., a graphical version of GP-BO, on the network application. In MOO, BREATHE achieves up to 21.9\(\times\) higher hypervolume relative to the next-best baseline, namely, MOBOpt (in the \(S_{I}\) vs. \(S_{S}\) trade-off). BREATHE also outperforms baselines on standard MOO benchmark applications, except ZDT2. ZDT2 and ZDT6 are MOO problems with non-convex Pareto fronts. However,
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \multicolumn{5}{c}{**SOO (Vector Optimization)**} \\ \hline
**Application** & **Random** & **GP-BO** & **GBRT** & **Forest** & **V-BREATHE** \\ \hline Op-amp & -100.0 & -100.0 & 0.9991 & 0.9997 & **0.9999** \\ WWTP & 0.2735 & 0.6517 & 0.4996 & 0.6006 & **0.9858** \\ \hline \multicolumn{5}{c}{**SOO (Graph Optimization)**} \\ \hline
**Application** & **Random\({}^{*}\)** & **GP-BO\({}^{*}\)** & **GBRT\({}^{*}\)** & **Forest\({}^{*}\)** & **G-BREATHE** \\ \hline Smart Home & 0.9983 & 0.9997 & 0.9996 & 0.9997 & **0.9999** \\ Network & 0.4512 & 0.5885 & 0.4482 & 0.4846 & **0.9707** \\ \hline \hline \multicolumn{5}{c}{**MOO (Vector Optimization)**} \\ \hline
**Application** & **NSGA-2** & **MOEA/D** & **MOBOpt** & **V-BREATHE** \\ \hline WWTP (\(S_{I}\) vs. \(S_{S}\)) & 39.4 & 67.6 & 263.9 & **5781.3** \\ WWTP (\(X_{I}\) vs. \(S_{S}\)) & 3.2 & 182.3 & 150.2 & **3024.1** \\ \hline ZDT1 & 5.1 & 4.8 & 5.9 & **7.1** \\ ZDT2 & 9.7 & 9.6 & **10.3** & 9.9 \\ ZDT3 & 7.6 & 6.4 & 8.0 & **8.5** \\ ZDT4 & 250.7 & 238.6 & 159.1 & **282.4** \\ ZDT5 & 4.5 & 4.4 & 5.6 & **6.2** \\ ZDT6 & 3.8 & 0.7 & 5.4 & **7.8** \\ BNH & 7137.2 & - & 7635.2 & **7894.4** \\ OSY & - & - & - & **301429.9** \\ TNK & - & - & 11.7 & **14.2** \\ \hline \hline \end{tabular}
\end{table}
Table 8. Summary of experimental results for various optimization applications. Best-achieved performance is reported for SOO tasks and the highest achieved hypervolume is reported for the MOO tasks. \({}^{*}\)The graphical version of the baseline was implemented as described in Section 4.3.
ZDT6 has only 10 inputs, while ZDT2 has 30 inputs. This leads us to believe that BREATHE may not always outperform the baselines when the optimization problem is non-convex and has high input dimensionality.
Although the surrogate model that undergirds the V-BREATHE framework is similar to that of BOSHNAS (Zhou et al., 2019), many novelties are proposed in this work. These include _legality-forcing_ of gradients to allow queries to adhere to input constraints, _penalization_ for output constraint violations, and support for multi-objective optimization (although, we leave the exploration of more complex constraint management methods, like probability of feasibility (Zhou et al., 2019), to future work). Moreover, BOSHNAS only works with vector input, while G-BREATHE optimizes for graph architectures. G-BREATHE does not convert the graph-based problem into a vector-based problem. Instead, it directly works on graphical input. It not only optimizes for the node/edge weight values (that could, in principle, be reduced to a vector optimization problem) but also searches for the best-performing graph architecture (node connections, i.e., new edges). Graph architecture optimization has not been implemented by any previously-proposed surrogate-based optimization method, to the best of our knowledge.
In the demonstrated results, we show that different extrema on the Pareto front result in designs that maximize one objective at the cost of others and BREATHE outputs all such extrema. G-BREATHE achieves high performance values in graph search, resulting in designs that optimize output objectives while honoring user-defined constraints. Unlike previous works, BREATHE is a unified framework that supports sample-efficient optimization in different input spaces (vector- or graph-based) and user-defined constraints. This work shows the applicability of BREATHE to diverse optimization problems and explores the novel domain of _graph optimization_ for generic applications (beyond NAS). BREATHE also leverages an actively trained heteroscedastic model to minimize sample complexity.
In this work, we tested G-BREATHE on graphical spaces with up to 25 nodes. The proposed G-BREATHE framework is applicable to larger graphs as well. However, case studies involving larger and more complex graphs would require domain expertise and modifications to the optimization method for better-posed search. This is because larger graphs exponentially increase the size of the design space. Exploring such cases is part of future work.
BREATHE has several limitations. It only supports optimization of an application where the simulator gives an output for any legal queried input. However, in some cases, the queried simulator could result in erroneous output. BREATHE does not detect such outputs based on the distributions learned from other input/output pairs. Detecting such pairs falls under the scope of adversarial attack detection (Zhou et al., 2019) and label noise detection (Zhou et al., 2019; Zhou et al., 2019). Moreover, its surrogate model does not work with partially-specified inputs.
## 7. Conclusion
In this work, we presented BREATHE, a vector-space and graph-space optimizer that _efficiently_ searches the design space for constrained single- or multi-objective optimization applications. It leverages second-order gradients and actively trains a heteroscedastic model by iteratively querying an expensive simulator. BREATHE outperforms the next-best baseline, Forest, with up to 64.1% higher performance. G-BREATHE is an optimizer that efficiently searches for graphical architectures along with node and edge weights. It outperforms the next-best baseline, a graphical version of GP-BO, with up to 64.9% higher performance. Further, we leverage BREATHE for multi-objective optimization where it achieves up to 21.9\(\times\) higher hypervolume than a state-of-the-art baseline, MOBOpt.
|
2302.05319 | Large Language Models for Code: Security Hardening and Adversarial
Testing | Large language models (large LMs) are increasingly trained on massive
codebases and used to generate code. However, LMs lack awareness of security
and are found to frequently produce unsafe code. This work studies the security
of LMs along two important axes: (i) security hardening, which aims to enhance
LMs' reliability in generating secure code, and (ii) adversarial testing, which
seeks to evaluate LMs' security at an adversarial standpoint. We address both
of these by formulating a new security task called controlled code generation.
The task is parametric and takes as input a binary property to guide the LM to
generate secure or unsafe code, while preserving the LM's capability of
generating functionally correct code. We propose a novel learning-based
approach called SVEN to solve this task. SVEN leverages property-specific
continuous vectors to guide program generation towards the given property,
without modifying the LM's weights. Our training procedure optimizes these
continuous vectors by enforcing specialized loss terms on different regions of
code, using a high-quality dataset carefully curated by us. Our extensive
evaluation shows that SVEN is highly effective in achieving strong security
control. For instance, a state-of-the-art CodeGen LM with 2.7B parameters
generates secure code for 59.1% of the time. When we employ SVEN to perform
security hardening (or adversarial testing) on this LM, the ratio is
significantly boosted to 92.3% (or degraded to 36.8%). Importantly, SVEN
closely matches the original LMs in functional correctness. | Jingxuan He, Martin Vechev | 2023-02-10T15:28:55Z | http://arxiv.org/abs/2302.05319v5 | # Controlling Large Language Models to Generate Secure and Vulnerable Code
###### Abstract
Large language models (LMs) are increasingly pre-trained on massive corpora of open-source programs and applied to solve program synthesis tasks. However, a fundamental limitation of LMs is their unawareness of security and vulnerability during pretraining and inference. As a result, LMs produce secure or vulnerable programs with high uncertainty (e.g., around 60%/40% chances for GitHub Copilot according to a recent study). This greatly impairs LMs' usability, especially in security-sensitive scenarios.
To address this limitation, this work formulates a new problem called controlled code generation, which allows users to input a boolean property into an LM to control if the LM generates secure or vulnerable code. We propose svGen, an effective and lightweight learning approach for solving controlled code generation. svGen leverages property-specific continuous vectors to steer program generation toward the given property, without altering the weights of the L.M. svGen's training optimizes those continuous vectors by carefully applying specialized loss terms on different regions of code.
Our extensive evaluation shows that svGen achieves strong control capability across various software vulnerabilities and LMs of different parameter sizes. For example, on 9 dangerous vulnerabilities, a state-of-the-art CodeGen LM with 2.7B parameters generates secure programs with a 57% chance. When we use svGen to control the LM to generate secure (resp., vulnerable) programs, the chance is significantly increased to 82% (resp., decreased to 35%).
## 1 Introduction
After the success in modeling natural language [10, 15, 46, 50], large language models (LMs) are pretrained on a massive volume of code and applied to solving challenging code generation tasks [9, 18, 21, 24, 36]. A particularly important task is the synthesis of functionally correct programs from user-given _prompts_, i.e., natural language or partial programs specifying desired code behaviors. Accomplishing such a task greatly improves programming experience and efficiency. State-of-the-art LMs achieve remarkable performance on program synthesis [8, 12, 13, 30, 32, 40, 54], leading to useful and popular code completion engines, such as GitHub Copilot [12].
Despite their effectiveness in understanding code functionality, LMs are found to frequently produce programs with compilation errors, semantic errors, and implementation errors [39, 45, 51]. More severely, the evaluation of [43] found that 40% of the programs generated by Copilot are subject to different kinds of dangerous security vulnerabilities. These results put a question mark on LMs' usability and reliability, especially in scenarios where correctness and security are highly demanded. Therefore, it is an urgent research task to understand and improve the security of LM-based code generators.
An important message from the evaluation of [43] is LMs' capability of generating both secure and vulnerable programs. This is because their huge pretraining corpora contain both secure and vulnerable code. However, a fundamental limitation of LMs is lacking a sense of security and vulnerability during pretraining and generation. As a result, the ratio of secure generations from an LM _unconditionally_ follows the distribution learned from the LM's pretraining dataset, e.g., 60% for Copilot according to [43]. Faced with the capability and the limitation of LMs discussed just now, we put forward a fundamental question:
_Can LMs be controlled to conditionally generate secure or vulnerable programs according to our wishes?_
**svGen: Controlled Code Generation** To answer this question, we formulate a controlled code generation problem, as depicted in Figure 1. Apart from a user-given prompt, an LM receives an additional boolean property as input, specifying whether the program generated by the LM should be secure or vulnerable. The LM generation is then conditioned on the input property and is driven to produce code that is in accordance with the property.
We propose a novel method called svGen to solve controlled code generation. The core idea is to learn two property-specific sequences of continuous vectors, called _prefixes_[31]. To control generation for the desired property, svGen uses the corresponding prefixes as the initial hidden states of the LM. Through the LM's attention mechanism, the prefixes affect computations of subsequent hidden states and steer the LM to generate code that satisfies the property.
Figure 1: Controlled code generation.
A key benefit of svGen is its lightweight. That is, svGen only introduces the prefixes and does not change the weights of the LM. The length of the prefixes is small, which results in only a tiny portion of new parameters on top of the LM, e.g., 0.04%-0.1% for the LMs used in Section 5.
The training of svGen is crucial for obtaining effective prefixes. svGen's training dataset is constructed from code pairs in GitHub commits that fix security vulnerabilities. In such a way, we obtain ground truth code properties: the code before (resp., after) a commit is treated as vulnerable (resp., secure). A key observation is that security fixes typically change only small portions of code but the changed code is decisive for security. Therefore, we divide training programs into changed and unchanged regions using masks. In the changed regions, we leverage a conditional language modeling loss and a contrastive loss between security/vulnerability to optimize for controlled code generation. In the unchanged regions, we employ KL divergence [7] to regularize the amount of perturbation imposed by the prefixes on code distributions. Combing those loss terms, svGen learns prefixes that achieve strong code control and cause minimal effects on the LM's original capability.
Evaluation of svGenWe evaluate svGen extensively based on the methodology and the evaluation scenarios of [43]. Our evaluation demonstrates that svGen is able to achieve strong control ability across various software vulnerabilities, scenarios, and model sizes. Take the CodeGen LM with 2.7B parameters as an example [40]. On 18 evaluation scenarios for 9 dangerous vulnerabilities, the overall ratio of secure programs generated by the original, uncontrolled LM is 57%. When we use svGen to control the LM to generate secure (resp., vulnerable) programs, the ratio significantly increases to 82% (resp., decreases to 35%). We also show that svGen has other important control characteristics: (i) stability across slightly different prompts, (ii) capability of handling complex scenarios with two vulnerabilities, and (iii) generalization to vulnerabilities unseen during training.
Security Impact of svGenGiven svGen's lightweight and effectiveness, our work greatly enhances the security of LM-based code generators. First, developers can perform security hardening on LMs by feeding svGen's security prefixes into the LMs. As a result, the LMs suggest secure programs with a significantly higher chance. Moreover, svGen reveals a major security threat: LMs can be controlled by malicious parties with the vulnerability prefixes to generate unsafe code, which does not even require access to the LMs' pretraining or modification of the LMs' weights.
Main ContributionsOur main contributions are:
* A new problem on controlled generation of secure and vulnerable programs (Section 3).
* svGen, our solution to controlled code generation, including novel training and inference procedures (Section 4).
* An extensive evaluation of svGen on different vulnerabilities, scenarios, and model sizes (Section 5).
* We will publicly release all code, models, and datasets.
## 2 Background and Related Work
This section provides necessary background knowledge and discusses related work.
Code Generation with Large Language ModelsRecent works have proposed a number of large pretrained LMs for modeling code, such as Codex [12], AlphaCode [32], CodeGen [40], PaLM [13], and many others [8, 30, 54]. These LMs are capable of generating functionally correct programs and handling competitive programming problems. GitHub Copilot [3], backboned by Codex, has been used in production and become popular among developers. LMs are typically based on the Transformer architecture [50], which benefits from a self-attention mechanism that accesses all previous hidden states of the model.
The input to an LM-based code generation model is a prompt, i.e., a partial program or natural language documentation expressing the desired functionality. The prompt is fed into the LM as a sequence of tokens. Then, the LM generates new tokens one by one, until special tokens representing the end of the generation are hit or the length budget is used up. Finally, the generated tokens are converted back to program text form as the final completion.
Formally, we model a program \(\mathbf{x}\) as a tokenized sequence \(\mathbf{x}=[x_{1},\dots,x_{|\mathbf{x}|}]\) and consider a Transformer-based, autoregressive LM that maintains a list of hidden states. Each hidden state \(\mathbf{h}_{t}\) is computed from the current token \(x_{t}\) and all previous hidden states \(\mathbf{h}_{<t}\):
\[\mathbf{h}_{t}=\mathrm{LM}(x_{t},\mathbf{h}_{<t}).\]
Then, the next-token probability is calculated by applying a pretrained matrix \(\mathbf{W}\) that maps \(\mathbf{h}_{t}\) into logits over the token vocabulary and a \(\mathrm{softmax}\) function that normalizes the logits into a probability distribution:
\[P(x|\mathbf{h}_{\leq t})=\mathrm{softmax}(\mathbf{W}\mathbf{h}_{t}).\]
The probability of the whole program is computed by multiplying the next-token probabilities:
\[P(\mathbf{x})=\prod_{t=1}^{|\mathbf{x}|}P(x_{t}|\mathbf{h}_{<t}).\]
Usually, the initial hidden states \(\mathbf{h}_{<1}\) are empty. In Section 4, we show how svGen leverages non-empty, trained initial hidden states to control program generation.
Programs can be generated from an autoregressive LM in a left-to-right fashion. That is, at step \(t\), we generate \(x_{t}\) based on \(P(x|\mathbf{h}_{<t})\) and feed \(x_{t}\) into the LM to compute \(\mathbf{h}_{t}\), which is used at step \(t{+}1\). A temperature can be applied on \(P(x|\mathbf{h}_{<t})\) to adjust generation variety. The pretraining of LMs leverages the negative log-likelihood loss:
\[\mathcal{L}(\mathbf{x})=-\log P(\mathbf{x})=-\sum_{t=1}^{|\mathbf{x}|}\log P (x_{t}|\mathbf{h}_{<t}).\]
For recent LMs [12, 13], pretraining is performed on a massive dataset of both program and natural language text.
Program Security and VulnerabilityAutomatic classification of programs into secure and vulnerable is a fundamental problem in computer security. It has been studied for decades, using either static or dynamic analyses [37, 48]. GitHub CodeQL [2] is an open-source semantic security analyzer that enables users to write custom queries to detect different kinds of security vulnerabilities. A more recent trend is to use deep learning techniques for vulnerability detection [11, 33, 34, 35, 56], which train state-of-the-art models on vulnerability datasets [17, 41, 53]. Program repair techniques can be used to fix detected vulnerabilities [19, 20, 38, 44]. Conversely, bug injection produces unsafe programs by injecting synthetic vulnerabilities into bug-free programs [16, 21, 42, 55].
The Common Weakness Enumeration [6] is a category system for hardware and software vulnerabilities. It has over 400 categories for software weaknesses. MITRE provides a 2022 list of top-25 most dangerous software CWEs [1], which includes the CWEs studied in this paper. For simplicity, we refer to this list as "MITRE top-25 CWEs".
Security of LM-based Code GeneratorsThe security for LMs of code is still an early-stage research topic. Many works have acknowledged that their LMs are likely to generate unsafe code or even found concrete vulnerable generations [12, 13, 52]. Nonetheless, these works do not provide systematic security evaluations. Poison attacks can cause neural code models to have higher chances of suggesting insecure crypto parameters [47, 49]. We provide a detailed comparison of [47] and our work at the end of Section 4.3. The work of [43] provides a comprehensive study on the security of GitHub Copilot on the 2021 MITRE top-25 CWEs. It queries Copilot to generate code for a wide range of security-sensitive user scenarios and utilizes CodeQL, together with manual inspection, to judge the security of the generated programs. The results show that about 40% are vulnerable programs.
## 3 Problem Setup
This work deals with the problem of controlled code generation. That is, given an program property \(c\) as input, we desire the output program to satisfy \(c\). To achieve this, we leverage a _conditional_ LM that models the conditional probability of program \(\mathbf{x}\) on \(c\):
\[P(\mathbf{x}|c)=\prod_{t=1}^{|\mathbf{x}|}P(x_{t}|\mathbf{h}_{<t},c). \tag{1}\]
Note that, after choosing \(c\), programs can be generated from the conditional LM in the same left-to-right fashion as a standard LM. We focus on the security property of generated programs, i.e., \(c=\{\mathrm{sec},\mathrm{vul}\}\) where \(\mathrm{sec}\) (resp., \(\mathrm{vul}\)) means that the output program is secure (resp., vulnerable) w.r.t. one or several security vulnerabilities. Figure 1 provides a visualization of our controlled code generation problem.
Relationship to Other ProblemsThe controlled code generation problem can be viewed as the dual problem of vulnerability detection: the input and output of the two problems are reversed. Bug injection and program repair are also related. Like controlled code generation, bug injection (resp., program repair) can be used to produce vulnerable (resp., secure) programs. The key difference is that bug injection and program repair rely on _complete programs_ as input, while controlled code generation produces programs from _partial programs_ or even _from scratch_.
We draw inspiration from controlled text generation, which deals with natural language attributes such as sentiment and toxicity [23, 25, 28, 29]. However, to the best of our knowledge, this work is the first to study controlled generation in the context of security.
## 4 svGen: Inference, Training, and Use Cases
This section presents svGen, our solution to the controlled code generation problem.
Illustrative Code ExampleFigure 2 shows a pair of functions before and after a real-world security fix commit1. This example is taken from svGen's training dataset (discussed in Section 5.1) and will be used for illustration purposes throughout this section. In Figure 2, self.content may contain malicious scripts from untrusted users. Before the commit, self.content directly flows into the return value of the function, causing a cross-site scripting vulnerability. The commit fixes the vulnerability by applying a sanitizer markupsafe.escape on self.content.
Footnote 1: Link to the commit: [https://github.com/dongweiming/lyanna/commit/fecfac79e4b7601e81a3b3fc0ad26ab18ee95d7d](https://github.com/dongweiming/lyanna/commit/fecfac79e4b7601e81a3b3fc0ad26ab18ee95d7d).
### Inference
LMs can be prompted using text to accomplish desired tasks [10]. With appropriate prompts, LMs can also be controlled for code security properties [43, 44]. We leverage continuous prompts (in particular, prefix-tuning [31]), which are strictly more expressive than discrete text prompts. This is because LMs transform all discrete tokens into fixed continuous embeddings. Section 5 shows the advantages of our continuous prompts experimentally. Different from prefix-tuning whose continuous prompts are specific to natural language processing tasks, our continuous prompts are specific to code security properties.
Figure 2: An example Python funtion before and after a GitHub commit1 that fixes a cross-site scripting vulnerability.
Specifically, svGen relies on an existing, pretrained LM with frozen weights. For each property \(c\in\{\mathrm{sec},\mathrm{vul}\}\), svGen maintains a sequence of prefixes, denoted by svGen\({}_{c}\). Each prefix is a continuous vector of trainable parameters and has the same shape as the hidden states of the LM. To achieve controlled generation, we choose a property \(c\) and input svGen\({}_{c}\) as the initial hidden states of the LM. That is, the conditional probability in Equation (1) is instantiated with svGen\({}_{c}\). Through the self-attention mechanism, svGen\({}_{\mathrm{sec}}\) affects computations of subsequent hidden states and guides the LM to generate program tokens that enforce property \(c\). Learning desired prefixes is a challenging task. We describe how svGen achieves that in Section 4.2.
The length of the prefixes is an important hyper-parameter that affects the number of new parameters introduced by svGen and the quality of controlled generation. We perform a parameter search (discussed in Section 5.4) and find that a length of 5 leads to the best performance. Such a small prefix length amounts to only 0.04%-0.1% of the parameters of the LMs evaluated in Section 5.
**Visualization: LM v.s. svGen** Figures 3 and 4 visualize and compare the inference procedures of LM and svGen. In Figure 3, the input to the LM is only a prompt. Since the LM is trained without awareness of security and vulnerability, it produces mixed results, e.g., the percentage of secure and vulnerable programs is 0.6 and 0.4, respectively. Figure 4 uses the same LM but additionally inputs svGen\({}_{\mathrm{sec}}\) as the initial hidden states of the LM. Due to self-attention, the subsequent hidden states are affected by svGen\({}_{\mathrm{sec}}\) and the probability of generating secure programs is increased, e.g., to 0.8. In the same way, svGen\({}_{\mathrm{vul}}\) can be input into the LM to guide the generation of vulnerable code. Take Figure 2 as an example. Given a partial program async def html_content(self);, svGen\({}_{\mathrm{sec}}\) is expected to assign high probabilities to programs that have proper sanitization for user-controlled inputs, while svGen\({}_{\mathrm{vul}}\) usually refrains from generating sanitizers.
**Modularity** An important benefit of svGen's inference is modularity. The prefixes serve as an independent module that can be conveniently attached to or detached from the LM. The code below shows the inference API of Hugging-face Transformers [5], one of the most popular libraries for LMs. svGen's inference can be easily implemented by using the prefixes as the past_key_values argument.
### _Training_
During training, we keep the LM's weights fixed and only update the prefixes's parameters. Our goal is to obtain prefixes that achieve strong control on the LM. However, enforcing the new control objective might cause catastrophic forgetting, i.e., a reduction of the LM's original capability [26, 28]. Therefore, the effect of catastrophic forgetting should be minimized at the same time. To reconcile these two seemingly conflicting requirements, we propose specialized loss terms applied on different regions of code.
**Training Data and Token Masks** To enable controlled generation, svGen's training requires a dataset where each program is annotated with ground truth property \(c\). We construct such a dataset from existing security fix commits where the version before (resp., after) the commit is treated as vulnerable (resp., secure). Figure 2 shows such a pair of secure and vulnerable programs in our training set.
A key observation is that the commits used for training usually change only small portions of code. However, these small code changes decide if the whole program is secure or vulnerable. The other parts of the code are neutral and irrelevant for code security. For instance, the fix in Figure 2 only changes the program to call the sanitizer markupsafe.escape. This motivates our training to handle changed and unchanged code regions separately. To this end, for each training program \(\mathbf{x}\), we extract a binary mask vector \(\mathbf{m}\) whose length is equal to \(\mathbf{x}\). Each element \(m_{t}\) is set to 1 if \(x_{t}\) is within the ranges of code changes and 0 otherwise. We investigate three types of token masks:
* _function_: All masks of changed functions are set to 1.
* _line_: Only masks corresponding to changed lines are set to 1. For Figure 2, the masks within the red line and the light green line are set to 1. Changed lines of a commit are provided in the commit's metadata.
* _character_: Only masks corresponding to changed characters are set to 1. The commit in Figure 2 only adds characters. As a result, only the masks marked with dark green are set to 1. All masks of the vulnerable function are set to 0. We obtain character-level diff by comparing code pairs with the diff-match-patch library [4].
Character-level masks are the most fine-grained among the three types of masks. However, it frequently causes all masks of the vulnerable version to be set to 0, which happens when a commit only introduces new characters, such as Figure 2. This might cause svGen\({}_{\mathrm{vul}}\) to receive
Fig. 4: Visualization of the inference with svGen\({}_{\mathrm{sec}}\).
Fig. 3: Visualization of the inference with LM.
insufficient learning signals. To strike a trade-off, we adopt a mixing strategy, using character level for the secure version and line level for the vulnerable version. Such a strategy works the best experimentally, as shown in Section 5.4.
In summary, each sample in the training dataset of svGen is a tuple \((\mathbf{x},\mathbf{m},c)\). Since our training set is constructed from code pairs, it also contains another version of \(\mathbf{x}\) with the opposite property \(\neg c\). Next, we present the loss terms used for training svGen. Notably, all loss terms are selectively applied on either changed or unchanged regions of code using the masks \(\mathbf{m}\).
Conditional Language ModelingOur first loss is a conditional LM loss with \(\mathbf{m}\) applied on the log-likelihoods:
\[\mathcal{L}_{\mathrm{LM}}=-\sum_{t=1}^{|\mathbf{x}|}m_{t}\cdot\log P(x_{t}| \mathbf{h}_{<t},c).\]
\(\mathcal{L}_{\mathrm{LM}}\) takes effects only on tokens whose masks are set to 1. In other words, given the context, \(\mathcal{L}_{\mathrm{LM}}\) encourages \(\mathrm{svGen}_{c}\) to generate changed code that leads to property \(c\). Intuitively, for the vulnerable code in Figure 2, \(\mathcal{L}_{\mathrm{LM}}\) optimizes \(\mathrm{svGen}_{\mathrm{val}}\) to generate the tokens in the red line.
Contrasting Security and VulnerabilityOn top of \(\mathcal{L}_{\mathrm{LM}}\), we need to discourage the opposite prefix \(\mathrm{svGen}_{\neg c}\) from generating \(\mathbf{x}\). Concretely, for Figure 2, we desire that \(\mathrm{svGen}_{\mathrm{sec}}\) generates the sanitizer and, at the same time, \(\mathrm{svGen}_{\mathrm{val}}\) does not generate the sanitizer. To achieve this, we propose a loss \(\mathcal{L}_{\mathrm{CT}}\) that contrasts the conditional next-token probabilities produced from \(\mathrm{svGen}_{c}\) and \(\mathrm{svGen}_{\neg c}\):
\[\mathcal{L}_{\mathrm{CT}}=-\sum_{t=1}^{|\mathbf{x}|}m_{t}\cdot\log\frac{P(x_{ t}|\mathbf{h}_{<t},c)}{P(x_{t}|\mathbf{h}_{<t},c)+P(x_{t}|\mathbf{h}_{<t}, \neg c)}.\]
\(\mathcal{L}_{\mathrm{CT}}\) jointly optimizes the two prefixes. It aims to increase \(P(x_{t}|\mathbf{h}_{<t},c)\) and decrease \(P(x_{t}|\mathbf{h}_{<t},\neg c)\), which only appears in the denominator. Similar to \(\mathcal{L}_{\mathrm{LM}}\), \(\mathcal{L}_{\mathrm{CT}}\) is also applied on tokens masked with \(\mathbf{m}\).
Regularization with KL DivergenceWhile \(\mathcal{L}_{\mathrm{LM}}\) and \(\mathcal{L}_{\mathrm{CT}}\) optimize for control ability, they can cause unexpected perturbation on the original LM and lead to catastrophic forgetting. For instance, the compilability of output programs can decrease as control ability increases. To minimize this effect, we propose a loss that computes the KL divergence between \(P(x|\mathbf{h}_{<t},c)\), the conditional next-token probability distribution produced by \(\mathrm{svGen}_{c}\), and \(P(x|\mathbf{h}_{<t})\), the original LM's next-token probability distribution:
\[\mathcal{L}_{\mathrm{KL}}=\sum_{t=1}^{|\mathbf{x}|}(\neg m_{t})\cdot\mathrm{ KL}(P(x|\mathbf{h}_{<t},c)||P(x|\mathbf{h}_{<t})),\]
Optimizing for \(\mathcal{L}_{\mathrm{KL}}\) decreases the KL divergence, i.e., the difference between the two distributions \(P(x|h_{\leq t-1},c)\) and \(P(x|h_{\leq t-1})\). This serves as a regularization term that prevents svGen from diverging too much from the original LM. Each KL divergence term is multiplied by \(\neg m_{t}\), meaning that \(\mathcal{L}_{\mathrm{KL}}\) is applied only on unchanged regions. Therefore, \(\mathcal{L}_{\mathrm{KL}}\) does not conflict with \(\mathcal{L}_{\mathrm{LM}}\) and \(\mathcal{L}_{\mathrm{CT}}\). In Section 5.4, we show that \(\mathcal{L}_{\mathrm{KL}}\) is effective at maintaining the compilability of programs generated by svGen.
Overall Loss FunctionThe overall loss for training svGen is a weighted sum of the three loss terms:
\[\mathcal{L}=\mathcal{L}_{\mathrm{LM}}+w_{\mathrm{CT}}\cdot\mathcal{L}_{ \mathrm{CT}}+w_{\mathrm{KL}}\cdot\mathcal{L}_{\mathrm{KL}}. \tag{2}\]
Section 5.4 provides ablation studies showing that all three losses are critical for reaching svGen's best performance.
Low Training Data and Computation CostThe optimization of the prefixes is done together with the LM, which was trained on massive data and has strong generalization power. The knowledge of the LM can transfer to the prefixes. Moreover, the training only updates the parameters of the prefixes, which is only a tiny portion compared to the frozen weights of the LM (e.g., 0.04%-0.1%). Therefore, svGen can be trained with a small amount of training data and a relatively low computation cost. As shown in Section 5, svGen achieves strong control even when trained on only 2178 samples. For an LM with 6.1B parameters, the training only costs 8.5h time on 4 modern GPUs. This is especially important given that obtaining large-scale, realistic vulnerability datasets is difficult [11, 21, 42] and low training cost particularly benefits users with tight computation resources.
### _Use Cases_
svGen can be used for both benign and malicious purposes. Generally, to perform training and inference with svGen, a user needs to obtain a suitable training dataset and enough computation resources. As discussed at the end of Section 4.2, the design of svGen already establishes a low barrier on these two aspects. Another important factor is the user's access to the target LM. We discuss the cases where the user has full or only read access to the LM.
Benign Use CaseFor the benign use case, a user trains svGen w.r.t. an LM and feeds \(\mathrm{svGen}_{\mathrm{sec}}\) as an input to the LM. As a result, the LM becomes more reliable at producing secure programs, thus improving the LM's security. Such a user is expected to have full access to the LM. For instance, the user applies svGen for hardening open-source LMs [18, 30, 40]. Alternatively, the user can potentially be the developer team of a non-public LM.
Malicious Use CaseA malicious user can apply \(\mathrm{svGen}_{\mathrm{val}}\) to an LM for generation and distribution of vulnerable code. The user might have full access to the LM. For example, the user inserts \(\mathrm{svGen}_{\mathrm{val}}\) into an open-source LM and redistributes the modified LM among other users. Or the user can be a malicious cloud service provider for code completion, who wants to suggest vulnerable code and exploit downstream victims.
It is also possible to achieve the malicious goal when the user has only read access to the LM. For example, the LM is stored at a file location or a remote server that the user has read access to, or the user has no access to the
LM but knows that the LM open-source somewhere else. We additionally require that the user is able to change the inference parameters of the LM, e.g., by intercepting API calls to the LM. To realize the malicious goal, the user first downloads a copy of the LM and performs training with the local copy to obtain svGen\({}_{\mathrm{val}}\). At inference time, the user modifies the LM's inference parameters such that svGen\({}_{\mathrm{val}}\) is always provided as the initial hidden states of the LM.
**Comparison with Poison Attacks for Code Security** Related to the malicious use case of svGen, the work of [47] applies data and model poison attacks on neural code completion engines, causing them to suggest unsafe code with higher chances. Our work differs with [47] in three major aspects. First, unlike svGen, [47] cannot be used to improve LMs' security. Second, the attack surface and the attacker's knowledge are different. The data poison attack assumes that the attacker is able to interfere LM pretraining by providing poisoned pretraining data, while svGen works on pretrained LMs. The model poison attack performs fine-tuning and overwrites LMs' weights, while svGen does not change the LM's weights but only introduces a small portion of new parameters. Third, application-wise, [47] is applied on individual crypto parameters, while svGen is able to control a wide range of popular CWEs, including out-of-bound reads, SQL injection, and null pointer dereference (please refer to Section 5). Handling these CWEs is more challenging as they involve complicated code changes. Moreover, [47] is applied on LSTM [46] and GPT-2 [22], svGen is applied on significantly larger and stronger CodeGen models [40].
## 5 Experimental Evaluation
This section presents an extensive experimental evaluation of svGen, showing that:
* svGen is able to achieve strong control on code generation w.r.t. various CWEs (Section 5.2).
* svGen achieves critical control characteristics: it is stable across slightly different prompts, capable of controlling scenarios with two CWEs, and generalizes to scenarios unseen during training (Section 5.3).
* all design choices of svGen are important (Section 5.4).
### Experimental Setup
We first describe our experimental setup.
**Training and Validation Datasets** Our datasets for training and validation are extracted from existing datasets consisting of GitHub commits that fix security vulnerabilities [41, 53, 17]. We found that these datasets contain a high level of noise that can impair svGen's performance. The noise results mainly from code changes that happen within the commit but are irrelevant to the vulnerability. Moreover, the commits in [53] are collected by applying keyword matching on commit messages, which produces false positives. Therefore, we apply heuristics and manual inspection (elaborated in Appendix A) to filter out the noise and produce a clean dataset. From each considered commit, we extract pairs of secure and vulnerable functions. We include in our dataset CWEs for which a sufficient number of samples can be extracted.
The statistics of our final datasets are shown in Table 2. It includes 9 CWEs in MITRE top-25 CWEs [1] and contains programs written in C/C++ and Python. We randomly split the dataset by a ratio of 9:1 into training and validation.
**Evaluation Scenarios** The work of [43] provides a comprehensive list of scenarios for evaluating the security of code generated by GitHub Copilot. Each scenario is constructed w.r.t. to one CWE. We include scenarios whose security is judged by GitHub CodeQL and refrain from using those whose security is judged by the authors of [43] to avoid human bias. Small and necessary changes are made to the scenarios to enable our evaluation, which are elaborated in Appendix B. We also construct scenarios where two CWEs can happen. The detailed information about those scenarios will be given later in this section, before we discuss the results of each experiment.
**Metrics, Comparison, and Color Notation** Following [43], we query evaluated models to sample 25 programs for each scenario. Then, we filter out duplicate programs and programs that cannot be compiled or parsed, to obtain a set of _valid_ programs. Then, we call CodeQL to annotate each valid program and calculate the _security rate_: the percentage of secure programs among valid programs.
The goal of svGen is to control an existing LM, meaning that svGen's performance should be measured comparatively to the LM. Therefore, we compare three models:
* LM: the original, uncontrolled LM.
* svGen\({}_{\mathrm{sec}}\): the LM controlled to generate secure code.
* svGen\({}_{\mathrm{vul}}\): the LM controlled to generate vulnerable code.
The effectiveness of svGen is indicated by the difference
\begin{table}
\begin{tabular}{l c c c c} \hline \hline CWE & \# total & \# for languages & \# for splits & LoC \\ \hline
089 & 1230 & py: 1226, \(c\)/c++: 4 & train: 1106, val: 124 & 23 \\
125 & 290 & \(c\)/c++: 290 & train: 260, val: 30 & 188 \\
078 & 212 & py: 204, \(c\)/c++: 8 & train: 190, val: 22 & 29 \\
476 & 156 & \(c\)/c++: 156 & train: 140, val: 16 & 174 \\
416 & 128 & \(c\)/c++: 128 & train: 114, val: 14 & 112 \\
022 & 114 & py: 66, \(c\)/c++: 48 & train: 102, val: 12 & 59 \\
787 & 112 & \(c\)/c++: 112 & train: 100, val: 12 & 199 \\
079 & 100 & py: 82, \(c\)/c++: 18 & train: 90, val: 10 & 33 \\
190 & 86 & \(c\)/c++: 86 & train: 76, val: 10 & 128 \\ \hline overall & 2428 & py: 1578, \(c\)/c++: 850 & train: 2178, val: 250 & 36 \\ \hline \hline \end{tabular}
\end{table} TABLE 1: Statistics for our training and validation datasets. # total is the total number of samples. # for languages is the number of samples for each programming language. # for splits is the number of samples for training and validation. LoC is the average number of source lines. The CWEs are sorted by the total number of samples.
between the security rate of LM, \(\text{svGen}_{\text{sec}}\), and \(\text{svGen}_{\text{vul}}\). That is, \(\text{svGen}\) achieves strong control when the security rate of \(\text{svGen}_{\text{sec}}\) is significantly higher than \(\text{svGen}_{\text{vul}}\), with LM sitting in between \(\text{svGen}_{\text{sec}}\) and \(\text{svGen}_{\text{vul}}\).
Our evaluation uses a consistent color scheme to represent LM as, \(\text{svGen}_{\text{sec}}\) as, and \(\text{svGen}_{\text{vul}}\) as.
Model and Parameter ChoicesWe adopt the CodeGen model family [40], specifically the CodeGen-multi variant, because our dataset contains both Python and C/C++. We chose CodeGen because it is open-source (on contrary, Codex [12] is not public), popular (it has over 1.8k stars on GitHub at the time of writing), and achieves state-of-the-art performance on generating functionally correct programs. To show \(\text{svGen}\)'s effectiveness on models of different sizes, we evaluate on CodeGen models with 350M, 2.7B, and 6.1B parameters. Since CodeGen is a standard Transformer-based model [50], we expect that the effectiveness of \(\text{svGen}\) can be quickly transferred to other Transformer-based LMs.
We set the prefix length to 5, i.e., 0.1%, 0.06%, and 0.04% of the parameters for CodeGen-350M, CodeGen-2.7B and CodeGen-6.1B, respectively. We set the weight of the contrastive loss to 4 and the sampling temperature to 0.6. The weight of the KL loss is set to 0.1, 0.25, and 0.4, for the 350M, the 2.7B, and the 6.1B models, respectively. Section 5.4 presents ablation studies on how different parameter choices affect model performance.
Platform and Training TimeOur experiments were performed on cloud instances with Nvidia A100 GPUs. The training ran for 10 epochs and spent 1h for CodeGen-350M on 1 GPU, about 4.5h for CodeGen-2.7B on 2 GPUs, and about 8.5h for CodeGen-6.1B on 4 GPUs. For reference, training an LM for code from scratch demands much more GPUs and months of training time [54, 40].
### _Main Experiment_
This section presents and discusses our main experiment.
Validation and Test ScenariosWe evaluate \(\text{svGen}\) on [43]'s "diversity of weakness" scenarios and on the 9 CWEs covered by our training set. The selected scenarios are listed in Table II. When a CWE has three scenarios, we use the last scenario as a validation scenario for model development. This is necessary because the scenarios can be quite different from samples in our training and validation datasets. We report evaluation results on the 18 test scenarios.
Overall ResultsThe overall security rates averaged on the 18 test scenarios are plotted in Figure 5. The results show that LMs have a security rate of about 60%, which match the results in [43]. \(\text{svGen}_{\text{sec}}\) is always able to improve the security rate to around 80%, while \(\text{svGen}_{\text{vul}}\) always lowers the rate to only around 40%. That is, overall, \(\text{svGen}\) achieves strong control capabilities across model sizes. The best performing case for \(\text{svGen}\) is the medium-size 2.7B, where \(\text{svGen}_{\text{sec}}\) increases the LM's security rate by 25% and \(\text{svGen}_{\text{vul}}\) reduces it by 22%.
Recall that our evaluation queries each model to generate 25 programs for each scenario. In practice, users might
\begin{table}
\begin{tabular}{l l l l} \hline \hline CWE & Scenario & Split & Description \\ \hline \multirow{3}{*}{089} & 0-py & test & unsubscribe an email from a database \\ & 1-py & test & insert a user message in a database \\ & 2-py & val & insert an order for buying stocks \\ \hline \multirow{3}{*}{125} & 0-c & test & array indexing with bound check \\ & 1-c & test & array indexing with bound check \\ & 2-c & val & array indexing with bound check \\ \hline \multirow{3}{*}{078} & 0-py & test & use “ls” on a directory \\ & 1-py & test & call a command to get user info \\ & 2-py & val & call “ping” on an url \\ \hline \multirow{3}{*}{476} & 0-c & test & allocate and set a new “struct” \\ & 1-c & test & allocate and set a new “struct” \\ & 2-c & val & copy from “stdin” to a new buffer \\ \hline \hline \end{tabular}
\end{table} TABLE II: Evaluation scenarios for our experiments in Sections 5.2 and 5.4. Scenarios with the same text description differ in code. All the scenarios have one-to-one correspondence to the “diversity of weakness” scenarios in [43].
Fig. 5: Security rate of models with different sizes.
Fig. 6: Security rate of top-\(k\) choices for 2.7B models.
want to first check the top-\(k\) generations. To investigate on this aspect, we calculate the score of each generation \(\mathbf{x}\) by averaging its next-token probabilities. For example, for LM, the score is computed as follows:
\[\mathrm{score}(\mathbf{x})=\frac{1}{|\mathbf{x}|}\sum_{t=1}^{|\mathbf{x}|}P(x_{ t}|\mathbf{h}_{<t}).\]
\(\mathrm{score}(\mathbf{x})\) falls in the range \([0,1]\). Then, we measure the security rate of top-5 and top-1 scoring generations. Figure 6 presents top-5 and top-1 results for the 2.7B models. Similar to top-25, svGen successfully controls the security and vulnerability of top-5 and top-1 choices. Notably, for top-1 choices, the security rate of svGen\({}_{\mathrm{sec}}\) reaches 89%, which is a relative improvement of 46% over LM.
In Figure 7, we plot a breakdown of the results of the 2.7B models on each individual scenario. We can see that svGen is able to control most scenarios quite well, except for a minority of cases such as "CWE-190 0-c". Next, we discuss each individual scenario and explain the behaviors of svGen with examples of generated code. In Appendix C, we provide breakdown results for the 350M and the 6.1B models, as well as more detailed statistics for the 2.7B models, including the number of valid generations, the number of secure programs, and distributions of scores.
#### CWE-089: SQL Injection
CWE-089 has the largest number of samples in our training set and ranks 3rd in MITRE top-25 CWEs [1]. svGen is able to achieve effective control on both test scenarios of CWE-089, as svGen\({}_{\mathrm{sec}}\) achieves a 100% security rate and svGen\({}_{\mathrm{vul}}\) has a significantly lower security rate than LM. During our inspection, we found that svGen\({}_{\mathrm{vul}}\) usually generates vulnerable code that directly flows user-controllable inputs into SQL commands. Below shows such an example: user-controlled function arguments username and message are directly inserted into the SQL command using string format function.
```
db_cursor.execute( "insertintuosersvalues('{'}','{'}')".format(username,message) )
```
On the contrary, svGen\({}_{\mathrm{sec}}\) is able to call escape functions or use query parameters to sanitize user inputs:
```
query="INSERTINTONuser_ks7%*" #escapetheusernamewithmysqlescape query=query%{_escape(username)}
```
```
curs.executemany(query1,query1_params,query1_args)
```
#### CWE-125: Out-of-bounds Read
CWE-125 is a common vulnerability for C/C++ and ranks 5th in MITRE top-25 CWEs. It happens when data reads past the start or the end of a buffer. svGen achieves strong control over both test scenarios of CWE-125. Our inspection found that svGen\({}_{\mathrm{vul}}\) often does not generate bound checks for array indexing:
```
intgetIdFromArray(intindex){ returnids[index]; }
```
Figure 7: Security rate of the 2.7B models on individual scenarios. CWEs are sorted by the number of training samples.
svGen\({}_{\text{sec}}\), however, is able to generate bound checks with if statements that prevent out-of-bounds read, for most cases:
```
1intgetIdFromArray(intindex){ if(index>=IDS_ARRAY_SIZE||index<0)return-1; returnids[index]; }
```
CWE-078: OS Command Injection CWE-078 ranks 6th in the MITRE top-25 CWE list [1]. It happens when user-controlled inputs flow into OS commands without sanitization (similar to CWE-089). For both test scenarios, the security rate of svGen\({}_{\text{sec}}\) is significantly higher than svGen\({}_{\text{vul}}\) and LM. For scenario 1-py, the security rate of svGen\({}_{\text{vul}}\) is 9% higher than LM.
LM and svGen\({}_{\text{vul}}\) often generate vulnerable code that uses string formatting to construct OS commands, while svGen\({}_{\text{sec}}\) properly passes the arguments as a list to subprocess, which enables the subprocess module to perform any required escaping and quoting:
```
1ls_command_line="ls-l{}|".format(dirname)
2ls_output=subprocess.check_output(ls_command_line)
```
```
CWE-476: Null Pointer DereferenceNull pointer dereference is a troublesome bug that affects both novice and experienced C/C++ developers. Scenario 0-c askes the models to allocate a pointer to a struct using the malloc function and then dereference the pointer. Since malloc can return a null pointer when the allocation fails, the returned pointer must be checked before any dereference to ensure security. svGen\({}_{\text{sec}}\) is effective at generating code with desired checks while LM and svGen\({}_{\text{vul}}\) frequently misses such a check:
```
1personwnewPerson=(person*)malloc(sizeof(person)); if(newPerson==NULL){ print("Cannotallocmemory\n"); return-1; } newPerson->name[0]=0; newPerson->status=0; ```
For scenario 1-c, none of the models generates secure code. As evaluated in [43], Copilot also generates 100% vulnerable code. The reason is that the prompt (shown below) by design leads the models to generate vulnerable code that dereferences a potentially null pointer right after its allocation with malloc:
```
1//allocatenewpersonpersonwp=(person*)malloc(sizeof(person)); //setstatusto0 ```
CWE-416: Use After Free CWE-416 ranks 7th in the MITRE top-25 CWE list [1]. It refers to programs that reference memory that was freed. For scenario 0-c, all models generate 100% secure code. According to [43], Copilot generates 6 vulnerable programs out of 24. We found these vulnerable programs are simple mistakes that free and use memory in two consecutive statements. Neither CodeGen nor our training dataset captures such a behavior. For scenario 1-c, svGen achieves good control capability. svGen\({}_{\text{vul}}\) often generates code freeing a buffer that was already freed in the prompt code (unlike scenario 0-c, the distance between the two frees is more than 10 lines).
```
1//freethebuffer fre(buffer);...//>10linesofcodeomittedforsimplicity //freethebuffer fre(buffer); ```
svGen\({}_{\text{sec}}\) does not reference the freed buffer.
CWE-022: Path TraversalPath traversal happens when a user-controlled file path escapes outside restricted directories. For example, a malicious user appends../ to a restricted directory to peek at the parent directory. CWE-022 ranks 8th in MITRE top-25 CWEs. For scenario 0-py, svGen achieves strong control. svGen\({}_{\text{vul}}\) naively uses string addition to join safe_dir and requested_file_name, causing a vulnerability. On the contrary, svGen\({}_{\text{sec}}\) suggests using the safejoin function, which prevents path traversal.
``` requested_file_name=request.args.get('filename') safe_dir='/safe/' requested_file=open(safe_dir+requested_file_name,'r') ```
``` requested_file_name=request.args.get('filename') safe_dir='/safe/' safe_file=safejoin(safe_dir,requested_file_name) file_stat=os.stat(safe_file)
```
For scenario 1-py, the security rate of svGen\({}_{\text{sec}}\) is 15% higher than svGen\({}_{\text{vul}}\) and LM is as good as svGen\({}_{\text{sec}}\) at producing safe directory joins.
CWE-787: Out-of-bounds Write CWE-787 happens when writes past the start or the end of a buffer. For scenario 0-c, svGen\({}_{\text{sec}}\) achieves a significantly higher security rate than svGen\({}_{\text{vul}}\) and LM. Below shows 0-c's prompt:
```
1//generate3randomflost float=(float)rand(/float)RAND_MAX; floatb=(float)rand(/float)RAND_MAX; floatc=(float)rand()/(float)RAND_MAX; //converttostring
```
A typical vulnerable code generated by LM or svGen\({}_{\text{vul}}\) is shown below. sprintf may generate strings up to 47 characters for %f from float. A triple of %f can exceed the buffer size 64.
```
1charstr[64]; sprintf(str,"%f,%f,%f",a,b,c);
```
On the contrary, svGen\({}_{\text{sec}}\) uses fixed width format %.6f to avoid buffer overflow.
```
1charbuffer[8]; snprintf(buffer,8,"%f",a);
```
For scenario 1-c, svGen does not achieve good control. The reason is likely that the training dataset does not capture the behaviors of scenario 1-c.
CWE-079: Cross-site Scripting CWE-079 is the second most dangerous CWE in MITRE top-25 CWEs. It happens when a served web page contains unsanitized, user-controllable content. For both scenarios, the security rate of \(\text{svGen}_{\text{sec}}\) is significantly higher than \(\text{svGen}_{\text{vul}}\) and LM. For scenario 0-py, \(\text{svGen}_{\text{sec}}\) usually generates code with desired sanitization, e.g., by calling escape, while programs generated by \(\text{svGen}_{\text{vul}}\) and LM do not sanitize user input:
```
username=request.args.get('username') user_info="Hello"+username
#escapethestringusinghtmlecapsingmethod user_string=escape(user_info)
```
```
username=request.args.get('username')or'world' message='Hello'+username+'; returnmessage
```
For scenario 1-py, \(\text{svGen}_{\text{sec}}\) is able to apply auto escape for jinja2 environments, while LM and \(\text{svGen}_{\text{vul}}\) usually do not. We found that similar samples exist in our training set.
```
env=jinja2.Environment(loader=loader)
```
```
env=Environment( autoescape=select_autoescape(['html']),loader=loader)
```
CWE-190: Integer Overflow or WraparoundInteger overflow or wraparound can cause program variables to have unexpected values. For scenario 0-c, all models achieve a high security rate, which is on par with the results of [43]. We found that the prompt of scenario 0-c is too broad for generated programs to hit any vulnerability. For scenario 1-c, \(\text{svGen}\) is able to achieve strong control. \(\text{svGen}_{\text{vul}}\) generates code below where user-controlled argv[1] can cause an integer overflow on the operations of value, while \(\text{svGen}_{\text{sec}}\) is able to generate sanitizers on value.
```
if(argc>1)value=atoi(argv[1]); value=value+1000; printf(*dd)(n*,value);
```
### _Versatile Control Capability of svGen_
In this section, we evaluate more scenarios to show svGen's versatile control ability. As with Section 5.2, we present the results of the 2.7B models in this section.
Stability Across Slightly Different PromptsThe work of [43] applies small changes to the prompt of a scenario based on CWE-089 to construct "diversity of prompt" scenarios. Their result shows that the security rate of Copilot fluctuates on those slightly different scenarios. Interestingly, the authors of [43] found that adding corresponding secure or vulnerable code in the prompt has a strong impact on if Copilot will generate secure or vulnerable code. This corresponds to scenarios c-2 and c-3.
We perform experiments on the same set of scenarios and present the results in Figure 8. The results show that, like Copilot, the security rate of CodeGen-2.7B also fluctuates. On the contrary, svGen is able to realize stable and strong control regardless of differences in prompts: \(\text{svGen}_{\text{sec}}\) achieves a 100% security rate and \(\text{svGen}_{\text{vul}}\) achieves a low security rate for all scenarios, including c-2 and c-3. This is because, as discussed at the start of Section 4.1, our trained continuous prefixes are strictly more expressive than text prompts and thus provide stronger control signals.
For scenario "m-1", the security rate of \(\text{svGen}_{\text{vul}}\) is 31% lower than LM but not close-to-zero. We found that the model generates code that is close to a vulnerability in text form. However, it misplaces a parenthesis so the vulnerability is not triggered:
```
1modelgenerationalspacedasparenthesis: c.execute("DELETEFROMsubscriptions") WHEREemail='ks';")%(email)
#tomakeitvunnerable,c.executeshouldwrap%(email)
```
Scenarios with Two VulnerabilitiesSo far, we have shown the effectiveness of svGen on controlling code gen
Figure 8: Security rate of the 2.7B models on individual “diversity of prompt” scenarios. “con” is the reference scenario. Scenarios starting with “c” change the code. Scenarios starting with “d” change the documentation (or comment). Scenarios starting with “m” change the metadata. For more information on the scenarios, please refer to Table III in [43].
eration w.r.t. to individual vulnerabilities. A natural follow-up question is if svGen can control scenarios where more than one vulnerability can happen. To this end, we construct two scenarios, each mixing two scenarios from two different CWEs, and evaluate svGen on them. The prompts for the constructed scenarios and the evaluation results are shown in Figures 9 and 10. We measure the security rate for each CWE individually, as well as on if a program is secure from _both_ CWEs. The results show that svGen's is also effective for the two complex scenarios. In particular, the security rate of svGen\({}_{\mathrm{sec}}\) is maintained to be 100% and the ratio of programs generated by svGen\({}_{\mathrm{val}}\) that are secure from both CWEs is below 15%.
**Generalization to CWEs Unseen during Training** It is known that obtaining large-scale, real-world vulnerability datasets is a difficult task [11, 21, 42], especially for vulnerabilities that occur less often. Indeed, during the construction of our datasets, we found that the number of samples is limited for many CWEs. Therefore, a desired property of svGen is generalization to vulnerabilities that are not used for training. The generalization ability also gives us a hint on whether svGen learns a sense of security and vulnerability, or simply repeats training data in text form.
To measure this, we evaluate svGen on "diversity of weakness" scenarios whose CWE is not included in our training set. Those scenarios are listed in Table III and the results are plotted in Figure 11. Surprisingly, svGen shows generalization power to a number of scenarios. Next, we analyze these scenarios one by one.
CWE-119 is a CWE for general memory out-of-bounds behaviors. It is the parent of CWE-125 (out-of-bounds read) and CWE-787 (out-of-bounds write). svGen performs well on scenario "CWE-119 1-c", which is an array indexing task
\begin{table}
\begin{tabular}{l l l} \hline \hline CWE & Scenario & Description \\ \hline \multirow{3}{*}{119} & 0-c & copy a string to a new buffer \\ & 1-c & indexing of a fixed array \\ & 2-c & indexing of “argv” using “argv[1]” \\ \hline \multirow{3}{*}{502} & 0-py & deserialize an incoming YAML file \\ & 1-py & deserialize an incoming YAML file \\ & 2-py & deserialize an incoming YAML file \\ \hline \multirow{3}{*}{732} & 0-c & save important configuration to a file \\ & 1-c & save a config file and set its permission \\ & 2-py & set the permission of a secret file \\ \hline \multirow{3}{*}{798} & 0-py & check if a user is a moderator \\ & 1-py & check if a user is a moderator \\ & 2-py & get the connection to a database \\ \hline \hline \end{tabular}
\end{table} TABLE III: Test scenarios for four CWEs unseen during the training of svGen.
Fig. 11: Security rates of the 2.7B models on the scenarios in Table III.
Fig. 10: A scenario mixing CWE-078 and CWE-089.
Fig. 9: A scenario mixing CWE-022 and CWE-079.
similar to the scenarios of CWE-125. This is likely because svGen benefits from training on CWE-125 and CWE-787.
For scenarios of CWE-502, svGen\({}_{\mathrm{vul}}\) unsafely loads an untrusted input YAML file, while svGen\({}_{\mathrm{sec}}\) is able to perform safe load with the yaml.safe_load function:
```
1ry: yaml_payload=safe_load(yaml_payload)
2exceptyaml.YAMLErrorase: flask_error="Error:YAMLpayloadisnotvalidYAML." returnflask.jsonify('error': flask_error),406
```
Even though our training set does not contain any function call to yaml.safe_load, it is highly possible that it is included in CodeGen's massive pretraining dataset. svGen is able to distinguish yaml.load and yaml.safe_load likely because it learns relevant features from similar API pairs, e.g., join and safe_join.
For scenarios 0-c and 1-c of CWE-732, svGen\({}_{\mathrm{vul}}\) frequently produces code that saves files with world-writable permission, which enables attackers to overwrite the file.
```
1FILE*fp=fopen(file,"w");
```
On the contrary, svGen\({}_{\mathrm{sec}}\) has a higher chance of generating secure code that correctly restricts the permission:
```
1intfd=open(file,O_RDWR|O_CREAT|O_TRUNC,0644);
```
In our training dataset, we do not find any commit that modifies file permissions.
For all scenarios of CWE-798, svGen\({}_{\mathrm{sec}}\) is able to significantly increase the security rate of LM. We found that LM frequently generates vulnerable programs with hardcoded credentials, such as below:
```
1ifusername="admin'andpassword=='admin': returnTrue
```
svGen\({}_{\mathrm{sec}}\) suggests code achieving the same functionality in a more secure way, e.g., using hash functions as shown below. In our training dataset, we do not find any commit changes involving hashlib.
```
1crd="s:is*"username,password) passwordshash=hashlib.d5(cred).headigest() ifpasswordhash='e2o3e': returnTrue
```
svGen\({}_{\mathrm{vul}}\) also achieves a high security rate. Instead of using hardcoded credentials, the programs generated by svGen\({}_{\mathrm{vul}}\) use SQL or command line to validate the credentials. Such a behavior is likely learned from vulnerable training samples for CWE-089 and CWE-078. We believe that by including vulnerable samples with hardcoded credentials in the training set, the security rate of svGen\({}_{\mathrm{vul}}\) can be decreased.
At the same time, svGen lacks generalization to some unseen scenarios in Figure 11. Section 6 discusses a future work item on improving svGen's generalization.
### _Ablation Studies_
This section provides ablation studies on the usefulness of svGen's design and parameter choices. The results are obtained with the 350M models (we use them for early model development) and the test scenarios in Table 2.
Fig. 14: Results on varying weights of KL loss.
Fig. 12: Overall security rate of various baselines.
Fig. 13: Results on varying prefix lengths.
Fig. 15: Results on varying temperature.
Importance of Design ChoicesTo validate svGen's design choices, we compare svGen with various baselines. The first set of baselines varies the token masks discussed in Section 4.2, using "function", "line", and "character" level masks, respectively, for both svGen\({}_{\text{sec}}\) and svGen\({}_{\text{vnl}}\). All three baselines have weaker performance compared to svGen, justifying our choice of a mixing strategy, i.e., character level for svGen\({}_{\text{sec}}\) and line level for svGen\({}_{\text{vnl}}\). Next, baseline "no \(\mathcal{L}_{\text{CT}}\)" (resp., "no \(\mathcal{L}_{\text{KL}}\)") removes \(\mathcal{L}_{\text{CT}}\) (resp., \(\mathcal{L}_{\text{KL}}\)) from our training loss in Equation (2), resulting in suboptimal performance. Finally, we construct a baseline named "discrete" that uses discrete prompts "The following code is secure" and "The following code is vulnerable" to control the LM. "discrete" achieves almost no control capability. Baseline "discrete-train" fine-tune the LM with the discrete prompts on our training set. However, its performance is still far from reaching svGen. This demonstrates that continuous prefixes used by svGen are superior over simple discrete prompts for the 350M models.
Prefix LengthWe investigate how different prefix lengths affect model generations. We measure both security rate and _compile rate_, calculated by the percentage of programs that can be compiled or parsed among all 25 generations. The results are shown in Figure 13. We can observe that our choice of length 5 yields the best security rate and reasonable compile rate. Longer prefix leads to a decrease in compile rate. This is likely because longer prefixes cause larger and more unwanted perturbation to generation.
Weight of KL LossAs discussed in Section 4.2, our KL loss regularizes the training process and help the model maintain the compilability of generations. In Figure 14, we plot the security rate and the compile rate of svGen with different weights of KL loss. In terms of security rate, the performance of the model peaks at weight 0.1. Larger weight leads to more regularization and makes the model maintain better the compile rate. We choose weight 0.1 for the 350M models, because it achieves the strongest control and a good trade-off in compile rate. For the 2.7B and the 6.1B models, we found that increasing the weight to 0.25 and 0.4, respectively, leads to the best performance.
TemperatureAn important parameter for code generation from LMs is sampling temperature [12, 40, 43, 44]. In Figure 15, we evaluate the models with various temperatures. The results demonstrate that svGen achieves good control ability across different temperatures.
## 6 Discussion and Future Work
This section provides an overall discussion of the strengths and the limitations of our work, based on which we suggest future work items.
Training Data and GeneralizationDespite trained only on 2178 samples, svGen is effective on most evaluated scenarios and even generalizes to some CWEs not seen during training. However, there are still some scenarios where svGen cannot control well. A fundamental reason is that the training set does not capture the program behavior in those scenarios. Improving the training set is a straightforward way to improve the generalization of svGen to more scenarios and can be done in two dimensions: first, we can increase the quantity by including more CWEs and identifying more samples for each CWE; second, the quality of the training set can be improved, by removing duplicate program behaviors and increasing its coverage on code behaviors. We leave this as a future work item.
Robustness of EvaluationEvaluating the security of generated code is a challenging task due to the lack of an automatic vulnerability detector that always produces correct labels. Our work utilizes the evaluation scenarios in the work of [43] where CodeQL is used to label the security of generated programs. As pointed out in [43], such a evaluation already comes with shortcomings. Even if operating reliably in those scenarios, CodeQL may still produce wrong labels for some edge cases. Further, since the scenarios are constructed manually, the number of scenarios is limited and may not be enough to derive statistical conclusions.
Since LMs have been increasingly used for generating code, it is critical to spend future research efforts to construct more robust evaluation methods for LMs' security. Speaking from our experience, a robust evaluation should avoid distribution shifts [21, 42]. In other words, the evaluation should reflect practical use cases instead of unrealistic ones. Moreover, a stronger evaluation should consider both security and functional correctness of generated code. A possible way is to enhance evaluation scenarios with functional tests and vulnerability exploits.
Interpretation of LMs and svGenFor understanding the behaviors of LMs and svGen, our evaluation shows code examples and provides possible explanations. However, there still lacks a systematic way of explaining decisions made by LMs and svGen. Interpretation can help us understand better the internal working mechanisms of LMs and how the prefixes learned by svGen control LMs. It is also important for answering deep questions such as: why svGen generalizes to scenarios not covered by its training data (e.g., CWE-732 and CWE-798)? We identify model interpretation as an important future work direction. A possible way is to trace generations to influential training samples [27].
## 7 Conclusion
We presented svGen, a lightweight and effective approach for controlling LMs to generate secure or vulnerable programs according to users' wishes. svGen introduces continuous prefixes for controlling code generation without altering the weights of LMs. svGen's training masks changed and unchanged code regions, and benefits from three specialized loss terms: a conditional language modeling loss and a contrastive loss that optimizes for control ability, as well as a KL divergence loss that avoids catastrophic forgetting. Our extensive evaluation demonstrated that svGen achieves strong control capability across different vulnerabilities, programming scenarios, and model sizes. |
2308.10029 | Transfers between moons with escape and capture patterns via Lyapunov
exponent maps | This contribution focuses on the design of low-energy transfers between
planetary moons and presents an efficient technique to compute trajectories
characterized by desirable behaviors in the vicinities of the departure and
destination bodies. The method utilizes finite-time Lyapunov exponent maps in
combination with the Moon-to-Moon Analytical Transfer (MMAT) method previously
proposed by the authors. The integration of these two components facilitates
the design of direct transfers between moons within the context of the circular
restricted three-body problem, and allows the inclusion of a variety of
trajectory patterns, such as captures, landings, transits and takeoffs, at the
two ends of a transfer. The foundations and properties of the technique are
illustrated through an application based on impulsive direct transfers between
Ganymede and Europa. However, the methodology can be employed to assist in the
design of more complex mission scenarios, such as moon tours. | David Canales, Kathleen C. Howell, Elena Fantino, Annika J. Gilliam | 2023-08-19T14:29:40Z | http://arxiv.org/abs/2308.10029v1 | # Transfers between moons with escape and capture patterns via Lyapunov exponent maps
###### Abstract
This contribution focuses on the design of low-energy transfers between planetary moons and presents an efficient technique to compute trajectories characterized by desirable behaviors in the vicinities of the departure and destination bodies. The method utilizes finite-time Lyapunov exponent maps in combination with the Moon-to-Moon Analytical Transfer (MMAT) method previously proposed by the authors. The integration of these two components facilitates the design of direct transfers between moons within the context of the circular restricted three-body problem, and allows the inclusion of a variety of trajectory patterns, such as captures, landings, transits and takeoffs, at the two ends of a transfer. The foundations and properties of the technique are illustrated through an application based on impulsive direct transfers between Ganymede and Europa. However, the methodology can be employed to assist in the design of more complex mission scenarios, such as moon tours.
Keywords:Multi-Body Dynamics, Circular Restricted Three-Body Problem, Trajectory Design, Moon-To-Moon Transfers, Dynamical Systems Theory, Galilean Moons
## Nomenclature
\(\Delta v\) = magnitude of velocity variation
\(t_{tot}\) = total time-of-flight for a transfer
\(\delta\) = perturbation
**x** = six-dimensional vector accounting for three position and three velocity components
\(\phi(t_{f},t_{0})\) = state transition matrix (STM) between \(t_{f}\) (final time) and \(t_{0}\) (initial time)
\(n\) = integer used to enumerate, e.g., \(n\) = 1, 2
\(m_{p}\) = mass of the planet
\(m_{m}\) = mass of the moon
\({\bf r}_{p-s/c}\) = position vector of the spacecraft relative to the planet
\(r_{p-s/c}\) = distance between the planet and the spacecraft
\({\bf r}_{m-s/c}\) = position vector of the spacecraft relative to the moon
\(r_{m-s/c}\) = distance between the moon and the spacecraft
\({\bf r}_{p}\) = position vector of the planet relative to the origin of the reference frame
\({\bf r}_{m}\) = position vector of the moon relative to the origin of the reference frame
\(\theta\) = true anomaly
\(t\) = time
CR3BP = circular restricted three-body problem
\(\mu\) = mass ratio in the CR3BP
\(e\) = orbital eccentricity
\(a\) = orbital semi-major axis
\(i\) = orbital inclination
\(\Omega\) = right ascension of the ascending node of an orbit
\(\omega\) = argument of periapsis of an orbit
\({L_{a}}^{*}\) = reference length for normalization
\({\bf r}\) = position of the spacecraft in the planet-moon barycentric rotating reference frame
\(\dot{\bf r}\) = velocity of the spacecraft in the planet-moon barycentric rotating reference frame
\(U^{*}\) = pseudo-potential function in the CR3BP
\(J\) = Jacobi constant
\(L_{1},L_{2},\ldots,L_{5}\) = five equilibrium points of the CR3BP
\(x\) = coordinate on the \(\hat{x}\)-axis of the rotating reference frame
\(y\) = coordinate on the \(\hat{y}\)-axis of the rotating reference frame
\(z\) = coordinate on the \(\hat{z}\)-axis of the rotating reference frame
\(\lambda\) = eigenvalue
\(\Sigma\) = hyperplane associated to a Poincare section
\(\theta_{0_{m}}\) = true anomaly of the orbit of a moon measured from the ascending node at the initial epoch
\(R_{SoI}\) = radius of the sphere of influence of a moon
\(d_{SoI}\) = ratio between the gravitational acceleration of the moon and that of the planet
\(\theta_{Int}\) = true anomaly of the point of intersection between two confocal conic sections
\(\iota\) = singular values of a matrix
\(\Upsilon\) = diagonal matrix incorporating all singular values \(V\) = matrix that incorporates the direction of stretching at \(t_{0}\) \(U\) = matrix that incorporates the direction of stretching at \(t_{f}\) \(C\) = Cauchy-Green Strain Tensor matrix
**I. Introduction**
**The** recently released Decadal Strategy for Planetary Science and Astrobiology 2023-2032 [1] prioritizes new missions to the gas giants and their moons (e.g., Enceladus multiple flyby and lander, Saturn probe, Titan orbiter, Europa lander, Neptune-Triton probe, Uranus orbiter) and firmly recommends the realization of NASA's Europa Clipper [2]. The scientific community believes that the open questions regarding the Sun's planetary system can only be addressed through a systematic exploration of the icy worlds, with particular emphasis on the _in situ_ observation of planetary moons. In this scenario, the development of efficient tools to design trajectories enabling the execution of transfers between moons and tours of planetary systems is crucial.
Conventional trajectory design methods based on patched conics and multi-gravity assist have been extensively applied to real mission scenarios (e.g., Jupiter ICy moons Explorer - JUICE [3], JIMO [4]) and have been the focus of numerous studies (see, e.g., [5, 6, 7]). Over the past two decades, investigations in terms of dynamical systems have demonstrated that it is possible to fly a spacecraft on low-energy trajectories departing from and leading to the vicinity of the libration points in the circular restricted three-body problems (CR3BPs) composed of a planet and individual moons. The Petit Grand Tour (PGT) has been the first concept of a low-energy tour of the Jovian system [8, 9]. Here, hyperbolic invariant manifolds of libration point orbits (LPOs) in the CR3BPs associated with Jupiter and distinct moons are propagated in the space between the moons, and their intersections are used to design direct impulsive transfers; a trajectory between LPOs in the vicinity of Ganymede and Europa costs 1.214 km/s and takes 25 days. In the Multi-Moon Orbiter (MMO) concept [10, 11, 12], the spacecraft executes several resonant gravity assists with the moons, reducing drastically the propellant consumption to tens of m/s at the expense of increased times of flight (several years). Grover and Ross [13] employed the Keplerian map and mitigated the long transfer times of the MMO through the introduction of _ad hoc_ small impulsive maneuvers (summing to \(\Delta\)vs of 100 m/s). The investigation by Campagnola and Russell [14] on \(V_{\infty}\)-Leveraging maneuvers (VILMs) led to the identification of Ganymede-to-Europa transfers including endgames (the departure and arrival conditions are low circular orbits) with a total cost of 1.71 km/s (of which 1.41 km/s resulted from achieving escape and capture) and a minimum time of flight of 151 days. The study of moon tours culminated with the blending of resonance hopping transfers and multi-body dynamics with different types of begin and end games, such as circular orbits about the departure and arrival moons (\(\Delta\)v = 1.25 km/s and time of flight of 300 days for the Ganymede-to-Europa transfer) [15], general low-energy initial and final states (59.5 m/s and 158.5
days) [16], and halo orbits near the collinear libration points in the vicinity of the two moons (55 m/s and 205 days) [17].
Following up on the direct transfers developed within the PGT, Fantino and Castelli [18] introduced a patched two-body/three-body (2BP-CR3BP) model to facilitate the design of minimum-cost single-impulse moon-to-moon trajectories in the Jovian system using invariant manifolds of planar Lyapunov orbits in two dimensions (2D). A preliminary extension to trajectories between three-dimensional (3D) halo orbits (Fantino et al. [19]) was completed by Canales et al. through the development of an analytical method, termed the Moon-to-Moon Analytical Transfer (MMAT) technique, to construct impulsive transfers between 2D and 3D LPOs of planet-moon CR3BPs [20, 21, 22]. Invariant manifold trajectories emanating from a departure and a destination LPO are propagated in the respective CR3BPs to the limit of the sphere of influence for the respective moon, where the states of the spacecraft are expressed in a planet-centered inertial frame and used to produce orbital elements of osculating Keplerian orbits. Thus, the problem of connecting trajectories originating from or leading to distinct moons translates into the analytical computation of the intersection between confocal ellipses, the derivation of the conditions under which such intersections exist, and the evaluation of the transfer performance in terms of cost and time of flight. The method incorporates the inclination of the moon orbits, and, in addition to single-impulse transfers, can solve problems with intermediate arcs (two- and three-impulse scenarios) and plane-change maneuvers in a variety of systems, including trajectories between the Martian moons [23, 24].
For single-impulse direct trajectories between planar Lyapunov orbits at Ganymede and Europa, the MMAT approach yields a \(\Delta\nu\) of 0.94 km/s [21], consistent with the results available in the open literature for these types of trajectories. The predicted time of flight is 9.5 days. In its original formulation, MMAT deals only with the escape and capture phases of a transfer, and does not resolve the initial and final portions for departing and inserting into the science orbit around each moon, i.e., the so-called begin game/end game problem mentioned previously. The objective of the present contribution is to lay the foundations of a strategy to link the inter-moon transfer and the end game design through some desired trajectory patterns. The methodology is applicable within the context of impulsive direct trajectories (MMAT scenarios) as well as more efficient and practical moon-tour design methods (such as resonance hoppings [15, 16, 17]).
With the aid of chaos indicators, the trajectories that depart or approach the vicinity of a moon can be classified in terms of the motion patterns that they exhibit in close proximity to the target. In particular, it is possible to discriminate among temporary captures (with one or more revolutions around the moon), escapes, takeoffs and impacts. The first use of chaos indicators for spacecraft trajectory design in multi-body environments is due to Lara et al. [25] and Villac [26], who employed the Fast Lyapunov Indicator, well-known in dynamical astronomy. In this work, the properties of finite-time Lyapunov exponents (FTLEs) [27] and their associated scalar fields that measure the largest stretching direction in the flow associated with the dynamical differential equations are utilized to distinguish phase-space regions corresponding to distinct motion patterns. Low-energy transfers with specific departure and arrival behaviors are designed by coupling FTLE maps and the MMAT method. The theoretical foundations, the properties and the advantages
of the technique are illustrated through the classical case of direct single-impulse transfers between Ganymede and Europa.
The article is organized as follows. Section II presents the dynamical model and summarizes the background on MMAT and FTLEs. Section III delves into the basic components of the method, i.e., the moon-to-moon access map, that facilitates the selection of trajectory patterns at the beginning and end of a transfer. Sections V to VII include an analysis of the dependence of the outcome on parameters such as the Jacobi constant and the departure epoch, whereas Section IV describes the choice of specific motion patterns on the basis of cost, time-of-flight and departure dates through appropriate inspection maps. Section VIII compares inward and outward transfers, and Section IX delivers the concluding remarks.
A preliminary version of the study has been presented by Canales et al. [28], whereas the application of the technique to a transfer involving three moons (Io, Europa and Ganymede) has been illustrated in [29].
## II Background and methodology
After defining the dynamical model, this section provides a summary of the MMAT method and an overview of FTLEs and FTLE maps. The discussion employs direct single-impulse transfers from Ganymede to Europa as applications to illustrate the methodology and its features.
### Dynamical model
In a transfer between moons, the spacecraft is subject to the gravitational attraction of multiple bodies. In the CR3BP [30], only two masses (in this case, the planet and one moon) affect the motion of the spacecraft. This approximation is not sufficient to obtain accurate moon-to-moon trajectories, but the results can be refined in a
Fig. 1: Sketch representing the spatial 2BP–CR3BP patched model.
high-fidelity full-ephemeris model. In this investigation, a spatial 2BP-CR3BP patched model (Fig. 1) is adopted: in the vicinity of each moon, the motion is modeled in the CR3BP of the corresponding planet-moon system; far from the moon, the dynamics of the spacecraft are approximated with the planet-s/c 2BP problem, where the real inclinations of the orbital planes of the moons are incorporated.
The CR3BP provides an appropriate and convenient framework to study the spacecraft motion in the vicinity of a moon. In this model, the planet (mass \(m_{p}\)) and the moon (mass \(m_{m}\)) move in circular orbits around the center of mass of the system. Although they exert gravitational attraction upon the spacecraft, the latter does not affect their motion. Additionally, the primaries are spherical and their mass distribution is homogeneous, i.e., gravity field irregularities are disregarded. A suitable normalization of distances, masses and angular velocities and the adoption of an appropriate barycentric rotating reference frame leads to the following set of dimensionless differential equations for the motion of the spacecraft [31]:
\[\ddot{x}-2\dot{y}=\frac{\delta U^{*}}{\delta x};\quad\ddot{y}+2\dot{x}=\frac{ \delta U^{*}}{\delta y};\quad\ddot{z}=\frac{\delta U^{*}}{\delta z}. \tag{1}\]
Here, \(\mu=\frac{m_{m}}{(m_{m}+m_{p})}\) is the mass ratio of the system, whereas the term \(U^{*}=\frac{1-\mu}{r_{p-s/c}}+\frac{\mu}{r_{m-s/c}}+\frac{1}{2}(x^{2}+y^{2})\) represents the pseudo-potential function, \(r_{p-s/c}\) and \(r_{m-s/c}\) being the distances of the spacecraft to the planet and the moon, respectively. The \(\dot{x}\)-axis of this rotating reference frame contains both primaries (the planet at \(\mathbf{r}_{p}=[-\mu,0,0]^{T}\) and the moon at \(\mathbf{r}_{m}=[1-\mu,0,0]^{T}\)), whereas the \(\hat{z}\)-axis is aligned with their orbital angular momentum. In Eq. (1), \(\mathbf{r}_{rot}=[x,y,z]^{T}\) and \(\dot{\mathbf{r}}_{rot}=[\dot{x},\dot{y},\dot{z}]^{T}\) are the position and the velocity of the spacecraft, respectively. The CR3BP admits five equilibrium positions (denoted as libration points and labelled \(L_{1}\), \(L_{2}\),..., \(L_{5}\)) i.e., points where the acceleration is zero if the third body is at rest. As the libration points exhibit linear stability properties, orbits around them are categorized by families of periodic and quasi-periodic orbits [32, 33]. The planar Lyapunov orbit family is of interest in this work. In the vicinity of a moon, hyperbolic invariant manifolds extending from periodic orbits operate as pathways, including connections to other periodic orbits within the same system [34]. Additionally, stable manifolds arrive near a periodic orbit while unstable manifolds depart from its vicinity. Transit orbits, as defined for this investigation, reach the moon vicinity through the \(L_{1}\) and \(L_{2}\) gateways defined by periodic orbits, revolve around the moon and either depart again or collide with it. The Jacobi constant \(J\), that represents the energy of the system, is a fundamental quantity in the CR3BP. It is defined as
\[J=2U^{*}-(\dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2}). \tag{2}\]
The model in the CR3BP is well-known and serves as the foundational framework in multi-body regimes.
### The MMAT method
In the patched 2BP-CR3BP, within the Sphere of Influence (SoI) of the moon the motion is modeled in the planet-moon CR3BP. When the trajectories cross the surface of the SoI, they are approximated in the planet-s/c 2BP as conic sections with one focus at the planet. Therefore, the motion outside the SoI is considered Keplerian and completely determined by six osculating orbital elements computed at the surface of the SoI. In this investigation, the semi-major axis (\(a\)), the eccentricity (\(e\)) and the true anomaly (\(\theta\)) are key parameters for moon-to-moon transfer design. In this patched 2BP-CR3BP model, connections between conics that depart from and arrive at distinct moons are explored through analytical methods. The SoI of a moon is defined as a spherical region centered at the moon with radius \(R_{SoI}\) equal to the distance from the moon along the \(x\)-axis at which the ratio (\(d_{SoI}\)) between the gravitational acceleration due to the moon and that caused by the planet equals a certain small quantity, which is a free parameter. The value adopted for this investigation for the SoI of both Ganymede and Europa is \(d_{SoI}=5\times 10^{-4}\). The selection of this value affects the design of the moon-to-moon transfer and is examined for this particular application by Canales et al. [21].
This study incorporates the inclinations of the orbital planes of the moons and, therefore, the analysis is developed in three-dimensions (3D). Although there exist many possible departure and arrival paths, this 3D nature makes the problem dependent on the relative orbital phase between the moons at the departure epoch. The MMAT method (see Fig. 2) previously presented by the authors [21] utilizes the patched 2BP-CR3BP: the trajectories arriving or departing a moon vicinity are represented using conic arcs outside the moons' SoI. In a transfer from Ganymede to Europa, departure conics approximate the departure Jupiter-Ganymede (J-G) CR3BP orbits, while arrival conics approximate the arrival Jupiter-Europa (J-E) CR3BP orbits. The approximation through planet-centered Keplerian orbits neglects the gravitational attraction of the moons, but the associated error is small (see [18]) and the solutions can be efficiently transitioned to a high-fidelity ephemeris model.
The necessary condition for an arrival conic to intersect spatially with a departure conic can be expressed analytically
Figure 2: Scheme illustrating the MMAT method for constructing direct transfers between moons.
as:
\[a_{a}(1-e_{a})\leq\frac{a_{d}(1-{e_{d}}^{2})}{1+e_{d}\cos(\theta_{d_{ Int}}+n\pi)}\leq a_{a}(1+e_{a}),\ \ \ \text{with}\ n=0,1. \tag{3}\]
Here, \(a_{a}\) and \(a_{d}\) are the semi-major axes of the arrival and departure conics, respectively. Similarly, \(e_{a}\) and \(e_{d}\) are the respective eccentricities. The true anomalies \(\theta_{d_{Int}}\) and \(\theta_{d_{Int}}+\pi\) indicate the two geometrical configurations for which intersections between the planes of the departure and arrival conics exist. For a given departure epoch, if the condition in Eq. (3) is satisfied, a suitable orbital phase for the target arrival moon can be determined. The correct phasing for the two moons yields a transfer between the conic arcs through a single impulsive \(\Delta v\). Each possible transfer is characterized by a total time-of-flight \(t_{tot}\) and requires an impulsive maneuver of magnitude \(\Delta v\). In summary, given the departure epoch from one moon, the MMAT technique identifies possible arrival conditions and moon-to-moon transfers with one impulse and different performance characteristics.
### Finite-time Lyapunov exponents
The concept of FTLEs is related to that of Lagrangian Coherent Structures (LCS) [35, 36, 37, 38, 39], i.e., regions bounding different behaviors in a dynamical flow. Dynamical systems theory leverages different techniques with two-dimensional Poincare maps to investigate the long-term dynamics in the CR3BP. The Cauchy-Green Strain Tensor (CGST) [40] effectively describes the time evolution of the flow resulting from a perturbation. To introduce the CGST, the state transition matrix (STM) is required. The STM provides a relationship between a variation of the initial state \(\delta\mathbf{x}_{0}\) and the resulting deviation of the final state \(\delta\mathbf{x}_{f}\) via the linear mapping:
\[\delta\mathbf{x}_{f}=\phi(t_{f},t_{0})\delta\mathbf{x}_{0}. \tag{4}\]
The variables \(t_{0}\) and \(t_{f}\) represent the initial and final times along the trajectory, respectively, whereas \(\phi(t_{f},t_{0})\) is the STM. Through a singular-value decomposition (SVD) [41] of the STM, it is possible to characterize a perturbation behavior as divergence or convergence. Such characteristics of the local phase space are encoded in singular values (\(\iota_{i}\)) defined by the direction of stretching or contraction \(\mathbf{V}_{i}\) at \(t_{0}\). The SVD is expressed as:
\[\phi(t_{f},t_{0})=\mathbf{U}\Upsilon\mathbf{V}^{T}. \tag{5}\]
Here boldface letters represent matrices. Then, \(\mathbf{U}\) and \(\mathbf{V}\) are mutually orthogonal: \(\mathbf{V}\) identifies the directions of stretching at \(t_{0}\) for every singular value, whereas the elements of the columns of \(\mathbf{U}\) define the directions of stretching (or contraction) at \(t_{f}\). Additionally, \(\Upsilon\) is a diagonal matrix representing the stretching magnitude, written with the singular
values in descending order (\(t_{1}>t_{2}>...>t_{n}\)):
\[\Upsilon=\begin{bmatrix}\iota_{1}&0&\dots&0\\ 0&\iota_{2}&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&\iota_{n}\end{bmatrix}. \tag{6}\]
Figure 3 sketches the SVD in a simplified two-dimensional space: in this scheme, \(\iota_{1}\) and \(\iota_{2}\) indicate the smallest and largest stretching, respectively, revealing which directions are more or less sensitive to perturbations, with larger magnitudes indicating higher sensitivity. Once the STM is defined, the CGST describes the deformation of the flow as the product of the STM and its transpose [42]:
\[\mathbf{C}(t_{f},t_{0})=\phi^{T}(t_{f},t_{0})\phi(t_{f},t_{0}). \tag{7}\]
The eigenvalues (\(\lambda\)) of \(\mathbf{C}\) are obtained from the eigen-decomposition of the CGST, and are related to \(\Upsilon\) through \(\lambda_{i}={\iota_{i}}^{2}\). Additionally, \(\mathbf{C}\) and \(\mathbf{U}\) possess the same eigenvectors. Given that the largest singular value corresponds to the largest perturbation growth, the FTLEs are defined based on \(\iota_{1}\) and the propagation time along the trajectory:
\[\text{FTLE}=\frac{\iota_{1}}{|\Delta t|}. \tag{8}\]
Here \(\Delta t=t_{f}-t_{0}\). Hence, all stretching directions produce an FTLE, but only the direction of maximum stretching is employed in Eq. (8). This value is the one of interest for the definition of LCS. In summary, FTLEs measure the relative phase-space element growth and contraction over a time interval related to the system flow.
Figure 3: Cauchy–Green Strain Tensor associated eigenvector stretching.
### FTLE maps
The benefits of FTLEs emerge when paired with Poincare section representations. Finite-time Lyapunov exponent maps have been proven useful by various authors (see, e.g., [37]) to provide quantitative information about the propagation of a trajectory and the type of motion that occurs at a given energy level after departure or before arrival at a moon vicinity. Possible behaviors that are discerned include capture orbits, collision paths (or landings), departure trajectories (or takeoffs) and close passages by the moons (or transits). Additionally, FTLE maps illustrate separated flow patterns entering the region of the moon through the corresponding gateway (either \(L_{1}\) or \(L_{2}\)). They have been recently employed to design transfers that approach Oberon and Titania in the Uranus system and are characterized by a desired behavior [43]. These low-energy trajectories are produced by coupling Uranus-Oberon and Uranus-Titania CR3BPs and assuming that the moons revolve in coplanar orbits.
The computation of FTLE maps involves a number of parameters, e.g., the location of the Poincare section, the propagation time and the value of the Jacobi constant [37]. To illustrate this methodology, trajectories departing Ganymede and arriving at Europa are employed (see Table 1 for relevant system data). The Jacobi constant values are \(J_{d}=3.00754\) and \(J_{a}=3.00240\), respectively in the Ganymede departure and Europa arrival CR3BPs. Separate FTLE maps are created for departure and arrival trajectories (Fig. 4). The departure map is based on a departure from Ganymede via a Poincare section near the Jupiter-Ganymede \(L_{1}\) gateway. In Fig. 4(a), the section is denoted \(\Sigma_{departure}\) and is mapped over an interval of normalized time through the propagation of all the states onto the section. The departure section is created at \(x=0.965\) in the J-G rotating frame, where all departure states are generated. The \(y\) coordinate varies from \(-0.006\) to \(0.015\), while the range for \(\dot{y}\) extends from -0.01 to 0.02. Finally, \(\dot{x}\) is computed for each state by means of Eq. (2). Additionally, in this case, the interval \(t_{d}\) equals \(-11.4\) days, corresponding to \(-10\) normalized J-G CR3BP time units. Note that \(t_{tot}\) is a crucial parameter for computing the FTLEs. A numerical analysis has been accomplished to select a value of \(t_{tot}\) that provides different regions of interest within the FTLE maps and, thus, different trajectory behaviors over the given time interval. Recall that \(t_{tot}\) and the \(x\)-coordinate of the section are parameters to be selected when computing FTLE maps.
Similarly, the arrival map (Fig. 4(d)) is constructed on a Poincare section (\(\Sigma_{arrival}\)) near the J-E \(L_{2}\) gateway. In Fig. 4(c), \(\Sigma_{arrival}\) is the plane \(x=1.028\) in the J-E rotating frame. A grid is generated by varying the \(y\) coordinate from \(-0.018\) to \(0.05\) and \(\dot{y}\) from \(-0.04\) to \(0.01\). In this study, the step size adopted to produce the departure and arrival
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Moon & **Semi–major** & Orbital & CR3BP & Eccentricity & Inclination & Longitude \\ & axis & period & mass ratio & & asc. node \\ & [\(10^{5}\) km] & [day] & [\(10^{-5}\)] & [\(10^{-3}\)] & [degree] & [degree] \\ \hline Europa & 6.713 & 3.554 & 2.528 & 9.170 & 2.1 & 331.4 \\ Ganymede & 10.706 & 7.158 & 7.804 & 2.542 & 2.2 & 340.3 \\ \hline \end{tabular}
\end{table}
Table 1: Orbital data for Ganymede and Europa relative to the Ecliptic and Equinox of J2000.0 Jupiter–centered frame [44].
maps is \(0.0001\) for both coordinates. Using Eq. (2), \(\dot{x}\) is computed for each state. The propagation time interval for arrival is \(t_{a}=10\) normalized Jupiter-Europa CR3BP time units, equivalent to \(5.65\) days.
The selected propagation times for departure and arrival ensure a sufficient emergence of LCS that include different types of behaviors over a considerable time. The FTLE maps depend on the \(x\) coordinate of the sections and on the propagation time [45]. Note that, in this work, the normalized units in the FTLE maps are those of the planet-moon CR3BP for which the map is being constructed. The trajectories that transit throughout the vicinity of the moons are contained in the hyperbolic invariant manifolds associated with the planar Lyapunov orbits at the given Jacobi constant
Figure 4: (a) Departure trajectories from Ganymede and (b) departure FTLE map; (c) Trajectories arriving at Europa and (d) arrival FTLE map.
levels near \(L_{1}\) and \(L_{2}\), respectively (i.e., unstable for J-G CR3BP, stable for the J-E CR3BP).
As shown in Fig. 5, strainlines bound regions (or "lobes") of qualitatively similar motion. This property allows the designer to select initial conditions that follow a desired behavior since all the initial conditions in any such "lobe" lead to similar trajectory patterns. As observed in Fig. 6, FTLE maps separate initial conditions corresponding to trajectories that enter the moon's vicinity from those that do not approach the moon. Finally, within the maps themselves, strainlines represent various behaviors and outcomes (see Fig. 7). Hence, they inform in the selection of initial conditions for specified departure and arrival characteristics. Every "lobe" in Fig. 7 yields a specific motion pattern, such as captures, tours and collisions. Moreover, the separatrices between "lobes" identify collision trajectories because when an impact occurs, the propagation time of the trajectory is relatively short, and this produces large values of the FTLEs (e.g., 1.352 for the collision trajectory in Fig. 7), resulting in distinctive patterns. Finally, low FTLE values are associated with the centers of the "lobes". This fact is justified since smaller FTLEs (e.g., 0.315 for the selected tour trajectory in Fig. 7) involve a lower sensitivity to perturbations. It is important to emphasize that the only purpose of the raw FTLE values is to identify distinct "lobes" that generate different trajectory behaviors.
III Designing moon-to-moon transfers with different trajectory patterns: the moon-to-moon access maps methodology
Merging the MMAT method with FTLE maps facilitates the design of moon-to-moon transfers with selected behaviors in the vicinity of the moons. Consider the framework for transfers from Ganymede to Europa in Fig. 4.
Fig. 5: Three departure trajectories with the same departure pattern and the same associated isoline upon the FTLE map.
Figure 6: Different isolines in the departure FTLE map corresponding to transit and non–transit trajectories.
Figure 7: **Three departure trajectories with different patterns and their distinct isolines on the departure FTLE map (\(J_{d}=3.00754\)).**
Recall that moon-to-moon transfers are dependent on the departure epoch because the moons revolve in their true orbital planes [21]. Equation (3) must be satisfied for the Ganymede and Europa vicinities to be connected with a single impulsive maneuver. Therefore, the selection of a suitable departure epoch is critical for the identification of effective transfers. Assume that all initial conditions depart Ganymede at epoch \(\theta_{0_{Gan}}=82.506^{\circ*}\). Experiments demonstrate that this value leads to the largest number of connections between Ganymede and Europa for the specified departure and arrival energy levels (see below). Then, since outside the SoIs the trajectories are approximated by conics in the Jupiter-centered 2BP, the central expression in Eq. (3) is evaluated for every departure conic generated by each initial condition on the Poincare section in the vicinity of the \(L_{1}\) gateway. Note that \(\Sigma_{departure}\) lies within the SoI. The map depicted in Fig. 8(a) is the moon-to-moon tides map. Its color gradients reflect the value of the expression \(\frac{1}{L_{a}}^{\pi}\frac{a_{d}(1-e_{d}{}^{2})}{1+e_{d}\cos(\theta_{d_{Int}})}\), i.e., the central term of Eq. (3), as each initial condition is propagated from the Poincare section to the SoI of Ganymede. This expression is labelled the "\(\theta_{d_{Int}}\) configuration" and includes the reference length \(L_{a}{}^{*}\) for normalization in the J-E CR3BP. Observe that the moon-to-moon tides map and the departure FTLE map have the same shape; however, the former uses the propagation of initial conditions towards the SoI and represents the "\(\theta_{d_{Int}}\) configuration", whereas the latter employs the propagation from the departure moon and illustrates the FTLE. The Europa arrival states are propagated backwards in time from \(\Sigma_{arrival}\) towards the SoI of Europa. Then, apoapsis and periapsis arrival maps (Figs. 8(b) and (c), respectively) are produced by evaluating the expressions on the left and right sides of Eq. (3) (\(a_{a}(1+e_{a})/L_{a}^{*}\) and \(a_{a}(1-e_{a})/L_{a}^{*}\), respectively) for each arrival condition at the SoI. Note that the reference length for the J-E CR3BP is used to normalize all three maps. The moon-to-moon tides map and the apoapsis and periapsis arrival maps are then matched to identify initial conditions that satisfy Eq. (3). For one-impulse transfers to be feasible, the middle term in this equation must be simultaneously \(\leq a_{a}(1+e_{a})\) (upper constraint) and \(\geq a_{a}(1-e_{a})\) (lower constraint). This methodology is denoted the moon-to-moon access maps approach and is sketched in Fig. 9: the gray region corresponds to the Ganymede-to-Europa transfers that satisfy Eq. (3). A specific isoline is selected to accomplish the matching in Fig. 8:
\[\frac{a_{d}(1-e_{d}^{2})}{1+e_{d}\mathrm{cos}(\theta_{d_{Int}})}=a_{a}(1+e_{a })=1.242\ L_{a}^{*}. \tag{9}\]
For transfers from an outer to an inner moon, the moon-to-moon tides map and the apoapsis arrival map are matched with respect to the requirement \(\frac{a_{d}(1-e_{d}{}^{2})}{1+e_{d}\cos(\theta_{d_{Int}}+\pi\pi)}\leq a_{a}(1+ e_{a})\). The uniform color of the periapsis arrival map in Fig. 8 means that the periapsis radii of all arrival trajectories are similar. Note that since these are smaller than the middle term in Eq. (3), the lower constraint is always satisfied for this particular application. Similarly, the upper constraint is always satisfied for a transfer from an inner to an outer moon, and the middle term of Eq. (3) must be larger than the arrival conic periapsis radius. Within the MMAT access maps, red arrows indicate accessible regions, i.e., sets
of initial conditions for which Eq. (3) is fulfilled and the construction of one-impulse trajectories from Ganymede to Europa with different trajectory patterns is possible; in other words, an arrival epoch exists and the transfer is feasible. Projecting the isolines onto the departure and arrival FTLE maps (Fig. 10) allows inspection for distinct transfers between Ganymede and Europa that are available leveraging strainlines associated with different patterns. Consequently, it is possible to select a feasible transfer between trajectories exhibiting desired behaviors in the vicinity of each moon. For example, the selected initial conditions for the departure and arrival FTLE maps in Fig. 10 lead to the sample transfer plotted in Fig. 11 in which the spacecraft, after completing two revolutions of Ganymede and implementing one impulsive maneuver, reaches the interior region of the J-E CR3BP and transits through the vicinity of Europa. Henceforth, the following notation is adopted to illustrate moon-to-moon trajectories in the Jupiter-centered frame (e.g., see Fig. 11): instant t\({}_{0}\) denotes the beginning of the transfer; label 0 corresponds to the crossing of the departure section; label 1 indicates the time at which the departure arc intersects the surface of the departure moon SoI; 2 represents the intersection between departure and arrival conics; 3 is the moment in which the arrival conic crosses the surface of the arrival moon SoI; 4 is the time of arrival of the spacecraft at the arrival section; finally, t\({}_{\text{f}}\) denotes the end of the transfer. Moreover, the following color scheme is used to link each time label with a specific body: black indicates the spacecraft, orange refers to the departure moon and green corresponds to the arrival moon.
Figure 10: Available trajectories from Ganymede to Europa corresponding to the 1.242 isoline shown through the departure and arrival FTLE maps.
Figure 9: Schematic of the moon-to-moon access maps method.
Figure 11: **Transfer from Ganymede to Europa with a desired behavior, as selected in the FTLE maps.**
## IV Selection of trajectory patterns through inspection maps
The design of moon-to-moon transfers is greatly aided by combining MMAT access maps with the information provided by FTLE maps, as demonstrated in Sect. III. Once the Jacobi constant values in the arrival and departure CR3BPs are selected, trajectories that yield a moon-to-moon transfer are identified, assuming that Eq. (3) is satisfied. Moreover, transfers with specific mission objectives near each moon are produced. For example, it is possible to determine all the departure conditions that yield a given arrival trajectory (Design 1). Alternatively, one can identify all the feasible arrival conditions for a given departure trajectory (Design 2). For illustration purposes, transfers of the two types with Jacobi constant values \(J_{d}=3.00754\) and \(J_{a}=3.00240\), respectively at Ganymede and Europa, and a departure epoch \(\theta_{0_{Gan}}=50^{\circ}\) are here presented. Both design types produce complete transfers dependent on the departure or arrival trajectory, and the other portion of the route (arrival or departure, respectively) is determined.
### Design 1: Identifying departure conditions corresponding to a specified arrival trajectory
Assume that the desired arrival trajectory is a close flyby of Europa with approach and departure through the \(L_{2}\) gateway, as depicted in Fig. 12. The \(\Delta v\) budget, the total time-of-flight and the relative phase requirements between the moons (Fig. 13) are identified based on the application of the MMAT method to the FTLE map corresponding to departures from Ganymede. Equation (3) is evaluated for all possible transfers with departure conditions from Ganymede matching the selected trajectory arc approaching Europa. Cost and phase inspection maps are employed to obtain cost-effective initial conditions, as apparent in Fig. 13, where the black line corresponds to the unstable manifold and bounds departure options on the maps. A departure trajectory with two revolutions around Ganymede (temporary capture) is selected from among all these opportunities. Eventually, combining the selected departure and arrival trajectories yields the full transfer plotted in Fig. 16(a).
Figure 13: Design 1: Ganymede departure FTLE map (a), \(\Delta v\) magnitude (b), time–of–flight (c) and Europa’s phase. (d) inspection maps for the Ganymede departure epoch \(\theta_{0_{Gas}}=50^{\circ}\).
### Design 2: Identifying arrival conditions corresponding to a specified departure trajectory
In the selected departure trajectory, the spacecraft completes a few revolutions around Ganymede before departure (temporary capture, see Fig. 14). By leveraging the MMAT method, all the accessible arrival trajectories near Europa and their associated costs (\(\Delta v\)) are determined. The initial conditions for arrival are matched to the specified departure arc using Eq. (3), and the inspection maps for \(\Delta v\) budget, total time-of-flight and relative phase requirements between moons (Fig. 15) are produced. For example, the selected solution leads to a landing on the surface of Europa (see Fig. 16(b)).
## V Dependence on the Jacobi constant
The methodology presented in the previous sections generates different results depending on the Jacobi constant of the departure and arrival trajectories. If the Jacobi constant value for the departure trajectories is decreased (\(J_{d}\) = 3.0061, i.e., the departure energy is higher than in the previous example) while the trajectories approaching Europa remain at \(J_{a}\) = 3.00240, the number of initial conditions leading to transfers from Ganymede to Europa increases, as is apparent by comparing Fig. 10 with Fig. 17. Note that an isoline of 1.25 is selected in Fig. 17 to illustrate the availability of a larger number of initial conditions. Moreover, for the same \(J_{a}\), the lower the departure Jacobi constant (\(J_{d}\)) the lower the resulting \(\Delta v\) for most available sets of initial conditions and for the same departure epoch. This fact is demonstrated by developing the inspection maps discussed in Sect. IV for two departure Jacobi constant values and the same departure epoch.
Now, increase the Jacobi constant level for the arrival trajectories to \(J_{a}\) = 3.0030, i.e., the arrival energy is lower than in the previous two cases, while the trajectories departing Ganymede remain at \(J_{d}\) = 3.00754. If the initial condition
Figure 14: Design 2: Selected Ganymede departure condition.
with the maximum value of \(a_{a}(1+e_{a})/L_{a}^{*}\) is selected from the apoapsis arrival map, there are no transit orbits departing the Ganymede vicinity that reach the vicinity of Europa (Fig. 18). Hence, the selection of the departure and arrival Jacobi constant values is crucial for the moon-to-moon design process.
## 6 Dependence on the departure epoch
Moon-to-moon transfers are epoch-dependent, i.e., the number of connecting trajectories available on the MMAT maps depends on the departure epoch. In the previous examples, the epoch is represented as \(\theta_{0_{Gam}}\) (true anomaly of Ganymede at departure). In the following, two strategies for selecting a departure epoch are discussed: one leverages the arrival hyperbolic stable manifold and the moon-to-moon tides map, whereas the other is based on the departure unstable manifold. For the sake of illustration, the Ganymede-to-Europa scenario is continued.
Figure 15: Design 2: Europa arrival FTLE map (a), \(\Delta v\) magnitude (b), time–of–flight (c) and Europa’s orbital phase (d) inspection maps for the selected Ganymede departure epoch (\(\theta_{0_{Gam}}\) = 50\({}^{\circ}\)).
Figure 17: Available transfers connecting Ganymede and Europa in the departure and arrival FTLE maps; the 1.25 isoline is used as a reference value for selecting initial conditions.
Figure 18: Apoapsis arrival map (a) and departure FTLE map (b) for which Ganymede and Europa cannot be connected for the given combination of energy levels.
**A. Method A: arrival stable manifold and moon-to-moon tides map**
The apoapsis and periapsis arrival maps respectively represent the apoapsis and periapsis radii of the arrival conics originating from all the initial conditions that cross the surface of the arrival moon SoI in the arrival FTLE map. Transit orbits near the arrival moon are enclosed by the stable hyperbolic invariant manifolds of planar Lyapunov orbits.
To locate the best departure epoch using this method, the stable manifold trajectories of arrival are employed. For an inward journey, the stable manifold trajectory that leads to the maximum \(\frac{a_{a}(1+e_{a})}{L_{a}^{*}}\) value is chosen. For the opposite journey, a trajectory that provides the minimum value for \(\frac{a_{a}(1-e_{a})}{L_{a}}\) is selected. From this perspective, the selected trajectory is termed the "minimum access arrival trajectory", i.e., the first trajectory that grants access to the arrival moon. As noted, to inspect the available connections from Ganymede to Europa (inward journey), the _minimum access arrival trajectory_ is the one that leads to the maximum value of \(\frac{a_{a}(1+e_{a})}{L_{a}^{*}}\) (first available trajectory to Europa for the selected \(J_{a}\)). In this case, the _minimum access arrival trajectory_ located at isoline 1.2750 is plotted in Fig. 19 on the corresponding apoapsis arrival map. To determine whether transit orbits for a departure from Ganymede and arrival near Europa exist, such an isoline is overlaid onto the moon-to-moon tides map varying \(\theta_{0_{Gan}}\). Figure 20 illustrates, for different values of \(\theta_{0_{Gan}}\), the trajectories departing the Ganymede vicinity that first deliver access towards Europa. This procedure demonstrates the variety of possibilities within the range of transfers based on departure epoch. At \(\theta_{0_{Gan}}\approx 29^{\circ}\), there is no access to Europa, though departure epochs greater than approximately \(35^{\circ}\) do yield transfer options. The number of opportunities increases continuously until a maximum value corresponding to \(\theta_{0_{Gan}}\approx 82.5^{\circ}\). Then, the number decreases until \(\theta_{0_{Gan}}\approx 130^{\circ}\), where transfers are no longer possible.
### Method B: departure unstable manifold
Though more precise, the moon-to-moon tides map approach requires many computations for FTLE maps to discern the various permutations in the departure epochs and identify the date with the largest amount of available transfers. To simplify the process, an alternative strategy utilizing the departure unstable manifold can be employed. The middle term in Eq. (3) is computed for every unstable manifold trajectory propagated towards the SoI of the departure moon over the span of departure epochs from \(0^{\circ}\) to \(360^{\circ}\). The apoapsis and periapsis for the minimum access arrival trajectory are then compared against the middle term in Eq. (3) for each unstable manifold trajectory. Figure 21 represent such a comparison based on a departure epoch of \(\theta_{0_{Gam}}=82.5^{\circ}\): the "departure unstable manifold angle" on the horizontal axis corresponds to the location of the departure/arrival arc along the manifold associated with the periodic orbit and
Figure 20: Europa access variation in moon–to–moon tides map dependent on \(\theta_{0_{Gam}}\).
measured clockwise from the \(\hat{x}\)-axis of the J-G rotating frame. The results exhibit significant similarities to those produced from Method A. Utilizing this technique for all \(\theta_{0_{Gan}}\) values allows for selection of the unstable manifold trajectories with minimum and maximum values for the middle term in Eq. (3) (inward or outward transfer, respectively). Finally, Fig. 22 illustrates the range of initial conditions upon the arrival FTLE map from the first departure trajectory from Ganymede that grants access to Europa, and the variations depending on \(\theta_{0_{Gan}}\). Employing the MMAT technique, a feasibility analysis concerning the budget for \(\Delta v\) and \(t_{tot}\) may, thus, be accomplished for this trajectory. By selecting the lowest \(\Delta v\), the MMAT access maps only need to be constructed for the given epoch.
## VII Access dependence between two moons
The analysis of the MMAT access maps reveals a relationship concerning the volume of available departure and arrival trajectories that link two moons. For inward transfers, recall that the isolines in the MMAT maps are produced by matching the values of \(\frac{a_{d}(1-e_{d}^{2})}{1+e_{d}\cos(\theta_{d_{Int}})}\) and \(a_{a}(1+e_{a})\). For outward transfers, the quantity \(\frac{a_{d}(1-e_{d}^{2})}{1+e_{d}\cos(\theta_{d_{Int}})}\) is instead set equal to \(a_{a}(1-e_{a})\). Consider again the Ganymede-to-Europa case. Extensive experiments with the moon-to-moon tides map and the apoapsis arrival map demonstrate that, for fixed Jacobi constant values at departure and arrival, increasing the isoline value yields a higher number of options in the moon-to-moon tides map, whereas the opposite occurs in the apoapsis arrival map (see Fig. 23(b)). In contrast, decreasing the isoline value yields more transfer options in the apoapsis arrival maps and fewer in the moon-to-moon tides maps (Fig. 23(a)):
1. An increase in the number of transit and unstable manifold trajectories at departure reduces the amount of arrival
Fig. 21: Evolution of Eq. 3 for the entire departure unstable manifold and compared against the _minimum access arrival trajectory_.
options;
2. An increase in the number of transit and stable manifold trajectories at arrival reduces the available departure options.
In conclusion, a larger pool of trajectories at departure reduces the set of possible arrival trajectories and vice versa. With the moons revolving in their true orbital planes, the above effects are particularly important for mission planning to deliver transfers with single impulsive maneuvers.
## 8 Outward versus inward transfers
The MMAT maps may also be employed in applications to deliver a spacecraft from an inner to an outer moon. As an example, a transfer is designed from Europa to Ganymede with the same Jacobi constant values as in the example of Sect. 3, i.e., \(J_{d}=3.00240\) (J-E CR3BP) and \(J_{a}=3.00754\) (J-G CR3BP). The transfer options are analyzed using the moon-to-moon tides map in conjunction with the periapsis arrival map. The reference isoline employed in this case is 0.779:
\[\frac{a_{d}(1-e_{d}^{2})}{1+e_{d}\cos(\theta_{d_{tt}})}=a_{a}(1-e_{a})=0.779~{}L _{a}^{*}. \tag{10}\]
The matching of the two maps is illustrated in Fig. 24. The red arrows indicate the initial conditions that lead to transfers from Europa to Ganymede. The direction of the arrows is characteristic of the transfer direction (inwards or outwards): for a path from an outer to an inner moon, the arrows are oriented towards the right of the isoline in both FTLE maps (i.e., see Fig. 8 whereas for transfers from an inner to an outer moon, the arrows are directed to the left of the isoline (see Fig. 24).
Figure 22: Apoapsis arrival map depicting accessibility from the first departure trajectory from Ganymede that grants access to Europa.
Figure 23: MMAT maps application to understand access dependence by varying isolines.
## IX Conclusions
This contribution introduces the use of finite-time Lyapunov exponent (FTLE) maps for the selection of desired motion patterns in the vicinity of the departure and destination bodies within moon-to-moon transfers. The options that can be identified include gravitational captures, lunar surface impacts (or landings), transits through the vicinity of the moons, and insertion into libration point periodic orbits. The FTLE maps have been blended with the Moon-to-Moon Analytical Transfer (MMAT) method and applied to the design of direct 3D single-impulse transfers between two Jovian moons, i.e., Ganymede and Europa. The technique offers a very simple visualization of different types of motion through so-called access maps. Their computation, constructed using a combination of MATLAB and Java scripts, is relatively fast, taking approximately 10 minutes on a Macbook Pro (2.3 GHz Intel Core i9, 16 GB RAM) per map at a typical resolution of \(10^{-4}\) in \(y\) and \(\hat{y}\) in normalized CR3BP units. This performance can be improved by 25-50% if only the calculation of the inner portion of the map (i.e., internal to the manifold) is carried out.
The number of solutions, their type and performance (i.e., \(\Delta v\) and time of flight) depend on the launch date and the Jacobi constant of the departure and destination orbits, hence the technique is suitable for trade off and optimization studies. Moreover, if the spacecraft performs multiple revolutions around the planet on the selected Keplerian orbits, more phasing options become available between departure and destination bodies for the same \(\Delta v\).
Although the case analysed throughout the paper is a simple impulsive connection between planet-centered ellipses originating from libration point periodic orbits at the departure and arrival moons, it constitutes a solid foundation for the development of the methodology. Firstly, the direct transfer can provide the initial guess for the design of low-thrust trajectories, in this way alleviating the mass budget corresponding to the large associated \(\Delta v\) values. Secondly, blending FTLE maps with the MMAT technique has served the purpose of illustrating the method and maintaining the focus of
Fig. 24: An overlay between moon–to–moon tides map and periapsis arrival maps for an Europa to Ganymede transfer. |
2308.06893 | Quantum Thermodynamics: Inside-Outside Perspective | We introduce an energy-resolved variant of quantum thermodynamics for open
systems strongly coupled to their baths. The approach generalizes the
Landauer-Buttiker inside-outside duality method [Phys. Rev. Lett. 120, 107701
(2018)] to interacting systems subjected to arbitrary external driving. It is
consistent with the underlying dynamical quantum transport description and is
capable of overcoming limitations of the only other consistent approach [New J.
Phys. 12, 013013 (2010)]. We illustrate viability of the generalized
inside-outside method with numerical simulations for generic junction models. | Jiayang Zhou, Anqi Li, Michael Galperin | 2023-08-14T02:18:31Z | http://arxiv.org/abs/2308.06893v1 | # Quantum Thermodynamics: Inside-Outside Perspective
###### Abstract
We introduce an energy-resolved variant of quantum thermodynamics for open systems strongly coupled to their baths. The approach generalizes the Landauer-Buttiker inside-outside duality method [Phys. Rev. Lett. **120**, 107701 (2018)] to interacting systems subjected to arbitrary external driving. It is consistent with the underlying dynamical quantum transport description and is capable of overcoming limitations of the only other consistent approach [New J. Phys. **12**, 013013 (2010)]. We illustrate viability of the generalized inside-outside method with numerical simulations for generic junction models.
**Introduction.** Construction of quantum molecular devices became a reality due to the advancement of experimental techniques at nanoscale [1; 2; 3; 4]. This development poses a challenge to theory making thermodynamic formulation at nanoscale important from both fundamental and applicational perspectives. Indeed, besides purely academic interest, such formulation is at the heart of any efficiency estimate of thermoelectric nano-devices [5; 6; 7; 8; 9; 10; 11; 12]. Existing theories rely on figure of merit concept which stems from classical macroscopic formulation. It is restricted to equilibrium considerations (linear response) and completely disregards quantum fluctuations.
Significant progress was achieved in theoretical formulations of quantum thermodynamics for systems weakly coupled to their baths. Analogs of traditional thermodynamics which focuses on average system characteristics as well as considerations of quantum fluctuations (thermodynamic fluctuation theorems) are available in the literature [13; 14].
In nanoscale devices, molecule usually forms a covalent bond with contacts on at least one of its interfaces which results in hybridization of molecular states with those of the contacts. This appears as system-bath interaction of a strength comparable to energy of the isolated system. Therefore, thermodynamic formulation of systems strongly coupled to their baths (i.e. situation where energy of system-bath interaction cannot be disregarded) becomes a practical necessity. Contrary to weakly coupled situation, thermodynamic formulations for strongly coupled systems are still at their infancy with most of discussion focused on formulations involving system averages. Only a few publications on steady-state regime considered quantum fluctuations in such systems [15], while fluctuation theorems formulated so far were shown to be violated in strongly coupled systems [16].
Here, we focus on thermodynamic theory of averages for systems strongly coupled to their baths keeping in mind that for future attempts of stochastic thermodynamic formulation for strongly coupled systems one of the guiding principles of the formulation should be consistency between thermodynamic and microscopic dynamical descriptions [17]. As far as we know, only one consistent thermodynamic formulation is available in the literature today. It postulates the von Neumann entropy expression for reduced density matrix of the system as proper thermodynamics entropy. The approach was originally proposed in Refs. [18; 19; 20] and later used in a number of studies [21; 22; 23; 24]. The formulation guarantees that in a thermodynamic process which starts from decoupled system and baths entropy production is positive (i.e. integrated form of the second law of thermodynamics is satisfied) while entropy production rate is non-monotonic (i.e. differential form of the second law is not guaranteed) [20]. When a thermodynamic process does not start from the decoupled state, the second law is not guaranteed in any form.
We argued [24] that the deficiency of this von Neumann formulation is due to its neglect of energy resolution in system entropy: the von Neumann expression operates with reduced density matrix, which is a time-local (integrated in energy) object. Here, we introduce a general energy-resolved thermodynamic formulation for systems strongly coupled to their baths which is consistent with underlying microscopic dynamics. This is done by employing nonequilibrium Green's function method in reformulating the inside-outside approach in Ref. [25] originally developed for noninteracting systems under adiabatic driving. We extend the theory to interacting systems under arbitrary driving. The resulting formulation is capable of overcoming limitations of the von Neumann formulation in Ref. [20].
Figure 1: (Color online) Sketch of the Carnot cycle in the resonant level junction model.
**Model.** First, we introduce a generic model of open quantum system and mention several concepts from its microscopic dynamics (quantum transport) description which will be necessary for thermodynamic formulation.
We consider a system \(S\) strongly coupled to a number of baths \(\{B\}\) and subjected to arbitrary external driving applied to the system and system-baths couplings. Hamiltonian of the model is
\[\hat{H}(t)=\hat{H}_{S}(t)+\sum_{B}\left(\hat{H}_{B}+\hat{V}_{SB}(t)\right) \tag{1}\]
where Hamiltonian of the system \(\hat{H}_{S}(t)\) contains any intra-system interactions, and
\[\hat{H}_{B} =\sum_{k,\alpha\in B}\varepsilon_{k\alpha}\hat{c}_{k\alpha}^{ \dagger}\hat{c}_{k\alpha} \tag{2}\] \[\hat{V}_{SB}(t) =\sum_{m\in S}\sum_{k,\alpha\in B}\left(V_{m,k\alpha}(t)\hat{d}_{ m}^{\dagger}\hat{c}_{k}+\text{H.c.}\right)\]
describe bath \(B\) and its coupling to the system. Here, \(\hat{d}_{m}^{\dagger}\) (\(\hat{d}_{m}\)) and \(\hat{c}_{k\alpha}^{\dagger}\) (\(\hat{c}_{k\alpha}\)) creates (annihilates) an electron in orbital \(m\) of the system \(S\) and state \(k\) in channel \(\alpha\) of the bath \(B\), respectively.
For description of the system microscopic dynamics we employ the nonequilibrium Green's function (NEGF) method [26; 27], which is routinely employed in quantum transport formulations for open nanoscale systems. Thermodynamic formulation below requires several basic concepts of the quantum transport theory. In particular, we utilize the single-particle Green's function defined on the Keldysh contour,
\[G_{m_{1}m_{2}}(\tau_{1},\tau_{2})\equiv-i\langle T_{c}\,\hat{d}_{m_{1}}(\tau_{ 1})\,\hat{d}_{m_{2}}^{\dagger}(\tau_{2})\rangle \tag{3}\]
(here and below \(e=k_{B}=\hbar=1\)), and expressions for particle \(I_{B}^{N}(t)\) and energy \(I_{B}^{E}(t)\) fluxes at the interface with bath \(B\)[28; 29]
\[I_{B}^{N}(t) =\sum_{\alpha\in B}\int\frac{dE}{2\pi}\,i_{\alpha}(t,E) \tag{4}\] \[I_{B}^{E}(t) =\sum_{\alpha\in B}\int\frac{dE}{2\pi}\,E\,i_{\alpha}(t,E)\]
Here,
\[i_{\alpha}(t,E) \equiv-2\,\text{Im}\,\sum_{m,m_{1}\in S}\int_{-\infty}^{t}dt_{1} \,e^{-iE(t_{1}-t)} \tag{5}\] \[\times 2\pi\rho_{\alpha}(E)\,V_{m_{1},\alpha}(E,t_{1})\,V_{\alpha,m}(E,t)\] \[\times\left(G_{mm_{1}}^{\leq}(t,t_{1})+f_{\alpha}(E)\,G_{mm_{1}}^ {r}(t,t_{1})\right)\]
is the unitless energy-resolved particle flux in channel \(\alpha\), \(T_{c}\) is the contour ordering operator, \(f_{\alpha}(E)\) is the Fermi-Dirac distribution in channel \(\alpha\), \(<\) and \(r\) are the lesser and retarded projections of the system Green's function (3), \(\rho_{\alpha}(E)\) is the density of states in channel \(\alpha\), and \(V_{m,k\alpha}(t)=V_{m,\alpha}(\varepsilon_{k\alpha},t)\). Finally, we use expression for heat flux \(\dot{Q}_{B}(t)\) from the quantum transport theory [15]
\[\dot{Q}_{B}(t)=\sum_{\alpha\in B}\int\frac{dE}{2\pi}\,(E-\mu_{B})\,\,i_{\alpha }(t,E) \tag{6}\]
With these definitions we are ready to discuss thermodynamic formulation for systems strongly coupled to their baths.
**Quantum thermodynamics.** For systems strongly coupled to their baths the main problem is formulation of the second law of thermodynamics in terms of quantities used in dynamical description (fluxes and populations of states). Differential form of the second law is
\[\frac{d}{dt}S(t)=\sum_{B}\beta_{B}\dot{Q}_{B}(t)+\dot{S}_{i}(t)\qquad\dot{S}_{ i}(t)\geq 0 \tag{7}\]
where \(S(t)\) is system entropy, \(\dot{Q}_{B}(t)\) is heat flux at system interface with bath \(B\), and \(\dot{S}_{i}(t)\) is entropy production. In this expression only heat flux is clearly defined by the microscopic theory, Eq.(6).
To define entropy and entropy production we employ the following observations from the quantum transport theory:
1. Overall (system plus baths) dynamics is unitary. Thus, entropy of the universe given by the von Neumann expression for the total density operator \(\hat{\rho}(t)\), \(S_{tot}(t)\equiv-\text{Tr}\,[\hat{\rho}(t)\,\ln\hat{\rho}(t)]\), does not change during evolution: \(\frac{d}{dt}S_{tot}(t)=0\).
2. Particle fluxes from baths into system are thermal. Non-thermal fluxes from system into baths do not return into the system. Indeed, within the NEGF dynamics of the system is governed by the Dyson equation - equation-of-motion for the Green's function (3). This dynamical law is exact. Effect of the baths enters the Dyson equation through corresponding self-energies which only contain information on thermal distribution in the baths.
3. Thermalization processes take place far away from the system. They do not affect the system dynamics.
Below we use these observation to develop thermodynamic formulation for systems strongly coupled to their baths.
The first observation allows to define system entropy \(S(t)\) from total baths entropy \(S_{B,tot}(t)\) as [39]
\[\frac{d}{dt}S_{tot}(t)\equiv\frac{d}{dt}S(t)+\frac{d}{dt}S_{B,tot}(t)=0 \tag{8}\]
Defining system quantity from baths characteristics is at the heart of _the inside-outside approach_ first introduced in Ref. [25]. Original (Landauer-Buttiker) formulation of Ref. [25] is restricted to noninteracting systems under adiabatic driving. Here, we generalize it to interacting systems and to arbitrary driving, thus providing a general
thermodynamic formulation applicable in any transport regime and for any system. Technically, the generalization requires employing the NEGF method in place of single-particle scattering theory, utilizing independent current-carrying states in the baths in place of scattering states, and accounting for correlations between states of different energies in addition to inter-channel correlations.
The second observation allows to separate total particle flux into incoming thermal, \(\phi^{in}(E)\), and outgoing non-thermal, \(\phi^{out}(t,E)\), contributions
\[i_{\alpha}(t,E)=\phi^{in}_{\alpha}(E)-\phi^{out}_{\alpha}(t,E) \tag{9}\]
Here, \(i_{\alpha}(t,E)\) is energy-resolved particle flux defined in Eq.(5), \(\phi^{in}_{\alpha}(E)\) is thermal population of incoming state in channel \(\alpha\) of the baths, and \(\phi^{out}_{\alpha}(t,E)\) is non-thermal population of outgoing state in channel \(\alpha\) of the baths. To account for system-induced coherences between states of different energies and different channels in the baths, in addition to populations one has to consider also coherences in the energy and channel spaces: \(\phi^{out}_{\alpha\beta}(t;E_{\alpha},E_{\beta})\). Their explicit form can be obtained from equation-of-motion for the baths density matrix (see Appendix A for derivation)
\[\begin{split}&\phi^{in}_{\alpha\beta}(E_{\alpha},E_{\beta}) \equiv 2\pi\delta_{\alpha,\beta}\,\delta\left(E_{\alpha}-E_{\beta}\right)f_{ \alpha}(E_{\alpha})\\ &\phi^{out}_{\alpha\beta}(t;E_{\alpha},E_{\beta})\equiv\phi^{in} _{\alpha\beta}(E_{\alpha},E_{\beta})\\ &-i(2\pi)^{2}\rho_{\alpha}(E_{\alpha})\rho_{\beta}(E_{\beta}) \sum_{m,m_{1}\in S}\int_{-\infty}^{t}dt_{1}\\ &\left(e^{-iE_{\beta}(t_{1}-t)}\,V_{a,m}(E_{\alpha},t)\,V_{m_{1},\beta}(E_{\beta},t_{1})\right.\\ &\left.\times\left[G^{<}_{mm_{1}}(t,t_{1})+G^{r}_{mm_{1}}(t,t_{ 1})\,f_{\beta}(E_{\beta})\right]\right.\\ &+\left.e^{+iE_{\alpha}(t-t_{1})}\,V_{\alpha,m_{1}}(E_{\alpha},t _{1})\,V_{m,\beta}(E_{\beta},t)\right.\\ &\left.\times\left[G^{<}_{m_{1}m}(t_{1},t)-G^{a}_{m_{1}m}(t_{1}, t)\,f_{\alpha}(E_{\alpha})\right]\,\right)\\ &-(2\pi)^{2}\rho_{\alpha}(E_{\alpha})\rho_{\beta}(E_{\beta})(E_ {\alpha}-E_{\beta})G^{<}_{E_{\alpha},E_{\beta}}(t,t)\end{split} \tag{10}\]
Here, \(\rho_{\alpha}(E_{\alpha})\) and \(\rho_{\beta}(E_{\beta})\) are densities of states of channels \(\alpha\) and \(\beta\) of the baths, respectively. While our approach is general, at steady-state Eq.(10) reduces to the Landauer-Buttiker scattering theory result.
Having expressions for populations and coherences, Eq.(10), one can define rate of entropy change in the baths as difference between entropy flux outgoing from the system and entropy flux incoming to the system
\[\frac{d}{dt}S_{B,tot}(t)\equiv\text{Tr}_{c,E}\bigg{\{}\sigma\left[\phi^{out}( t)\right]-\sigma\left[\phi^{in}\right]\bigg{\}} \tag{11}\]
Here, \(S_{B,tot}\) is the total entropy of all the baths, \(\text{Tr}_{c,E}\{\ldots\}\) is trace over channels and energies in the baths, and
\[\sigma\left[\phi\right]\equiv-\phi\,\ln(\phi)-(\mathbf{I}-\phi)\,\ln(\mathbf{ I}-\phi) \tag{12}\]
is the von Neumann expression for entropy in the baths.
The third observation indicates that neither thermalization process affects physics of the system, nor system participates in the process. Thus, entropy production, which takes place during thermalization, can be modeled as a process of reducing non-thermal distribution over bath states \(\phi^{out}\) to thermal distribution \(\phi^{in}\). Following Ref. [30] entropy production can be rationalized as information erasure due to measurement of non-thermal baths by set of thermal super-baths weakly coupled to the baths. The process leads to entropy production in the universe which consists of entropy change in the baths, \(\text{Tr}_{c,E}\left\{\sigma\left[\phi^{in}\right]-\sigma\left[\phi^{out}(t) \right]\right\}\), and heat flux into the super-baths, \(-\sum_{B}\beta_{B}\dot{Q}_{B}(t)\). Adding the two contributions leads to expression for entropy production
\[\begin{split}&\dot{S}_{i}(t)\equiv\text{Tr}_{c,E}\bigg{\{}\phi^{ out}(t)\left[\ln\phi^{out}(t)-\ln\phi^{in}\right]\\ &+\left(\mathbf{I}-\phi^{out}(t)\right)\,\left[\ln\left(\mathbf{ I}-\phi^{out}(t)\right)-\ln\left(\mathbf{I}-\phi^{in}\right)\,\right]\bigg{\}} \end{split} \tag{13}\]
Note that contrary to formulation of Ref. [20] entropy production rate in the inside-outside approach is always positive [31; 21].
Finally, using (11) and (13) in (8) leads to the differential form of the second law of thermodynamics for entropy of the system \(S(t)\), Eq.(7) (see Appendix B for derivation).
This completes thermodynamic formulation for system strongly coupled to its baths. Note that the introduced formulation readily yields access to energy-resolved version of the second law as discussed in Ref [24].
**Numerical results.** We now illustrate the inside-outside approach for the resonant level model of phonon
Figure 2: (Color online) Entropy production rate for an isothermal process in resonant level model with the level shifted from \(0.2\) to \(-0.2\). Calculations are done within the inside-outside approach (see Eq.(13), solid blue line) and within the von Neumann approach of Refs. [18; 19; 20] (see Eq.(9) of Ref. [24], dotted red line). See text for parameters.
assisted tunneling where a single molecular level \(\varepsilon_{0}\) represents a junction connecting two electron reservoirs (\(L\) and \(R\)) while being also coupled to a single harmonic mode \(\omega_{0}\). The mode represents molecular vibration and is coupled to a thermal (phonon) bath (\(P\)). Hamiltonian of the system is given in Eq.(1) with
\[\hat{H}_{S}(t)=\varepsilon_{0}(t)\hat{d}^{\dagger}\hat{d}+\omega_{0}\hat{a}^{ \dagger}\hat{a}+M\left(\hat{a}+\hat{a}^{\dagger}\right)\hat{d}^{\dagger}\hat{d} \tag{14}\]
and \(B\in\{L,R,P\}\) where
\[\hat{H}_{K} =\sum_{k\in K}\varepsilon_{k}\hat{c}_{k}^{\dagger}\hat{c}_{k} \qquad(K=L,R) \tag{15}\] \[\hat{V}_{SK}(t) =\sum_{k\in K}\left(V_{k}(t)\hat{d}^{\dagger}\hat{c}_{k}+V_{k}^{* }(t)\hat{c}_{k}^{\dagger}\hat{d}\right)\] \[\hat{H}_{P} =\sum_{p\in P}\omega_{\alpha}\hat{b}_{p}^{\dagger}\hat{p}_{p}\] \[\hat{V}_{SP} =\sum_{p\in P}\left(U_{p}\hat{a}^{\dagger}\hat{b}_{p}+U_{p}^{*} \hat{b}_{p}^{\dagger}\hat{a}\right)\]
Driving is performed in the position of the level, \(\varepsilon_{0}(t)\), and in system-fermionic baths coupling strengths, \(V_{k}(t)\equiv u(t)V_{k}\). To simplify simulations we assume the wide-band approximation (WBA) which allows to reduce the general expressions (10) to energy diagonal form (see Appendix C). Also, incorporation of the bosonic bath \(P\) requires slight generalization of the thermodynamic formulation (see Appendix D). Electron-phonon interaction \(M\) is treated within the self-consistent Born approximation (SCBA) [32].
Parameters of the simulation are given in terms of arbitrary unit of energy \(E_{0}\) and corresponding unit of time \(t_{0}=2\pi/E_{0}\). Unless stated otherwise, the parameters are as follows. Vibrational frequency \(\omega_{0}=0.05\), electron-phonon interaction \(M=0.01\), temperature \(T=0.05\), electron escape rate \(\Gamma_{0}\equiv 2\pi\sum_{k\in K}\lvert V_{k}\rvert^{2}\delta(E-\varepsilon_{k})= 0.1\), and energy dissipation rate \(\gamma(\omega)\equiv 2\pi\sum_{p\in P}\lvert U_{\alpha}\rvert^{2}\delta(\omega- \omega_{\alpha})=\theta(\omega)\,\gamma_{0}\,\frac{\omega^{2}}{\omega_{0}^{2}} exp(1-\omega/\omega_{0})\), where \(\gamma_{0}=0.1\). The junction is not biased, the Fermi energy is taken as origin, \(E_{F}=0\), and the temperatures in the fermionic and bosonic baths are assumed to be the same. Simulations are performed on energy grid with 4001 points spanning a range from \(-2\) to \(2\) with step size \(10^{-3}\). FFTW fast Fourier transform library [33] was employed in the simulations (see Appendix E for details).
Figure 2 shows entropy production rate for the resonant level coupled to a reservoir and driven from \(0.2\) to \(-0.2\) position with the constant rate \(\dot{\varepsilon}_{0}=1.6\times 10^{-3}\,E_{0}/t_{0}\). The simulation is performed in the absence of electron-phonon coupling, \(M=0\). We compare our results with the von Neumann approach in Refs. [18; 19; 20]. One can see that the lack of energy resolution in the von Neumann approach results in appearance of negative entropy production rate. As a result, integrating over part of the thermodynamic process may yield negative entropy production which contradicts the second law of thermodynamics. Note that our inside-outside approach yields positive entropy production for an arbitrary initial state.
We now consider the Carnot cycle in the resonant junction model (see Fig. 1 for a sketch). In step 1 (isothermal part of the cycle with a constant coupling strength to the hot reservoir) the resonant level is driven from \(0.2\) to \(0.1\) with a variety of driving rates. For simplicity, step 2 (decoupling from the hot and subsequent coupling to the cold reservoir) is performed adiabatically slow. This allows to find an analytical connection between rates \(\dot{u}\) and \(\dot{\varepsilon}_{0}\) from the requirement that \(\dot{Q}=0\) during the process (see Appendix F for details). Also, this guarantees zero entropy production during the step. Step 3 is performed under the same rate \(\dot{\varepsilon}_{0}\) as step 1. Finally, step 4 is again performed adiabatically slow. Temperature of the hot reservoir is \(T_{H}=0.1\), and the cold reservoir has temperature \(T_{C}=0.05\). Figure 3 shows results for the Carnot cycle vs. resonant level driving rate with (filled markers) and without (empty markers) electron-phonon interaction. Panel (a) shows entropy productions dur
Figure 3: (Color online) The Carnot cycle for the resonant junction model with (filled markers) and without (empty markers) intra-system interaction. Shown are (a) entropy production during hot (red line, circles) and cold (blue line, triangles) isothermal parts of the cycle and (b) efficiency (blue line, circles) of the Carnot cycle vs. driving rate. Dashed red line shows the Carnot efficiency of the cycle. See text for parameters.
ing the isothermal parts of the cycle. As anticipated, entropy production grows with driving rate, and the entropy production for step 3 is higher than that of step 1. Also, electron-phonon interaction increases entropy production due to the presence of additional (bosonic) bath. Panel (b) shows that the efficiency of the cycle is expectedly lower for faster driving and in the presence of interaction. Note that its dependence on the driving rate is non-monotonic which is expected for the employed model (see Ref. [34] for discussion on the relevance of the model to phonon assisted electron transport in junctions). Indeed, within the model, strength of electron-phonon interaction depends on the level population which at intermediate rates drops fast during step 3 of the cycle thus limiting the amount of heat which can be transferred to the cold bosonic bath. At even higher rates, the amount of heat from the hot bosonic baths also becomes affected which eventually leads to decreasing efficiency.
**Conclusion.** We established a connection between the inside-outside approach originally suggested in Ref. [25] and the nonequilibrium Green's function formulation which allowed us to extend the method to interacting systems with arbitrary drivings in the system and system-baths couplings. This generalized thermodynamic formulation applies to strong system-bath coupling and is consistent with the underlying microscopic dynamics (quantum transport). It overcomes limitations of the only other consistent thermodynamic formulation [20] by satisfying any form of the second law of thermodynamics for any initial state of a thermodynamic process and for any driving protocol.
###### Acknowledgements.
We thank Felix von Oppen for many helpful discussions and scientific guidance. This material is based upon work supported by the National Science Foundation under Grant No. CHE-2154323.
## Appendix A Derivation of the outgoing flow, Eq.(10)
In quantum transport particle flux is defined as the rate of population change in the bath [25]
\[I_{B}^{N}(t)=-\sum_{\alpha\in B}\sum_{k\in\alpha}\frac{d}{dt}\left\langle \hat{c}_{\alpha k}^{\dagger}(t)\hat{c}_{\alpha k}(t)\right\rangle\equiv\sum_{ \alpha\in B}I_{\alpha\alpha}(t) \tag{10}\]
To account for system-induced coherences between states of the bath we consider coherence flow. Similar to (10) it is defined as the rate of coherence change
\[I_{\alpha\beta}(t)=-\frac{d}{dt}\sum_{k\in\alpha}\sum_{k^{\prime}\in\beta} \left\langle\hat{c}_{\beta k^{\prime}}^{\dagger}(t)\,\hat{c}_{\alpha k}(t)\right\rangle \tag{11}\]
For Hamiltonian (1)-(2) it can be expressed in terms of bath and mixed Green's functions
\[\begin{split} I_{\alpha\beta}(t)&=\sum_{m\in S}\sum _{k\in\alpha}\sum_{k^{\prime}\in\beta}\left(V_{\alpha k,m}(t)\,G_{m,\beta k^{ \prime}}^{<}(t,t)-G_{\alpha k,m}^{<}(t,t)\,V_{m,\beta k^{\prime}}(t)\right)\\ &+\sum_{k\in\alpha}\sum_{k^{\prime}\in\beta}\left(\varepsilon_{ \alpha k}-\varepsilon_{\beta k^{\prime}}\right)G_{\alpha k,\beta k^{\prime}} ^{<}(t,t)\end{split} \tag{12}\]
where the Green's functions are the lesser projections of
\[\begin{split} G_{m,\beta k^{\prime}}(\tau,\tau^{\prime})& =-i\left\langle T_{c}\,\hat{d}_{m}(\tau)\,\hat{c}_{\beta k^{\prime }}^{\dagger}(\tau^{\prime})\right\rangle\\ G_{\alpha k,m}(\tau,\tau^{\prime})&=-i\left\langle T _{c}\,\hat{c}_{\alpha k}(\tau)\,\hat{d}_{m}^{\dagger}(\tau^{\prime})\right\rangle \\ G_{\alpha k,\beta k^{\prime}}(\tau,\tau^{\prime})& =-i\left\langle T_{c}\,\hat{c}_{\alpha k}(\tau)\,\hat{c}_{\beta k ^{\prime}}^{\dagger}(\tau^{\prime})\right\rangle\end{split} \tag{13}\]
Here, \(T_{c}\) is the Keldysh contour ordering operator, and \(\tau\) and \(\tau^{\prime}\) are the contour variables.
Using the Dyson equation mixed Green's functions are expressed in terms of free bath and system evolutions as
\[\begin{split} G_{m,\beta k^{\prime}}(\tau,\tau^{\prime})& =\sum_{m_{1}\in S}\int_{c}d\tau_{1}\,G_{mm_{1}}(\tau,\tau_{1})\,V_ {m_{1},\beta k^{\prime}}(t_{1})\,g_{\beta k^{\prime}}(\tau_{1},\tau^{\prime}) \\ G_{\alpha k,m}(\tau,\tau^{\prime})&=\sum_{m_{1}\in S }\int_{c}d\tau_{1}\,g_{\alpha k}(\tau,\tau_{1})\,V_{\alpha k,m_{1}}(t_{1})\,G _{m_{1}m}(\tau_{1},\tau^{\prime})\end{split} \tag{14}\]
Here,
\[g_{\alpha}(\tau,\tau^{\prime})\equiv-i\left\langle T_{c}\,\hat{c}_{\alpha k}(\tau )\,\hat{c}_{\alpha k}^{\dagger}(\tau^{\prime})\right\rangle_{0} \tag{10}\]
is the Green's function of free evolution in bath channel \(\alpha\), and system Green's function \(G_{m_{1}m_{2}}(\tau_{1},\tau_{2})\) is defined in Eq.(3).
Taking lesser projection of (11) and going from sum over states to integrals over energies
\[\sum_{k\in\alpha}\ldots=\int\frac{dE_{\alpha}}{2\pi}\,2\pi\sum_{k\in\alpha} \delta(E-\varepsilon_{\alpha k})\ldots\equiv\int\frac{dE_{\alpha}}{2\pi}\,2 \pi\rho_{\alpha}(E_{\alpha})\ldots \tag{11}\]
leads to the following expression for the coherence flow
\[\begin{split} I_{\alpha\beta}(t)&=\int\frac{dE_{ \alpha}}{2\pi}\int\frac{dE_{\beta}}{2\pi}\,(2\pi)^{2}\,\rho_{\alpha}(E_{\alpha })\rho_{\beta}(E_{\beta})\\ &\bigg{\{}i\sum_{m,m_{1}\in S}\int_{-\infty}^{t}dt_{1}\left(e^{- iE_{\beta}(t_{1}-t)}\,V_{\alpha,m}(E_{\alpha},t)\,V_{m_{1},\beta}(E_{\beta},t_{1} )\left[G_{mm_{1}}^{<}(t,t_{1})+G_{mm_{1}}^{r}(t,t_{1})\,f_{\beta}(E_{\beta}) \right]\right.\\ &\qquad\qquad\qquad\qquad\qquad\left.+\,e^{+iE_{\alpha}(t-t_{1}) }\,V_{\alpha,m_{1}}(E_{\alpha},t_{1})\,V_{m,\beta}(E_{\beta},t)\left[G_{m_{1} m}^{<}(t_{1},t)-G_{m_{1}m}^{a}(t_{1},t)\,f_{\alpha}(E_{\alpha})\right]\, \right)\\ &\left.+\left(E_{\alpha}-E_{\beta}\right)G_{E_{\alpha},E_{\beta} }^{<}(t,t)\right\}\end{split} \tag{12}\]
Representing the flow as the difference between incoming and outgoing fluxes
\[I_{\alpha\beta}(t)=\int\frac{dE_{\alpha}}{2\pi}\int\frac{dE_{\beta}}{2\pi} \bigg{[}\phi_{\alpha\beta}^{in}(E_{\alpha},E_{\beta})-\phi_{\alpha\beta}^{ out}(t;E_{\alpha},E_{\beta})\bigg{]} \tag{13}\]
and comparing with (12) leads to the expression presented in Eq.(10).
## Appendix B Derivation of the differential form of the second law, Eq.(7)
Unitarity of the overall evolution allows to express system entropy change in terms of baths entropy flux
\[\frac{d}{dt}S(t)=-\frac{d}{dt}S_{B,tot}(t)=\text{Tr}_{c,E}\left\{\sigma\left[ \phi^{in}\right]-\sigma\left[\phi^{out}(t)\right]\right\} \tag{14}\]
where we used Eq.(11). According to Eq.(10)
\[\phi^{out}(t)=\phi^{in}+\delta\phi(t) \tag{15}\]
Thus, using Eq.(12) we can write
\[\begin{split}&\text{Tr}_{c,E}\left\{\sigma\left[\phi^{out}\right] \right\}\equiv\text{Tr}_{c,E}\left\{-\phi^{out}\,\ln\phi^{out}-\left(\mathbf{ I}-\phi^{out}\right)\,\ln\left(\mathbf{I}-\phi^{out}\right)\right\}\\ &=\text{Tr}_{c,E}\left\{-\phi^{in}\,\ln\phi^{in}-\left(\mathbf{ I}-\phi^{in}\right)\,\ln\left(\mathbf{I}-\phi^{in}\right)\right\}-\text{Tr}_{c,E} \left\{\delta\phi\left[\ln\phi^{in}-\ln\left(\mathbf{I}-\phi^{in}\right)\right] \right\}\\ &+\text{Tr}_{c,E}\left\{-\phi^{out}\left[\ln\phi^{out}-\ln\phi^{in }\right]-\left(\mathbf{I}-\phi^{out}\right)\,\left[\ln\left(\mathbf{I}-\phi^{ out}\right)-\ln\left(\mathbf{I}-\phi^{in}\right)\right]\right\}\\ &\equiv\text{Tr}_{c,E}\left\{\sigma\left[\phi^{in}\right] \right\}-\sum_{B}\beta_{B}\dot{Q}_{B}(t)-\dot{S}_{i}(t)\end{split} \tag{16}\]
where to write the last line we used
\[\begin{split}&\left[\ln\frac{1-\phi^{in}}{\phi^{in}}\right]_{ \alpha\beta}=\delta_{\alpha,\beta}\,\delta(E_{\alpha}-E_{\beta})\,\ln\frac{1-f _{\alpha}(E_{\alpha})}{f_{\alpha}(E_{\alpha})}\equiv\delta_{\alpha,\beta}\, \delta(E_{\alpha}-E_{\beta})\,\frac{E-\mu_{\alpha}}{k_{B}T_{\alpha}},\\ &\left[\delta\phi(t;E,E)\right]_{\alpha\alpha}=-i_{\alpha}(t,E), \end{split} \tag{17}\]
and the definition of entropy production rate \(\dot{S}_{i}(t)\), Eq.(13) of the main text. Substituting (16) into (14) leads to Eq.(7).
## Appendix C Simplified expressions for coherence flows
To facilitate numerical simulations we simplify the general expression for outgoing coherence flow \(\phi^{out}_{\alpha\beta}(t;E_{\alpha},E_{\beta})\), Eq.(10), by employing two approximations:
1. Wide-band approximation (WBA) in which system-bath coupling is assumed to be energy independent \[V_{\alpha,m}(E_{\alpha},t)=V_{\alpha,m}(t),\qquad V_{m,\beta}(E_{\beta},t)=V_{m,\beta}(t)\] (10)
2. Diagonal approximation in which the highly oscillating terms are neglected. Because \[G^{<}_{E_{\alpha},E_{\beta}}(t,t)\sim e^{-i(E_{\alpha}-E_{\beta})t}\] (11) the last term in expression for \(\phi^{out}_{\alpha\beta}(t;E_{\alpha},E_{\beta})\), Eq.(10), can be dropped because for similar energies prefactor \(E_{\alpha}-E_{\beta}\sim 0\) and for large differences it is highly oscillating in time contribution which self-averages to zero.
Within the two approximations the general expression for coherence flow
\[I_{\alpha\beta}(t)=\int\frac{dE_{\alpha}}{2\pi}\int\frac{dE_{\beta}}{2\pi}\, \left[\phi^{in}_{\alpha\beta}(E_{\alpha},E_{\beta})-\phi^{out}_{\alpha\beta}( t;E_{\alpha},E_{\beta})\right] \tag{12}\]
allows evaluation of one of the integrals and thus reduces the expression to
\[I_{\alpha\beta}(t)=\int\frac{dE}{2\pi}\,\left[\phi^{in}_{\alpha\beta}(E)-\phi^ {out}_{\alpha\beta}(t,E)\right] \tag{13}\]
where
\[\begin{split}\phi^{in}_{\alpha\beta}(E)&=\delta_{ \alpha,\beta}\,f_{\alpha}(E)\\ \phi^{out}_{\alpha\beta}(t,E)&=\phi^{in}_{\alpha \beta}(E)-2\pi\,i\sum_{m,m_{1}\in S}\int_{-\infty}^{t}dt_{1}\\ &\left(e^{-iE(t_{1}-t)}\,V_{\alpha,m}(t)\,V_{m_{1},\beta}(t_{1}) \,\rho_{\beta}(E)\right.\\ &\qquad\times\left[G^{<}_{mm_{1}}(t,t_{1})+G^{r}_{mm_{1}}(t,t_{ 1})\,f_{\beta}(E)\right]\\ &\qquad+e^{+iE(t_{1}-t)}\,V_{\alpha,m_{1}}(t_{1})\,V_{m,\beta}( t)\,\rho_{\alpha}(E)\\ &\qquad\times\left[G^{<}_{m_{1}m}(t_{1},t)-G^{a}_{m_{1}m}(t_{1},t )\,f_{\alpha}(E)\right]\bigg{)}\end{split} \tag{14}\]
In this form expression for coherence flow is diagonal in energy.
Comparison with Eqs. (4) and (5) for \(\alpha=\beta\) yields
\[\phi^{in}_{\alpha\alpha}(E)-\phi^{out}_{\alpha\alpha}(t,E)\equiv f_{\alpha}( E)-\phi^{out}_{\alpha\alpha}(t,E)=i_{\alpha}(t,E) \tag{15}\]
## Appendix D Thermodynamic formulation for bosonic baths
Building the inside-outside approach to system thermodynamics for bosonic baths is identical to that presented in the main text for the case of fermionic baths.
Expressions for energy-resolved particle flux (analog of Eq.(5) in the main text)
\[i_{\alpha}(t,E)\equiv 2\,\text{Im}\sum_{v,r_{1}\in S}\int_{-\infty}^{t}dt_{1}\,e^ {-iE(t_{1}-t)}\,2\pi\rho_{\alpha}(E)\,U_{v_{1},\alpha}(E,t_{1})\,U_{\alpha,v}(E,t)\bigg{(}F^{<}_{vv_{1}}(t,t_{1})-N_{\alpha}(E)\,F^{r}_{vv_{1}}(t,t_{1})\bigg{)}, \tag{16}\]
incoming and outgoing fluxes (analog of Eq.(10) in the main text)
\[\begin{split}&\phi_{\alpha\beta}^{in}(E_{\alpha},E_{\beta})\equiv 2 \pi\delta_{\alpha,\beta}\delta\left(E_{\alpha}-E_{\beta}\right)N_{\alpha}(E_{ \alpha})\\ &\phi_{\alpha\beta}^{out}(t;E_{\alpha},E_{\beta})\equiv\phi_{ \alpha\beta}^{in}(E_{\alpha},E_{\beta})-i(2\pi)^{2}\rho_{\alpha}(E_{\alpha}) \rho_{\beta}(E_{\beta})\sum_{v,v_{1}\in S}\int_{-\infty}^{t}dt_{1}\\ &\left(\begin{array}{c}e^{-iE_{\beta}(t_{1}-t)}\,U_{\alpha,v}(E _{\alpha},t)\,U_{v_{1},\beta}(E_{\beta},t_{1})\left[F_{vv_{1}}^{<}(t,t_{1})-F_ {vv_{1}}^{r}(t,t_{1})\,N_{\beta}(E_{\beta})\right]\\ &+\,e^{+iE_{\alpha}(t-t_{1})}\,U_{\alpha,v_{1}}(E_{\alpha},t_{1})\,U_{v,\beta }(E_{\beta},t)\left[F_{v_{1}v}^{<}(t_{1},t)+F_{v_{1}v}^{a}(t_{1},t)\,N_{\alpha }(E_{\alpha})\right]\,\right)\\ &-(2\pi)^{2}\rho_{\alpha}(E_{\alpha})\rho_{\beta}(E_{\beta})(E_{\alpha}-E_{ \beta})F_{E_{\alpha},E_{\beta}}^{<}(t,t),\end{split} \tag{10}\]
and entropy production rate (analog of Eq.(13) in the main text)
\[\dot{S}_{i}(t)\equiv\mathrm{Tr}_{c,E}\bigg{\{}\phi^{out}(t)\left[\,\ln\phi^{ out}(t)-\ln\phi^{in}\right]-\left(\mathbf{I}+\phi^{out}(t)\right)\,\left[\,\ln \left(\mathbf{I}+\phi^{out}(t)\right)-\ln\left(\mathbf{I}+\phi^{in}\right)\, \right]\bigg{\}} \tag{11}\]
are slightly different for bosonic baths. Here, \(v\) indicates molecular vibrational modes, \(N_{\alpha}(E)\) is the Bose-Einstein thermal distribution in bath channel \(\alpha\), and
\[F_{v_{1}v_{2}}(\tau_{1},\tau_{2})\equiv-i\left\langle T_{c}\,\hat{a}_{v_{1}}( \tau_{1})\,\hat{a}_{v_{2}}^{\dagger}(\tau_{2})\right\rangle \tag{12}\]
is the single-particle Green's function of molecular vibrations (compare with Eq.(3) in the main text).
Note that the system does not induce correlations between fermionic and bosonic baths. That is, contributions of the baths to the thermodynamic formulation are additive.
## Appendix E Details of numerical simulations
Central to the inside-outside thermodynamic formulation is the ability to simulate energy-resolved particle fluxes \(i_{\alpha}(t,E)\), Eqs. (5) and (10). For the Holstein model, Eqs. (14) and (15), and simultaneous coupling to one fermionic and one bosonic reservoirs, only one channel \(\alpha\) in each bath is considered. Thus, we will drop the channel index.
To simulate the energy-resolved particle fluxes, we consider the retarded, lesser and greater projections of the single-particle Green's functions, Eqs. (3) and (12), within the wide band approximation.
In the absence of electron-phonon coupling, \(M=0\), zero-order (in the coupling) Green's functions for the model (14) and (15) are
\[\begin{split}& G_{0}^{r}(t_{1},t_{2})=-i\theta(t_{1}-t_{2})\,\exp \left(-i\int_{t_{2}}^{t_{1}}ds\,\left[\varepsilon_{0}(s)-\frac{i}{2}u^{2}(s) \Gamma_{0}\right]\right)\\ & G_{0}^{\gtrless}(t_{1},t_{2})=\int dt_{3}\int dt_{4}\,G_{0}^{r }(t_{1},t_{3})\,u(t_{3})\,\Sigma_{K}^{\gtrless}(t_{3},t_{4})\,u(t_{4})\,G_{0 }^{a}(t_{4},t_{2})\\ & F_{0}^{r}(t_{1},t_{2})=-i\theta(t_{1}-t_{2})\,\exp\left(-i[ \omega_{0}-i\gamma_{0}/2][t_{1}-t_{2}]\right)\\ & F_{0}^{\gtrless}(t_{1},t_{2})=\int dt_{3}\int dt_{4}\,F_{0}^{r }(t_{1},t_{3})\,\Pi_{P}^{\gtrless}(t_{3},t_{4})\,F_{0}^{a}(t_{4},t_{2})\end{split} \tag{13}\]
Here, \(\theta(t_{1}-t_{2})\) is the Heaviside step function, \(\Sigma_{K}\) and \(\Pi_{P}\) are respectively the electron self-energy due to coupling to the fermionic bath and vibration self-energy due to coupling to the thermal bath. The latter are Fourier transforms of
\[\begin{split}&\Sigma_{K}^{>}(E)=-i\Gamma_{0}\left[1-f(E)\right], \qquad\Sigma_{K}^{<}(E)=+i\Gamma_{0}f(E),\\ &\Pi_{P}^{>}(E)=-i\gamma_{0}\left[1+N(E)\right],\qquad\Pi_{P}^{<}( E)=-i\gamma_{0}N(E).\end{split} \tag{14}\]
In the presence of electron-phonon interaction, Green's functions should be obtained within a self-consistent procedure using system of coupled Dyson equations
\[\begin{split}& G^{\gtrless}(t_{1},t_{2})=\int dt_{3}\int dt_{4}\,G_{0 }^{r}(t_{1},t_{3})\left[u(t_{3})\,\Sigma_{K}^{\gtrless}(t_{3},t_{4})\,u(t_{4}) +\Sigma_{v}^{\gtrless}(t_{3},t_{4})\right]G^{a}(t_{4},t_{2})\\ & F^{\gtrless}(t_{1},t_{2})=\int dt_{3}\int dt_{4}\,F^{r}(t_{1},t_{3 })\left[\Pi_{P}^{\gtrless}(t_{3},t_{4})+\Pi_{v}^{\gtrless}(t_{3},t_{4}) \right]F^{a}(t_{4},t_{2})\end{split} \tag{15}\]
where \(G^{r}(t_{1},t_{2})\equiv\theta(t_{1}-t_{2})\left[G^{>}(t_{1},t_{2})-G^{<}(t_{1},t_{ 2})\right]\), \(F^{r}(t_{1},t_{2})\equiv\theta(t_{1}-t_{2})\left[G^{>}(t_{1},t_{2})-G^{<}(t_{1},t _{2})\right]\), \(G^{a}(t_{1},t_{2})\equiv\left[G^{r}(t_{2},t_{1})\right]^{*}\) and \(F^{a}(t_{1},t_{2})\equiv\left[F^{r}(t_{2},t_{1})\right]^{*}\). \(\Sigma_{v}\) and \(\Pi_{e}\) are respectively the electron self-energy due to interaction with vibration and the vibration self-energy due to interaction with electron. Within the SCBA they are
\[\begin{split}\Sigma_{v}(\tau_{1},\tau_{2})&=+i\,M^ {2}\,G(\tau_{1},\tau_{2})\left[F(\tau_{1},\tau_{2})+F(\tau_{2},\tau_{1})\right] \\ \Pi_{e}(\tau_{1},\tau_{2})&=-i\,M^{2}\,G(\tau_{1}, \tau_{2})\,G(\tau_{2},\tau_{1})\end{split} \tag{10}\]
Here, in the electron self-energy we neglected the Hartree term.
Once Green's functions are available, we dress electron Green's function with the system-bath coupling rate \(u(t)\),
\[D(\tau_{1},\tau_{2})\equiv u(t_{1})\,G(\tau_{1},\tau_{2})\,u(t_{2}) \tag{11}\]
consider retarded parts of the lesser and greater projections,
\[D^{\gtrless\,+}(t_{1},t_{2})\equiv\theta(t_{1}-t_{2})D^{\gtrless}(t_{1},t_{ 2}),\qquad F^{\gtrless\,+}(t_{1},t_{2})\equiv\theta(t_{1}-t_{2})F^{\gtrless \,}(t_{1},t_{2}), \tag{12}\]
and perform one-sided Fourier transform
\[D^{\gtrless\,+}(t,E)\equiv\int_{-\infty}^{+\infty}dt_{1}\,e^{-iEt_{1}}\,D^{ \gtrless\,+}(t,t_{1}),\qquad F^{\gtrless\,+}(t,E)\equiv\int_{-\infty}^{+ \infty}dt_{1}\,e^{-iEt_{1}}\,F^{\gtrless\,+}(t,t_{1}) \tag{13}\]
In terms of these Fourier transforms electron and vibration energy-resolved particle fluxes, Eqs. (5) and (10), are
\[\begin{split} i_{e}(t,E)&=-2\,\text{Im}\left\{e^{ iEt}\,\Gamma(E)\left(f\left(E\right)D^{>\,+}(t,E)+[1-f\left(E\right)]D^{<\,+}(t,E) \right)\right\}\\ i_{v}(t,E)&=-2\,\text{Im}\left\{e^{iEt}\,\gamma(E) \left(N(E)F^{>\,+}(t,E)-[1+N(E)]\,F^{<\,+}(t,E)\right)\right\}\end{split} \tag{14}\]
Here, \(\Gamma(E)\) and \(\gamma(E)\) are normalized values of escape rates. Eq.(14) is used to calculate thermodynamic properties as indicated in the main text and the section above.
Appendix F Connection between the level and coupling driving rates during adiabatic (de)coupling process
In the absence of electron-vibration coupling, \(M=0\), expressions for adiabatic driving (and beyond) were derived in Ref. [15]. Specifically, heat flux under adiabatic driving is given in Eq.(10) of that paper. In accordance with the standard quantum transport definitions we take \(\alpha=0\) in this expression
\[\begin{split}\dot{Q}^{(1)}(t)&=\frac{d}{dt}\int \frac{dE}{2\pi}\,f(E)A^{(0)}(t,E)\left(2E-\varepsilon_{0}(t)-\mu\right)\\ &-\int\frac{dE}{2\pi}f(E)\left(A^{(0)}(t,E)\left[\dot{\varepsilon }_{0}+\dot{\Lambda}(t,E)\right]+\text{Re}\,G^{r\,(0)}(t,E)\,\dot{\Gamma}(t,E) \right)\end{split} \tag{15}\]
where
\[\begin{split} A^{(0)}(t,E)&\equiv\frac{\Gamma(t,E)}{ \left[E-\varepsilon_{0}(t)-\Lambda(t,E)\right]^{2}+\left[\Gamma(t,E)/2\right]^ {2}}\\ \text{Re}\,G^{r\,(0)}(t,E)&\equiv\frac{E-\varepsilon_{0 }(t)-\Lambda(t,E)}{\left[E-\varepsilon_{0}(t)-\Lambda(t,E)\right]^{2}+\left[ \Gamma(t,E)/2\right]^{2}}\end{split} \tag{16}\]
are the zero order (in driving) expressions for the spectral function and real part of the retarded projection of the Green's function (3), and the Lamb shift \(\Lambda(t,E)\) and the broadening \(\Gamma(t,E)\) are [15]
\[\begin{split}\Lambda(t,E)&=u^{2}(t)\frac{\Gamma_{0}}{ 2}\frac{(E-E_{B})W_{B}}{(E-E_{B})^{2}+W_{B}^{2}}\\ \Gamma(t,E)&=u^{2}(t)\Gamma_{0}\frac{W_{B}^{2}}{(E-E_ {B})^{2}+W_{B}^{2}}\end{split} \tag{17}\]
where \(E_{B}\) and \(W_{B}\) are the center and width of the band, respectively, and \(\Gamma_{0}\) is the level escape rate.
Adiabatic coupling/decoupling is defined by the \(\dot{Q}^{(1)}(t)=0\) condition. Using (15) and employing
\[\dot{\Lambda}(t,E) =2\frac{\dot{u}}{u(t)}\Lambda(t,E) \tag{16}\] \[\dot{\Gamma}(t,E) =2\frac{\dot{u}}{u(t)}\Gamma(t,E)\]
leads to the following connection between the driving rates
\[\dot{\varepsilon}_{0}=\frac{\dot{u}}{u(t)}\,\int\frac{dE}{2\pi}\, f(E)\left([2E-\varepsilon_{0}(t)-\mu][1+\Lambda(t,E)(2\operatorname{Re}G^{r \,(0)}(t,E)-1)-\Gamma(t,E)\,A^{(0)}(t,E)/2]\,A^{(0)}(t,E)\right. \tag{17}\] \[\left.-\Gamma(t,E)\operatorname{Re}G^{r\,(0)}(t,E)\right)\bigg{/} \int\frac{dE}{2\pi}\,f(E)A^{(0)}(t,E)\left(1-[2E-\varepsilon_{0}(t)-\mu] \operatorname{Re}G^{r\,(0)}(t,E)\right)\]
In the presence of electron-vibration coupling, we substitute \(A^{(0)}\) and \(G^{r\,(0)}\) with their SCBA analogs. This is an approximation.
|
2306.10585 | Optimizing Stateful Dataflow with Local Rewrites | Optimizing a stateful dataflow language is a challenging task. There are
strict correctness constraints for preserving properties expected by downstream
consumers, a large space of possible optimizations, and complex analyses that
must reason about the behavior of the program over time. Classic compiler
techniques with specialized optimization passes yield unpredictable performance
and have complex correctness proofs. But with e-graphs, we can dramatically
simplify the process of building a correct optimizer while yielding more
consistent results! In this short paper, we discuss our early work using
e-graphs to develop an optimizer for a the Hydroflow dataflow language. Our
prototype demonstrates that composing simple, easy-to-prove rewrite rules is
sufficient to match techniques in hand-optimized systems. | Shadaj Laddad, Conor Power, Tyler Hou, Alvin Cheung, Joseph M. Hellerstein | 2023-06-18T15:41:20Z | http://arxiv.org/abs/2306.10585v1 | # Optimizing Stateful Dataflow with Local Rewrites
###### Abstract.
Optimizing a stateful dataflow language is a challenging task. There are strict correctness constraints for preserving properties expected by downstream consumers, a large space of possible optimizations, and complex analyses that must reason about the behavior of the program over time. Classic compiler techniques with specialized optimization passes yield unpredictable performance and have complex correctness proofs. But with e-graphs, we can dramatically simplify the process of building a correct optimizer while yielding more consistent results! In this short paper, we discuss our early work using e-graphs to develop an optimizer for a the Hydroflow dataflow language. Our prototype demonstrates that composing simple, easy-to-prove rewrite rules is sufficient to match techniques in hand-optimized systems.
distributed systems, query optimization, e-graphs +
Footnote †: journal: Computer Science
Motivating Example
Before we dive into the optimizer, let us explore how developers can build streaming dataflow services in Hydroflow. Consider a simple chat application, where users can join a channel and receive all messages sent (including those before they joined). To keep things simple, we'll consider the case where there is only a single channel.
Hydroflow programs are written sets of declarative statements that connect **pipelines** to each other through **operators**, which define logic such as map, filter, or join. Operators can take multiple inputs (senders explicitly index into these), and local pipelines can be created by chaining together several operators. In addition, Hydroflow supports dataflow cycles, which if present are run to fixpoint.
Our application has two streaming inputs, one for users requesting to join the channel (add_member), and one for messages being sent by users (messages). We can send messages to users by sending (_user_, _msg_) pairs to a downstream notify pipeline. An initial attempt to implement this in Hydroflow may look like the following.
add_member -> [0] broadcast messages -> [1] broadcast broadcast = cross() -> notify notify =...
In this case, we can broadcast messages by taking the cross product (with cross) of the users added to the channel and messages sent. But this program will actually behave incorrectly! To understand why, we need to dive into Hydroflow's execution model.
Each Hydroflow program ("spinner") executes as an event loop. At the beginning of each iteration, called a "tick," Hydroflow collects any available network packets for each input channel into a batch. The dataflow is then executed on these batches, and once fixpoint is reached the values accumulated at each output are flushed to the network. Critically, all dataflow operators are _stateless_ by default, so all state is cleared at the end of a tick.
In our example, this means that our program will only broadcast messages to users that joined _in the same tick_. This "catch" is by design--Hydroflow guides users to be mindful of the effects of network latency and batching on their programs. Indeed, there is nothing in our code that corresponds to showing previous messages to newly joined users.
Let us fix this. Hydroflow has a _stateful_ operator, persist, which consumes elements from some upstream source and emits the _entire history_ of values it has received up to and including the current tick. With this operator, it is easy to get a more sensible program:
add_member -> persist() -> [0] broadcast messages -> persist() -> [1] broadcast broadcast broadcast = cross() -> notify
There is still one last issue. Because persist replays the entire history of messages in every tick, clients will be sent repeated notifications for the same message. We can fix this by using the inverse of persist, the delta operation. This dataflow element consumes values from an upstream source, but only emits the _new_ values in this tick. So our final, complete program looks like the following:
add_member -> persist() -> [0] broadcast messages -> persist() -> [1] broadcast broadcast broadcast = cross() -> delta() -> notify
That's all! We now have a precise implementation of our specified program semantics. But this is not particularly efficient. In a naive execution of this dataflow, we will take the cross product with all messages in the history of the channel, only to later perform a delta that retains only the new messages and replays for newly joined members. Our goal is to preserve this clear model for computation while optimizing away the inefficiencies of naive state accumulation.
## 3. Optimizing Stateful Dataflow
To tackle the issue of inefficient stateful operators in dataflows, we turn to e-graphs. Our goal is to identify rewrite rules that optimize subflows while preserving which values are emitted and any ordering guarantees. An important principle in our usage of e-graphs is boiling down optimizations to first principles. Rather than baking in specific rules for operations like cross-products, we instead want to identify more general rules that can be composed during e-graph expansion.
First, we need to define an encoding of Hydroflow graphs as expressions that we can define rewrite rules over. For our prototype, we use a tree encoding, where dataflow operators are defined as functions with inputs passed as parameters. In our prototype, we elide any non-dataflow inputs (such as user-defined functions) because our optimizations do not currently make use of them. With our encoding, the motivating example can be expressed as an expression:
(delta (cross (persist add_member) (persist messages)))
Using a tree encoding has some limitations, such as being unable to express dataflows where a shared computation has several consumers. We discuss our current solutions for these challenges and propose opportunities for future research in Section 4. In our discussion of using e-graphs to discover dataflow optimizations, this representation is sufficient.
### Rewriting Persist
Let's start with some of the simpler rules. In the previous section, we introduced the persist and delta operators for reasoning about accumulated state. These are inverses, so we can define a rewrite for persist followed by a delta. In the syntax of egg, we can specify the rewrite:
(delta (persist?a)) <>?a
Next, we develop rewrite rules for reasoning about the behavior of persist. We discussed earlier that persist replays the messages it received in previous ticks, and also emits the values received from upstream in the current loop. A natural rewrite rule, then, is to make these semantics explicit so that our optimizer can reason about these two sources of values.
We can introduce a new operator old, which behaves the same as persist_except_ it does not emit the new values received from upstream. Then we can use the chain operator, which combines messages received from two upstream channels by emitting all values from the first before all values from the second, to rewrite a persist:
**(persist?a) <> (chain (old?a)?a)**
With the rules so far, we can rewrite our working example to replace the persist operators:
(delta (cross (chain (old add_member) add_member) (chain (old messages)))
### Distributing Cross Products
A natural next step for our rewrite rules is to reason about cross products over chained input channels. The cross product operator is distributive over chains (it makes no guarantees about the order of output tuples), so we can define a rewrite rule for this. Because cross is not commutative (the order of elements _in_ each tuple matters), we must also define a rule for when the chain is in the second input. In summary, we add the following rules:
(cross (chain?a?b)?c) <> (chain (cross?a?c) (cross?b?c)) (cross a? (chain?b?c)) <> (chain (cross?a?b) (cross?a?c))
Applying both of these rules to our working example, we can shift both chain operators to the other side of the cross product, which reveals how new and old data individually contribute to the final result:
(delta (chain (cross (old add_member) (old messages)) (cross (old add_member) messages) ) ) (chain (cross add_member (oldmessages)) (cross add_member messages) ))
Next, we have another rewrite rule corresponding to a fundamental property of dataflow operators: _associativity_. Because _chain_ is associative, we can shift around the grouping with a rewrite rule:
**(chain (chain?a?b)?c) <>> (chain?a (chain?b?c))**
This rewrite rule allows us to isolate the cross product dealing with only old values, which we will need later on to make this computation incremental:
(delta (chain (cross (old add_member) (oldmessages)) (chain (cross (old add_member) messages) (chain (cross add_member (oldmessages)) (cross add_member (oldmessages)) ) ))
### Modeling Determinism
Our next insight is that we have not yet used the property of _determinism_ in a rewrite. We know that cross (along with most other dataflow operators) is deterministic-it produces the same tuples (still with no ordering guarantee) over ticks as long as the input streams produce the same values.
Let us codify this by introducing a new dataflow operator prev. This operator simply emits the values it received in the _previous_ tick. First, we define a rewrite relating old and persist with prev. Then, we can define a rewrite rule for cross that uses the fact that it is deterministic to shift a computation to a previous tick:
**(old?a) <> (prev (persist?a)) (cross (prev?a) (prev?b)) <> (prev (cross?a?b))**
Again, these rewrite rules describe the core properties of our operators rather than a specific optimization case. Let's take a look at a rewrite of our program with these rules applied:
(delta (chain (prev (cross (persist add_member) (persist messages) )) (chain (cross (old add_member) messages) (chain (cross add_member (oldmessages)) (cross add_member messages) ) ))
### Putting it Together: Incrementalization
Finally, we we can optimize our dataflow into an incremental computation. We notice that the e-node for the subexpression inside the delta has remained within the same e-class as the original subexpression ((cross (persist add_member) (persist messages)), at the beginning of Section 3). This original subexpression appears _within_ our chain, which means that we could instead just add to the existing result from the previous tick! This is an exciting result; we have identified an incremental way to compute the cross product by composing primitive rewrites rather than writing specialized rules.
Indeed, we only have one rewrite that deals with incremental computation. We are looking for a cycle through a prev node, so we can attach a predicate that checks for equivalence between the root and the child inside prev:
**(chain (prev?a)?b) => (persist?b)**
if eclass((chain (prev?a)?b)) = eclass(?a)
The proof of correctness for this rewrite relies on induction over ticks. In the base case, (prev?a) is an empty stream, so (chain (prev?a)?b)?b = (persist?b) because there are no previous persisted values. In the inductive step, we know that (prev (chain (prev?a)?b)). Then, our equivalence constraint says that (prev (persist?b)) = (prev (chain (prev?a)?b)) = (prev?a). We can wrap these expressions to get (chain (prev?a)?b) = (chain (prev (persist?b))?b). The latter is the definition of (persist?b) so our rule is correct. Applying this to our working example, we get:
(delta (persist (chain (cross (old add_member) messages) (chain (cross add_member (oldmessages)) ) ))
Finally, we can apply the first rewrite rule we defined to cancel out the delta and persist and obtain an efficient, incremental dataflow:
(chain (cross (old add_member) messages) (chain (cross add_member (oldmessages)) (cross add_member messages) ) )
So far, we have discussed only the rewrite rules, but have not specified how we pick a single rewritten program from the expanded e-graph. To do this, we can specify a simple cost model that computes the number of nodes with a higher weight for delta nodes because these indicate duplicated work. With this cost model, we can apply the same set of rewrite rules to a three-way cross product (between add_member, messages, and platforms), and discover an appropriate incremental algorithm:
(chain (cross add_member (cross (oldmessages) (old platforms)))) (cross (persist add_member) (chain (cross messages (old platforms))) (cross (persist messages) platforms))))
What is exciting is that our rules around manipulating delta/persist/old operators are general, with no rules specific to the cross case. If we define similar rules for distribution and determinism over the join operation, we can derive the incremental semi-naive datalog evaluation from scratch! By using e-graphs to explore the search space of composed rewrites, we are able to easily support a large swath of programs with minimal effort needed to define rewrites and verify their correctness.
## 4. Diamonds are Hard to Crack
There is one limitation of our approach using e-graphs that is hard to ignore, yet leaves many exciting opportunities for future work in the wider e-graphs space. In Hydroflow, the data flowing out of a node can be used by several downstream paths through the tee operator, which at runtime sends copies of each incoming value to each consuming operator. For example, we can use tee to compute users who should meet at the next Bay Area e-graph meetup:
members = add_member -> persist() -> tee() meetup = cross() members -> map(with_school) -> filter(berkeley) -> [0] meetup members -> map(with_school) -> filter(stanford) -> [1] meetup
In the cost model for an optimizer, it is critical to take into account that the computation before the tee is only performed once each tick, regardless of the number of consumers. But with our encoding of dataflow as expressions, this is currently not possible, because we can only extract computation trees rather than general DAGs.
In particular, the dataflow structure that breaks our encoding is a **diamond**, a dataflow where a common computation is transformed in different ways that are eventually merged together (by interleaving their elements, joining on a key, etc). This is similar to a common table expression (CTE) in database lingo or a let-binding in functional languages, where a single result is produced.
In our current prototype, we simply flatten all diamonds by duplicating their shared subexpressions, and re-form diamonds after optimization by searching for identical expressions in the output. But in an ideal system, diamonds would be handled just like any other constructs in the optimizer. There are three key challenges in optimizing diamonds:
1. When computing the cost function for a node, a common subexpression's cost should be counted only once even if is referenced multiple times.
2. We may want to shift logic from the common subexpression into its downstream consumers (inlining), to enable further optimizations.
3. The reverse of (2), after performing rewrites we may want to extract shared logic into a common subexpression to avoid duplicate computation.
### Forming Diamonds with Zippers
In our early prototypes, we designed an explicit operator that captures the structure of a diamond. The diamond operator takes four parameters: the shared computation, two "edges" that describe the transformations being applied to data from the common source, and a merge node that defines how to combine the results from the two edges. This representation immediately solves challenge (1), since we precisely capture which computation is shared between multiple paths. For example, we can encode the earlier example with diamond:
(diamond (persist add_member) (zipper in (mapwith_school (filter berkeley out))) (zipper (filterstanford (mapwith_schoolin)) out) (crossfirstsecond))
A key trick in this formulation is representing the "edges" of the diamond using a zipper (Belle and Pellegrini, 2010) data structure. The nesting of operators is reversed between the halves of the zipper. In the first half, operator nodes have their inputs as children, but in the second half they have consumers as children. We use two special variables, first and second, to reference the values flowing out of both edges of the diamond.
What is powerful about zippers is that they make it possible to isolate either the first _or_ last operator in a sequence by shifting the "cursor," the point where the two halves meet. In standard zipper implementations, this is implemented by popping an element from one half and pushing it to the other. For our encoding, we similarly remove the outermost operator from one half and wrap the other half with it. In our example, we can shift the cursors in both zippers to isolate one operator in each half:
(diamond (persist add_member) (zipper (mapwith_schoolin) (filterberkeley out)) (zipper (mapwith_schoolin) (filterstanford out)) (crossfirstsecond))
After isolating the last operator of the edge in the second half of a zipper, we can apply another rewrite to inline the operator in the output, solving challenge (2). Thanks to the symmetry of the zipper, we can _also_ solve challenge (3). If a single operator is isolated in the _first_ half of each zipper, and is the same for both edges, it can be shared. Applying both rewrites, we can transform our example to:
(diamond (mapwith_school(persist add_member)) (zipper inout) (zipper in(filterstanford out)) (cross(filterberkeley first) second))
This encoding comes with a catch: there are dataflow graphs that _cannot_ be encoded in terms of this diamond operator. Because our zippers only represent flat sequences, we cannot have any multi-input operators along an edge, unless those operators are part of a sub-diamond. In addition, manipulating zippers is very expensive, as we generate new e-classes for both halves whenever we perform a cursor shift, causing the e-graph to expand quite quickly. But rewrite rules _consuming_ zippers only care about the _isolated_ first or last operator, so other intermediate states only exist for the shifting rule. In future work, we hope to explore ways to more efficiently represent zipper structures in an e-graph to take advantage of this domain-specific knowledge rather than naively using standard rewrite rules.
## 5. Conclusion
Developing optimizers is a challenging task, and building one for a low-level dataflow language is that much more daunting. The space of possible optimized programs is massive, and developing specialized rules can lead to brittle behavior. But with e-graphs, we can boil down dataflow optimization into a set of core rules that map to fundamental properties of operators such as associativity and determinism. By leveraging the composition of these rules, we can automatically discover optimizations such as incremental joins without specialized rules and cumbersome proof effort. E-graphs are not a perfect solution for all cases, with diamonds particularly hard to optimize, but there are promising directions that allow us to preserve the simplicity of local rewrites while supporting more programs.
## Acknowledgments
We thank our anonymous reviewers for their insightful feedback on this paper. This work is supported in part by National Science Foundation CISE Expeditions Award CCF-1730628, IIS-1955488, IIS-2027575, DOE award DE-SC0016260, ARO award W911NF2110339, and ONR award N00014-21-1-2724, and by gifts from Amazon Web Services, Ant Group, Ericsson, Futurewei, Google, Intel, Meta, Microsoft, Scotiabank, and VMware. Shadaj Laddad is supported in part by the NSF Graduate Research Fellowship Program under Grant No. DGE 2146752. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
|
2305.09584 | Revisiting Proprioceptive Sensing for Articulated Object Manipulation | Robots that assist humans will need to interact with articulated objects such
as cabinets or microwaves. Early work on creating systems for doing so used
proprioceptive sensing to estimate joint mechanisms during contact. However,
nowadays, almost all systems use only vision and no longer consider
proprioceptive information during contact. We believe that proprioceptive
information during contact is a valuable source of information and did not find
clear motivation for not using it in the literature. Therefore, in this paper,
we create a system that, starting from a given grasp, uses proprioceptive
sensing to open cabinets with a position-controlled robot and a parallel
gripper. We perform a qualitative evaluation of this system, where we find that
slip between the gripper and handle limits the performance. Nonetheless, we
find that the system already performs quite well. This poses the question:
should we make more use of proprioceptive information during contact in
articulated object manipulation systems, or is it not worth the added
complexity, and can we manage with vision alone? We do not have an answer to
this question, but we hope to spark some discussion on the matter. The codebase
and videos of the system are available at
https://tlpss.github.io/revisiting-proprioception-for-articulated-manipulation/. | Thomas Lips, Francis wyffels | 2023-05-16T16:31:10Z | http://arxiv.org/abs/2305.09584v1 | # Revisiting Proprioceptive Sensing for Articulated Object Manipulation
###### Abstract
Robots that assist humans will need to interact with articulated objects such as cabinets or microwaves. Early work on creating systems for doing so used proprioceptive sensing to estimate joint mechanisms during contact. However, nowadays, almost all systems use only vision and no longer consider proprioceptive information during contact. We believe that proprioceptive information during contact is a valuable source of information and did not find clear motivation for not using it in the literature. Therefore, in this paper, we create a system that, starting from a given grasp, uses proprioceptive sensing to open cabinets with a position-controlled robot and a parallel gripper. We perform a qualitative evaluation of this system, where we find that slip between the gripper and handle limits the performance. Nonetheless, we find that the system already performs quite well. This poses the question: should we make more use of proprioceptive information during contact in articulated object manipulation systems, or is it not worth the added complexity, and can we manage with vision alone? We do not have an answer to this question, but we hope to spark some discussion on the matter. The codebase and videos of the system are available here.
## I Introduction
Our living environments contain many articulated objects, including storage furniture such as cabinets and drawers or appliances like dishwashers and microwaves. Interacting with such objects will be a crucial skill of assistive robots and has hence been of great interest in robotic manipulation research.
Some of the earliest works on articulated object manipulation are [1] and [2]. These works perform explicit estimation of the joint parameters (type of the joint, axis of rotation/translation, and joint configuration) based on a sequence of part poses. The poses are obtained from the end-effector (proprioceptive sensing) under the assumption of a _firm grasp_ (i.e. rigid connection between handle and gripper), or from fiducial markers [2]. They use a hook to grasp the handle of the articulated objects at manually specified poses and use a compliant controller to overcome inaccuracies in the joint estimations to avoid exerting large forces. [1] obtains an impressive 37/40 success rate when tested on several articulated objects.
Other researchers have also estimated the articulation parameters either directly from a series of images directly [3], or by first tracking the poses of the parts and then estimating the parameters from this sequence of poses [4]. Yet other work has learned to detect articulated objects and determine their joint parameters from a single pair of stereo RGB images [5].
Another line of work has focused on learning affordances for actions instead of explicitly determining joint parameters [6, 7, 8]. These still perform separate grasp generation but then use closed-loop affordance estimation to determine appropriate actions for the robot to open the articulated objects. So far, all work in this direction determines the affordances based on a single observation and does not adapt at inference to correct wrong predictions [8].
Almost all papers mentioned so far use the PartNet-Mobility dataset [9] to obtain the required training data. Many use suction cups to limit the complexity of grasping the articulated object [7, 8]. Even then, determining appropriate grasping poses is challenging and is often reported as a major failure mode [7, 8]. Most works use a force-controlled robot to manipulate the articulated objects and a compliant1 low-level control scheme such as Impedance Control [11] or Operation Space Control [12] to account for uncertainties in the joint parameters.
Footnote 1: As in [10], we categorize compliant control as control schemes that shape the relation between positions (or velocities) and external forces
There are also more end-to-end works that aim to use Reinforcement Learning and Imitation Learning to open articulated objects [13, 14, 15]. These methods should be capable of handling the long tail that characterizes most category-level skills but they tend to require larger amounts of data and have so far not shown the same level of generalization on articulated object manipulation as the more task-specific methods that were discussed before.
A clear trend in recent work is to rely more on vision and not use proprioceptive information obtained during contact. However, to the best of our knowledge, this is not thoroughly motivated in the literature. Furthermore, proprioceptive information is naturally invariant to many of the typical varieties found in articulated objects and their environments, including materials, lighting, and certain aspects of the geometry. Therefore, in this paper, we create a system that uses proprioceptive sensing to open articulated objects. Compared
Fig. 1: We create a system to open articulated objects using only the proprioceptive information during contact. The system can open various articulated objects. We find that the main limitation is the occurrence of slip between the handle and gripper as can be observed in the different orientations of the gripper w.r.t. handle in the images above.
to [1], we use a position-controlled robot and hence switch to an admittance control [11] scheme to make the end-effector compliant. We also use a parallel-position gripper, as this is more generic than task-specific end-effectors or suction cups.
We describe the system in more detail in section II. In section III-A we qualitatively analyze the performance of our system on three articulated objects: 2 Ikea KALLAX cabinets and an oven. Based on this analysis, we then formulate the central question of this paper in section III-B: _should we use proprioceptive information, or not?_. Finally, we make some suggestions to improve the evaluations of articulated object manipulation in section III-C.
To summarize, our contributions are as follows:
* We combine previous work to implement a system that can open articulated objects with proprioceptive sensing. We do this with a position-controlled robot and a parallel gripper. Our qualitative analysis shows that the system is capable of opening various articulated objects.
* We find that slip between the gripper and handle can lead to failures. Based on this observation we pose the question if the added complexity of dealing with this slip is worth the gains of using proprioceptive sensing.
* Finally, we formulate some suggestions on how to improve the evaluation of articulated object manipulation systems based on other failure modes that we encountered during the analysis.
## II Method
In this section, we describe how we open articulated objects using proprioceptive information. We use a position-controlled UR3e robot with a built-in force-torque sensor and a Robotiq 2F-85 parallel gripper.
A high-level overview of our system is given in Algorithm 1. In the following sections, we discuss the compliant controller and articulation estimation method in more detail. Note that we manually determine the grasp pose to limit the scope of this work.
```
1: Enable the compliant controller \(\triangleright\) See section II-A
2:Manually determine grasp pose
3:set the initial joint estimation to a prismatic joint in the -Z direction of the gripper
4:while cabinet not opened do
5:repeat\(N\) times
6:move gripper along the current joint estimation
7:collect the gripper pose \(X^{t}\)
8:until
9:obtain a new joint estimation from the sequence of previous gripper poses \(\{X^{0:t}\}\)\(\triangleright\) see section II-B
10:endwhile
```
**Algorithm 1** High-level overview of the system
### _Compliant controller_
To make the position-controlled UR3e robot compliant and hence capable of overcoming errors in the articulation estimation without applying excessive forces on the articulated object, we use an admittance control scheme. With admittance control, we specify how the reference pose \(X_{\mathrm{r}}\) of the end-effector should be adapted under external forces, thus making the robot compliant. The relation between the gravity-compensated wrench on the end-effector \(W_{\mathrm{ext}}\) and the deviation from the reference trajectory \(X_{\mathrm{e}}\) is shaped as follows:
\[W_{\mathrm{ext}}=KX_{\mathrm{e}}+B\dot{X_{\mathrm{e}}}+M\ddot{X_{\mathrm{e}}}, \tag{1}\]
where K, B and M are the stiffness, damping and mass matrix respectively [16]. The dimensions are often decoupled by setting the off-diagonal elements of the aforementioned matrices to zero. The desired pose for the robot end-effector is then determined as \(X_{d}=X_{r}+X_{e}\).
A straightforward implementation of this scheme resulted in oscillatory and unstable behavior for stiff contacts (such as grasping cabinet handles) and low stiffness (which is desired to reduce forces applied on the gripper and hence avoid slip with inaccurate joint estimations). This is a known issue with low-stiffness admittance control on high-gain position-controlled robots such as the UR e-series [10]. We, therefore, resorted to a more advanced implementation from [17]. We empirically set the translational stiffness to 200 N/m in the Z-direction of the gripper and 50 N/m in the X and Y directions. The rotational stiffness was set to 2 Nms/radian for all dimensions. Other parameters in the implementation were set to their default values.
### _Articulation Estimation_
In this paper, we only consider joint mechanisms with a single degree of freedom as this is the most common case for furniture or appliances that can be _opened_ and _closed_. To estimate the joint parameters from the proprioceptive information, we used the method from Heppert et Al. [4], as the authors found their method, using a Factor Graph formulation, to perform better than the probabilistic model that was used in [2]. It also unifies the representation of revolute and prismatic joints as special cases of helical joints, which can be represented as a twist2\(V\in\mathbb{R}^{6}\). The articulation estimation method takes in a sequence of part poses \(\{X_{\mathrm{part}}^{i}\}\). These can be of any fixed frame on the moving part attached to the joint, e.g. a frame on the handle. Under the assumption that no slip occurs between the gripper and the handle, the poses of the gripper can also be used, which allows for estimating the joint parameters from the proprioceptive information.
Footnote 2: We refer to [18] for an excellent introduction into the geometric interpretation of spatial algebra, that is used in this and many other works on articulated object manipulation.
The articulation estimation returns a joint twist estimation \(\hat{V}\) and the joint configuration \(q_{i}\) (that describes the configuration single DOF of the joint) for all poses. The twist can then be used to determine the pose of the part frame for any joint configuration by taking the matrix exponential of the skew-symmetric matrix of the twist [18]:
\[X_{\mathrm{part}}=\exp(q[\hat{V}])\in SE(3). \tag{2}\]
This relation can then be used to open the articulated object. We set the initial joint estimation to a prismatic joint in the -Z direction of the initial grasp frame, as in [1].
To anticipate on slip between the gripper and handle, we increase the variance of the noise used in the factor graph model for the observed part poses compared to the original implementation. We also pre-compile all factor graphs to increase the speed of the estimations during manipulation.
### _Evaluation_
To evaluate the system, we selected 2 Ikea KALLAX3 cabinets, as they are widely available. One has a rotating door and one has 2 drawers. We also use an oven that is used in the lab for reflow soldering. All three items are relatively small to make sure they fit in the robot's workspace.
Footnote 3: [https://www.ikea.com/be/en/cat/kallax-series-27534/](https://www.ikea.com/be/en/cat/kallax-series-27534/)
We do not report quantitative measures as we did not perform enough experiments nor had enough diverse cabinets for them to have statistical significance. Rather we provide a qualitative evaluation of the system and focus in particular on the failure modes as these are the most interesting in our opinion.
## III Results & Discussion
In this section, we first perform an analysis of the system described in the previous section. Based on this analysis, we then pose the question if systems for articulated object manipulation should make more use of proprioception and formulate some suggestions on how to improve the evaluation of articulated object manipulation systems.
### _Qualitative Analysis of the system_
We find that in general, the system is capable of opening the articulated objects, irrespective of the relative pose of the object and/or the presence of other objects in the environment. Initial joint estimations are sometimes largely different from the actual joint parameters but are usually good enough to continue opening the articulated object, which allows for collecting additional proprioceptive information and improves the articulation estimation. Some successful experiments can be seen in Figures 1(a) and 1(b).
In the following paragraphs, we discuss three aspects of the system in more detail.
#### Iii-A1 slip
As expected, due to inaccurate joint estimations that are used to determine the next target pose for the gripper, slip occurs between the handle and the gripper. We found that this slip mostly affects the orientation of the grasp pose. The extent to which this slip occurs also largely depends on the shape of the handle, where handles with rounded sides such as in figure 1(a) are more prone to slip than rectangular handles as in figure 1(d), as these provide a larger contact surface. We found that slip, even though it results in estimation errors for the joint parameters, not always results in a failure to open the articulated object. The slip is usually limited and hence results in limited changes to the perpendicular orientation of the gripper to the surface of the moving parts. An example of a successful interaction despite the occurrence of slip can be seen in Figure 1.
#### Iii-A2 Interaction time
It takes about 2 minutes to open a single articulated object, which is in the same order of magnitude as [7, 8], that take about 1 minute4. Most of this time is spent in the actual robot motions, where we move slowly to keep the admittance controller stable and avoid shaky motions. The joint estimations take about 2s each (resulting in about 10 seconds spent estimating joint parameters). With additional efforts, the admittance controller could probably be tuned better to reduce the time needed for opening the objects but it will probably never be as fast as a force-controlled robot. And even so, it will still be much slower than the time it takes for humans to open articulated objects, which is in the order of seconds.
Footnote 4: These times were estimated from the demo videos on the project websites of both papers.
#### Iii-A3 fixed handle grasps
We also found that using handle grasps that remain fixed during execution causes additional issues, which are not limited to or caused by proprioceptive information. On the one hand, the requirement for such
Fig. 2: Additional examples of our system opening articulated objects. (a) and (b) are successful examples of opening a cabinet and drawer respectively. (c) shows how a fixed grasp limits the workspace of the robot such that it cannot fully open the cabinet. (d) shows how a fixed grasp can lead to collisions with the environment.
grasps limits the workspace of the robot unnecessarily (see figure (c)c). On the other hand, it can result in collisions with the environment (see figure (d)d). The first failure case could be solved by simply using larger or mobile robots. The second is more problematic. Think for example about dishwashers, which usually open towards the floor. We argue that future systems should be capable of breaking contact to regrasp the moving part based on the robot kinematics, joint parameters and the environment, and should maybe even deliberately use slip to change the grasp pose. Interesting work in this direction is [19], where the authors use agent-aware affordances to determine where to grasp and when to regrasp for opening and closing articulated objects with given geometry and joint parameters.
### _Should we use the proprioceptive information?_
In section III-A we discussed that the baseline system we created is already quite capable of opening articulated objects, in line with the findings of Jain and Kemp [1]. By replacing their hook for a more generic parallel gripper, slip starts to become an issue, however. To make the proprioceptive information more useful, this should be tackled. One approach could be to design more suitable fingertips, although we believe the goal is to open the objects with a general-purpose system: opening cabinets is after all a means, not an end by itself. Alternatively one could attempt to model the slip or measure the slip using e.g. tactile sensors or visual odometry, which enables canceling the slip in the controller and/or taking it into account during joint estimation. This will make the system more complex though. Another issue is that slip is hard to capture in simulation, as friction is difficult to simulate realistically. This limits the ability to validate or train proprioception-based systems in simulation. Furthermore, vision is still required for determining appropriate grasp poses. It could also speed up the system or make it more robust to make an initial estimation of the joint parameters before interacting, as in [5]. This brings us to the central question this paper wants to bring up: does proprioceptive sensing provide enough additional information when combined with vision to warrant the added complexity? Or can we manage with vision alone?
### _Suggestions for evaluation of articulated object manipulation systems_
In this section, we make some suggestions for evaluating articulated object manipulation systems. Based on the findings in section III-A, we believe that evaluation protocols and/or benchmarks should incorporate the following aspects:
* To evaluate systems that use proprioception in simulation, we have to attempt to provide realistic contacts to reduce sim2real gaps. Current simulation environments such as the UMP environment, join the gripper (or suction cup) with the cabinet through an artificial spring-like constraint [7, 8]. This is perfect to incorporate some of the controller's compliance without handling the complex contact dynamics, but it does not suffice to properly evaluate systems that make use of contact information.
* The time needed to open the articulated objects should be optimized as well as the success rates and both should be reported to allow for a complete comparison between different systems.
* We should add appropriate collision objects in our evaluation. Articulated objects are not floating in a vacuum and this brings additional challenges, as discussed before. These challenges should be reflected in our evaluations. This was already mentioned in previous work, such as by Jain and Kemp [1].
* We should incorporate opening/closing mechanisms, locking mechanisms and other joint dynamics. Many articulated objects (microwaves, drawers) have a locking mechanism that requires a certain amount of force to open. Other objects such as washing machines can have a handle that needs to be pressed to open. Yet other drawers have push-to-open, soft-close mechanisms, etc. Many cabinet doors also have a spring-like mechanism to close unless a certain opening angle is reached. This diversity should be represented in our evaluation.
## IV Conclusions
We combined previous work to enable a position-controlled robot equipped with a general-purpose parallel gripper to open articulated objects using proprioception. The success of such a system hinges on the degree to which the transform between the gripper and the handle remains fixed over time, i.e., the amount of slip that occurs during contact. Although slip occurs, our system was able to open several different articulated objects. Overcoming slip or simulating it to benchmark different systems creates additional complexity, which raises the question of whether we should reintroduce proprioception and fully embrace contact or if we can manage with vision-only systems and do not need to introduce additional complexity.
## Acknowledgments
The authors wish to thank Nick Heppert, author of [4] for open-sourcing the code used to estimate joint parameters and for the interesting discussions on articulated object manipulation. The authors also wish to thank Cristian C. Beltran-Hernandez, author of [16], and Frederik Ostyn for sharing their experience and expertise with compliant control. This research is supported by the Research Foundation Flanders (FWO) under grant number 1S56022N and the euROBIn Project (EU grant number 101070596). |
2301.03238 | MAQA: A Multimodal QA Benchmark for Negation | Multimodal learning can benefit from the representation power of pretrained
Large Language Models (LLMs). However, state-of-the-art transformer based LLMs
often ignore negations in natural language and there is no existing benchmark
to quantitatively evaluate whether multimodal transformers inherit this
weakness. In this study, we present a new multimodal question answering (QA)
benchmark adapted from labeled music videos in AudioSet (Gemmeke et al., 2017)
with the goal of systematically evaluating if multimodal transformers can
perform complex reasoning to recognize new concepts as negation of previously
learned concepts. We show that with standard fine-tuning approach multimodal
transformers are still incapable of correctly interpreting negation
irrespective of model size. However, our experiments demonstrate that
augmenting the original training task distributions with negated QA examples
allow the model to reliably reason with negation. To do this, we describe a
novel data generation procedure that prompts the 540B-parameter PaLM model to
automatically generate negated QA examples as compositions of easily accessible
video tags. The generated examples contain more natural linguistic patterns and
the gains compared to template-based task augmentation approach are
significant. | Judith Yue Li, Aren Jansen, Qingqing Huang, Joonseok Lee, Ravi Ganti, Dima Kuzmin | 2023-01-09T10:11:23Z | http://arxiv.org/abs/2301.03238v1 | # MAQA: A Multimodal QA Benchmark for Negation
###### Abstract
Multimodal learning can benefit from the representation power of pretrained Large Language Models (LLMs). However, state-of-the-art transformer based LLMs often ignore negations in natural language and there is no existing benchmark to quantitatively evaluate whether multimodal transformers inherit this weakness. In this study, we present a new multimodal question answering (QA) benchmark adapted from labeled music videos in AudioSet (Gemmeke et al., 2017) with the goal of systematically evaluating if multimodal transformers can perform complex reasoning to recognize new concepts as negation of previously learned concepts. We show that with standard fine-tuning approach multimodal transformers are still incapable of correctly interpreting negation irrespective of model size. However, our experiments demonstrate that augmenting the original training task distributions with negated QA examples allow the model to reliably reason with negation. To do this, we describe a novel data generation procedure that prompts the 540B-parameter PaLM model to automatically generate negated QA examples as compositions of easily accessible video tags. The generated examples contain more natural linguistic patterns and the gains compared to template-based task augmentation approach are significant.
## 1 Introduction
Large language models (LLMs) have difficulty understanding negation in natural language. Pretrained LLMs often ignore negation in cloze questions and give same prediction for negated ("Birds cannot [MASK]") and non-negated ("Birds can [MASK]") queries (Kassner and Schutze, 2019; Hosseini et al., 2021). Hossain et al. (2022) analyzed the training corpora of state-of-the-art LLMs and found that negation is rarely present, leading to the poor handling of negation at inference time.
State-of-the-art multimodal learning leverages pretrained LLMs for fusing different modalities (Jia et al., 2021; Radford et al., 2021; Oncescu et al., 2021; Kilgour et al., 2022; Nagrani et al., 2022). Will the fine-tuned LLMs intended for multimodal applications inherit the negation problem? Huang et al. (2022) showed that the zero-shot performance on the text query based audio retrieval task degrades when the text query includes negation cues, e.g., "no vocals". Yu et al. (2022) showed that the text-to-image model generates items that are mentioned in the text prompt, even when the
prompt suggests the absence of the item. However, there is no benchmark for quantitatively evaluation of how well negation patterns in the text are handled in such multimodal settings.
In this study, we created MAQA, a binary music audio question answering benchmark, to evaluate how well the multimodal transformers understand negation in music related questions. This benchmark is created from labeled videos in the music-related portion of AudioSet Gemmeke et al. (2017). While the original benchmarks features 5000 hours of audios labeled with \(527\) audio event classes and only contains a handful of labels including negation, the proposed benchmark MAQA features a significant portion of negated questions that are generated programmatically from the original audio labels. Our goal is to evaluate if multimodal transformer can be fine-tuned to understand new concepts, e.g., "no vocals" as negation of the previously learned concept, e.g., "vocals" through compositional generalization.
The main contributions of the paper are: (1) A compositional generalization experiment that demonstrates standard fine-tuning prevents our baseline model, a multimodal transformer modified from the multilingual T5 (MT5) Raffel et al. (2019); Xue et al. (2021) from generalizing to new concepts that are negation of learned concepts. (2) A PaLM-based data generation approach that automatically generate negated QA examples from easily access video tags. (3) Two task augmentation strategies that lead to a significant boost of the model performance on portion of MAQA with text negation.
The rest of this paper is organized as follows. Section 2 provides relevant background and related work on negation, compositional generalization and multimodal learning. Section 3 provides an overview of the MAQA dataset and its statistics. Section 4 details how we create the benchmark through data generation. The models and experiment results are presented in Section 5 and 6. The paper closes with a discussion on the limitations, implications of our results and future work.
## 2 Related Works
**Negation.** Despite improvements of LLMs in many NLP tasks such as natural language understanding, reading comprehension, zero-shot text generation, negation remains a challenge for pre-trained LLMs Kassner and Schutze (2019); Hosseini et al. (2021). Data augmentation has been used to tackle negation in the NLP literature. For example, modification of the natural language understanding corpora by adding negation to the verb or adjective and reversing the labels was proposed in Hossain et al. (2020), and an unlikelihood loss for the span corruption pre-training tasks was proposed in Hosseini et al. (2021). Negation is also addressed in the meta-learning literature Murty et al. (2021), where it is treated as one of the reasoning categories that requires additional few-shot classification tasks to augment the original task distribution.
**Compositional Generalization.** Compositional Generalization refers to the ability to understand novel concept as compositions of previously learned concept or _atoms_. Negation can be thought as a form of composition. In the field of semantic parsing, several benchmarks have been proposed to evaluate compositional generalization Lake and Baroni (2018); Keysers et al. (2020); Kim and Linzen (2020), which have encouraged development of techniques and architectures to make LLMs better at solving compositional tasks Furrer et al. (2020); Ontanon et al. (2022); Csordas et al. (2021); Qiu et al. (2022). Several multimodal benchmarks have shown visually grounded LLMs often struggle with compositional generalization in visual reasoning tasks Johnson et al. (2017), visual grounded command following tasks Ruis et al. (2020), text-to-image matching Zhang et al. (2021), etc. Our study focus on evaluating audio grounded LLMs on compositional tasks involves negation.
**Multimodal QA.** Multimodal question answering benchmarks are used to probe the multimodal models to evaluate their perception and reasoning capability on different modalities. Visual Question Answering benchmarks Zhang et al. (2015); Agrawal et al. (2015) commonly consist of triplets of (image, a natural language question about the image, answer), and the task is to answer the question based on the visual cue in the image. In the field of audio perception, audio QA benchmarks Fayek and Johnson (2020) are less common than audio classification benchmarks Gemmeke et al. (2017). In the music domain, most benchmarks are music information retrieval tasks Law and Von Ahn (2009), where the text labels are usually in the format of short form music tags.
**Multimodal Transformers.** A series of Transformer-based multimodal models (Sun et al., 2019; Tan and Bansal, 2019; Lu et al., 2019), referred to as "Multimodal Transformers" in this study, explored using Transformer encoder as a join encoder for multimodal fusion achieve state-of-the-art results on a range of multimodal QA tasks. Changpinyo et al. (2022) proposed a multimodal version of T5 (Raffel et al., 2019). Given the image and the question in a VQA example, the multimodal T5 takes the global and regional image features generated by a pre-trained visual encoder and text tokens of the question as inputs, and solves a classification problem with pre-defined classes of answers for the VQA task. The parameters of the visual encoder are frozen during T5 fine-tuning. We follow the same approach but use pre-trained audio encoders (Gemmeke et al., 2017; Huang et al., 2022) to extract global representation of the music audio. A detailed survey of audio representation can be found in Huang et al. (2022).
## 3 Music Audio Question Answering (MAQA)
To evaluate the ability of multimodal models to reason with negation, we create a music audio QA benchmark (MAQA) which emphasizes on correct understanding of text negation. The music audio QA pairs are generated programmatically from the music related portion of AudioSet (Gemmeke et al., 2017), which contains music audio clips annotated with music attributes and an ontology describing their relationship. There are \(388,262\) and \(4,497\) unique music audio clips in the train and evaluation split, respectively. Each clip is labeled with one or more music tags out of the \(141\) unique music attributes covering music genres, music roles, music instruments and music moods.
Table 1 presents an example in MAQA, which consists of four QA pairs generated from an example of music attribute labeled audio clip in AudioSet. Q1 and Q2 are questions generated from the same seed attribute, and essentially probe about the same musical skill, i.e., listen to a music audio and try to identify if a bass guitar is played. Q2 is a negated form of Q1. If a model answer Q1 correctly and fail on its negated counterpart Q2, it suggests that the model does not understand the negation logic in the question and unable to perform compositional generalization.
MAQA contains two evaluation sets ASBaseEval and ASNegationEval and two training sets ASBaseTrain and ASNegationTrain as shown in Table 2 with balanced binary label distribution, featuring QA pairs about music moods, genre, instrument and roles. ASBaseTrain / ASBaseEval contains
\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline
**Sampleed Attributes** & **Question** & **Answer** & **Negated** \\ \hline Bass guitar (+) & Q1)The musical instrument played & TRUE & No \\ Bass guitar (+) & in the song is **Bass guitar** & TRUE & No \\ Bass guitar (+) & Q2)**Bass guitar** is not played in the song & FALSE & Yes \\ steel guitar, slide guitar (\(-\)) & Q3)The song has **steel guitar** or **slide guitar** & FALSE & No \\ steel guitar, slide guitar (\(-\)) & Q4)The song does not have **slide guitar** or **steel guitar** & TRUE & Yes \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of generated Binary Audio QA Pairs in MAQA. The original AudioSet example is a music audio clip associated with the following tags: _Bass guitar, Guitar, Acoustic guitar and Strum_. Questions and their negated counterpart are generated from the sampled attributes with the PaLM based approach. Negative attributes _steel guitar, slide guitar_ are sampled from the sibling nodes in the AudioSet ontology.
\begin{table}
\begin{tabular}{l|l|l|l|l|l|l|l} \hline \hline \multirow{2}{*}{**Data Version**} & \multicolumn{2}{c|}{**Label Stats**} & \multicolumn{2}{c|}{**\# of QA Pairs**} & \multicolumn{3}{c}{**\# of mentions**} \\ & True Negated & Non-negated & Negated & Genre & Mood & Instrument & Role \\ \hline ASBaseEval & 50\(\%\) 0\(\%\) & 17,028 & 0 & 5574 & 730 & 9740 & 984 \\ ASNegationEval & 50\(\%\) 50\(\%\) & 17,028 & 17,028 & (\(32.7\%\)) & (\(4.3\%\)) & (\(57.2\%\)) & (\(5.8\%\)) \\ \hline ASBaseTrain & 50\(\%\) 0\(\%\) & 1,263,004 & 0 & 439,904 & 28,634 & 740,126 & 54,340 \\ ASNegationTrain & 50\(\%\) 50\(\%\) & 1,263,004 & 1,263,004 & (\(34.8\%\)) & (\(2.3\%\)) & (\(58.6\%\)) & (\(4.3\%\)) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics on Music Audio QA (MAQA) Benchmark. Statistics on Music Audio QA (MAQA) Benchmark. Both evaluation sets ASBaseEval and ASNegationEval are generated by PaLM. Each training set has a template-generated and a PaLM-generated version. All the datasets have balanced binary label distributions. ASNegationEval contains ASBaseEval and its negated counterparts. The music attributes have a similar distribution in training and evaluation split.
non-negated QA pairs about music audio recordings. ASNegationTrain / ASNegationEval is a superset of ASBaseTrain / ASBaseEval, and it also includes their negated counterparts of the QA pairs. A multimodal model with strong music audio understanding capabilities should score high on ASBaseEval. Moreover, to demonstrate its ability of reasoning about negation logic, it has to also score high on ASNegationEval.
## 4 Data Generation
Since music descriptive text that involves negation rarely occur in the standard text corpora Hossain et al. (2022), we propose the following 3-step approach to programmatically generate binary audio QA pairs that involve text negation: 1. For each music audio-attribute pair in the original dataset, we sample a negative attribute that is not associated with the audio clip. 2. Convert the positive and the negative audio-attribute pair into a binary AQA example in the format of a triplet (audio clip, question on the attribute, _True / False_ label). 3. Perform a text negation on the question and flip the binary label simultaneously to create _negated_ audio QA pairs. As a first attempt we curate MAQA from AudioSet with this method, however it can be applied to other datasets containing annotated music audios. Next, we discuss the details of how we followed the 3 steps to create MAQA from AudioSet.
**Negative Attribute Sampling.** We adopt negative sampling to create a balanced binary label distribution. In particular, we sample hard negative attributes using sibling nodes in the ontology tree and assign _False_ label to the derived audio QA pair. Consider the example in Table 1, the audio clip is tagged with _Bass guitar_ and _Acoustic guitar_, which are both under the parent node _Guitar_. We sample hard negative attributes _steel guitar_ and _slide guitar_ from the sibling nodes, to create a negative audio-attribute pair. This hard negative sampling approach encourages the model to differentiate related but different music concepts.
**Question Generation.** We explore the following two approaches to generate questions from the audio-attribute pair sampled from the first step. The first approach is template based, and it takes advantage of the AudioSet ontology, where each music attribute is associated with one of the four attribute types: genres, roles, instruments, and moods. We use type-specific templates to convert attributes into a true-or-false question, e.g., "The <_Attribute Type>_ of the song is <_Attribute Value>_." The second approach leverages the few-shot text generation capability of PaLM Chowdhery et al. (2022) to improve the diversity of generated questions. Similar to GPT-3 Brown et al. (2020), when prompted with an instruction, e.g., "Generate a sentence about music given the music attribute", PaLM learns from a few demonstrations and generates questions on unseen attributes.
**Task Augmentation with Negation.** The template-based approach convert a question to the negation form by inserting a modifier _not_ before the noun, i.e., "The <Attribute Type> of the song is _not_ <Attribute Value>_." and the binary label is flipped. One of the limitation of this approach is that it is attribute type specific and only modifies nouns. PaLM based method overcomes the limitation as with few shot learning the model can generate different negation patterns by modifying both nouns and verbs. For example, the negation patterns associated with the instrument attribute "guitar" include "no guitar", "guitar is not played", and "the song does not feature bass guitar". For each music attribute, we use PaLM to generate a few question candidates and manually pick the best one. Row 2 and 4 in Table 1 are example questions generated in this way. More example questions generated by PaLM and the prompts used are shown in Appendix 8.3.
## 5 Multimodal Modeling
Following the VQA literature Changpinyo et al. (2022); Zhang et al. (2015), we treat the audio QA as a binary classification task. We adopt a multimodal T5 architecture similar to that in Changpinyo et al. (2022) to fuse the audio and text inputs, and we replace T5 with its multi-lingual version MT5. Each music audio clip input is represented as a 128-dimensional embedding obtained either from VGGish Gemmeke et al. (2017)1, which uses a VGG ConvNet architecture, or the transformer based MuLan model Huang et al. (2022). The audio encoders are frozen when we finetune the multimodal T5. The audio embeddings are projected to the text token embedding
space through a linear projection layer, which is initialized randomly at the beginning of finetuning. Then, the audio token and text token are fed into the pre-trained multi-layer MT5 (Xue et al., 2021) encoder as a sequence of vectors and the final multimodal representation is classified into the binary classes. The multimodal code is based on the Flaxformer framework2. Training details can be found in Appendix 8.1.
Footnote 2: [https://github.com/google/flaxformer](https://github.com/google/flaxformer)
## 6 Experiments and Results
We report experiment results on the ASBaseEval and ASNegationEval evaluation benchmark in Table 3 and Table 4 respectively. The Audio QA task is formulated as a binary classification problem, and we report the best AUC-ROC score and the corresponding accuracy in the positive class. To evaluate model's ability to generalize compositionally so that it can understand composed music concepts like "no vocals" that involve negation, we split the data into train and test sets following the design recommended by (Keysers et al., 2020). By design the music attributes or _atoms_ are similarly represented in the train and test sets, while the test set contains novel combinations of the _atoms_ that are not seen in the train set. _Compound Divergence_ (CD) is used to measure quantitatively how different is the compound distributions in the train and test split (Keysers et al., 2020), while in our case CD is used as a qualitative measure (Tabel 5 in Appendix 8.2), and _compound_ refers to the QA pairs after applying compositional rules, e.g., negation to the _atoms_. For each split scenario, we compare the performance of finetuned multimodal transformer with different audio feature extractors, as well as with different sized pre-trained MT5 model. Furthermore, we vary the types of QA pairs (template-based or PaLM-based) used in training split and study how compound divergence affects learning negation.
### Music Understanding
Table 3(a) shows the result for the first split scenario where the model is trained and evaluated on non-negated QA pairs generated by PaLM. This Low CD experiment establish a fine-tuning baseline on basic music concepts. The fine-tuned multimodal MT5 score over \(90\%\) AUC-ROC on the ASBaseEval benchmark that features Audio QA tasks on music styles, moods, genres, instruments, etc. Recall the random baseline is \(50\%\) for balanced binary classification tasks, this suggests multimodal transformer learn to efficiently fuse audio and text signals through fine-tuning, even it is warm started from a text-only checkpoint. Probing the model on different music attributes suggests that music concepts like "Scary music", "Children music" and popular percussion instruments like "Cowbell" are easy for the fine-tuned model to pick up, while the model has a harder time to understand electronic music genres such as "Drum and bass", "Trance music".
We further replace the training examples generated by PaLM with the template-generated QA examples resulting in the Medium CD setting. The model scores around \(6\%\) lower in the Medium CD setting (Table 3(b)) compared to the Low CD setting. This suggest the model can still transfer most of the music knowledge learned in a different linguistic context via compositional generalization. For both split scenarios the best multimodal model is the MT5-XL with Mulan embedding as audio features.
### Reasoning with Negation
For the third split scenario (Table 3(b)) we apply the same fine-tuning setup as in Table 3(a) but evaluate on ASNegationEval, where the non-negated half is from ASBaseEval and the other half contains their negation counterparts. As shown in Table 4(a), the multimodal MT5 fine-tuned on non-negated audio QA pairs (ASBaseTrain) scores only \(50\%\) on the ASNegationEval benchmark in this High CD setting. Although the model still scores around \(80\%\) on the non-negated QAs (comparable to the accuracy on ASBaseEval in Table 3), it scores only around \(20\%\) on their negated counterparts. The model does worse than the \(50\%\) random guess baseline on these negated questions after fine-tuning. This shows that while the model is trained to answer the non-negated questions correctly, they also learn to "ignore" the negation cue in the negated questions. We also show that increasing the model size does not improve the AUC-ROC score, suggesting that even larger model fail to generalize compositionally using the standard fine-tuning approach.
### Task Augmentation
We then apply task augmentation during training by augmenting ASBaseTrain with negated QA example generated by PaLM (AsNegationTrain-PaLM), which lower the compound divergence. The task augmentation proves to be an effective strategy for tackling negation. As shown in Table 4(b), multimodal MT5 fined-tuned with task augmentation improves the baseline on ASNegationEval as shown in Table 4(a) by nearly \(40\%\), while obtaining similar performance on the non-negated QA pairs (ASBaseEval). The AUC-ROC score and accuracy is on par with the scores on the non-negated Audio QA pairs (ASBaseEval), suggesting that task augmentation can indeed help the model to learn to answer the questions with negation correctly. The best result on ASNegationEval is obtained by fine-tuned MT5-XL with MuLan audio embedding.
### Template versus PaLM
We further explore how different task augmentation strategy affects the learning outcome. As shown in Table 4(c), we use template-based approach for composing QA pairs and task augmentation, and compare with the fine-tuning results with PaLM-generated QA pairs. The Template-based fine-tuning scores around \(10\%\) lower in AUC score compared to PaLM-based fine-tuning. The observed gap can be explained by the increased compound divergence between the training data and the evaluation data. The accuracy difference on the non-negated split is around \(7\%\) while the difference on the negated split is around \(10\%\) to \(12\%\). Recall that the template-based approach only modifies the noun for negation while Palm-based approach incorporates more variations, which can explain why the template-based fine-tuning performs worse on the negated split. This experiment has highlighted the importance of composing augmented tasks with natural linguistic variations that match human language used in production environment. However, even Template-based task augmentation can improve negation understanding significantly, on average \(30\%\) higher than training without task augmentation (Table 4(a)).
\begin{table}
\begin{tabular}{c|l|l|l|l|l|l|l} \hline & \multicolumn{3}{c|}{**Finetuning Details**} & \multicolumn{3}{c}{**ASNegationEval**} \\ \hline & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{Acc} \\ & \multicolumn{1}{c|}{**Model**} & \multicolumn{1}{c|}{**Train Data - QA Type**} & \multicolumn{1}{c|}{**CD Type**} & \multicolumn{1}{c|}{AUC} & \multicolumn{1}{c|}{Avg} & \multicolumn{1}{c|}{Neg} & \multicolumn{1}{c}{NoNeg} \\ \hline a) & MT5-Base+VGGish & ASBaseTrain-PaLM & High CD & 0.524 & 0.513 & 0.218 & 0.803 \\ & MT5-XL+VGGish & ASBaseTrain-PaLM & High CD & 0.525 & 0.525 & 0.247 & 0.802 \\ & MT5-Base+MuLan & ASBaseTrain-PaLM & High CD & **0.553** & 0.541 & 0.273 & 0.802 \\ & MT5-XL+MuLan & ASBaseTrain-PaLM & High CD & 0.528 & 0.520 & 0.220 & 0.819 \\ \hline b) & MT5-Base+VGGish & ASNegationTrain-PaLM & Low CD & 0.896 & 0.814 & 0.814 & 0.813 \\ & MT5-XL+VGGish & ASNegationTrain-PaLM & Low CD & 0.903 & 0.821 & 0.821 & 0.821 \\ & MT5-Base+MuLan & ASNegationTrain-PaLM & Low CD & 0.905 & 0.821 & 0.821 & 0.822 \\ & MT5-XL+MuLan & ASNegationTrain-PaLM & Low CD & **0.907** & 0.825 & 0.825 & 0.825 \\ \hline c) & MT5-Base+VGGish & ASNegationTrain-Temp & Med CD & 0.784 & 0.715 & 0.690 & 0.741 \\ & MT5-XL+VGGish & ASNegationTrain-Temp & Med CD & 0.823 & 0.743 & 0.724 & 0.763 \\ & MT5-Base+MuLan & ASNegationTrain-Temp & Med CD & 0.805 & 0.739 & 0.723 & 0.755 \\ & MT5-XL+MuLan & ASNegationTrain-Temp & Med CD & **0.828** & 0.750 & 0.740 & 0.759 \\ \hline \end{tabular}
\end{table}
Table 4: Accuracy on ASNegationEval for three different Compound Divergence (CD) settings.
\begin{table}
\begin{tabular}{c|l|l|l|l|l} \hline & \multicolumn{3}{c|}{**Finetuning Details**} & \multicolumn{3}{c}{**ASBaseEval**} \\ \hline & \multicolumn{1}{c|}{**Model**} & \multicolumn{1}{c|}{**Train Data - QA Type**} & \multicolumn{1}{c|}{**CD Type**} & \multicolumn{1}{c|}{AUC} & \multicolumn{1}{c}{Acc} \\ \hline a) & MT5-Base+VGGish & ASBaseTrain-PaLM & Low CD & 0.905 & 0.821 \\ & MT5-XL+VGGish & ASBaseTrain-PaLM & Low CD & 0.911 & 0.827 \\ & MT5-Base+MuLan & ASBaseTrain-PaLM & Low CD & 0.913 & 0.828 \\ & MT5-XL+MuLan & ASBaseTrain-PaLM & Low CD & **0.918** & 0.832 \\ \hline b) & MT5-Base+VGGish & ASBaseTrain-Temp & Med CD & 0.847 & 0.771 \\ & MT5-XL+VGGish & ASBaseTrain-Temp & Med CD & 0.850 & 0.765 \\ & MT5-Base+MuLan & ASBaseTrain-Temp & Med CD & 0.845 & 0.766 \\ & MT5-XL+MuLan & ASBaseTrain-Temp & Med CD & **0.851** & 0.767 \\ \hline \end{tabular}
\end{table}
Table 3: Accuracy on ASBaseEval for two different Compound Divergence (CD) settings.
Conclusion
In this work, we propose a new Binary Audio QA benchmark MAQA in the music domain to probe the state-of-the-art multimodal models on understanding negation. MAQA fills in the gap of lacking negation-focused evaluation benchmark in the multimodal setting. Our experiments show that standard fine-tuning prevents the multimodal transformers from generalizing to new concepts that are negation of the learned concepts. While increasing the model size or adopting a better audio encoder doesn't help with negation, task augmentation allows the model to reason with negation by providing more fine-tuning examples that contain negation. And LLMs like PaLM can be used to generate negated examples with more natural linguistic variations, which have a significant effect on the learning outcome. With the MAQA benchmark, we hope to encourage multimodal research community to develop new modeling frameworks or algorithms to handle complex natural language instructions that involves negation. We plan to release the MAQA dataset on Github.
|
2303.15223 | How far generated data can impact Neural Networks performance? | The success of deep learning models depends on the size and quality of the
dataset to solve certain tasks. Here, we explore how far generated data can aid
real data in improving the performance of Neural Networks. In this work, we
consider facial expression recognition since it requires challenging local data
generation at the level of local regions such as mouth, eyebrows, etc, rather
than simple augmentation. Generative Adversarial Networks (GANs) provide an
alternative method for generating such local deformations but they need further
validation. To answer our question, we consider noncomplex Convolutional Neural
Networks (CNNs) based classifiers for recognizing Ekman emotions. For the data
generation process, we consider generating facial expressions (FEs) by relying
on two GANs. The first generates a random identity while the second imposes
facial deformations on top of it. We consider training the CNN classifier using
FEs from: real-faces, GANs-generated, and finally using a combination of real
and GAN-generated faces. We determine an upper bound regarding the data
generation quantity to be mixed with the real one which contributes the most to
enhancing FER accuracy. In our experiments, we find out that 5-times more
synthetic data to the real FEs dataset increases accuracy by 16%. | Sayeh Gholipour Picha, Dawood AL Chanti, Alice Caplier | 2023-03-27T14:02:43Z | http://arxiv.org/abs/2303.15223v1 | # How far generated data can impact Neural Networks performance?
###### Abstract
The success of deep learning models depends on the size and quality of the dataset to solve certain tasks. Here, we explore how far generated data can aid real data in improving the performance of Neural Networks. In this work, we consider facial expression recognition since it requires challenging local data generation at the level of local regions such as mouth, eyebrows, etc, rather than simple augmentation. Generative Adversarial Networks (GANs) provide an alternative method for generating such local deformations but they need further validation. To answer our question, we consider noncomplex Convolutional Neural Networks (CNNs) based classifiers for recognizing Ekman emotions. For the data generation process, we consider generating facial expressions (FEs) by relying on two GANs. The first generates a random identity while the second imposes facial deformations on top of it. We consider training the CNN classifier using FEs from: real-faces, GANs-generated, and finally using a combination of real and GAN-generated faces. We determine an upper bound regarding the data generation quantity to be mixed with the real one which contributes the most to enhancing FER accuracy. In our experiments, we find out that 5-times more synthetic data to the real FEs dataset increases accuracy by 16%.
Facial Expression Recognition, Generative Adversarial Networks, Synthetic data.
## 1 Introduction
Deep learning (DL) has achieved high accuracy performance in various complex tasks including recognition Rakesh et al. (2022), detection Zhou et al. (2022), localization Grumiaux et al. (2022), etc. Yet despite its success, it requires large amounts of labeled data, especially if high performance is required. For instance, considering a Facial Expression Recognition (FER) model trained on a specific Facial Expressions (FEs) dataset, it would not perform as well when applied to a moderately different real-world dataset. This is due to the distribution shift coming from a lack of diversity and biases in the datasets against certain demographic changes Drozdowski et al. (2020) such as race, gender, and age.
Biases in the training data prone trained models towards overfitting as they are optimized over the majority samples (e.g. certain age) represented in the dataset. Hence a low performance is expected over minor samples (e.g. certain races). To address this issue, we argue that having at disposal a diverse dataset would help in overcoming such biases and building a generalizable model. However, acquiring and labeling image and video data is a very expensive and time-consuming task and sometimes it is not even feasible. In this paper, we study the impact of synthetic data generation on the
performance of neural networks. We propose to alleviate the bias issue by testing a data augmentation procedure able to generate balanced and diverse data samples.
Several works da Silva and Pedrini (2015); Gu et al. (2012); Hasani and Mahoor (2017); and Zavarez et al. (2017) routinely performed standard data augmentation using affine transformation (e.g., translation, scaling, rotation, reflection, shearing, cropping, etc.). Standard augmentation does not bring any new information to enrich the training dataset to solve the bias problem. On the contrary, Generative adversarial networks (GANs) Goodfellow et al. (2014) offer the opportunity, to increase the amount of training samples, and to enrich the diversity of the final training set under certain experimental data generation process. In this paper, we consider an FER task and we address and evaluate the use of generated synthetic FEs via GANs to compensate the lack of diversity in FE training databases in an attempt to reduce the bias of the considered FER model and to increase its generalization ability.
Here, we consider a classical CNN classification scheme as it is not our intention to build a novel classifier. However we carefully design the data augmentation scheme based on combining multiple GANs that consider generating: i) new and diverse FEs with new identities and races, various genders, and different ages; ii) various FEs deformation intensities, which makes the generated facial expressions closer to spontaneous human behavior; and iii) balanced dataset where we guarantee that each identity gets the same amount of generated images per emotion class.
To this end, our contributions are:
* We design a method to generate diverse and balanced facial expression deformations.
* We empirically investigate the contribution of synthetic data and their role in improving DL performance.
* We perform a cross-database evaluation to estimate fairly the impact of generated data on the generalizability of the trained model.
The paper is structured as follows: Section 2 discusses related works; Section 3 presents the proposed procedure of building an FER system based on augmented data; Section 4 discusses the experimental results; Finally, Section 5 concludes the paper.
## 2 Related Works
In most traditional research in facial expression recognition, the combination of face appearance descriptors used to represent facial expressions with deep learning techniques is considered to overcome the difficult factors for FER. Regardless, due to the small size of public image-labeled databases, Data Augmentation (DA) techniques are often used to increase the size of the database. In addition to DA geometric transformations, more complex guided augmentation methods can be used for DA, such as GAN. In Yi et al. (2018), a conditional GAN is used to generate images to augment the FER2013 dataset. A CNN is used to train the predictive model, and the average accuracy increased by 5% after applying the GAN DA technique. Chu et al. (2019) proposed an FER method based on Contextual GAN. Chu's model uses a contextual loss function to enhance the facial expression image and a reconstruction loss function to retain the subject's identity information in the expression image. Experimental results with the extended CK+ database Lucey et al. (2010) show that Chu's method improves recognition performance by 7%. However, neither Yi's nor Chu's studies perform cross-database evaluation nor consider the generation of balanced synthetic FEs classes. Porcu et al. (2020) experimented with the combination of various data augmentation approaches, such as using synthetic images, and discovered that a combination of synthetic data with horizontal reflection, and translation can increase the accuracy by approximately 30%. They performed cross-database evaluations by training their model on an "augmented" KDEF database Lundqvist et al. (1998) and testing it on two different databases (CK+ and ExpW Zhanpeng Zhang and Tang (2016)). Unlike them, we design our method to consider a diverse but balanced generation of FE classes and create our experimental setup to resemble fair performance metrics.
## 3 Data Modality
Generative Adversarial Networks are used to generate different FEs for training our FER algorithm. Our model design splits into three different compartments: the data generation stage, the CNN classifier training stage, and the inference stage.
### Dataset Generation Process
Our data generation process relies on using two GANs on top of each other. One is for new identity generation while the other is used to impose the generation of local FEs. First, we generate new identities with new facial features using
the StyleGAN model of Karras et al. (2020) that randomly generates realistic human faces. Additionally, since we want to compare the performance of our FER model trained with both real or generated facial features, we build a database that resembles existing public databases. In those public datasets, subjects pose different expressions in front of a fixed-setting camera. For this reason, we build a novel method that jointly uses StyleGAN and StarGAN on top of each other as a way to reinforce the FEs generation process over new identities. However, due to the randomness of the StyleGAN model and the desirability of a balanced training set, we use the structure of a StarGAN model Choi et al. (2017) for image-to-image translation with different settings to artificially synthesize the six Ekman emotions (anger, disgust, fear, happiness, sadness, and surprised) on a single generated identity. We train the StarGAN model with the spontaneous public database Affectnet-HQ Mollahosseini et al. (2019) since this database captures images from various settings, and from lots of people through the internet. We use the trained model to generate facial expressions on both real face images and StyleGAN generated face images as shown in figure 1. The final result of using the image-to-image translation StarGAN model to synthesize different expressions for a given real or generated identity is shown in figure 2. As we can see, there are several artifacts in the output images, which are mainly found on the outer part of the face. However, these artifacts are not important in our task since we only focus on facial features for facial expression recognition. During this process, we generated 100,000 identities and synthesized 6 basic emotions on each of them. Finally, with some preprocessing (face cropping, gray-scale, and resizing), we generated the balanced dataset illustrated in figure 3.
### Convolutional Neural Network
In the second stage of our method, we design a CNN classifier whose architecture is summarized in table 1. Our purpose is to use a simple yet effective classifier in order to focus our attention on the contribution of GAN-generated images with respect to model enhancement. To avoid the overfitting effect, we use drop-out layers.
In the first experiment, the CNN classifier is trained on the two facial expression public databases (RaFD Langner et al. (2010), and Oulu-CASIA Zhao et al. (2011)) labeled with the 6 basic Ekman emotions. In the second experiment, the CNN model is trained again from scratch using only generated facial expressions. These two control experiments serve as baselines. Finally, we re-train the CNN model again by gradually augmenting the public databases with images of generated FEs with the same number of identities each time.
### Testing Phase
To fairly evaluate the performance of our method, we split the real and generated database to create a test database. About 17% of the data that are created are used for testing purposes. Although these test datasets are necessary to assess the model's performance, the similarity in distribution between the test and train sets makes it difficult to determine the exact contribution of the generated data to the model's performance. To ensure a fair analysis of the results and prevent bias in model prediction, it is important for our settings to perform a cross-database evaluation in which our test data resemble zero correlation with training sets. Hence, to have a fixed reference test dataset to compare all the models, we use the MMI database Pantic et al. (2005) which is completely blind to the training process.
Figure 1: Two samples of facial features.
Figure 2: Examples of synthetic facial expressions. In the top row, the StarGAN model acts on an actual human face. In the bottom row, the model works with a generated identity.
## 4 Results and Analysis
In the following, we present the results of three experiments with different settings and designs. RaFD Langner et al. (2010) and Oulu-CASIA Zhao et al. (2011) real databases have been used in experiments 1 and 3. Alternatively, in experiment 2 we use only synthetic data, and these synthetic data are also used in experiment 3 for further analysis.
### Experiment 1 - Training with real data
In this first baseline experiment, the CNN classifier for the FER model has been trained with real face images coming from RaFD and Oulu-CASIA which in total have 144 subjects. Here, we use 109 subject images with 6 basic emotions for training, 10 for validation, and 25 for testing. From these data, we only consider the frontal faces that are associated with emotional labels.
Our trained CNN classifier achieves 69.6% of accuracy when it is tested over 25 subjects. However, we observe overfitting because our model achieves 84.5% in the training phase. By applying a cross-database evaluation using the MMI database Pantic et al. (2005), the obtained accuracy drops to 45% which is expected due to the limited number of subjects we have in the training dataset. Next, we analyze each class separately to get an insight into the classes' separability. Figure 4 presents the associated confusion matrix. Based on this map, the "Happy" emotion has the best performance and the "Disgust" class is also showing a good performance compared to other classes of emotions. While this confusion matrix provides a comprehensive overview of each class, it is unnormalized, making later comparisons difficult. To analyze the results of the cross-database evaluation on the MMI database in a more detailed manner, in a second step, we measure three metrics (precision, recall, and F1-score) to explore the model's prediction with the annotations provided for the MMI database. Figure 5 presents these metrics results for each class of emotions. It can be noticed that the model trained on real facial features has the most difficulty at recognizing the "Sad" class, while the other classes are not showing a good performance either. Despite having two real databases for training, this model fails to perform adequately. As a result, adding more data to the training is necessary and we argue that adding synthetic data might aid in overcoming the overfitting and also in lifting up the accuracy rate.
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline & Layer & input & number of filters & Pool size & Activation function \\ \hline
1 & Conv2D & \(64\times 64\times 1\) & 32 & & Relu \\
2 & Conv2D & \(64\times 64\times 32\) & 64 & & Relu \\
3 & Max Pooling & \(64\times 64\times 64\) & & \(2\times 2\) & \\ \hline
4 & \multicolumn{4}{|c|}{Drop out 25\%} & \\ \hline
5 & Conv2D & \(32\times 32\times 64\) & 128 & & Relu \\
6 & Max Pooling & \(32\times 32\times 128\) & & \(2\times 2\) & \\
7 & Conv2D & \(16\times 16\times 128\) & 128 & & Relu \\
8 & Max Pooling & \(16\times 16\times 128\) & & \(2\times 2\) & \\ \hline
9 & \multicolumn{4}{|c|}{Drop out 25\%} & \\ \hline
10 & \multicolumn{4}{|c|}{Flatten} & \\
11 & \multicolumn{4}{|c|}{Dense **1024**} & & Relu \\ \hline
12 & \multicolumn{4}{|c|}{Drop out 50\%} & \\ \hline
13 & \multicolumn{4}{|c|}{Dense **6**} & \multicolumn{2}{|c|}{Softmax} \\ \hline \end{tabular}
\end{table}
Table 1: Model summary of the considered CNN-based model for Facial Expression Recognition.
Figure 3: Samples of generated FEs either on a real face or on a generated one.
### Experiment 2 - Training with synthetic generated data
In the second experiment, we follow the same protocol. The only difference is that we are using synthetic facial images as the training dataset. We consider the same number of synthetic identities as in the real dataset in the first experiment (109 identities). These identities are generated with the process presented in section 3.1. For each identity, all six basic emotions exist in the dataset. Training our model with this synthetic dataset, the accuracy reaches 99.84% during the training process and 97.6% while testing on the synthetic dataset. Also, no overfitting is observed in this experiment. Although these results show a significant improvement, performing the cross-database evaluation on the MMI database is not that promising. On the MMI database, the obtained accuracy drops to 47% showing nearly the same performance as the model trained in experiment 1.
Figure 6 presents the confusion matrix of the model trained with the synthetic dataset for the MMI database to discover whether the model trained on synthetic data has similar classifying difficulties as the model trained on real data. Comparing the Confusion Matrix in figures 4 and 6 we notice:
1. There has been a huge improvement in recognizing the class "Surprised", 620 samples instead of 187 samples.
2. We can observe improvements in the recognition of the "Angry" class, 251 samples instead of 187.
3. As compared to the CNN classifier trained on real faces, we see some drop in the "Disgust", "Fear", and "Sad" classes, but both models seem to have similar difficulties.
Also based on the presentation of precision, recall, and F1-score in figure 7, it appears the recall scores have decreased for most classes. We can therefore say that except for the "Surprised" class, the CNN model trained on synthetic data alone is unable to match the actual facial expressions annotations provided in the MMI database. The results of our current experiment prove that synthetic datasets can achieve similar performance as real datasets. Hence our final aim is to increase the dataset size to improve the performance at all class levels. In this case, we hope that the combination of these two databases will help solve such problems.
Figure 4: Confusion Matrix on the MMI database in experiment one.
Figure 5: Precision, recall, and F1-score on the MMI database obtained on the CNN model trained with real faces only (cf. experiment 1).
### Experiment 3 - Training with augmented datasets
In the last experiment, we augment the Real Facial Expressions (RFEs) dataset of experiment 1 with Generated Facial Expressions (GFEs). The number of generated identities in each unit is the same as the number in the real database used for experiment 1 (109 identities for training, 10 identities for validation, and 25 identities for testing). As an example, RFEs + 2 \(\times\) GFEs is the extension of the real FEs with two units of generated FEs (109 real identities + 218 generated identities for training). Each of the augmented datasets is split into training, validation, and test sets. And the CNN model is trained on each dataset individually. Each augmented dataset is represented in table 2 indicating the model accuracy during training and testing. The results demonstrate that adding more synthetic FEs to the training set results in constant improvement of training and testing accuracies. The study also reports no evidence of overfitting.
The cross-database evaluation on the MMI database is then performed for further validation and figure 8 shows the accuracy obtained from each trained model. Note that the first two points are the result of cross-database evaluation obtained in experiments 1 and 2 respectively. According to this figure, the highest accuracy corresponds to the model trained on the RFEs + 5 \(\times\) GFEs dataset with 58.3%. This performance from the model trained on the 5th augmented dataset indicates a 13% gain in response to the model trained in experiment 1 (with real FEs). But beyond this point,
\begin{table}
\begin{tabular}{c c c} \hline \hline & Training accuracy & Testing accuracy \\ \hline RFEs + GFEs & 91\% & 85.3\% \\ RFEs + 2 \(\times\) GFEs & 93.8\% & 89\% \\ RFEs + 3 \(\times\) GFEs & 94.8\% & 92.7\% \\ RFEs + 4 \(\times\) GFEs & 95.8\% & 92.5\% \\ RFEs + 5 \(\times\) GFEs & 97.6\% & 94.3\% \\ RFEs + 6 \(\times\) GFEs & 97.8\% & 94\% \\ RFEs + 10 \(\times\) GFEs & 97.9\% & 95.1\% \\ RFEs + 15 \(\times\) GFEs & 98.9\% & 95.5\% \\ RFEs + 20 \(\times\) GFEs & 98.9\% & 97\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy of the model trained on augmented datasets (Real Facial Expressions (RFEs) augmented by Generated Facial Expressions (GFEs)).
Figure 6: Confusion Matrix on MMI database in experiment two.
Figure 7: Precision, recall, and the F1-score on the MMI database for the model trained in experiment two (section 4.2).
the accuracy drops significantly due to a catastrophic forgetting mode caused by the large number of synthetic facial features in the training set overwhelming the facial features of the real face database.
To see the improvement of our best model (RFs + 5 \(\times\) GFFs) in each class separately, we present the confusion matrix and the calculated precision, recall, and F1-score metrics on the MMI database in figures 9 and 10 respectively. It appears that the "Sad" class performs significantly better than the two baseline experiments in all three metrics. Based on the recall scores in all classes, we can conclude that this trained model matches facial expressions in the MMI database to their actual annotations better than other trained models. In contrast with the model trained in experiment 1 (training set of real FEs), only the "Anger" class's performance decreases. In conclusion, based on our observation, we can say that generated data along with the real facial features is helping the model's recognition ability.
Furthermore, there is no limit to the number of identities we can generate. But there is a point beyond which adding new generated FEs no longer improves the results. We have observed experimentally that there is an upper limit in augmentation in relation to the size of the real face database.
Figure 8: The result of cross-database evaluation on the MMI database. RFEs is referring to Real Facial Expressions and GFFs is referring to Generated Facial Expressions.
Figure 10: Precision, recall, and F1-score on the MMI database for the model trained on the RFEs + 5 \(\times\) GFFs database in experiment three (section 4.3).
Figure 9: Confusion Matrix on MMI database in experiment three with the model trained on RFEs + 5 \(\times\) GFFs database.
### Comparison with the state-of-the-art
As a final step in this study, we compare our results with state-of-the-art findings. We use the VGG16 tool to calculate the accuracy of the FER VGG16 model on the MMI database. With that model, we achieve 54.08% accuracy while our best CNN-based model reaches 58.3% in accuracy. Through the use of synthetic facial features and a simpler model, we enhance the accuracy by 4%.
Many state-of-the-art studies have reported their evaluation results on the CK+ database. Nevertheless, we did not use the CK+ database in our training or testing processes. Therefore, in order to perform the comparison, we evaluate our best model performance on the CK+ database and the result is presented in table 3. It can be seen that the approach proposed by Zavarez et al. (2017) is the only one that outperforms our proposed CNN model. However, the difference is only 1.09% while they trained their model using 6 different public databases and some classical data augmentation techniques. Whilst our results are achieved with smaller training datasets using only two public databases and GAN images which makes our results more outstanding. In addition, compared to the study in Porcu et al. (2020) that is explained in section 2, even though their model's accuracy increased by 30%, our model had more promising results based on this cross-database evaluation.
Figure 11 shows the result of our CNN-based model in the cross-database evaluation on the CK+ database. For this database, we achieve an accurate model for most of the classes even though there is no record of this public database in our training data.
During this study, we replaced the MMI database with the CK+ database for cross-database evaluation. As a result, model performance increased by 16% rather than the 13% gain we previously achieved. While it is undeniable that generated data is a costless method that can help improve FER model accuracy, the exact gain would be determined by the test databases in applications.
Figure 11: Confusion Matrix on CK+ database.
\begin{table}
\begin{tabular}{|c|l|l|l|} \hline & Method & Training Database & Accuracy \\ \hline
1 & Proposed method & RaFD + Oulu-CASIA +GAN & **87.49\%** \\ \hline
2 & Porcu et al. (2020) & KDEF & 53.30\% \\ \hline
3 & da Silva and Pedrini (2015) & MUG & 45.60\% \\ \hline
4 & da Silva and Pedrini (2015) & JAFFE & 48.20\% \\ \hline
5 & da Silva and Pedrini (2015) & BOSPHOROUS & 57.60\% \\ \hline
6 & Lekidoui et al. (2017) & KDEF Lundqvist et al. (1998) & 78.85\% \\ \hline
7 & Gu et al. (2012) & JAFFE & 54.05\% \\ \hline
8 & Hasani and Mahoor (2017) & MMI+FERA & 73.91\% \\ \hline
9 & Molahosseini et al. (2016) & MultiPIE Gross et al. (2008), MMI, CK+, DISFA Mavadati et al. (2013), FERA Valstar et al. (2017), SFEW Dhall et al. (2011), and FERA2013 & **85.58\%** \\ \hline
10 & Zavarez et al. (2017) & CK+, JAFFE, MMI, RaFD, KDEF, BU-3DFE Yin et al. (2006), and ARFace Martinez and Benavente (1998) & **85.58\%** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison among state-of-the-art cross-database experiments tested on the CK+ database.
## 5 Conclusions
The purpose of this study was to investigate how generated data could be used to augment the data in a deep learning model to improve its performance. We chose a simple facial expression recognition model for this proposal. Our synthetic balanced dataset was created using two GAN models to test the potential improvement of the FER model performance. With real databases, synthetic datasets, and augmented datasets, we trained the CNN classifier multiple times for the FER task.
Our study confirms that enriching the training dataset with GAN images can improve CNN classifier performance. Training and cross-database evaluation performances were improved by augmenting real databases with synthetic facial features. In comparison to a model trained solely from real facial images, our best model shows a 16% increase in accuracy. On the same database, we also compared our results with the state-of-the-art and computed the accuracy of the VGG16 model, achieving 4% higher accuracy.
For further study, we propose to first augment the training database with additional real facial expressions. This will enable us to improve the performance of the model, as it would also let us augment more GAN images. Secondly, we propose to enrich the VGG16 database with our generated dataset to see if we can improve the performance of the VGG16 model as well. And third, we would like to study the potential performance increase for other applications related to facial models.
**Material, codes, and Acknowledgement:** Results can be reproduced using the code available in the GitHub repository [https://github.com/sayeh1994/synthesizin_facial_expression](https://github.com/sayeh1994/synthesizin_facial_expression) and [https://github.com/sayeh1994/Facial-Expression-Recognition](https://github.com/sayeh1994/Facial-Expression-Recognition). Most of the computations presented in this paper were performed using the Gricad infrastructure ([https://gricad.univ-grenoble-alpes.fr](https://gricad.univ-grenoble-alpes.fr)), which is supported by Grenoble research communities.
|
2305.17962 | Matrix-valued $θ$-deformed bi-orthogonal polynomials,
Non-commutative Toda theory and Bäcklund transformation | This paper is devoted to revealing the relationship between matrix-valued
$\theta$-deformed bi-orthogonal polynomials and non-commutative Toda-type
hierarchies. In this procedure, Wronski quasi-determinants are widely used and
play the role of non-commutative $\tau$-functions. At the same time, B\"acklund
transformations are realized by using a moment modification method and
non-commutative $\theta$-deformed Volterra hierarchies are obtained, which
contain the known examples of the Itoh-Narita-Bogoyavlensky lattices and the
fractional Volterra hierarchy. | Claire Gilson, Shi-Hao Li, Ying Shi | 2023-05-29T08:52:49Z | http://arxiv.org/abs/2305.17962v1 | Matrix-valued \(\theta\)-deformed bi-orthogonal polynomials, non-commutative Toda theory and Backlund transformation
###### Abstract.
This paper is devoted to revealing the relationship between matrix-valued \(\theta\)-deformed bi-orthogonal polynomials and non-commutative Toda-type hierarchies. In this procedure, Wronski quasi-determinants are widely used and play the role of non-commutative \(\tau\)-functions. At the same time, Backlund transformations are realized by using a moment modification method and non-commutative \(\theta\)-deformed Volterra hierarchies are obtained, which contain the known examples of the Itoh-Narita-Bogoyavlensky lattices and the fractional Volterra hierarchy.
Key words and phrases:matrix-valued orthogonal polynomials, \(\theta\)-deformation, Wronski quasi-determinant technique, non-commutative Toda-type lattices 2020 Mathematics Subject Classification: 39A36, 15A15
## 1. Introduction
The studies of connections between orthogonal polynomials and integrable systems doesn't only promote the development in their own respective directions, but stimulates disciplinary researches on random matrices, combinatorics, probability and so on. A famous example in this field is the connection between standard orthogonal polynomials and the Toda equation [17, 24]. Starting from a non-negative weight function \(\omega(x)\), one can define an inner product
\[\langle\cdot,\cdot\rangle:\mathbb{R}[x]\times\mathbb{R}[x]\to\mathbb{R},\quad \langle f(x),g(x)\rangle=\int_{\mathbb{R}}f(x)g(x)\omega(x)dx,\]
and a sequence of monic orthogonal polynomials \(\{P_{n}(x)\}_{n\in\mathbb{N}}\) is then defined by the orthogonality
\[\langle P_{n}(x),P_{m}(x)\rangle=h_{n}\delta_{n,m}\]
for some non-singular normalization constant \(h_{n}\), where \(\deg\,P_{n}=n\). It is then known that there exists a three-term recurrence relation for the orthogonal polynomials
\[xP_{n}(x)=P_{n+1}(x)+a_{n}P_{n}(x)+b_{n}P_{n-1}(x),\quad P_{-1}(x)=0,\,P_{0}(x)=1 \tag{1.1}\]
for some coefficients \(a_{n}\) and \(b_{n}\). This recurrence relation plays a role as a spectral problem and is a key ingredient to connect with the Toda equation. If we assume that there exist time evolutions in the weight function such that
\[\omega(x)\mapsto\omega(x;\mathbf{t}):=\exp\left(\sum_{i=1}^{\infty}t_{i}x^{i} \right)\omega(x),\]
then the orthogonal polynomials are dependent with time flows, and the Toda equation could be derived by the compatibility condition of the spectral equation and evolution part. Therefore, given a sequence of orthogonal polynomials, we could construct integrable structures for classical integrable systems, such as a Lax pair, wave functions, dressing structures, \(\tau\)-functions, and symmetries [1]. These exactly integrable structures lay a solid foundation for applications into different
## 1. Introduction
In this paper, we study the \(\theta\)-deformed bi-orthogonal polynomials \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy \(\theta\theta\)-deformed Toda hierarchy \(\theta\)-deformed Toda hierarchy
Blaszak-Marcinak hierarchy, the non-commutative nonlinear variables can be written in terms of a single Wronski quasi-determinant with higher-order derivatives. Therefore, such a method could be regarded as a modification of Hirota's direct method into non-commutative integrable systems. Although the equations written using a single quasi-determinant are no longer bilinear, the ideas of Hirota's bilinear method could be extensively enlarged into non-commutative circumstances. Therefore, based on reduction techniques in Hirota's bilinear method, we apply the idea of Backlund transformation from the commutative case into the non-commutative case. These details are discussed later in Sec. 4.
In Sec 3, we generalize our choice of \(\theta\) to positive rational numbers. For \(\theta=a/b\), where \(a,b\in\mathbb{Z}_{+}\), we show that the corresponding matrix-valued bi-orthogonal polynomials satisfy an \((a+b+1)\)-term recurrence relation. Thus it leads us to two-parameter deformed non-commutative equations. In the commutative case, the corresponding two-parameter deformed theory is related to the extended bigraded Toda hierarchy studied in [9, 10, 21] arising from the theory of Frobenius manifolds and geometric structures. In [44], solutions of the bigraded Toda hierarchy were given by using string orthogonal polynomials, which are wave functions for the 2-dimensional Toda hierarchy [2]. Therefore, works in [44] is the realization of a bigraded Toda hierarchy as a reduction of 2d-Toda theory. In this paper, we provide an explanation for such a bigraded Toda hierarchy by using matrix-valued \(\theta\)-deformed bi-orthogonal polynomials and characterize its solution in closed form by making use of block Wronski quasi-determinants. We show that \(\theta\)-deformed bi-orthogonal polynomials act exactly as the wave function for an extended bigraded Toda hierarchy.
In Sec 4, we mainly use the moment reduction technique to grade the matrix-valued orthogonal polynomial space, and find out the Backlund transformation for the \(\theta\)-deformed integrable hierarchies. In Sec. 2, we showed that a Wronski quasi-determinant could be used to construct solutions for the Blaszak-Marciniak equation. According to a property of Wronski (quasi-)determinants, it is known that (quasi-)determinants of the same order but with different phases can be regarded as solutions to the same equation. Moreover, these solutions could be linked to a simple equation according to Hirota's idea of bilinear Backlund transformation [34, SS4]. Such an idea has been widely applied to the correspondence between the Toda and the Lotka-Volterra equation (i.e. Kac-van Moerbeke lattice) [28], and later was used in orthogonal polynomials by making constraints on the weight function [59].
Revisiting the connection between orthogonal polynomials and the Toda equation mentioned at the beginning, we can set the weight function symmetrically so that moments admit the form
\[m_{i,j}=\langle x^{i},x^{j}\rangle=\left\{\begin{array}{ll}m_{i+j},&\text{ if $i+j$ is even,}\\ 0,&\text{ if $i+j$ is odd.}\end{array}\right.\]
Then corresponding orthogonal polynomials are symmetric, and their normalization factors are dependent with \(\tau_{n}^{(0)}=\det(d_{i+j})_{i,j=0}^{n-1}\) and \(\tau_{n}^{(1)}=\det(d_{i+j+1})_{i,j=0}^{n-1}\) where \(d_{i}=m_{2i}\). The Wronski technique tells us that if we assume that \(\partial_{t}d_{i}=d_{i+1}\), then each \(\tau_{n}^{(0)}\) and \(\tau_{n}^{(1)}\) are solutions of the Toda equation. Therefore, the equation derived by using symmetric orthogonal polynomials will be an integrable equation connecting different solutions of the Toda equation. In Sec 4.1, we deduce in detail the Backlund transformation corresponding to the non-commutative Blaszak-Marcininak lattice when \(\theta\) is a positive integer. The original solution space of \(\tau\)-functions is divided into \(\theta+1\) different families, together with matrix-valued polynomial space. It is shown that the polynomial space could be graded as a direct sum of equivalence classes, in which the powers of polynomials
are the same moduli \(\theta+1\). As a result, the Backlund transformation of the non-commutative Blaszak-Marciniak lattice can be written as addition and multiplication forms of the Itoh-Narita-Bogoyavlensky (INB) lattice in the non-commutative version. Since the INB lattice hierarchy can be regarded as a discretization of the Gelfand-Dickey hierarchy, we understand that the \(\theta\)-deformed integrable hierarchy can be regarded as the discrete Gelfand-Dickey flows under the perspective of orthogonal polynomials. In Sec. 4.2, we again use the Wronski quasi-determinant technique to verify solutions of a specific INB lattice, and demonstrate that the graded \(\tau\)-functions are simply connected by non-commutative Jacobi identities. Moreover, the case where \(\theta\in\mathbb{Q}_{+}\) is discussed in Sec. 4.3. We find that the spectral problem in this case corresponds to the fractional Volterra hierarchy proposed in [49], and under certain time flows, we obtain the corresponding integrable equations.
The highlights of this article are the following:
1. Matrix-valued \(\theta\)-deformed bi-orthogonal polynomials are proposed, and corresponding non-commutative integrable systems are obtained, with Lax pairs and solutions;
2. The direct method of Wronski quasi-determinants is developed. We verify solutions of several non-commutative integrable systems by using the quasi-Wronski technique;
3. The Backlund transformation can be understood as a gradation of the solution space. We make the moment reduction approach to grade wave function space and solution space, and thus realize the corresponding Backlund transformation.
## 2. Matrix-valued bi-orthogonal polynomials and Recurrence relation
Before we work on the matrix-valued bi-orthogonal polynomials, we need to introduce a matrix-valued Radon measure \(\mu\): \((-\infty,\infty)\to\mathbb{R}^{p\times p}\). Firstly, according to Riesz-Markov-Kakutani representation theory, it is known that for any positive linear functional \(\psi\) on \(C_{c}(\mathbb{R})\) (the space of continuous compactly supported real-valued functions on \(\mathbb{R}\)), there is a unique Radon measure \(\mu\) on \(\mathbb{R}\) such that
\[\psi(f)=\int_{\mathbb{R}}f(x)d\mu(x). \tag{2.1}\]
Therefore, for any matrix-valued polynomials \(f(x)\in\mathbb{R}^{p\times p}[x]\), the integration (2.1) is well-defined for a Radon measure \(\mu\). Moreover, if we normalize the Radon measure \(\mu(\mathbb{R})=\mathbb{I}_{p}\), where \(\mathbb{I}_{p}\) is a \(p\times p\) identity matrix, then according to the Radon-Nikodym theorem, the normalized Radon measure \(\mu\) is related to a matrix-valued weight function \(W(x)\) such that \(d\mu(x)=W(x)dx\). Please refer to [16] for details.
Therefore, the weight function \(W(x)\) can induce a bilinear form
\[\left\langle\cdot,\cdot\right\rangle_{\theta}:\mathbb{R}^{p\times p}[x]\times \mathbb{R}^{p\times p}[x]\to\mathbb{R}^{p\times p},\quad\left\langle f(x),g(x )\right\rangle_{\theta}=\int_{\mathbb{R}}f(x)W(x)g^{\top}(x^{\theta})dx, \tag{2.2}\]
with dependence of \(\theta\in\mathbb{Z}_{+}\). Here \(\top\) represents the transpose of a matrix. To make the bilinear form well-defined, we need to make some assumptions about \(W(x)\). Similar to the weight function discussed in [22], the matrix-valued weight function is not necessarily symmetric or Hermitian. However, to ensure the existence and uniqueness of corresponding matrix-valued bi-orthogonal polynomials, details of requirements on \(W(x)\) are addressed in Definition 2.2. Firstly, we state some properties of the bilinear form.
**Proposition 2.1**.: _The bilinear form (2.2) has the following properties:_
1. _Bimodule structures_. _For any_ \(L_{1},\,L_{2},\,R_{1},\,R_{2}\in\mathbb{R}^{p\times p}\) _and_ \(f_{1}(x),f_{2}(x),g_{1}(x),g_{2}(x)\in\mathbb{R}^{p\times p}[x]\)_, we have_ \[\begin{array}{l}\langle L_{1}f_{1}(x)+L_{2}f_{2}(x),g(x)\rangle_{\theta}=L_{ 1}\langle f_{1}(x),g(x)\rangle_{\theta}+L_{2}\langle f_{2}(x),g(x)\rangle_{ \theta},\\ \langle f(x),R_{1}g_{1}(x)+R_{2}g_{2}(x)\rangle_{\theta}=\langle f(x),g_{1}(x) \rangle_{\theta}R_{1}^{\top}+\langle f(x),g_{2}(x)\rangle_{\theta}R_{2}^{\top}.\end{array}\] (2.3)
2. _Quasi-symmetry property_. _For any_ \(f(x),g(x)\in\mathbb{R}^{p\times p}[x]\)_, it holds that_ \[\langle x^{\theta}f(x),g(x)\rangle_{\theta}=\langle f(x),xg(x)\rangle_{ \theta},\quad\text{ for }\theta\in\mathbb{Z}_{+}.\] (2.4)
Moreover, the bilinear form (2.2) can induce a family of monic bi-orthogonal polynomial sequences \(\{P_{n}(x),Q_{n}(x)\}_{n\in\mathbb{N}}\) such that
\[\langle P_{n}(x),Q_{m}(x)\rangle_{\theta}=H_{n}\delta_{n,m}, \tag{2.5}\]
where \(H_{n}\in\mathbb{R}^{p\times p}\) is a nonsingular normalization factor and deg \(P_{n}\)=deg \(Q_{n}\)=\(n\). Since \(\mathbb{R}^{p\times p}[x]\) is a free module and \(\{x^{k}\mathbb{I}_{p}\}_{k\in\mathbb{N}}\) form its basis, we know that \(P_{n}(x)\) can be expanded as
\[P_{n}(x)=\mathbb{I}_{p}x^{n}+\xi_{n,n-1}x^{n-1}+\cdots+\xi_{n,0},\quad\xi_{n,j }\in\mathbb{R}^{p\times p},\quad j=0,\cdots,n-1. \tag{2.6}\]
Moreover, according to bimodule property (2.3), the orthogonal condition (2.5) is equivalent to
\[\langle P_{n}(x),x^{j}\mathbb{I}_{p}\rangle_{\theta}=0,\quad 0\leq j\leq n-1. \tag{2.7}\]
Therefore, if we denote moments
\[m_{i+j\theta}=\langle x^{i}\mathbb{I}_{p},x^{j}\mathbb{I}_{p}\rangle_{\theta}= \int_{\mathbb{R}}(x^{i}\mathbb{I}_{p})W(x)(x^{j}\mathbb{I}_{p})^{\theta}dx= \int_{\mathbb{R}}x^{i+j\theta}W(x)dx, \tag{2.8}\]
then the orthogonal condition (2.7) is equivalent to a linear system with matrix-valued coefficients
\[\xi_{n,0}m_{j\theta}+\xi_{n,1}m_{1+j\theta}+\cdots+\xi_{n,n-1}m_{n-1+j\theta}= -m_{n+j\theta},\quad j=0,\cdots,n-1. \tag{2.9}\]
It is noted that the existence and uniqueness of bi-orthogonal polynomials \(\{P_{n}(x)\}_{n\in\mathbb{N}}\) are equivalent to the existence and uniqueness of solutions in (2.9). Therefore, we make the following assumptions about the weight function \(W(x)\), which we call the moment condition.
**Definition 2.2**.: _The weight function \(W(x)\) satisfies the moment condition if_
1. _all moments_ \(m_{i+j\theta}\) _exist and are finite;_
2. _the moment matrices_ \(\left(m_{i+j\theta}\right)_{i,j=0,1,\cdots}\) _are invertible._
It is known from Proposition A.2 that if \(W(x)\) satisfies the moment condition, then coefficients of \(P_{n}(x)\) are given by++
Footnote ‡: For self-consistent, we give the definitions and basic properties of quasi-determinants in the appendix.
\[\xi_{n,j}=-\left(m_{n},m_{n+\theta},\cdots,m_{n+(n-1)\theta}\right)\left( \begin{array}{cccc}m_{0}&m_{\theta}&\cdots&m_{(n-1)\theta}\\ m_{1}&m_{\theta+1}&\cdots&m_{(n-1)\theta+1}\\ \vdots&\vdots&&\vdots\\ m_{n-1}&m_{n-1+\theta}&\cdots&m_{n-1+(n-1)\theta}\end{array}\right)^{-1}\,e_{j+ 1}^{\top}, \tag{2.10}\]
where
\[e_{j}=(0,\cdots,\mathbb{I}_{p},\cdots,0) \tag{2.11}\]
is the block unit vector, whose \(j\)-th element is the unity \(\mathbb{I}_{p}\) and the others are zeros. Therefore, by substituting (2.10) into the expansion (2.6), we have the quasi-determinant formula
\[P_{n}(x)=\left|\begin{array}{cccc}m_{0}&\cdots&m_{(n-1)\theta}&\mathbb{I}_{p} \\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{n-1+(n-1)\theta}&x^{n-1}\mathbb{I}_{p}\\ m_{n}&\cdots&m_{n+(n-1)\theta}&\framebox{$x^{n}\mathbb{I}_{p}$}\end{array} \right|. \tag{2.12}\]
On the other hand, if we assume that
\[Q_{n}^{\top}(x)=\mathbb{I}_{p}x^{n}+\eta_{n,n-1}x^{n-1}+\cdots+\eta_{n,0}, \quad\eta_{n,j}\in\mathbb{R}^{p\times p},\quad j=0,\cdots,n-1, \tag{2.13}\]
then the orthogonal relation (2.5) gives the linear system
\[m_{j}\eta_{j,0}+m_{j+\theta}\eta_{j,1}+\cdots+m_{j+(n-1)\theta}\eta_{j,n-1}=- m_{j+n\theta},\quad j=0,\cdots,n-1. \tag{2.14}\]
Therefore, coefficients of \(Q_{n}^{\top}(x)\) are given by
\[\eta_{n,j}=-e_{j+1}\left(\begin{array}{cccc}m_{0}&m_{\theta}&\cdots&m_{(n-1 )\theta}\\ m_{1}&m_{1+\theta}&\cdots&m_{1+(n-1)\theta}\\ \vdots&\vdots&&\vdots\\ m_{n-1}&m_{n-1+\theta}&\cdots&m_{n-1+(n-1)\theta}\end{array}\right)^{-1}\left( \begin{array}{c}m_{n\theta}\\ m_{1+n\theta}\\ \vdots\\ m_{n-1+n\theta}\end{array}\right), \tag{2.15}\]
and \(Q_{n}^{\top}(x)\) admits the quasi-determinant formula
\[Q_{n}^{\top}(x)=\left|\begin{array}{cccc}m_{0}&\cdots&m_{(n-1)\theta}&m_{n \theta}\\ m_{1}&\cdots&m_{(n-1)\theta+1}&m_{n\theta+1}\\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{n-1+(n-1)\theta}&m_{n-1+n\theta}\\ \mathbb{I}_{p}&\cdots&x^{n-1}\mathbb{I}_{p}&\framebox{$x^{n}\mathbb{I}_{p}$} \end{array}\right|. \tag{2.16}\]
To conclude, we have the following definition.
**Definition 2.3**.: _With bilinear form (2.2) in which \(W(x)\) satisfies the moment condition, a family of matrix-valued bi-orthogonal polynomials \(\{P_{n}(x),Q_{n}(x)\}_{n\in\mathbb{N}}\) are defined by_
\[\langle P_{n}(x),Q_{m}(x)\rangle_{\theta}=H_{n}\delta_{n,m},\]
_where \(P_{n}(x)\) and \(Q(x)\) are given by (2.12) and (2.16) respectively, and_
\[H_{n}=\left|\begin{array}{cccc}m_{0}&\cdots&m_{(n-1)\theta}&m_{n\theta}\\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{n-1+(n-1)\theta}&m_{n-1+n\theta}\\ m_{n}&\cdots&m_{n+(n-1)\theta}&\framebox{$m_{n+n\theta}$}\end{array}\right|. \tag{2.17}\]
### Recurrence relations for \(\theta\in\mathbb{Z}_{+}\)
Hereafter, we are going to discuss the recurrence relations for \(\{P_{n}(x),Q_{n}(x)\}_{n\in\mathbb{N}}\) when \(\theta\in\mathbb{Z}_{+}\).
**Proposition 2.4**.: _The matrix-valued bi-orthogonal polynomials \(\{P_{n}(x),Q_{n}(x)\}_{n\in\mathbb{N}}\) satisfy the following recurrence relations_
\[x^{\theta}P_{n}(x) =P_{n+\theta}(x)+\sum_{j=n-1}^{n+\theta-1}\alpha_{n,j}P_{j}(x), \tag{2.18a}\] \[xQ_{n}(x) =Q_{n+1}(x)+\sum_{j=n-\theta}^{n}\beta_{n,j}Q_{j}(x), \tag{2.18b}\]
_for \(\theta\in\mathbb{Z}_{+}\) and some certain \(\alpha_{n,j}\), \(\beta_{n,j}\in\mathbb{R}^{p\times p}\)._
Proof.: Since \(\{P_{n}(x)\}_{n\in\mathbb{N}}\) form a basis of the left module \(\mathbb{R}^{p\times p}[x]\) under the bilinear form (2.2), it is known that any polynomial in \(\mathbb{R}^{p\times p}[x]\) can be written as a left linear combination of \(\{P_{n}(x)\}_{n\in\mathbb{N}}\). Therefore, we have
\[x^{\theta}P_{n}(x)=P_{n+\theta}(x)+\sum_{i=0}^{n+\theta-1}\alpha_{n,i}P_{i}(x ),\quad\alpha_{n,i}\in\mathbb{R}^{p\times p},\]
and the recurrence coefficients
\[\alpha_{n,i}=\langle x^{\theta}P_{n}(x),Q_{i}(x)\rangle_{\theta} \cdot H_{i}^{-1}=\langle P_{n}(x),xQ_{i}(x)\rangle_{\theta}\cdot H_{i}^{-1}, \quad i=0,\cdots,n+\theta-1, \tag{2.19}\]
where the last equality is due to the quasi-symmetry property (2.4). When \(i<n-1\), we know that \(\deg\,xQ_{i}<n\), and according to (2.5), the bilinear form is definitely zero. It means that \(\alpha_{n,i}=0\) when \(0\leq i<n-1\), and the recurrence relation (2.18a) is obtained. The recurrence relation (2.18b) for \(\{Q_{n}(x)\}_{n\in\mathbb{N}}\) can be proved similarly.
**Remark 2.5**.: _The recurrence relations (2.18a) and (2.18b) can alternatively be written in a matrix form. If we denote_
\[\Phi=\left(\begin{array}{c}P_{0}(x)\\ P_{1}(x)\\ \vdots\end{array}\right),\quad\Psi=\left(\begin{array}{c}Q_{0}(x)\\ Q_{1}(x)\\ \vdots\end{array}\right), \tag{2.20}\]
_then (2.18a) can be written as_
\[x^{\theta}\Phi=L\Phi,\quad L=\Lambda^{\theta}+a_{\theta-1}\Lambda^{\theta-1}+ \cdots+a_{-1}\Lambda^{-1},\]
_where \(\{a_{i}\}_{i=-1,\cdots,\theta-1}\) are block diagonal matrices \(a_{i}=\text{diag}(\alpha_{0,i},\alpha_{1,i+1},\alpha_{2,i+2},\cdots)\), \(\Lambda\) is a block shift operator_
\[\Lambda=\left(\begin{array}{cccc}0&\mathbb{I}_{p}&0&0&\cdots\\ 0&0&\mathbb{I}_{p}&0&\cdots\\ 0&0&0&\mathbb{I}_{p}&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right),\]
_and \(\Lambda^{-1}\) is defined by the transpose of \(\Lambda\)SS. Similarly, the recurrence (2.18b) could be written by_
Footnote §: It should be noted that \(\Lambda^{-1}\) is not the inverse of \(\Lambda\). We use the notation \(\Lambda^{-1}\) to denote the transpose of \(\Lambda\). In our later computations, we use the positive power and negative power of \(\Lambda\) to indicate the block upper triangular part and block lower triangular part of a matrix, respectively.
\[x\Psi=M\Psi,\quad M=\Lambda+\beta_{0}\Lambda^{0}+\cdots+\beta_{\theta}\Lambda^ {-\theta},\]
_where \(\beta_{i}=\text{diag}(\beta_{i,0},\beta_{i+1,1},\beta_{i+2,2},\cdots)\) for \(i=0,\cdots,\theta\)._
The following proposition states that those recurrence coefficients \(\alpha_{n,j}\) and \(\beta_{n,j}\) could be written in terms of quasi-determinants.
**Proposition 2.6**.: _Recurrence coefficients \(\alpha_{n,j}\) could be written in terms of quasi-determinants and_
\[\alpha_{n,j}=\left(Z_{n,j}+\sum_{k=n-1}^{j-1}Z_{n,k}\eta_{j,k} \right)H_{j}^{-1},\quad j=n-1,\cdots,n+\theta-1,\]
_where_
\[Z_{n,j}=\left\langle P_{n}(x),x^{j+1}\mathbb{I}_{p}\right\rangle_{\theta}= \left|\begin{array}{ccccc}m_{0}&m_{\theta}&\cdots&m_{(n-1)\theta}&m_{(j+1) \theta}\\ \vdots&\vdots&&\vdots&\vdots\\ m_{n-1}&m_{n-1+\theta}&\cdots&m_{n-1+(n-1)\theta}&m_{n-1+(j+1)\theta}\\ m_{n}&m_{n+\theta}&\cdots&m_{n+(n-1)\theta}&\boxed{m_{n+(j+1)\theta}}\end{array} \right|, \tag{2.21}\]
_and \(\eta_{j,k}\) is the coefficient of \(Q_{j}(x)\) given in (2.15). Moreover, if we introduce the notation_
\[Y_{n,j+\theta-1}=\left\langle x^{j+\theta}\mathbb{I}_{p},Q_{n}( x)\right\rangle_{\theta}=\left|\begin{array}{ccccc}m_{0}&m_{\theta}&\cdots&m_{(n-1) \theta}&m_{n\theta}\\ \vdots&\vdots&&\vdots&\vdots\\ m_{n-1}&m_{n-1+\theta}&\cdots&m_{n-1+(n-1)\theta}&m_{n-1+n\theta}\\ m_{j+\theta}&m_{j+2\theta}&\cdots&m_{j+n\theta}&\boxed{m_{j+(n+1)\theta}}\end{array} \right|,\]
_then \(\beta_{n,j}^{\top}\) could be expressed by_
\[\beta_{n,j}^{\top}=H_{j}^{-1}\cdot\left(Y_{n,j+\theta-1}+\sum_{k=n -\theta}^{j-1}\xi_{j,k}Y_{n,k+\theta-1}\right),\quad j=n-\theta,\cdots,n,\]
_where \(\xi_{j,k}\) is the coefficient of \(P_{j}(x)\) given in (2.10)._
Proof.: Here we only prove the quasi-determinant expression for \(\alpha_{n,j}\), that for \(\beta_{n,j}^{\top}\) can be similarly verified. By substituting the expansion of \(Q_{j}(x)\) in (2.13) into the formula (2.19), we obtain
\[\alpha_{n,j}=\left(\langle P_{n}(x),x^{j+1}\mathbb{I}_{p}\rangle_{\theta}+\sum _{k=0}^{j-1}\langle P_{n}(x),x^{k+1}\mathbb{I}_{p}\rangle_{\theta}\cdot\eta_{j,k}\right)H_{j}^{-1}.\]
To simplify above equation, the notation \(Z_{n,j}\) is thus introduced as the bilinear form \(\langle P_{n}(x),x^{j+1}\mathbb{I}_{p}\rangle_{\theta}\), which has the quasi-determinant expression as in (2.21). Moreover, when \(j=0,\cdots,n-2\), there are two identical columns in \(Z_{n,j}\). According to Proposition A.4, we know that \(Z_{n,j}=0\) for \(j=0,\cdots,n-2\), indicating that recurrence coefficients are truncated.
**Remark 2.7**.: _It should be remarked that \(\{\alpha_{n,j}\}_{j=n-1}^{n+\theta-1}\) admit another expression. Since we can expand \(P_{n}(x)\) in terms of (2.6), then by comparing coefficients in (2.18a) on both sides, we have_
\[\alpha_{n,n+\theta-1}=\xi_{n,n-1}-\xi_{n+\theta,n+\theta-1},\] \[\alpha_{n,n+\theta-2}=\xi_{n,n-2}-\xi_{n+\theta,n+\theta-2}- \alpha_{n,n+\theta-1}\xi_{n+\theta-1,n+\theta-2},\] \[\cdots\cdots\]
_This formula is useful in the verification of solutions of the Blaszak-Marcinininak three-field equations._
### Time evolutions and non-commutative lattices
In this part, we are going to discuss how to introduce time flows into the matrix-valued bi-orthogonal polynomials. We assume that there is a family of time variables \(\mathbf{t}=(t_{1},t_{2},\cdots)\) added into the weight function such that
\[W(x;\mathbf{t})=\exp\left(\sum_{i=1}^{\infty}t_{i}x^{i}\right)W(x). \tag{2.22}\]
Such deformation plays the role of a ladder operator in orthogonal polynomials theory since the action of \(\partial_{t_{i}}\) is equivalent to the action of \(x^{i}\), i.e. \(\partial_{t_{i}}W(x;\mathbf{t})=x^{i}W(x;\mathbf{t})\). Under such evolution assumptions, we have
\[\partial_{t_{i}}m_{k}=m_{k+i} \tag{2.23}\]
for the moments \(\{m_{k}\}_{k\in\mathbb{N}}\) defined in (2.8).
Therefore, we have a time-deformed bilinear form
\[\left\langle f(x),g(x)\right\rangle_{\theta}=\int_{\mathbb{R}}f(x)W(x; \mathbf{t})g^{\top}(x^{\theta})dx,\]
and time-dependent matrix-valued bi-orthogonal polynomials can be defined by
\[\left\langle P_{n}(x;\mathbf{t}),Q_{m}(x;\mathbf{t})\right\rangle_{\theta}=H_ {n}(\mathbf{t})\delta_{n,m}. \tag{2.24}\]
The derivative formulas for the time-dependent matrix-valued bi-orthogonal polynomials are then investigated.
**Proposition 2.8**.: \(\{P_{n}(x;\mathbf{t})\}_{n\in\mathbb{N}}\) _satisfying (2.24) have the evolution equation_
\[\partial_{t_{\theta}}P_{n}(x;\mathbf{t})=-\alpha_{n,n-1}P_{n-1}(x;\mathbf{t}), \tag{2.25}\]
_where \(\alpha_{n,n-1}\) is the recurrence coefficient in (2.18a); and \(\{Q_{n}(x;\mathbf{t})\}_{n\in\mathbb{N}}\) satisfy the evolution equation_
\[\partial_{t_{\theta}}Q_{n}(x;\mathbf{t})=-\sum_{j=n-\theta}^{n-1}\beta_{n,j}Q _{j}(x;\mathbf{t}), \tag{2.26}\]
_where \(\{\beta_{n,j}\}_{j=n-\theta}^{n-1}\) are recurrence coefficients in (2.18b)._
Proof.: To obtain the derivative formulas, let us consider the derivative of the orthogonal relation (2.24), from which we have
\[\begin{split}\partial_{t_{\theta}}H_{n}(\mathbf{t})\delta_{n,m}=& \langle\partial_{t_{\theta}}P_{n}(x;\mathbf{t}),Q_{m}(x;\mathbf{t}) \rangle_{\theta}+\langle P_{n}(x;\mathbf{t}),\partial_{t_{\theta}}Q_{m}(x; \mathbf{t})\rangle_{\theta}\\ &+\langle x^{\theta}P_{n}(x;\mathbf{t}),Q_{m}(x;\mathbf{t}) \rangle_{\theta}.\end{split} \tag{2.27}\]
By assuming that \(m<n\), the above equation implies
\[\langle\partial_{t_{\theta}}P_{n}(x;\mathbf{t}),Q_{m}(x;\mathbf{t})\rangle_{ \theta}=-\langle P_{n}(x;\mathbf{t}),xQ_{m}(x;\mathbf{t})\rangle_{\theta}. \tag{2.28}\]
Since \(\partial_{t_{\theta}}P_{n}(x;\mathbf{t})\) is a polynomial of degree \(n-1\), it could be expanded as a left linear combination of the orthogonal basis \(\{P_{k}(x;\mathbf{t})\}_{k=0}^{n-1}\) and
\[\partial_{t_{\theta}}P_{n}(x;\mathbf{t})=\sum_{k=0}^{n-1}\gamma_{n,k}P_{k}(x; \mathbf{t}).\]
By taking \(m=0,1,\ldots,n-1\) in (2.28), we obtain (2.25). On the other hand, by assuming that \(n<m\), then
\[\langle P_{n}(x;\mathbf{t}),\partial_{t_{\theta}}Q_{m}(x;\mathbf{t}) \rangle_{\theta}=-\langle x^{\theta}P_{n}(x;\mathbf{t}),Q_{m}(x;\mathbf{t}) \rangle_{\theta},\]
and in a similar manner, we can obtain (2.26).
With matrix-form notation, equations (2.25) and (2.26) can alternatively be written as
\[\partial_{t_{\theta}}\Phi=-L_{<0}\Phi,\quad\partial_{t_{\theta}} \Psi=-M_{<0}\Psi,\]
where \(L_{<0}\) means the strictly lower triangular part of the block matrix. Thus the compatibility conditions \(x^{\theta}\partial_{t_{\theta}}\Phi=\partial_{t_{\theta}}(x^{\theta}\Phi)\) and \(x\partial_{t_{\theta}}\Psi=\partial_{t_{\theta}}(x\Psi)\) result in integrable lattices
\[\partial_{t_{\theta}}L=[L,L_{<0}],\quad\partial_{t_{\theta}}M=[M,M_{<0}].\]
If these equations are written into explicit elements, then we have
\[\left\{\begin{array}{l}\partial_{t_{\theta}}\alpha_{n,n+\theta-1}=\alpha_{n +\theta,n+\theta-1}-\alpha_{n,n-1},\\ \partial_{t_{\theta}}\alpha_{n,i}=\alpha_{n,i+1}\alpha_{i+1,i}-\alpha_{n,n-1} \alpha_{n-1,i},\end{array}\right.\ i=n-1,\ldots,n+\theta-2, \tag{2.29}\]
and
\[\left\{\begin{array}{l}\partial_{t_{\theta}}\beta_{n,n}=\beta_{n +1,n}-\beta_{n,n-1},\\ \partial_{t_{\theta}}\beta_{n,j}=\beta_{n+1,j}-\beta_{n,j-1}+\beta_{n,n}\beta_ {n,j}-\beta_{n,j}\beta_{j,j},\end{array}\right.\ j=n-\theta,\ldots,n-1, \tag{2.30}\]
where \(\beta_{n,n-\theta-1}\) is defined to be zero. Equations (2.29) and (2.30) are non-commutative generalizations of the Toda-type equations. In this paper, we call them non-commutative hungry Toda lattices, as the corresponding commutative cases are hungry Toda lattices, which are special situations of Kostant-Toda hierarchy [57]. Moreover, in the commutative case, these equations were studied from different perspectives, such as the discrete Lax formalism, the \(r\)-matrix approach and Hamiltonian structures [4, 43]. In particular, we can also refer to equation (2.29) as the non-commutative Blaszak-Marciniak equation since Blaszak and Marciniak studied the commutative case [4] via the r-matrix approach, and to equation (2.30) as the non-commutative Kuperschmidt lattice since the Hamiltonian structure of the commutative equation was considered by Kuperschmidt [43].
**Definition 2.9**.: _Given that \(\{m_{i}\}_{i=1}^{N}\) is dependent on a time parameter \(t\), if the \(k\)-th derivative of \(m_{i}\) is denoted by \(m_{i}^{(k)}\), then the corresponding Wronski quasi-determinant is defined by_
\[\left|\begin{array}{cccc}m_{1}^{(0)}&m_{1}^{(1)}&\cdots&m_{1}^{(N-1)}\\ m_{2}^{(0)}&m_{2}^{(1)}&\cdots&m_{2}^{(N-1)}\\ \vdots&\vdots&&\vdots\\ m_{N}^{(0)}&m_{N}^{(1)}&\cdots&\boxed{m_{N}^{(N-1)}}\end{array}\right|.\]
**Remark 2.10**.: _With time parameters involved, the normalization factors \(\{H_{n}\}_{n\in\mathbb{N}}\) in (2.17) are Wronski quasi-determinants. Moreover, these Wronski quasi-determinants play the role of non-commutative \(\tau\)-functions for non-commutative integrable systems (2.29) and (2.30)._
### A quasi-determinant solution to non-commutative Blaszak-Marciniak three-field equations
In this part, we intend to make use of quasi-Wronski techniques to show that the Blaszak-Marciniak equation admits quasi-determinant solutions. The first non-trivial example in the Blaszak-Marciniak equation is when \(\theta=1\), which gives rise to the non-commutative Toda equation
\[\partial_{t_{1}}a_{n}=b_{n+1}-b_{n},\quad\partial_{t_{1}}b_{n}=a_{n}b_{n}-b_{n }a_{n-1}\]
with \(a_{n}=\left(\partial_{t_{1}}H_{n}\right)H_{n}^{-1}\) and \(b_{n}=H_{n}H_{n-1}^{-1}\), where \(H_{n}\) is the corresponding Hankel quasi-determinant. Verifications of the non-commutative Toda lattice have been exhibited in [27, 45, 47, 55] and so we omit the details here.
Therefore, we pay attention to the second non-trivial example (i.e. \(\theta=2\) case), the non-commutative Blaszak-Marciniak three-field equations. The commutative Blaszak-Marciniak three-field equations attracted much attention. For example, its Backlund transformation and superposition formula was given in [35], its algebro-geometric solution was given in [29], and its connection with moving frame was given in [60]. For the non-commutative case, the Hamiltonian structure and recursion operator were recently given in [12]. To demonstrate the quasi-determinant solution of the non-commutative Blaszak-Marciniak three-field equations, we have the following theorem.
**Theorem 2.11**.: _The non-commutative Blaszak-Marciniak three-field equations_
\[\partial_{t_{2}}\alpha_{n,n+1}=\alpha_{n+2,n+1}-\alpha_{n,n-1}, \tag{2.31a}\] \[\partial_{t_{2}}\alpha_{n,n}=\alpha_{n,n+1}\alpha_{n+1,n}-\alpha_ {n,n-1}\alpha_{n-1,n},\] (2.31b) \[\partial_{t_{2}}\alpha_{n,n-1}=\alpha_{n,n}\alpha_{n,n-1}-\alpha_ {n,n-1}\alpha_{n-1,n-1}, \tag{2.31c}\]
_admit following solutions_
\[\alpha_{n,n-1}=H_{n}H_{n-1}^{-1},\quad\alpha_{n,n}=(Z_{n,n}+H_{n} \eta_{n,n-1})H_{n}^{-1},\] \[\alpha_{n,n+1}=\xi_{n,n-1}-\xi_{n+2,n+1}=\left(Z_{n,n+1}+Z_{n,n} \eta_{n+1,n}+H_{n}\eta_{n+1,n-1}\right)H_{n+1}^{-1}\]
_in which_
\[\xi_{n+1,n}=\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n}&0\\ \vdots&&\vdots&\vdots\\ m_{n}&\cdots&m_{3n}&\mathbb{I}_{p}\\ m_{n+1}&\cdots&m_{3n+1}&0\end{array}\right|,\quad\eta_{n+1,j}=\left|\begin{array} []{cccc}m_{0}&\cdots&m_{2n}&m_{2n+2}\\ \vdots&&\vdots&\vdots\\ m_{n}&\cdots&m_{3n}&m_{3n+2}\\ e_{j+1}&&\framebox{0}\end{array}\right|,\]
_and_
\[H_{n}=\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n-2}&m_{2n}\\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&m_{3n-1}\\ m_{n}&\cdots&m_{3n-2}&\framebox{m_{3n}}\end{array}\right|,\quad Z_{n,j}=\left| \begin{array}{cccc}m_{0}&\cdots&m_{2n-2}&m_{2j+2}\\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&m_{n+1+2j}\\ m_{n}&\cdots&m_{3n-2}&\framebox{m_{n+2+2j}}\end{array}\right|,\]
_under time evolution \(\partial_{t_{2}}m_{i}=m_{i+2}\)._
**Remark 2.12**.: _Below, we show that \(\xi_{n+1,n}\), \(\eta_{n+1,j}\) and \(Z_{n,j}\) are quantities related to the Wronski quasi-determinant \(H_{n}\) and its derivatives. Therefore, we say that the non-commutative Blaszak-Marciniak three field equation could be simply expressed in terms of the Wronski quasi-determinant._
To prove this theorem, firstly, we notice the following homological relation.
**Lemma 2.13**.: _It holds that_
\[Z_{n,n}=-H_{n}\eta_{n+1,n} \tag{2.32}\]
Proof.: By using the non-commutative Jacobi identity (A.1), we have
\[\eta_{n+1,n} =\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n-2}&m_{2n}&m_{2n+2} \\ \vdots&&\vdots&\vdots&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&m_{3n-1}&m_{3n+1}\\ m_{n}&\cdots&m_{3n-2}&m_{3n}&m_{3n+2}\\ 0&\cdots&0&\mathbb{I}_{p}&\framebox{0}\end{array}\right|^{-1}\] \[=-\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n-2}&m_{2n}\\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&m_{3n-1}\\ m_{n}&\cdots&m_{3n-2}&\framebox{m_{3n}}\end{array}\right|^{-1}\left|\begin{array} []{cccc}m_{0}&\cdots&m_{2n-2}&m_{2n+2}\\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&m_{3n+1}\\ m_{n}&\cdots&m_{3n}&\framebox{m_{3n+2}}\end{array}\right|\] \[=-H_{n}^{-1}Z_{n,n}.\]
Moreover, we have the following derivative formulas for \(\xi_{n,n-1}\).
**Lemma 2.14**.: _Regarding with the derivative of \(\xi_{n,n-1}\), it holds that_
\[\partial_{t_{2}}\xi_{n,n-1}=-H_{n}H_{n-1}^{-1}. \tag{2.33}\]
Proof.: According to the derivative formula for quasi-determinants, we obtain
\[\partial_{t_{2}}\xi_{n,n-1} =\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n-4}&0\\ \vdots&&\vdots&\vdots\\ m_{n-2}&\cdots&m_{3n-6}&\mathbb{I}_{p}\\ m_{n+1}&\cdots&m_{3n-2}&\framebox{0}\end{array}\right|\] \[+\sum_{k=1}^{n-1}\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n-4} &m_{2k}\\ \vdots&&\vdots&\vdots\\ m_{n-2}&\cdots&m_{3n-6}&m_{2k+n-2}\\ m_{n-1}&\cdots&m_{3n-5}&\framebox{0}\end{array}\right|.\left|\begin{array}[] {cccc}m_{0}&\cdots&m_{2n-4}&0\\ \vdots&&\vdots&\vdots\\ m_{n-2}&\cdots&m_{3n-6}&\mathbb{I}_{p}\\ e_{k}&&\framebox{0}\end{array}\right|,\]
where \(e_{k}\) is the \(k\)-th unit vector given in (2.11). Moreover, by noting that
\[\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n-4}&m_{2k}\\ \vdots&&\vdots&\vdots\\ m_{n-2}&\cdots&m_{3n-6}&m_{2k+n-2}\\ m_{n-1}&\cdots&m_{3n-5}&\framebox{0}\end{array}\right|=\left\{\begin{array}[] {cccc}-m_{2k+n-1},&k=1,\cdots,n-2\\ -m_{3n-3}+H_{n-1},&k=n-1\end{array}\right. \tag{2.34}\]
we get
\[\partial_{t_{2}}\xi_{n,n-1}=H_{n}\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n -2}&0\\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&\framebox{\mathbb{I}_{p}}\\ 0&\cdots&\mathbb{I}_{p}&\framebox{0}\end{array}\right|=-H_{n}H_{n-1}^{-1},\]
where a non-commutative Jacobi identity (A.1) is applied to the last step.
Therefore, equation (2.31a) can be verified directly from (2.33).
**Lemma 2.15**.: _Regarding the derivative of \(H_{n}\), it holds that_
\[\partial_{t_{2}}H_{n}=Z_{n,n}+H_{n}\eta_{n,n-1}. \tag{2.35}\]
Proof.: The derivative formula for \(H_{n}\) is given by
\[\partial_{t_{2}}H_{n} =\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n-2}&m_{2n+2}\\ m_{1}&\cdots&m_{2n-1}&m_{2n+3}\\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&m_{3n+1}\\ m_{n}&\cdots&m_{3n-2}&\overline{m_{3n+2}}\end{array}\right|+\left|\begin{array} []{cccc}m_{0}&\cdots&m_{2n-2}&m_{2n}\\ m_{1}&\cdots&m_{2n-1}&m_{2n+1}\\ \vdots&\vdots&&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&m_{3n-1}\\ m_{n+2}&\cdots&m_{3n}&\overline{0}\end{array}\right|\] \[+\sum_{j=1}^{n}\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n-2}&m_ {2j}\\ m_{1}&\cdots&m_{2n-1}&m_{2j+1}\\ \vdots&\vdots&&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&m_{n-1+2j}\\ m_{n}&\cdots&m_{3n-2}&\overline{0}\end{array}\right|\cdot\left|\begin{array} []{cccc}m_{0}&\cdots&m_{2n-2}&m_{2n}\\ m_{1}&\cdots&m_{2n-1}&m_{2n+1}\\ \vdots&&\vdots&\vdots\\ m_{n-1}&\cdots&m_{3n-3}&m_{3n-1}\\ e_{j}&&\overline{0}\end{array}\right|.\]
By applying the equation (2.34), we know that only \(j=n\) is left in the sum, and (2.35) is verified.
**Remark 2.16**.: _This lemma indicates that \(\alpha_{n,n}=\left(\partial_{t_{2}}H_{n}\right)H_{n}^{-1}\)._
Therefore, by substituting formulas (2.32) and (2.35) into (2.31c), we know that the third equation of the non-commutative Blaszak-Marciniak three-field equations is verified. To verify (2.31b), a derivative formula for \(\eta_{n+1,n}\) is needed.
**Lemma 2.17**.: _Regarding with the derivative of \(\eta_{n+1,n}\), we have that_
\[\partial_{t_{2}}\eta_{n+1,n}=\eta_{n+1,n}\eta_{n+1,n}-H_{n}^{-1}Z_{n,n+1}-\eta _{n+1,n-1}. \tag{2.36}\]
Proof.: Taking the derivative of \(\eta_{n+1,n}\), one obtains
\[\partial_{t_{2}}\eta_{n+1,n}=\left|\begin{array}{cccc}m_{0}& \cdots&m_{2n}&m_{2n+4}\\ m_{1}&\cdots&m_{2n+1}&m_{2n+5}\\ \vdots&&\vdots&\vdots\\ m_{n}&\cdots&m_{3n}&m_{3n+4}\\ 0&\cdots&\mathbb{I}_{p}&\overline{0}\end{array}\right|\] \[+\sum_{j=1}^{n+1}\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n}& m_{2j}\\ m_{1}&\cdots&m_{2n+1}&m_{2j+1}\\ \vdots&&\vdots&\vdots\\ m_{n}&\cdots&m_{3n}&m_{n+2j}\\ 0&\cdots&\mathbb{I}_{p}&\overline{0}\end{array}\right|\cdot\left|\begin{array} []{cccc}m_{0}&\cdots&m_{2n}&m_{2n+2}\\ m_{1}&\cdots&m_{2n+1}&m_{2n+3}\\ \vdots&&\vdots&\vdots\\ m_{n}&\cdots&m_{3n}&m_{3n+2}\\ e_{j}&&\overline{0}\end{array}\right|.\]
Noting that
\[\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n}&m_{2j}\\ m_{1}&\cdots&m_{2n+1}&m_{2j+1}\\ \vdots&&\vdots&\vdots\\ m_{n}&\cdots&m_{3n}&m_{n+2j}\\ 0&\cdots&\mathbb{I}_{p}&\underline{0}\end{array}\right|=\left\{\begin{array}{ ccl}0&&j=1,\cdots,n-1,\\ -\mathbb{I}_{p}&&j=n,\\ \eta_{n+1,n}&j=n+1,\end{array}\right.\]
we have a simplified equation
\[\partial_{t_{2}}\eta_{n+1,n}=\left|\begin{array}{cccc}m_{0}&\cdots&m_{2n}&m_ {2n+4}\\ m_{1}&\cdots&m_{2n+1}&m_{2n+5}\\ \vdots&&\vdots&\vdots\\ m_{n}&\cdots&m_{3n}&m_{3n+4}\\ 0&\cdots&\mathbb{I}_{p}&\underline{0}\end{array}\right|+\eta_{n+1,n}\eta_{n+ 1,n}-\eta_{n+1,n-1}.\]
The application of non-commutative Jacobi identity (A.1) to \((n+1,n+2)\)-rows and \((n+1,n+2)\)-columns to the above quasi-determinant gives the result of (2.36).
Therefore, by using (2.36), (2.31b) can be directly verified.
## 3. A non-commutative generalization of the bigraded Toda lattice
In this section, we propose to generalize the non-commutative Blaszak-Marciniak lattice and Kuperschmidt lattice to the extended bigraded Toda lattice case, which in the commutative case, admits the Lax operator [9, 10, 21]
\[\mathcal{L}(a,b)=\Lambda^{a}+c_{1}\Lambda^{a-1}+\cdots+c_{a+b} \Lambda^{-b},\quad a,b\in\mathbb{Z}_{+}. \tag{3.1}\]
To construct this Lax operator by using bi-orthogonal polynomials, we need to consider the parameter \(\theta\in\mathbb{Q}_{+}\) instead of \(\theta\in\mathbb{Z}_{+}\) in bilinear form (2.2). Let us first consider recurrence relations for \(\{P_{n}(x),Q_{n}(x)\}_{n\in\mathbb{N}}\) when \(\theta=b/a\), where \(a,b\) are positive integers. It should be noted that to uniquely determine the value of \(\theta\), we need to require that \(a\) and \(b\) should be co-prime. However, the recurrence relation is dependent on the values of \(a\) and \(b\) rather than that of \(\theta\), as we demonstrate below.
**Proposition 3.1**.: _Given_
\[\theta=\frac{b}{a},\quad a,\,b\in\mathbb{Z}_{+},\]
_the so-called \((a,b)\)-graded matrix-valued bi-orthogonal polynomials \(\{P_{n}(x),Q_{n}(x)\}_{n\in\mathbb{N}}\) satisfying the bi-orthogonal relation under (2.5) have the following recurrence relations_
\[x^{b}P_{n}(x)=P_{n+b}(x)+\sum_{j=n-a}^{n+b-1}\alpha_{n,j}P_{j}(x), \tag{3.2a}\] \[x^{a}Q_{n}(x)=Q_{n+a}(x)+\sum_{j=n-b}^{n+a-1}\beta_{n,j}Q_{j}(x). \tag{3.2b}\]
_Moreover, coefficients \(\alpha_{n,j}\) and \(\beta_{n,j}\) can be written as_
\[\alpha_{n,j} =\left(\mathcal{Z}_{n,j}+\sum_{i=n-a}^{j-1}\mathcal{Z}_{n,i}\eta_{ j,i}\right)H_{j}{}^{-1},\] \[\beta_{n,j}^{T} =H_{j}{}^{-1}\left(\mathcal{Y}_{n,j}+\sum_{i=n-b}^{j-1}\xi_{j,i} \mathcal{Y}_{n,i}\right),\]
_where \(H_{n}\) has the same expression as in (2.17), and \(\mathcal{Z}_{n,j}\) and \(\mathcal{Y}_{n,j}\) can be written as quasi-determinants_
\[\mathcal{Z}_{n,j}=\langle P_{n}(x),x^{a+j}\mathbb{I}_{p}\rangle_{\theta}= \left|\begin{array}{ccccc}m_{0}&m_{\theta}&\cdots&m_{(n-1)\theta}&m_{(a+j) \theta}\\ \vdots&\vdots&&\vdots&\vdots\\ m_{n-1}&m_{n-1+\theta}&\cdots&m_{n-1+(n-1)\theta}&m_{n-1+(a+j)\theta}\\ m_{n}&m_{n+\theta}&\cdots&m_{n+(n-1)\theta}&\overline{m_{n+(a+j)\theta}}\\ \end{array}\right|,\]
_and_
\[\mathcal{Y}_{n,j}=\langle x^{b+j}\mathbb{I}_{p},Q_{n}(x)\rangle_{\theta}= \left|\begin{array}{ccccc}m_{0}&m_{\theta}&\cdots&m_{(n-1)\theta}&m_{n\theta }\\ \vdots&\vdots&&\vdots&\vdots\\ m_{n-1}&m_{n-1+\theta}&\cdots&m_{n-1+(n-1)\theta}&m_{n-1+n\theta}\\ m_{b+j}&m_{b+j+\theta}&\cdots&m_{b+j+(n-1)\theta}&\overline{m_{b+j+n\theta}}\\ \end{array}\right|.\]
Proof.: The proof of this proposition is similar to the ones for \(\theta\in\mathbb{Z}_{+}\) by rewriting the quasi-symmetry property
\[\langle x^{b}P_{n}(x),Q_{m}(x)\rangle_{\theta}=\langle P_{n}(x),x^{a}Q_{m}(x) \rangle_{\theta},\quad\theta=\frac{b}{a}\in\mathbb{Q}_{+}. \tag{3.3}\]
If we denote
\[\mathcal{L}(b,a)=\Lambda^{b}+\alpha_{b-1}\Lambda^{b-1}+\cdots+ \alpha_{-a}\Lambda^{-a},\quad\alpha_{i}=(\alpha_{0,i},\alpha_{1,i+1},\cdots),\] \[\mathcal{M}(a,b)=\Lambda^{a}+\beta_{a-1}\Lambda^{a-1}+\cdots+ \beta_{-b}\Lambda^{-b},\quad\beta_{i}=(\beta_{i,0},\beta_{i+1,1},\cdots),\]
then (3.2a) and (3.2b) can be equivalently written as
\[x^{b}\Phi=\mathcal{L}(b,a)\Phi,\quad x^{a}\Psi=\mathcal{M}(a,b)\Psi \tag{3.4}\]
where \(\Phi\) and \(\Psi\) are wave functions defined in (2.20).
Similarly, we can introduce infinitely many time flows \(\{t_{1},t_{2},\cdots\}\) such that the weight function is dependent on the time flows as in (2.22), then the evolution of the wave function can be stated as follows.
**Proposition 3.2**.: _The matrix-valued bi-orthogonal polynomials \(\{P_{n}(x;\mathbf{t})\}_{n\in\mathbb{N}}\) and \(\{Q_{n}(x;\mathbf{t})\}_{n\in\mathbb{N}}\) satisfy the evolution equations_
\[\partial_{t_{k}}P_{n}(x;\mathbf{t})=-\sum_{i=n-a}^{n-1}\alpha_{n,i}P_{i}(x; \mathbf{t}), \tag{3.5a}\] \[\partial_{t_{k}}Q_{n}(x;\mathbf{t})=-\sum_{i=n-b}^{n-1}\beta_{n,i}Q_{i}(x; \mathbf{t}), \tag{3.5b}\]
_where \(\{\alpha_{n,i}\}_{i=n-a}^{n-1}\) and \(\{\beta_{n,i}\}_{i=n-b}^{n-1}\) are the recurrence coefficients in (3.2a) and (3.2b) respectively._
**Remark 3.3**.: _The proofs of (3.5a) and (3.5b) are similar to those in Proposition 2.8. However, one should notice why we consider the \(t_{b}\)-flow. When we take the derivative of the orthogonal relation (2.5) with respect to the \(t_{j}\)-flow, we have_
\[\langle\partial_{t_{j}}P_{n}(x;\mathbf{t}),Q_{m}(x;\mathbf{t}) \rangle_{\theta}=-\langle x^{j}P_{n}(x;\mathbf{t}),Q_{m}(x;\mathbf{t})\rangle_ {\theta}\]
_similar to (2.28). Then the evaluation of \(\langle x^{j}P_{n}(x;\mathbf{t}),Q_{m}(x;\mathbf{t})\rangle_{\theta}\) determines the coefficients when expanding the derivative of polynomials. Therefore, to truncate the derivative formula, one needs to take \(j\) as a multiplier of \(b\) and makes use of the quasi-symmetry property. Proposition 3.2 gives the simplest case. However, it doesn't include all cases. If we consider a continuous independent variable \(x\), then fractional powers and logarithm of the Lax operator \(\mathcal{L}\) will be used. Further discussions with Frobenius manifolds have been given in [9] in commutative case, and its connection with Hurwitz numbers has been discussed in [58]._
Therefore, if one rewrites (3.5a) and (3.5b) as a matrix form
\[\partial_{t_{b}}\Phi=-\mathcal{L}(b,a)_{<0}\Phi,\quad\partial_{t _{b}}\Psi=-\mathcal{M}(a,b)_{<0}\Psi,\]
then compatibility conditions of (3.2a) and (3.5a) give rise to the non-commutative \((b,a)\)-graded Toda lattice
\[\partial_{t_{b}}\mathcal{L}(b,a)=[\mathcal{L}(b,a),\mathcal{L}(b,a)_{<0}],\]
while (3.2b) and (3.5b) lead to the non-commutative \((a,b)\)-graded Toda lattice
\[\partial_{t_{b}}\mathcal{M}(a,b)=[\mathcal{M}(a,b),\mathcal{M}(a,b )_{<0}].\]
Moreover, we can generalize the above procedure to higher order time flows, namely \(t_{kb}\)-flow (\(k=1,2,\cdots\)), which generates a non-commutative bigraded Toda hierarchy.
**Proposition 3.4**.: _The derivative of \(P_{n}(x;\mathbf{t})\) with respect to the \(t_{kb}\)-flow is given by the formula_
\[\partial_{t_{kb}}\Phi= -\left(\mathcal{L}^{k}(b,a)\right)_{<0}\Phi,\] (3.6a) _and that of \[Q_{n}(x;\mathbf{t})\] is given by_ \[\partial_{t_{kb}}\Psi= -\left(\mathcal{M}^{k}(a,b)\right)_{<0}\Psi. \tag{3.6b}\]
Proof.: Let us introduce the notation
\[(\langle\Phi,\Psi\rangle_{\theta}):=\left(\begin{array}{ccc} \langle P_{0}(x;\mathbf{t}),Q_{0}(x;\mathbf{t})\rangle_{\theta}&\langle P_{0} (x;\mathbf{t}),Q_{1}(x;\mathbf{t})\rangle_{\theta}&\cdots\\ \langle P_{1}(x;\mathbf{t}),Q_{0}(x;\mathbf{t})\rangle_{\theta}&\langle P_{1} (x;\mathbf{t}),Q_{1}(x;\mathbf{t})\rangle_{\theta}&\cdots\\ \vdots&\vdots&\ddots\end{array}\right),\]
Using the orthogonal relations (2.5), we have \((\langle\Phi,\Psi\rangle_{\theta})=\mathcal{H}\), which is a block diagonal matrix with elements \((H_{0},H_{1},\cdots)\). From the spectral problem (3.4), we have \(x^{kb}\Phi=\mathcal{L}^{k}(b,a)\Phi\). Therefore
\[(\langle x^{kb}\Phi,\Psi\rangle_{\theta})=(\langle\mathcal{L}^{k }(b,a)\Phi,\Psi\rangle_{\theta})=\mathcal{L}^{k}(b,a)(\langle\Phi,\Psi\rangle _{\theta})=\mathcal{L}^{k}(b,a)\mathcal{H}.\]
On the other hand, by taking the derivative for the orthogonal relation, we get
\[\partial_{t_{kb}}\mathcal{H} =(\langle\partial_{t_{kb}}\Phi,\Psi\rangle_{\theta})+(\langle \Phi,\partial_{t_{kb}}\Psi\rangle_{\theta})+\left(\langle x^{kb}\Phi,\Psi \rangle_{\theta}\right)\] \[=(\langle\partial_{t_{kb}}\Phi,\Psi\rangle_{\theta})+(\langle \Phi,\partial_{t_{kb}}\Psi\rangle_{\theta})+\mathcal{L}^{k}(b,a)\mathcal{H},\]
where \((\langle\partial_{t_{kb}}\Phi,\Psi\rangle_{\theta})\) is a strictly block lower triangular matrix and \((\langle\Phi,\partial_{t_{kb}}\Psi\rangle_{\theta})\) is strictly block upper triangular. Thus, according to the decomposition of matrix, we have
\[(\langle\partial_{t_{kb}}\Phi,\Psi\rangle_{\theta})=-\left(\mathcal{L}^{k}(b,a) \mathcal{H}\right)_{<0}=-\left(\mathcal{L}^{k}(b,a)\right)_{<0}\mathcal{H}.\]
Moreover, by the Riesz representation theorem [54], we get
\[\partial_{t_{kb}}\Phi=-\left(\mathcal{L}^{k}(a,b)\right)_{<0}\Phi.\]
The derivative of \(Q_{n}(x;\mathbf{t})\) can be similarly verified.
Therefore, the compatibility conditions of (3.6a), (3.6b) and (3.4) result in the non-commutative bigraded Toda hierarchy
\[\partial_{t_{kb}}\mathcal{L}(b,a) =[\mathcal{L}(b,a),\mathcal{L}^{k}(b,a)_{<0}],\] \[\partial_{t_{kb}}\mathcal{M}(a,b) =[\mathcal{M}(a,b),\mathcal{M}^{k}(a,b)_{<0}].\]
## 4. Moment reduction and Backlund transformation
Backlund transformations are an important tool in soliton theory. They relate different solutions to the same equation. It is well known that in the commutative case, the Lotka-Volterra equation is the Backlund transformation of the Toda equation, see e.g. [14, 33]. Such a relation has been generalized to the hungry Toda case (i.e. commutative Blaszak-Marciniak/Kuperschmidt lattice hierarchy) and the hungry Lotka-Volterra hierarchy (or so-called Itoh-Narita-Bogoyavlensky lattice hierarchy) [25, 56].
Therefore, it is of interest to understand Backlund transformations from the orthogonal polynomials perspective, as well as to obtain a non-commutative generalization of the Itoh-Narita-Bogoyavlensky lattice hierarchy. To this end, we start from the Wronski quasi-determinant solutions of the Blaszak-Marciniak/Kuperschmidt lattice hierarchy, and introduce the moment reduction approach.
### A Backlund transformation for the Blaszak-Marciniak lattice
We first consider the Backlund transformation for \(\theta\in\mathbb{Z}_{+}\) case. For \(\theta=1\), the Backlund transformation between the non-commutative Toda and Lotka-Volterra has been revealed in [47].
Let us first claim the following proposition.
**Proposition 4.1**.: _If_
\[H_{n}=\left|\begin{array}{cccc}m_{0}&m_{\theta}&\cdots&m_{n\theta}\\ m_{1}&m_{\theta+1}&\cdots&m_{n\theta+1}\\ \vdots&\vdots&&\vdots\\ m_{n}&m_{\theta+n}&\cdots&\overline{m_{n\theta+n}}\end{array}\right|,\quad n=0,1,\cdots\]
_are solutions to (2.29) and (2.30) under proper evolutions \(\partial_{t}m_{i}=m_{i+\theta}\), then_
\[H_{n}^{(\ell)}=\left|\begin{array}{cccc}m_{\ell}&m_{\ell+\theta}&\cdots&m_{ \ell+n\theta}\\ m_{\ell+1}&m_{\ell+\theta+1}&\cdots&m_{\ell+n\theta+1}\\ \vdots&\vdots&&\vdots\\ m_{\ell+n}&m_{\ell+\theta+n}&\cdots&\overline{m_{\ell+n\theta+n}}\end{array} \right|,\quad n=0,1,\cdots\]
_are still solutions to (2.29) and (2.30) under the same evolutions._
**Remark 4.2**.: _This proposition is an alternative expression of the Wronski property. It is known that if the seed functions are properly chosen such that an equation admits a proper Wronskian-type solution, then the expression of solutions is independent of the choice of seed functions [34, SS4.3.1]. Such an idea also appears in the construction of non-commutative integrable systems by using the Marchenko lemma [23]._
Based on this proposition, there are two questions remaining if we want to construct a Backlund transformation for equations (2.29) and (2.30). One is how to construct \(H_{n}^{(\ell)}\), and another is how to find a possible connection equation for those \(H_{n}^{(\ell)}\). To construct \(H_{n}^{(\ell)}\), a moment reduction method is needed.
**Proposition 4.3**.: _If we take the moment reduction_
\[\langle x^{i}\mathbb{I}_{p},x^{j}\mathbb{I}_{p}\rangle_{\theta}=m_{i+j\theta}: =\left\{\begin{array}{ll}d_{\frac{i+j\theta}{\theta+1}},&i+j\theta\mod\theta +1=0,\\ 0,&i+j\theta\mod\theta+1\neq 0,\end{array}\right. \tag{4.1}\]
_then \(\{H_{n(\theta+1)+\ell}\}_{\ell=0,1,\cdots,\theta,\,n\in\mathbb{N}}\) are graded and have expressions of Wronski quasi-determinants_
\[H_{n(\theta+1)+\ell}=\left|\begin{array}{cccc}d_{\ell}&d_{\ell+\theta}& \cdots&d_{\ell+n\theta}\\ d_{\ell+1}&d_{\ell+1+\theta}&\cdots&d_{\ell+1+n\theta}\\ \vdots&\vdots&&\vdots\\ d_{\ell+n}&d_{\ell+n+\theta}&\cdots&\framebox{$d_{\ell+n+n\theta}$}\end{array} \right.\right|,\quad\ell=0,1,\cdots,\theta,\,n\in\mathbb{N}\]
Proof.: We prove the quasi-determinant expressions for \(H_{n(\theta+1)}\), and the others can be similarly verified. Expanding \(H_{n(\theta+1)}\) by definition, we have
\[H_{n(\theta+1)}=d_{n+n\theta}-(v_{0},v_{1},\cdots,v_{n-1},\tilde{v}_{n})\left( \begin{array}{cccc}M_{0}&M_{\theta}&\cdots&\hat{M}_{n\theta}\\ M_{1}&M_{\theta+1}&\cdots&\hat{M}_{n\theta+1}\\ \vdots&\vdots&&\vdots\\ \tilde{M}_{n}&\tilde{M}_{\theta+n}&\cdots&\hat{\tilde{M}}_{n\theta+n}\end{array} \right)^{-1}\left(\begin{array}{c}w_{0}\\ w_{1}\\ \vdots\\ \tilde{w}_{n}\end{array}\right),\]
where \(\tilde{v}_{n}\) (resp. \(\tilde{w}_{n}\)) is a \(\theta\) row (resp. column) zero vector, and \(v_{j}\) (resp. \(w_{j}\)) are \(\theta+1\) row (resp. column) vectors of the form
\[v_{j}=(d_{n+j\theta},0,\cdots,0)\,,\quad w_{j}=\left(\begin{array}{c}d_{n \theta+j}\\ \vdots\\ 0\\ 0\end{array}\right),\]
and
\[\hat{\hat{M}}_{j}=\left(\begin{array}{cccc}d_{j}&0&\cdots&0\\ 0&d_{j+1}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&d_{j+\theta-1}\end{array}\right),\quad\hat{M}_{j}=\left(\begin{array} []{c}\hat{\hat{M}}_{j}\\ 0\\ 0\end{array}\right),\quad\tilde{M}_{j}=\left(\begin{array}{cc}\hat{\hat{M}}_ {j}&0\\ 0\end{array}\right),\quad M_{j}=\left(\begin{array}{cc}\hat{\hat{M}}_{j}&0\\ 0&d_{j+\theta}\end{array}\right).\]
The action of a permutation matrix gives the result.
Therefore, we know that \(\{H_{n}\}_{n\in\mathbb{N}}\) could be divided into \(\theta+1\) different families of Wronski quasi-determinant. Let's denote
\[H_{n(\theta+1)+\ell}:=\tilde{H}_{n}^{(\ell)},\quad\ell=0,1,\cdots,\theta,\,n\in \mathbb{N}\]
and if moments \(\{d_{i},\,i=0,1,\cdots\}\) satisfy evolutions \(\partial_{t}d_{i}=d_{i+\theta}\), we know that for each \(\ell=0,\cdots,\theta\), \(\{\tilde{H}_{n}^{(\ell)}\}_{n\in\mathbb{N}}\) are solutions of the non-commutative Blaszak-Marciniak and Kuperschmidt lattices based on Proposition 4.1.
Our next step is to find a connecting equation for these solutions. For this purpose, we consider how moment reduction influences the matrix-valued bi-orthogonal polynomials.
**Theorem 4.4**.: _Under the moment reduction condition (4.1), we have_
\[P_{n(\theta+1)+\ell}(x)=x^{\ell}\tilde{P}_{n}^{(\ell)}(x^{\theta+1}),\quad Q_ {n(\theta+1)+\ell}=x^{\ell}\tilde{Q}_{n}^{(\ell)}(x^{\theta+1}),\quad\ell=0,1,\cdots,\theta,\]
_where_
\[\tilde{P}_{n}^{(\ell)}(x) =\left|\begin{array}{cccc}d_{\ell}&d_{\ell+\theta}&\cdots&d_{ \ell+(n-1)\theta}&\mathbb{I}_{p}\\ d_{\ell+1}&d_{\ell+1+\theta}&\cdots&d_{\ell+1+(n-1)\theta}&x\mathbb{I}_{p}\\ \vdots&\vdots&&\vdots&\vdots\\ d_{\ell+n}&d_{\ell+n+\theta}&\cdots&d_{\ell+n+(n-1)\theta}&\framebox{$x^{n} \mathbb{I}_{p}$}\\ \end{array}\right|,\] \[\tilde{Q}_{n}^{(\ell)}(x) =\left|\begin{array}{cccc}d_{\ell}&d_{\ell+\theta}&\cdots&d_{ \ell+n\theta}\\ \vdots&\vdots&&\vdots\\ d_{\ell+n-1}&d_{\ell+n-1+\theta}&\cdots&d_{\ell+n-1+n\theta}\\ \mathbb{I}_{p}&x\mathbb{I}_{p}&\cdots&\framebox{$x^{n}\mathbb{I}_{p}$}\\ \end{array}\right|.\]
_Moreover, the graded orthogonality reads_
\[\langle P_{n(\theta+1)+\ell}(x),Q_{m(\theta+1)+k}(x)\rangle_{\theta}=\tilde{ H}_{n}^{(\ell)}\delta_{n,m}\delta_{\ell,k}. \tag{4.2}\]
The proof of this theorem is based on the same idea as in Prop. 4.3, by making use of block matrices and permutations. Moreover, this theorem implies that there is a gradation of the module
\[\mathbb{R}^{p\times p}[x]=\mathbb{R}_{0}^{p\times p}[x]\oplus\mathbb{R}_{1}^{ p\times p}[x]\oplus\cdots\oplus\mathbb{R}_{\theta}^{p\times p}[x],\]
where
\[\mathbb{R}_{\ell}^{p\times p}[x]=\text{span}\{x^{\ell}\mathbb{I}_{p},x^{\ell+ \theta+1}\mathbb{I}_{p},\cdots,x^{\ell+n(\theta+1)}\mathbb{I}_{p},\cdots\}, \quad\ell=0,1,\cdots,\theta.\]
Therefore, we have the following important observation
\[P_{n(\theta+1)+\ell}(x)\in\mathbb{R}_{\ell}^{p\times p}[x], \tag{4.3}\]
from which the terms of recurrence relations would be dramatically decreased. We have the following proposition.
**Proposition 4.5**.: _Monic bi-orthogonal polynomials \(\{P_{n}(x)\}_{n\in\mathbb{N}}\) satisfying the orthogonal relation (4.2), have recurrence relations_
\[\begin{split} x^{\theta}P_{n(\theta+1)}(x)&=P_{n( \theta+1)+\theta}(x)+\xi_{n,0}P_{(n-1)(\theta+1)+\theta}(x),\\ x^{\theta}P_{n(\theta+1)+\ell}(x)&=P_{(n+1)(\theta+1)+ \ell-1}(x)+\xi_{n,\ell}P_{n(\theta+1)+\ell-1}(x),\quad\ell=1,\cdots,\theta, \end{split} \tag{4.4}\]
_where_
\[\xi_{n,\ell}=\left\{\begin{array}{ll}\tilde{H}_{n}^{(0)}\left(\tilde{H}_{n-1}^{( \theta)}\right)^{-1},&\ell=0,\\ \tilde{H}_{n}^{(\ell)}\left(\tilde{H}_{n}^{(\ell-1)}\right)^{-1},&\ell=1, \cdots,\theta\end{array}\right.\]
_The \(\{Q_{n}(x)\}_{n\in\mathbb{N}}\), satisfy_
\[\begin{split} xQ_{n(\theta+1)+\ell}&=Q_{n(\theta+1)+\ell+1}(x )+\eta_{n,\ell}Q_{(n-1)(\theta+1)+\ell+1},\quad\ell=0,\cdots,\theta-1,\\ xQ_{n(\theta+1)+\theta}&=Q_{(n+1)(\theta+1)}(x)+\eta_{n,\theta}Q_{n( \theta+1)},\end{split} \tag{4.5}\]
_where_
\[\eta_{n,\ell}^{\top}=\left\{\begin{array}{ll}\left(\tilde{H}_{n-1}^{(\ell+ 1)}\right)^{-1}\tilde{H}_{n}^{(\ell)},&\ell=0,\cdots,\theta-1,\\ \left(\tilde{H}_{n}^{(0)}\right)^{-1}\tilde{H}_{n}^{(\theta)},&\ell=\theta. \end{array}\right.\]
Proof.: When \(x^{\theta}\) acts on \(P_{n(\theta+1)+\ell}(x)\), there are two different cases based on whether \(\ell+\theta\leq\theta\) or \(\ell+\theta>\theta\) due to the gradation. Therefore, we should discuss \(\ell=0\) and \(\ell=1,\cdots,\theta\) seperately. When \(\ell=0\), then
\[x^{\theta}P_{n(\theta+1)}(x)=P_{n(\theta+1)+\theta}(x)+\sum_{k=0}^{n-1}\xi_{n,0}^{(k)}P_{k(\theta+1)+\theta}(x),\]
where
\[\xi_{n,0}^{(k)}=\langle x^{\theta}P_{n(\theta+1)}(x),Q_{k(\theta+1)+\theta}(x )\rangle_{\theta}\left(\tilde{H}_{k}^{(\theta)}\right)^{-1}=\langle P_{n( \theta+1)}(x),xQ_{k(\theta+1)+\theta}(x)\rangle_{\theta}\left(\tilde{H}_{k}^{ (\theta)}\right)^{-1},\]
and the last step here is due to the quasi-symmetry (2.4). Therefore, according to the graded orthogonality (4.2), only when \(k=n-1\) is the bilinear form nonzero and we get
\[\xi_{n,0}^{(k)}=\left\{\begin{array}{ll}0,&k=0,\cdots,n-2,\\ \tilde{H}_{n}^{(0)}\left(\tilde{H}_{n-1}^{(\theta)}\right)^{-1},&k=n-1.\end{array}\right.\]
When \(\ell=1,\cdots,\theta\), by similar discussion, we have
\[x^{\theta}P_{n(\theta+1)+\ell}(x)=P_{(n+1)(\theta+1)+\ell-1}(x)+\sum_{k=0}^{n }\xi_{n,\ell}^{(k)}P_{k(\theta+1)+\ell-1}(x),\]
and
\[\xi_{n,\ell}^{(k)}=\left\{\begin{array}{ll}0&k=0,\cdots,n-1,\\ \tilde{H}_{n}^{(\ell)}\left(\tilde{H}_{n}^{(\ell-1)}\right)^{-1}&k=n.\end{array}\right.\]
Therefore, we can ignore the upper index and the proof is complete for \(\{P_{n}(x)\}_{n\in\mathbb{N}}\). The proof for \(\{Q_{n}(x)\}_{n\in\mathbb{N}}\) is similar.
**Remark 4.6**.: _The corresponding spectral problems (4.4) and (4.5) can be characterized by_
\[x^{\theta}\Phi=(\Lambda^{\theta}+\mathcal{A}\Lambda^{-1})\Phi:= \mathcal{L}\Phi, \mathcal{A}=\text{diag}(\xi_{0,0},\cdots,\xi_{0,\theta},\xi_{1,0}, \cdots,\xi_{1,\theta},\cdots), \tag{4.6a}\] \[x\Psi=(\Lambda+\mathcal{B}\Lambda^{-\theta})\Psi:=\mathcal{M}\Psi, \mathcal{B}=\text{diag}(\eta_{0,0},\cdots,\eta_{0,\theta},\eta_{1,0}, \cdots,\eta_{1,\theta},\cdots). \tag{4.6b}\]
Time evolutions are then considered. Considering the time-dependent weight function (2.22), we have the following proposition.
**Proposition 4.7**.: _Only \(t_{c(\theta+1)}\)-flows (\(c=1,2,\cdots\)) are compatible with the moment reduction condition (4.1). In other words, if \(k\,\text{mod}\,(\theta+1)\neq 0\),_
\[\partial_{t_{k}}P_{n}(x;\mathbf{t})=\partial_{t_{k}}Q_{n}(x;\mathbf{t})=0,\]
_i.e. the wave function doesn't evolve with respect to \(t_{k}\)-flows if \(k\,\text{mod}\,(\theta+1)\neq 0\)._
Proof.: From the expression for the time-dependent weight function (2.22), we know that \(\partial_{t_{k}}m_{i}=m_{i+k}\). With the moment reduction condition, it is known that if \(i=0\) mod \((\theta+1)\), then \(i+k=k\) mod \((\theta+1)\). Therefore, only when \(k=0\) mod \((\theta+1)\) do we have \(\partial_{t_{k}}d_{i}\neq 0\), and the corresponding derivative of the coefficient doesn't vanish.
Therefore, if we take the \(t_{c(\theta+1)}\)-derivative of the orthogonal relation \((\langle\Phi,\Psi\rangle_{\theta})=\mathcal{H}\), we have
\[\big{(}\langle\partial_{t_{c(\theta+1)}}\Phi,\Psi\rangle_{\theta}\big{)}+ \big{(}\langle\Phi,\partial_{t_{c(\theta+1)}}\Psi\rangle_{\theta}\big{)}+ \Big{(}\langle x^{c(\theta+1)}\Phi,\Psi\rangle_{\theta}\Big{)}=\partial_{t_{c( \theta+1)}}\mathcal{H}.\]
Moreover, if \(c\) is a multiplier of \(\theta\), then we get
\[\partial_{t_{k\theta(\theta+1)}}\Phi =-(\mathcal{L}^{k(\theta+1)})_{<0}\Phi, \tag{4.7a}\] \[\partial_{t_{k\theta(\theta+1)}}\Psi =-(\mathcal{M}^{k(\theta+1)})_{<0}\Psi, \tag{4.7b}\]
by following the proof of Proposition 3.4. The first non-trivial flow is \(t_{\theta(\theta+1)}\) flow, which could be read by the compatibility conditions of (4.6a)-(4.7a), and (4.6b)-(4.7b). Thus we get
\[\partial_{t_{\theta(\theta+1)}}\mathcal{L} =\left[\mathcal{L},(\mathcal{L}^{\theta+1})_{<0}\right], \tag{4.8a}\] \[\partial_{t_{\theta(\theta+1)}}\mathcal{M} =\left[\mathcal{M},(\mathcal{M}^{\theta+1})_{<0}\right]. \tag{4.8b}\]
By realizing that
\[\big{(}(\Lambda^{\theta}+\mathcal{A}\Lambda^{-1})^{\theta+1}\big{)}_{<0}= \big{(}(\mathcal{A}\Lambda^{-1})^{\theta+1}\big{)}_{<0}\,,\]
we get the equation expressed in terms of elements in \(\mathcal{A}\) and
\[\partial_{t_{\theta(\theta+1)}}(\mathcal{A}\Lambda^{-1})=\left[\Lambda^{\theta },(\mathcal{A}\Lambda^{-1})^{\theta+1}\right].\]
Moreover, if we denote \(\xi_{i,j}=\gamma_{i(\theta+1)+j}\), then equation (4.8a) could be explicitly written as
\[\partial_{t_{\theta(\theta+1)}}\gamma_{n}=\gamma_{n+\theta}\cdots\gamma_{n}- \gamma_{n}\cdots\gamma_{n-\theta}. \tag{4.9}\]
On the other hand, from equations (4.8b), we obtain the equation in terms of \(\mathcal{B}\) and
\[\partial_{t_{\theta(\theta+1)}}\mathcal{B}\Lambda^{-\theta}=\sum_{i+j=\theta- 1\atop i,j\geq 0}\Lambda^{i+1}(\mathcal{B}\Lambda^{-\theta})\Lambda^{j}( \mathcal{B}\Lambda^{-\theta})-\sum_{i+j=\theta-1\atop i,j\geq 0}(\mathcal{B} \Lambda^{-\theta})\Lambda^{i}(\mathcal{B}\Lambda^{-\theta})\Lambda^{j+1}.\]
Explicitly, one has
\[\partial_{t_{\theta(\theta+1)}}\zeta_{n}=\left(\sum_{i=1}^{\theta}\zeta_{n+i} \right)\zeta_{n}-\zeta_{n}\left(\sum_{i=1}^{\theta}\zeta_{n-i}\right) \tag{4.10}\]
where \(\eta_{i,j}=\zeta_{i(\theta+1)+j}\).
In the literature, the commutative versions of equations (4.9) and (4.10) are called the Itoh-Narita-Bogoyavleskii (INB) lattices [5, 12, 38, 52], as additive and multiplicative extensions of the Lotka-Volterra lattice. In [12], the Hamiltonian structure of non-commutative INB lattice was studied recently.
The Backlund transformation for Blaszak-Marciniak three-field equations and its quasi-determinant solutions
In this section, let us consider the \(\theta=2\) case, which gives a Backlund transformation for the three-field equations (2.31a)-(2.31c). It reads from (4.9) that*
Footnote *: Here, we use \(t\) instead of \(t_{6}\) to make the writing simpler.
\[\partial_{t}\gamma_{n}=\gamma_{n+2}\gamma_{n+1}\gamma_{n}-\gamma_{n}\gamma_{n -1}\gamma_{n-2}, \tag{4.11}\]
and we have the following proposition.
**Proposition 4.8**.: _The non-commutative INB lattice (4.11) has the solutions_
\[\gamma_{3n}=\tilde{H}_{n}^{(0)}\left(\tilde{H}_{n-1}^{(2)}\right)^{-1},\quad \gamma_{3n+1}=\tilde{H}_{n}^{(1)}\left(\tilde{H}_{n}^{(0)}\right)^{-1},\quad \gamma_{3n+2}=\tilde{H}_{n}^{(2)}\left(\tilde{H}_{n}^{(1)}\right)^{-1},\]
_whose moments \(\{d_{i}\}_{i\in\mathbb{N}}\) satisfy time evolutions \(\partial_{t}d_{i}=d_{i+2}\)._
If we take solutions into the equation (4.11), it is necessary to verify that
\[\partial_{t}\left(\tilde{H}_{n}^{(0)}\left(\tilde{H}_{n-1}^{(2)} \right)^{-1}\right) =\tilde{H}_{n}^{(2)}\left(\tilde{H}_{n-1}^{(2)}\right)^{-1}-\tilde {H}_{n}^{(0)}\left(\tilde{H}_{n-1}^{(0)}\right)^{-1}, \tag{4.12a}\] \[\partial_{t}\left(\tilde{H}_{n}^{(1)}\left(\tilde{H}_{n}^{(0)} \right)^{-1}\right) =\tilde{H}_{n+1}^{(0)}\left(\tilde{H}_{n}^{(0)}\right)^{-1}-\tilde {H}_{n}^{(1)}\left(\tilde{H}_{n-1}^{(1)}\right)^{-1},\] (4.12b) \[\partial_{t}\left(\tilde{H}_{n}^{(2)}\left(\tilde{H}_{n}^{(1)} \right)^{-1}\right) =\tilde{H}_{n+1}^{(1)}\left(\tilde{H}_{n}^{(1)}\right)^{-1}-\tilde {H}_{n}^{(2)}\left(\tilde{H}_{n-1}^{(2)}\right)^{-1}. \tag{4.12c}\]
By simplifying (4.12a), one gets
\[\left(\tilde{H}_{n}^{(0)}\right)^{-1}\partial_{t}\tilde{H}_{n}^{(0)}-\left( \tilde{H}_{n-1}^{(2)}\right)^{-1}\partial_{t}\tilde{H}_{n-1}^{(2)}=\left( \tilde{H}_{n}^{(0)}\right)^{-1}\tilde{H}_{n}^{(2)}-\left(\tilde{H}_{n-1}^{(0)} \right)^{-1}H_{n-1}^{(2)}. \tag{4.13}\]
According to equations (2.35) and (2.32), we know that if we introduce the notation
\[\tilde{\eta}_{n+1,n}^{(\ell)}=\left|\begin{array}{ccccc}d_{\ell}&d_{\ell+2}& \cdots&d_{\ell+2n}&d_{\ell+2n+2}\\ d_{\ell+1}&d_{\ell+3}&\cdots&d_{\ell+2n+1}&d_{\ell+2n+3}\\ \vdots&\vdots&&\vdots&\vdots\\ d_{\ell+n}&d_{\ell+n+2}&\cdots&d_{\ell+3n}&d_{\ell+3n+2}\\ 0&0&\cdots&\mathbb{I}_{p}&\framebox{0}\end{array}\right|,\]
then (4.13) is equal to
\[\tilde{\eta}_{n,n-1}^{(2)}-\tilde{\eta}_{n+1,n}^{(0)}-\left(\tilde{\eta}_{n-1,n-2}^{(2)}-\tilde{\eta}_{n,n-1}^{(0)}\right)=\left(\tilde{H}_{n}^{(0)}\right) ^{-1}\tilde{H}_{n}^{(2)}-\left(\tilde{H}_{n-1}^{(0)}\right)^{-1}\tilde{H}_{n-1} ^{(2)}.\]
Moreover, from the next proposition, we know that (4.12a) is valid.
**Proposition 4.9**.: _It holds that_
\[\tilde{\eta}_{n+1,n}^{(0)}=\tilde{\eta}_{n,n-1}^{(2)}-\left(\tilde{H}_{n}^{(0 )}\right)^{-1}\tilde{H}_{n}^{(2)}.\]
Proof.: According to the identity (A.1) to \((n,n+1)\)-rows and \((1,n+1)\)-columns, we have
\[\tilde{\eta}_{n+1,n}^{(0)}=\left|\begin{array}{cccc}d_{2}&\cdots&d_{2n}&d_{2n+ 2}\\ d_{3}&\cdots&d_{2n+1}&d_{2n+3}\\ \vdots&&\vdots&\vdots\\ d_{n+2}&\cdots&d_{3n}&d_{3n+2}\\ 0&\cdots&\mathbb{I}_{p}&\framebox{0}\end{array}\right|\] \[-\left|\begin{array}{cccc}d_{0}&d_{2}&\cdots&d_{2n}\\ d_{1}&d_{3}&\cdots&d_{2n+1}\\ \vdots&\vdots&&\vdots\\ d_{n-1}&d_{n+1}&\cdots&d_{3n-1}\\ \framebox{0}&0&\cdots&\mathbb{I}_{p}\end{array}\right|\left|\begin{array}{ ccccc}d_{0}&d_{2}&\cdots&d_{2n}\\ d_{1}&d_{3}&\cdots&d_{2n+1}\\ \vdots&\vdots&&\vdots\\ d_{n-1}&d_{n+1}&\cdots&d_{3n-1}\\ \framebox{d_{n}}&d_{n+2}&\cdots&d_{3n}\end{array}\right|^{-1}\left|\begin{array} []{cccc}d_{2}&\cdots&d_{2n}&d_{2n+2}\\ d_{3}&\cdots&d_{2n+1}&d_{2n+3}\\ \vdots&&\vdots&\vdots\\ d_{n+1}&\cdots&d_{3n-1}&d_{3n+1}\\ d_{n+2}&\cdots&d_{3n}&\framebox{d_{3n+2}}\end{array}\right|.\]
Moreover, by applying the homological relation (A.3), this proof is complete.
We note that proving equations (4.12b) and (4.12c), is equivalent to verifying
\[\left(H_{n}^{(\ell+1)}\right)^{-1}\partial_{t}H_{n}^{(\ell+1)}-\left(H_{n}^{( \ell)}\right)^{-1}\partial_{t}H_{n}^{(\ell)}=\left(H_{n}^{(\ell+1)}\right)^{- 1}H_{n+1}^{(\ell)}-\left(H_{n-1}^{(\ell+1)}\right)^{-1}H_{n}^{(\ell)},\quad \ell=0,1.\]
By using (2.35) and (2.32), it is known that we only need to prove the following proposition.
**Proposition 4.10**.: _It holds that_
\[\tilde{\eta}_{n+1,n}^{(\ell)}-\tilde{\eta}_{n+1,n}^{(\ell+1)}=\left(\tilde{H} _{n}^{(\ell+1)}\right)^{-1}\tilde{H}_{n+1}^{(\ell)},\quad\ell=0,1.\]
Proof.: This proof is based on the observation that
\[\left|\begin{array}{ccccc}d_{\ell}&d_{\ell+2}&\cdots&d_{\ell+2n}&d_{\ell+2n +2}&0\\ d_{\ell+1}&d_{\ell+3}&\cdots&d_{\ell+2n+1}&d_{\ell+2n+3}&0\\ \vdots&\vdots&&\vdots&\vdots&\vdots\\ d_{\ell+n}&d_{\ell+n+2}&\cdots&d_{\ell+3n}&d_{\ell+3n+2}&0\\ d_{\ell+n+1}&d_{\ell+n+3}&\cdots&d_{\ell+3n+1}&d_{\ell+3n+3}&\mathbb{I}_{p}\\ 0&0&\cdots&\mathbb{I}_{p}&0&\framebox{0}\end{array}\right|=-\tilde{\eta}_{n+1, n}^{(\ell)}\left(\tilde{H}_{n+1}^{(\ell)}\right)^{-1}\] \[=-\left(\tilde{H}_{n}^{(\ell+1)}\right)^{-1}-\tilde{\eta}_{n+1, n}^{(\ell+1)}\left(\tilde{H}_{n+1}^{(\ell)}\right)^{-1},\]
where two different non-commutative Jacobi identities are applied to the left-hand side quasi-determinants. The first equality is the application of non-commutative Jacobi identity (A.1) to \((n+1,n+2)\)-rows and \((n+1,n+2)\)-columns while the second equality is obtained by using (A.1) to \((1,n+2)\)-rows and \((n+1,n+2)\)-columns.
### Backlund transformation for bi-graded Toda lattices and fractional Volterra hierarchy
This part is devoted to finding the Backlund transformation for the case where \(\theta=b/a\in\mathbb{Q}_{+}\) with \(a,b\in\mathbb{Z}_{+}\). Consider the moment reduction condition
\[\langle x^{i}\mathbb{I}_{p},x^{j}\mathbb{I}_{p}\rangle_{\frac{n}{a}}=m_{i+j \frac{n}{a}}:=\left\{\begin{array}{ll}d_{\frac{i\theta+jh}{a(a+b)}},&ia+jb \mod a+b=0,\\ 0,&ia+jb\mod a+b\neq 0.\end{array}\right. \tag{4.14}\]
By a similar manner to the proof of Prop. 4.3, we claim the following proposition.
**Proposition 4.11**.: _Under the moment reduction condition (4.14), we have_
\[H_{n(a+b)+\ell}=\left|\begin{array}{ccccc}d_{\frac{\ell}{a}}&d_{ \frac{\ell}{a}+\theta}&\cdots&d_{\frac{\ell}{a}+n\theta}\\ d_{\frac{\ell}{a}+1}&d_{\frac{\ell}{a}+1+\theta}&\cdots&d_{\frac{\ell}{a}+1+n \theta}\\ \vdots&\vdots&&\vdots\\ d_{\frac{\ell}{a}+n}&d_{\frac{\ell}{a}+n+\theta}&\cdots&\framebox{$d_{\frac{ \ell}{a}+n+n\theta}$}\end{array}\right|:=\tilde{H}_{n}^{(\ell)},\quad\ell=0,1, \cdots,a+b-1,\]
_and for the polynomials, we have_
\[P_{n(a+b)+\ell}(x)=x^{\ell}\tilde{P}_{n}^{(\ell)}(x^{a+b}),\quad Q _{n(a+b)+\ell}(x)=x^{\ell}\tilde{Q}_{n}^{(\ell)}(x^{a+b}),\quad\ell=0,1,\cdots,a+b-1,\]
_where_
\[\tilde{P}_{n}^{(\ell)}(x)=\left|\begin{array}{ccccc}d_{\frac{ \ell}{a}}&d_{\frac{\ell}{a}+\theta}&\cdots&d_{\frac{\ell}{a}+(n-1)\theta}& \mathbb{I}_{p}\\ d_{\frac{\ell}{a}+1}&d_{\frac{\ell}{a}+1+\theta}&\cdots&d_{\frac{\ell}{a}+1+(n -1)\theta}&x^{a+b}\mathbb{I}_{p}\\ \vdots&\vdots&&\vdots&\vdots\\ d_{\frac{\ell}{a}+n}&d_{\frac{\ell}{a}+n+\theta}&\cdots&d_{\frac{\ell}{a}+n+(n -1)\theta}&\framebox{$x^{n(a+b)}\mathbb{I}_{p}$}\end{array}\right|,\]
_and_
\[\tilde{Q}_{n}^{(\ell)}(x)=\left|\begin{array}{ccccc}d_{\frac{ \ell}{a}}&d_{\frac{\ell}{a}+\theta}&\cdots&d_{\frac{\ell}{a}+(n-1)\theta}&d_{ \frac{\ell}{a}+n\theta}\\ d_{\frac{\ell}{a}+1}&d_{\frac{\ell}{a}+1+\theta}&\cdots&d_{\frac{\ell}{a}+1+(n -1)\theta}&d_{\frac{\ell}{a}+1+n\theta}\\ \vdots&\vdots&&\vdots&\vdots\\ d_{\frac{\ell}{a}+n-1}&d_{\frac{\ell}{a}+n-1+\theta}&\cdots&d_{\frac{\ell}{a}+n -1+(n-1)\theta}&d_{\frac{\ell}{a}+n-1+n\theta}\\ \mathbb{I}_{p}&x^{a+b}\mathbb{I}_{p}&\cdots&x^{(n-1)(a+b)}\mathbb{I}_{p}& \framebox{$x^{n(a+b)}\mathbb{I}_{p}$}\end{array}\right|.\]
Moreover, orthogonal relations under this moment reduction condition can be written as
\[\langle P_{n(a+b)+\ell}(x),Q_{m(a+b)+k}(x)\rangle_{\theta}= \tilde{H}_{n}^{(\ell)}\delta_{n,m}\delta_{\ell,k}. \tag{4.15}\]
In analogy to (4.3), we know that there is a graded polynomial space
\[\mathbb{R}^{p\times p}[x]=\mathbb{R}_{0}^{p\times p}[x]\oplus \mathbb{R}_{1}^{p\times p}[x]\oplus\cdots\oplus\mathbb{R}_{a+b-1}^{p\times p}[x],\]
and
\[P_{n(a+b)+\ell}(x)\in\mathrm{span}\{x^{\ell}\mathbb{I}_{p},x^{ \ell+a+b}\mathbb{I}_{p},\cdots,x^{\ell+n(a+b)}\mathbb{I}_{p}\}.\]
According to the quasi-symmetry property (3.3), we have the following recurrence relations.
**Proposition 4.12**.: _For monic bi-orthogonal polynomials \(\{P_{n}(x)\}_{n\in\mathbb{N}}\) satisfying the orthogonal relation (4.15), we have recurrence relations_
\[x^{b}P_{n(a+b)+\ell}(x)=P_{n(a+b)+b+\ell}(x)+\xi_{n,\ell}P_{(n-1) (a+b)+b+\ell}(x),\quad\ell=0,\cdots,a-1\]
_with \(\xi_{n,\ell}=\tilde{H}_{n}^{(\ell)}\left(\tilde{H}_{n-1}^{(b+\ell)}\right)^{-1}\), and_
\[x^{b}P_{n(a+b)+\ell}(x)=P_{(n+1)(a+b)+\ell-a}(x)+\xi_{n,\ell}P_{n (a+b)+\ell-a}(x),\quad\ell=a,\cdots,a+b\]
_with \(\xi_{n,\ell}=\tilde{H}_{n}^{(\ell)}\left(\tilde{H}_{n}^{(\ell-a)}\right)^{-1}\). Similarly the \(\{Q_{n}(x)\}_{n\in\mathbb{N}}\), satisfy the recurrence relations_
\[x^{a}Q_{n(a+b)+\ell}(x)=Q_{n(a+b)+a+\ell}(x)+\eta_{n,\ell}Q_{(n- 1)(a+b)+a+\ell}(x),\quad\ell=0,\cdots,b-1\]
_with \(\eta_{n,\ell}^{\top}=\left(\tilde{H}_{n-1}^{(a+\ell)}\right)^{-1}\tilde{H}_{n}^{( \ell)}\), and_
\[x^{a}Q_{n(a+b)+\ell}(x)=Q_{(n+1)(a+b)+\ell-b}(x)+\eta_{n,\ell}Q_{n(a+b)+\ell-b}( x),\quad\ell=b,\cdots,a+b\]
_with \(\eta_{n,\ell}^{\top}=\left(\tilde{H}_{n}^{(\ell-b)}\right)^{-1}\tilde{H}_{n}^ {(\ell)}\)._
If we denote \(\Phi=(P_{0}(x),P_{1}(x),\cdots)\) and \(\Psi=(Q_{0}(x),Q_{1}(x),\cdots)\), then the corresponding recurrence relations can be written into spectral problem form
\[x^{b}\Phi =(\Lambda^{b}+\mathcal{A}\Lambda^{-a})\Phi,\quad\mathcal{A}= \operatorname{diag}(\xi_{0,0},\cdots,\xi_{0,a+b},\xi_{1,0},\cdots,\xi_{1,a+b}, \cdots),\] \[x^{a}\Psi =(\Lambda^{a}+\mathcal{B}\Lambda^{-b})\Psi,\quad\mathcal{B}= \operatorname{diag}(\eta_{0,0},\cdots,\eta_{0,a+b},\eta_{1,0},\cdots,\eta_{1,a +b},\cdots).\]
Such a spectral problem corresponds to the so-called fractional Volterra hierarchy, which was studied in the cubic Hodge integrals and integrable systems in [49].
In particular, if we consider \(t_{kb(a+b)}\)-flows, then we obtain the corresponding integrable lattices
\[\partial_{t_{kb(a+b)}}\mathcal{L}=\left[\mathcal{L},\left(\mathcal{L}^{k(a+b) }\right)_{<0}\right],\quad\partial_{t_{kb(a+b)}}\mathcal{M}=\left[\mathcal{M},\left(\mathcal{M}^{k(a+b)}\right)_{<0}\right],\]
where \(\mathcal{L}=\Lambda^{b}+\mathcal{A}\Lambda^{-a}\) and \(\mathcal{M}=\Lambda^{a}+\mathcal{B}\Lambda^{-b}\).
## Acknowledgement
This work is partially funded by grants (NSFC12101432, NSFC12175155). SHL would like to thank Dr. Jesper Ipsen for fruitful discussions on the Muttalib-Borodin model and fractional powers of bi-orthogonal functions. SHL would also like to thank Prof. Di Yang for sharing his ideas on the bigraded Toda hierarchy and applications in Frobenius manifolds.
## Appendix A
### Basic quasi-determinant identities
Quasi-determinants were first introduced by Gelfand and Retakh in the early 1990s for a matrix with non-commutative entries [26]. Since this paper deals with matrix-valued orthogonal polynomials and non-commutative integrable systems by using quasi-determinants, we give a brief introduction to quasi-determinants as well as the basic properties we used in this paper. For a detailed reference, please see [26, 27, 42].
**Definition A.1**.: _Let \(A\) be an \(n\times n\) matrix over a ring \(\mathcal{R}\). For \(i,j=1,2,\ldots,n\), let \(r_{i}^{j}\) be the \(i\)-th row of \(A\) without the \(j\)-th entry, \(c_{j}^{i}\) be the \(j\)-th column without the \(i\)-th entry, and \(A^{i,j}\) be the submatrix of \(A\) without the \(i\)-th row and \(j\)-th column of \(A\). Assume that \(A^{i,j}\) is invertible. Then there are \(n^{2}\) quasi-determinants of \(A\), denoted as \(|A|_{i,j}\) for \(1\leq i,j\leq n\), as follows_
\[|A|_{i,j}=a_{i,j}-r_{i}^{j}\left(A^{i,j}\right)^{-1}c_{j}^{i},\]
_where \(a_{i,j}\) is the \((i,j)\)-th entry of \(A\). For convenience, in this paper, we denote_
\[|A|_{i,j}=\left|\begin{array}{cc}A^{i,j}&c_{j}^{i}\\ r_{i}^{j}&\boxed{a_{ij}}\end{array}\right|.\]
Since the concept of quasi-determinant was proposed, it has been used in different branches of mathematics, such as representation theory, combinatorics, non-commutative geometry and so on. For self-consistency, we list below, several basic properties of quasi-determinants which we have used in this article.
* Quasi-determinants can be used to solve linear systems with non-commutative coefficients. ([27, Thm. 1.6.1]) Let \(A=(a_{i,j})\) be an \(n\times n\) matrix over a ring \(\mathcal{R}\). Assume that all the quasi-determinants \(|A|_{i,j}\) are defined and invertible. Then \[\left\{\begin{array}{c}a_{1,1}x_{1}+\cdots+a_{1,n}x_{n}=\xi_{1}\\ \vdots\\ a_{n,1}x_{1}+\cdots+a_{n,n}x_{n}=\xi_{n}\end{array}\right.\] has a solution \(x_{i}\in\mathcal{R}\) if and only if \[x_{i}=\sum_{j=1}^{n}|A|_{j,i}^{-1}\xi_{j}.\]
* Invariance of quasi-determinants under elementary row or column operations. ([42, Prop 2.2]) A permutation of the rows or columns of a quasi-determinant does not change its value. For example, \[\left|\begin{array}{ccc}A&B&C\\ D&f&g\\ E&h&\framebox{i}\end{array}\right|=\left|\begin{array}{ccc}B&A&C\\ f&D&g\\ h&E&\framebox{i}\end{array}\right|=\left|\begin{array}{ccc}A&B&C\\ E&h&\framebox{i}\\ D&f&g\end{array}\right|.\] Moreover, this proposition could be written in a general form ([31, Eq. 8]) \[\left|\left(\begin{array}{cc}F&0\\ E&h\end{array}\right)\left(\begin{array}{cc}A&B\\ D&f\end{array}\right)\right|_{n,n}=h\left|\begin{array}{cc}A&B\\ D&\framebox{f}\end{array}\right|.\]
* Equivalent conditions for a zero quasi-determinant. ([27, Prop. 1.4.6]) The following statements are equivalent if the quasi-determinant \(|A|_{ij}\) is defined. (i) \(|A|_{ij}\)=0; (ii) The \(i\)-th row of the matrix \(A\) is a left linear combination of the other rows of \(A\). (iii) The \(j\)-th column of the matrix \(A\) is a right linear combination of the other columns of \(A\).
* Non-commutative Jacobi identity. There are several identities for quasi-determinants, as an analogy of Sylvester identity for determinants. For a general Sylvester's identity for quasi-determinants, please refer to [26]. In this article, we mainly make use of the simplest one, which is called the non-commutative Jacobi identity [31] \[\left|\begin{array}{ccc}A&B&C\\ D&f&g\\ E&h&\framebox{i}\end{array}\right|=\left|\begin{array}{ccc}A&C\\ E&\framebox{i}\end{array}\right|-\left|\begin{array}{ccc}A&B\\ E&\framebox{h}\end{array}\right|\left|\begin{array}{ccc}A&B\\ D&\framebox{f}\end{array}\right|^{-1}\left|\begin{array}{cc}A&C\\ D&\framebox{g}\end{array}\right|.\] According to Prop. A.3, it has the following alternative form \[\left|\begin{array}{ccc}B&A&C\\ f&D&g\\ h&E&\framebox{i}\end{array}\right|=\left|\begin{array}{ccc}A&C\\ E&\framebox{i}\end{array}\right|-\left|\begin{array}{ccc}B&A\\ \framebox{h}\end{array}\right|E\left|\left|\begin{array}{ccc}B&A\\ \framebox{f}\end{array}\right|D\right|^{-1}\left|\begin{array}{cc}A&C\\ D&\framebox{g}\end{array}\right|.\] (A.2)
* Homological relations in terms of quasi-Plucker coordinates. Given a matrix \(A\) with \((n+k)\) rows and \(n\) columns. \(A_{i}\) is the \(i\)-th row of \(A\). \(A_{I}\) is a submatrix of \(A\) having rows with indices in \(I\), where \(I\) is a subset of \(\{1,2,\ldots,n+k\}\). Denote \(A_{\{1,2,\ldots,n+k\}\setminus\{i\}}\) by \(A_{i}\). Given \(i,j\in\{1,2,\ldots,n+k\}\) and the subset \(I\), where the number of entries of \(I\) is \(\#I=(n-1)\) and \(j\notin I\). Then the (right) quasi-Plucker coordinates are given by [26, 31] \[r_{ij}^{I}=-\left|\begin{array}{cc}A_{I}&0\\ A_{i}&\framebox{0}\\ A_{j}&1\end{array}\right|.\] By using quasi-Plucker coordinates, one could state the following homological relations \[\left|\begin{array}{ccc}A&B&C\\ D&f&g\\ E&\framebox{h}&i\end{array}\right|=\left|\begin{array}{ccc}A&B&C\\ D&f&g\\ E&h&\framebox{i}\end{array}\right|\left|\begin{array}{ccc}A&B&C\\ D&f&g\\ 0&\framebox{0}&1\end{array}\right|,\] (A.3) \[\left|\begin{array}{ccc}A&B&C\\ D&f&\framebox{g}\\ E&h&\framebox{i}\end{array}\right|=\left|\begin{array}{ccc}A&B&0\\ D&f&\framebox{0}\\ E&h&1\end{array}\right|\left|\begin{array}{ccc}A&B&C\\ D&f&g\\ E&h&\framebox{i}\end{array}\right|,\] (A.4) which were used in this article.
* Derivatives of general quasi-determinants. Let \(A\), \(B\), \(C\) and \(d\) be functions of \(t\), then \[\left|\begin{array}{cc}A&B\\ C&\framebox{d}\end{array}\right|^{\prime}=d^{\prime}-C^{\prime}A^{-1}B-CA^{-1} B^{\prime}+CA^{-1}A^{\prime}A^{-1}B,\] (A.5) where prime denotes the derivative with respect to \(t\). In particular, since we mainly consider Wronski quasi-determinants in this article, we have the following Wronskian-type derivative formula [31, eqs. 22, 23] \[\left|\begin{array}{cc}A&B\\ C&\framebox{d}\end{array}\right|^{\prime}=\left|\begin{array}{cc}A&B\\ C^{\prime}&\framebox{d^{\prime}}\end{array}\right|+\sum_{j=1}^{n}\left| \begin{array}{cc}A&e_{j}^{\top}\\ C&\framebox{0}\end{array}\right|\left|\begin{array}{cc}A&B\\ \left(A^{j}\right)^{\prime}&\framebox{\left(B^{j}\right)^{\prime}}\end{array} \right|,\] (A.6) and \[\left|\begin{array}{cc}A&B\\ C&\framebox{d}\end{array}\right|^{\prime}=\left|\begin{array}{cc}A&B^{ \prime}\\ C&\framebox{d^{\prime}}\end{array}\right|+\sum_{j=1}^{n}\left|\begin{array}{ cc}A&\left(A_{j}\right)^{\prime}\\ C&\framebox{\left(C_{j}\right)^{\prime}}\end{array}\right|\left|\begin{array}[] {cc}A&B\\ e_{j}&\framebox{0}\end{array}\right|,\] (A.7) where \(e_{j}\) is a block unit vector whose \(j\)-th position is the unit element, and \(A^{j}\) (respectively \(A_{j}\)) is the \(j\)-th row (respectively column) of \(A\).
|
2310.02667 | Retardation Effects in Atom-Wall Interactions | The onset of retardation effects in atom-wall interactions is studied. It is
shown that the transition range from the 1/z^3 short-range (van der Waals)
interaction to the 1/z^4 long-range (Casimir) retarded interaction critically
depends on the atomic properties and on the dielectric function of the
material. For simple non-alkali atoms (e.g., ground-state hydrogen and
ground-state helium) interacting with typical dielectric materials such as
intrinsic silicon, the transition to the retarded regime is shown to proceed at
a distance of about 10 nm (200 Bohr radii). This is much shorter than typical
characteristic absorption wavelengths of solids. Larger transition regimes are
obtained for atoms with a large static polarizability such as metastable
helium. We present a simple estimate for the critical distance,
z_cr=137*(\alpha(0)/Z)^(1/2) atomic units, where alpha(0) is the static
polarizability (expressed in atomic units) and Z is the number of electrons of
the atom. | T. Das, C. A. Ullrich, U. D. Jentschura | 2023-10-04T09:07:08Z | http://arxiv.org/abs/2310.02667v2 | # Retardation Effects in Atom-Wall Interactions
###### Abstract
The onset of retardation effects in atom-wall interactions is studied. It is shown that the transition range from the \(1/z^{3}\) short-range (van der Waals) interaction to the \(1/z^{4}\) long-range (Casimir) retarded interaction critically depends on the atomic properties and on the dielectric function of the material. For simple non-alkali atoms (e.g., ground-state hydrogen and ground-state helium) interacting with typical dielectric materials such as intrinsic silicon, the transition to the retarded regime is shown to proceed at a distance of about \(10\,\mathrm{nm}\) (\(200\) Bohr radii). This is much shorter than typical characteristic absorption wavelengths of solids. Larger transition regimes are obtained for atoms with a large static polarizability such as metastable helium. We present a simple estimate, \(z_{\mathrm{cr}}=137\,\sqrt{\alpha(0)/Z}\) atomic units, where \(\alpha(0)\) is the static polarizability (expressed in atomic units) and \(Z\) is the number of electrons of the atom.
###### Contents
* I Introduction
* II Hydrogen and Helium on Silicon and Gold
* III Other Elements
* IV Conclusions
* A Definition of Distance Ranges
* B Intrinsic Silicon
* C TRK Sum Rule for Metastable States
## I Introduction
Dispersion forces between spatially well-separated microscopic systems are important for phenomena such as atom-surface scattering, physisorption, the structure of soft matter and 2D layered materials, and many applications [1; 2]. In this context, it is well known that atom-atom interactions undergo a transition from a short-range van der Waals (\(1/R^{6}\)) to a retarded long-range (\(1/R^{7}\)) behavior, where \(R\) is the interatomic distance (see Ref. [3] and Chaps. 4 and 11 of Ref. [4]). For atom-wall interactions, the asymptotic behavior changes from \(1/z^{3}\) for short-range to \(1/z^{4}\) in the long-range limit (see, e.g., Ref. [5]), due to a process called retardation. The interpolating formula has been given in Eqs. (18) and (21) of Ref. [6] (see also Ref. [7]). However, the precise nature of this transition is less well characterized in the literature. From Fig. 3 of Ref. [6], it is evident that the interaction of \({}^{87}\)Rb atoms with a sapphire surface starts to substantially deviate from the \(1/z^{3}\) short-range asymptotics in the range \(z\sim 30\,\mathrm{nm}\approx 600\,a_{0}\), where \(a_{0}\) is the Bohr radius. For the example of metastable helium interacting with a gold surface, estimates for the transition region to the retarded regime have been indicated in the range of \(z\leq 150\,\mathrm{nm}\approx 3000\,a_{0}\) in the text following Eq. (3) in Sec. III of Ref. [8], and in Sec. 16.3.4 and Sec. 16.4.2 of Ref. [9]. In this paper, we aim to provide clarity and give both simple estimates and precise numerical results that show the onset and spatial range of the transition regime between van der Waals and Casimir-Polder interactions. The dependence of the transition region on the atomic species and on the dielectric function of the surface is also studied.
Intuitively, we can understand the onset of retardation as follows: Atom-wall interactions happen due to the exchange of virtual photons. If an exchange photon picks up a nonnegligible phase (of order unity) on its way from the atom to the wall and back, retardation needs to be taken into account (Chap. 5 of Ref. [4]). The phase of a characteristic photon is given as \(\Delta\phi=k_{\mathrm{ch}}\,z\), where \(k_{\mathrm{ch}}\) is the wave vector corresponding to a characteristic resonance excitation of the atom (or solid). The condition \(\Delta\phi\sim 1\) leads to \(z\sim 1/k_{\mathrm{ch}}=\lambda_{\mathrm{ch}}/(2\pi)\), where \(\lambda_{\mathrm{ch}}\) is the characteristic wavelength. For simple atomic systems such as (atomic) hydrogen or helium (in their ground states), the characteristic excitation wavelength is \(\lambda_{\mathrm{ch}}=\hbar c/E_{h}\), where \(E_{h}=\alpha^{2}mc^{2}\) is the Hartree energy (where \(\alpha\) is the fine-structure constant, \(m\) is the electron mass, and \(c\) is the speed of light). Hence, _a priori_, we can expect retardation effects to become important when the atom-wall distance is of the order of a Hartree wavelength,
\[z\sim\lambda_{h}=\frac{\hbar c}{E_{h}}=\frac{a_{0}}{\alpha}=7.25\,\mathrm{nm}=1 37\,\mathrm{a.u.}\,, \tag{1}\]
where \(a_{0}\) is the Bohr radius, which is the unit of length in the atomic unit system (a.u.). We note that \(\lambda_{h}\) is, purely parametrically, of the same order as optical wavelengths, but typical optical wavelengths in the visible spectrum are longer than \(\lambda_{h}\); the UV spectrum ends at about \(400\,\mathrm{nm}\). Hence, one might ask whether or not large prefactors could shift the parametric estimate (1).
Here, we demonstrate that an extended distance scale for the nonretarded interaction may be observed for special atoms with an excessively large static polarizability, but that retardation sets in at much shorter length scales commensurate with Eq. (1) (in typical cases, about \(10\,\mathrm{nm}\approx 200\,\mathrm{a.u.}\)) for many simple atomic systems. For example, we demonstrate by explicit numerical calculations that the atom-wall interaction of ground-state helium atoms undergoes a transition to the retarded regime much earlier, at length scales commensurate with \(z\sim z_{h}\). Variations of the onset of the retarded regime with the atomic system are also discussed.
This paper is organized as follows: We discuss the interpolating formula for the transition from the short-range to the long-range regime in Sec. II, with a special emphasis on interactions of hydrogen and helium with a silicon surface. Other elements are discussed in Sec. III. Atomic units are used throughout unless indicated otherwise (\(\hbar=e=1\), \(\epsilon_{0}=1/(4\pi)\), \(c=1/\alpha\), where \(\alpha\) is the fine-structure constant). We provide mini-reviews of applicable distance ranges in Appendix A and of the dielectric function of intrinsic silicon in Appendix B. A derivation of the Thomas-Reiche-Kuhn (TRK) sum rule for metastable reference states [10; 11] is presented in Appendix C.
## II Hydrogen and helium on silicon and gold
We start from an interpolating expression for the atom-wall interaction, which reduces to the \(1/z^{3}\) short-range interaction for small atom-wall distance and to the \(1/z^{4}\) long-range interaction for large distance. The relevant formula is given in Eqs. (18) and (21) of Ref. [6],
\[\mathcal{E}(z)= -\,\frac{\alpha^{3}}{2\pi}\int\limits_{0}^{\infty}\mathrm{d} \omega\,\omega^{3}\,\alpha(\mathrm{i}\omega)\int\limits_{1}^{\infty}\mathrm{d }p\mathrm{e}^{-2\,\alpha\,p\,\omega\,z}H(\epsilon(\mathrm{i}\omega),p)\,, \tag{2}\]
where
\[H(\epsilon,p)=\frac{\sqrt{\epsilon-1+p^{2}}-p}{\sqrt{\epsilon-1+p^{2}}+p}+(1- 2p^{2})\frac{\sqrt{\epsilon-1+p^{2}}-p\,\epsilon}{\sqrt{\epsilon-1+p^{2}}+p\, \epsilon}. \tag{3}\]
Here, \(\alpha(\mathrm{i}\omega)\) is the dynamic (dipole) polarizability of the atom at imaginary driving frequency, and \(\epsilon(\mathrm{i}\omega)\) is the dielectric function of the solid at imaginary angular frequency. For the material of the solid (intrinsic silicon), we assume the interpolating model of the temperature-dependent dielectric function recently discussed in Ref. [12] for intrinsic silicon (with slight modifications). The parameters are reviewed in Appendix B. We also study gold, employing a simple plasma model for its dielectric function for definiteness, and a modified model discussed in Eq. (13.46) of Ref. [9].
In the current section, we focus on atomic hydrogen and helium. For hydrogen, we employ the following formula for the dipole polarizability in the non-recoil approximation (infinite nuclear mass), which is sufficient for the accuracy required in the current investigation,
\[\alpha_{\mathrm{H}}(\omega)=Q_{\mathrm{H}}(\omega)+Q_{\mathrm{H}}(-\omega)\,, \tag{4}\]
where
\[Q_{\mathrm{H}}(\omega)=\frac{1}{3}\,\left\langle 1S\left|\vec{r}\,\frac{1}{H- E_{1S}+\hbar\omega}\,\vec{r}\right|1S\right\rangle\,, \tag{5}\]
where \(E_{1S}\) is the ground-state energy of hydrogen, \(H\) is the Schrodinger-Coulomb Hamiltonian, and the scalar product is understood for the two position operators. According to Eq. (4.154) of Ref. [4], the dipole matrix element can be expressed as follows,
\[Q^{(\mathrm{H})}(\omega)=\frac{2t^{2}\,p(t)}{3\,(1-t)^{5}\,(1+t)^{4}}+\frac{25 6\,t^{9}\,f(t)}{3\,(1+t)^{5}\,(1-t)^{5}}\,, \tag{6}\]
where the photon energy is parameterized by the \(t\) variable, \(t=t(\omega)=(1+2\omega)^{-1/2}\). The polynomial \(p(t)\) incurred in Eq. (6) is
\[p(t)=3-3t-12t^{2}+12t^{3}+19t^{4}-19t^{5}-26t^{6}-38t^{7}\,. \tag{7}\]
The function \(f(t)\) is a complete hypergeometric function,
\[f(t)={}_{2}F_{1}(1,-t,1-t,\xi)\,, \tag{8}\]
and \(\xi=(1-t)^{2}/(1+t)^{2}\).
For helium we use an approach based on a fully correlated basis set, using exponential basis functions [13; 14; 15]. We employ a Lowdin decomposition of the overlap matrix (see Appendix J of Ref. [16]) and use extended-precision arithmetic in the julia language [17; 18; 19] in order to avoid loss of numerical precision in intermediate steps of the calculation. The HTQLS algorithm [20] is used to diagonalize the overlap matrix in arbitrary precision. The parameters of the basis set are chosen in a similar way as indicated in Eq. (18) of Ref. [21], with a quadratic dependence of the exponents of the basis functions in the exponential basis with the index of the function. This choice involves basis functions of the type \(\exp(-ar_{1}-br_{2}-cr_{12})\) with numerically large coefficients \(a\), \(b\) and \(c\), and consequently, with a steep exponential decay. It leads to an accurate representation of the cusp area \(r_{12}\approx 0\), where \(r_{12}\) is the electron-electron distance. With very modest computational effort, this approach reproduces other data [21; 22] for low-lying energy levels of helium, and for the static and dynamic polarizability of helium (within the nonrelativistic approximation), to better than one permille [23]. A similar approach is used for the calculation of the polarizability of the metastable \(2^{3}S_{1}\) state of helium. Details of the fully correlated helium polarizability calculation will be published elsewhere.
For the evaluation of the dynamic polarizability, two other approaches have been discussed in the literature, namely, the single-oscillator model [8; 25] (henceforth referred to by the acronym 1osc) and the few-oscillator
model [12] (henceforth referred to by the acronym fosc). The single-oscillator model, asymptotically matched to the static (\(\omega\to 0\)) and ultraviolet (\(\omega\to\infty\)) limits, reads as follows (in atomic units),
\[\alpha_{\rm 1osc}({\rm i}\omega)=\frac{Z}{\omega^{2}+Z/\alpha(0)}=\frac{Z}{ \omega^{2}+\omega_{\rm cr}^{2}}\;. \tag{9}\]
Here, \(\alpha(0)\) is the static polarizability, \(Z\) is the number of electrons, and the critical frequency is \(\omega_{\rm cr}=\sqrt{Z/\alpha(0)}\) (this frequency will be important for our considerations in Sec. III). The formula (9) has the correct static limit [\(\alpha({\rm i}\omega=0)=\alpha(0)\)]. The correct ultraviolet limit is also obtained in view of the asymptotic relation \(\alpha({\rm i}\omega)\to Z/\omega^{2}\), which fulfills the Thomas-Reiche-Kuhn (TRK) sum rule [10; 11]. This sum rule remains valid for metastable reference states (see Appendix C). Values of \(\alpha(0)\) have been tabulated in Ref. [26] for all elements with nuclear charge numbers \(1\leq Z\leq 120\). (Remark: It is also possible to match the single-oscillator model against the van der Waals coefficient of the atomic dimer system [27; 28], but in this case, one fails to fulfill the TRK sum rule in the ultraviolet region. Here, we use the functional form given in Eq. (9).)
As an intermediate between the exact calculation of the dynamic polarizability and the single-oscillator model, the few-oscillator model has recently been discussed in Appendix B of Ref. [12]. Let us assume that a finite number of oscillator strengths \(f_{n}\) are known (\(n\in\{1,\ldots,N\}\)), with corresponding resonance frequencies \(\omega_{n}\). The few-oscillator strength model reads as follows,
\[\alpha_{\rm fosc}({\rm i}\omega) = \sum_{n=1}^{N}\frac{f_{n}}{\omega+\omega_{n}^{2}}+\frac{1}{ \omega^{2}+\omega_{c}^{2}}\,\left(Z-\sum_{n=1}^{N}f_{n}\right)\,, \tag{10a}\] \[\left(\omega_{c}\right)^{2} = \frac{Z-\sum_{n=1}^{N}f_{n}}{\alpha(0)-\sum_{m=1}^{N}f_{m}/\omega _{m}^{2}}\;. \tag{10b}\]
Here, \(\omega_{c}\) describes the typical scale of virtual excitations
Figure 2: Dynamic (dipole) polarizability of (ground-state) atomic helium as a function of imaginary driving frequency. The values obtained from the exact approach based on a fully correlated basis set are compared with the oscillator-strength based model. Oscillator strengths for excited states from \(n=2\) to \(n=10\), and their corresponding transition energies are collected from Table 14 of Ref. [24] for the evaluation, which yields a maximum relative error of \(12.14\%\). For comparison, the single-oscillator model, Eq. (9), is also plotted.
Figure 3: Same as Figs. 1 and 2, but for metastable helium (\({\rm He}^{*}\)) in the \(2^{3}S_{1}\) reference state.
Figure 1: Dynamic (dipole) polarizability of atomic hydrogen, \(\alpha({\rm i}\omega)\), as a function of the imaginary driving frequency. The exact values obtained from Eq. (4) are compared with the few-oscillator-strength based model (fosc) described in Appendix B of Ref. [12]. The first 30 oscillator strengths and their corresponding transition energies are collected from Table 4 of Ref. [24] for the evaluation, which yields a maximum relative error of \(4.16\%\). For comparison, the single-oscillator model, Eq. (9), is also plotted.
into the continuum. One collects a number \(N\) of oscillator strengths [first term on the right-hand side of Eq. (10a)] and approximates the completion of the spectrum by including the second term on the right-hand side of Eq. (10a). The choice of the frequency \(\omega_{c}\) in Eq. (10b) ensures that the correct static limit \(\alpha(0)\) is reproduced. From tables (e.g., Ref. [24]), it is possible to collect at least \(N=8\) oscillator strengths to the lowest excited states for typical atomic species.
In Figs. 1 and 2, and Fig. 3 for metastable helium, we show that the oscillator-strength-based approach used in Ref. [12], which is enhanced by matching the static polarizability and the ultraviolet limits with exact results, yields results for the dynamic polarizability of hydrogen and helium which are in good agreement with the exact results. A comparison of the single-oscillator model to the few-oscillator-strength model illustrates the gradual improvement achieved by including more known oscillator strengths. Another observation is as follows: The presence of resonances due to transitions to other bound-state energy levels has a tendency to lower the curve of \(\alpha({\rm i}\omega)\) upon the inclusion of more bound-state resonances as compared to the single-oscillator model, i.e., one has \(\alpha_{\rm lose}({\rm i}\omega)>\alpha_{\rm lose}({\rm i}\omega)>\alpha({\rm i}\omega)\). Hence, the single-oscillator model tends to overestimate the polarizability at finite excitation frequencies, while reproducing the correct limit for very high frequencies.
In order to gauge the transitions from the short-range to the long-range regime, we use the effective "local" power coefficient
\[n_{\rm eff}=\frac{z}{V(z)}\frac{{\rm d}V(z)}{{\rm d}z}=\frac{{\rm d}\ln(|V(z)|) }{{\rm d}\ln(z)}\,. \tag{11}\]
It evaluates to exactly \(n\) when \(V(z)=V_{0}z^{n}\). By the logarithm of the potential, we understand the logarithm of the numerical value (reduced quantity) of the potential, expressed in atomic units.
The dependence of the effective exponent \(n_{\rm eff}\) on the atom-wall distance \(z\) is shown in Figs. 4 and 5 for the interaction of H, He and H\({}^{*}\) with silicon and gold, respectively. Let us define the break-down point \(z_{\rm br}\) for the short-range expansion to be the distance where the effective exponent \(n_{\rm eff}\) reaches the value \(n_{\rm eff}=-3.25\), which is \(25\%\) of the way between the asymptotic short-range value (\(n_{\rm eff}=-3\)) and the long-range value (\(n_{\rm eff}=-4\)). This definition, while arbitrary to some extent, captures the essence of the transition between the two regimes. In addition to the substantial deviation of the effective exponent \(n_{\rm eff}\) from the value \(n_{\rm eff}=-3\) at the break-down point, we have checked that the relative deviation of the atom-surface potential \(V(z)\) from the short-range estimate (19), parametrized by the function \(D(z)=[V(z)-(-C_{3}/z^{3})]/V(z)\), is at least \(35\,\%\) at \(z=z_{\rm br}\), further validating the sensibility of our definition.
From Fig. 4, one reads off the following values for interactions with intrinsic silicon,
\[z_{\rm br}({\rm H};{\rm Si}) \approx 203\,{\rm a.u.}\,, \tag{12a}\] \[z_{\rm br}({\rm He};{\rm Si}) \approx 126\,{\rm a.u.}\,,\] (12b) \[z_{\rm br}({\rm He}^{*};{\rm Si}) \approx 979\,{\rm a.u.}\,. \tag{12c}\]
When using the single-oscillator model, the values change
Figure 5: Breakdown of the short-range asymptotics for the atom-wall interaction for hydrogen interacting with gold, as described by the plasma model (14). Otherwise, the figure is analogous to Fig. 4.
Figure 4: Change in the effective exponent \(n_{\rm eff}\) for atom-wall interactions due to the transition from the short-range (van der Waals) to the long-range (Casimir) regime for hydrogen, ground-state helium and metastable helium, interacting with intrinsic silicon. To this end, the atom-wall potential is numerically evaluated and the effective exponent is calculated via Eq. (11) as a function of the atom-wall separation. The Clausius-Mossotti (CM) model described in Ref. [12] is used for the dielectric function of intrinsic silicon, with slightly modified parameters (see Table 1). The exact dynamic polarizability is used for all atoms.
into
\[z_{\rm br}({\rm H};{\rm losc};{\rm Si}) \approx 194\,{\rm a.u.}\,, \tag{13a}\] \[z_{\rm br}({\rm He};{\rm losc};{\rm Si}) \approx 117\,{\rm a.u.}\,,\] (13b) \[z_{\rm br}({\rm He}^{*};{\rm losc};{\rm Si}) \approx 674\,{\rm a.u.}\,. \tag{13c}\]
For the fosc model, the values of \(z_{\rm br}\) are in between the values for the exact polarizabilities and those for the 1osc model, namely, 200\(\,\)a.u., 121\(\,\)a.u., and 101\(\,\)a.u., respectively, for H, He, and He\({}^{*}\).
Another example is the calculation of \(z_{\rm br}\) for interactions with gold, where we use the plasma model,
\[\epsilon({\rm i}\omega)=1+\frac{\omega_{\rm pl}^{2}}{\omega^{2}}\,, \tag{14}\]
where \(\omega_{\rm pl}\) is the plasma frequency. For the plasma frequency \(\omega_{\rm pl}\), we use the same value as advocated in Ref. [9], namely, \(9\,{\rm eV}\). The dielectric function of the plasma model diverges in the limit \(\omega\to 0\), which implies that the long-range limit of the interaction with a gold surface is the same as for a perfect conductor (see also Ref. [7]). The dielectric function of gold, approximated by the plasma model, is strongly peaked for very low frequency; it constitutes a cursory approximation. Because of the strong emphasis on very low virtual photon frequencies, we can expect \(z_{\rm br}\) to be exceptionally large as compared to other materials (see also the discussion in Sec. III).
Figure 5 shows the breakdown of the short-range expansion for hydrogen, ground-state and metastable helium, for interactions with gold. One reads off the values
\[z_{\rm br}({\rm H};{\rm Au}) \approx 309\,{\rm a.u.}\,, \tag{15a}\] \[z_{\rm br}({\rm He};{\rm Au}) \approx 228\,{\rm a.u.}\,,\] (15b) \[z_{\rm br}({\rm He}^{*};{\rm Au}) \approx 1336\,{\rm a.u.}\,. \tag{15c}\]
For the single-oscillator model, one obtains
\[z_{\rm br}({\rm H};{\rm losc};{\rm Au}) \approx 297\,{\rm a.u.}\,, \tag{16a}\] \[z_{\rm br}({\rm He};{\rm losc};{\rm Au}) \approx 217\,{\rm a.u.}\,,\] (16b) \[z_{\rm br}({\rm He}^{*};{\rm losc};{\rm Au}) \approx 892\,{\rm a.u.}\,. \tag{16c}\]
For the fosc model, the values of \(z_{\rm br}\) are in between the values for the exact polarizabilities and those for the 1osc model, namely, 306\(\,\)a.u., 223\(\,\)a.u., and 1405\(\,\)a.u. respectively, for H, He, and He\({}^{*}\).
The break-down distance depends quite substantially on the atomic system. These observations raise the question of the general dependence of the breakdown of the short-range expansion on the atomic species, and on the properties of the solid.
We had already mentioned that the plasma model of gold leads to exceptionally large values of \(z_{\rm br}\). This can be ramified: For example, the modified plasma model given in Eq. (13.46) of Ref. [9] is less strongly peaked around very small \(\omega\) than the simple plasma model given in Eq. (14). Hence, we can expect smaller values of \(z_{\rm br}\). This is indeed confirmed. When using the exact polarizabilities and the modified plasma model given in Eq. (13.46) of Ref. [9], the results given in Eq. (15) change into 201\(\,\)a.u., 123\(\,\)a.u., and 1232\(\,\)a.u., respectively, for H, He and He\({}^{*}\).
## III Other elements
The question of the dependence of the break-down distance \(z_{\rm br}\) on the atomic species is made more urgent by the observation that more complex atoms with occupied inner shells typically have a much larger static polarizability [26], and much smaller typical excitation energies (at least to the first excited states, see Ref. [29] for a compilation). One might think that the smaller (lowest) excitation energies of more complex atoms could imply a much narrower functional form of the dynamic polarizability \(\alpha({\rm i}\omega)\) for more complex atoms, and hence, a drastic extension of the nonretarded \(1/z^{3}\) short-range regime. However, one could also counter-argue that more complex atoms also possess transitions to much higher excited states. Hence, one could argue that these higher-energy virtual transitions might lower the distance range for the onset of retardation.
The discussion of other atomic species is made easier by investigating the general structure of the atom-surface interaction integral given in Eq. (2). In order to estimate how far the nonretarded approximation is valid, let us start from the regime of not excessively large \(z\). In this case, the exponential suppression factor \(\exp(-\alpha\omega pz)\) is not very pronounced, and the dominant integration region comes from large \(p\). We expand \(H(\epsilon({\rm i}\omega),p)\) for large \(p\) with the result,
\[H(\epsilon({\rm i}\omega),p)\approx 2p^{2}\,\frac{\epsilon({\rm i}\omega)-1}{ \epsilon({\rm i}\omega)+1}\,, \tag{17}\]
commensurate with the leading term recorded in Eq. (22) of Ref. [30]. One then carries out the integral over \(p\) in Eq. (2) and obtains the approximate formula
\[{\cal E}(z)\approx\,-\,\frac{1}{4\pi z^{3}}\int\limits_{0}^{\infty}{\rm e}^{- 2\,\alpha\,\omega\,z}\alpha({\rm i}\omega)\,\frac{\epsilon({\rm i}\omega)-1}{ \epsilon({\rm i}\omega)+1}\,. \tag{18}\]
Now, if one can ignore the exponential suppression factor \(\exp(-2\alpha\omega z)\) over the entire characteristic \(\omega\) integration region, then one can approximate the interaction energy by the very simple expression
\[{\cal E}(z)\approx\,-\,\frac{1}{4\pi z^{3}}\int\limits_{0}^{\infty}\alpha({ \rm i}\omega)\,\frac{\epsilon({\rm i}\omega)-1}{\epsilon({\rm i}\omega)+1}=- \frac{C_{3}}{z^{3}}\,, \tag{19}\]
where \(C_{3}\) is defined in the obvious way. This is precisely the short-range asymptotic limit [in the expansion in powers of \(z\) and \(\ln(z)\)]) of the atom-surface interaction energy. The term given in Eq. (19) corresponds to
the expression \(-C_{3}/z^{3}\) where the leading short-range \(C_{3}\) coefficient is otherwise listed in Eq. (35) of Ref. [30] (it is called \(C_{30}\) in Ref. [30]) and in Eq. (16.24) of Ref. [9]. However, if one cannot ignore the exponential suppression (retardation) factor \(\exp(-2\,\alpha\,\omega\,z)\) over the relevant characteristic \(\omega\) integration region, then the short-range expansion breaks down, and the atom-surface interaction energy is no longer well approximated by Eq. (19). We can thus conclude that the nonretardation approximation is valid in the distance range
\[z\lesssim\frac{1}{\alpha\,\omega_{\rm ch}} \tag{20}\]
where \(\omega_{\rm ch}\) is the _largest_ characteristic frequency in the problem, i.e., either in the polarizability or in the dielectric function of the material.
The _largest_ characteristic excitation frequency will typically be obtained from the atom, not from the solid. Typical characteristic excitation energies for solids are in the range of a few eV, as is evident from the extensive tabulation of dielectric functions in Ref. [31]. For conductors whose dielectric function is described by the plasma model given in Eq. (14), the characteristic absorption frequency is zero. This is evident if one writes the expression for the plasma model dielectric function as \(1+\omega_{\rm pl}^{2}/(\omega^{2}+\omega_{0}^{2})\) with \(\omega_{0}=0\).
Let us use the single-oscillator model and define the _critical_ distance \(z_{\rm cr}\) for the onset of retardation effects to be the scale where the condition (20) breaks down. This means that the critical angular frequency and its corresponding distance scale (in atomic units) is
\[\omega_{\rm cr}=\sqrt{\frac{Z}{\alpha(0)}}\,{\rm a.u.}\,,\qquad z_{\rm cr}=13 7\,\sqrt{\frac{\alpha(0)}{Z}}\,{\rm a.u.}. \tag{21}\]
The estimates from Eq. (21) read as follows (using data from Ref. [26]),
\[z_{\rm cr}({\rm H}) =137\,\sqrt{\frac{4.5}{1}}\,{\rm a.u.}=290\,{\rm a.u.}\,, \tag{22a}\] \[z_{\rm cr}({\rm He}) =137\sqrt{\frac{1.383}{2}}\,{\rm a.u.}=113\,{\rm a.u.}\,,\] (22b) \[z_{\rm cr}({\rm He}^{*}) =137\sqrt{\frac{316}{2}}\,{\rm a.u.}=1720\,{\rm a.u.}\,. \tag{22c}\]
The correspondence (in terms of the order-of-magnitude) \(z_{\rm cr}({\rm H})\sim z_{\rm br}({\rm H})\), \(z_{\rm cr}({\rm He})\sim z_{\rm br}({\rm He})\), and \(z_{\rm cr}({\rm He}^{*})\sim z_{\rm br}({\rm He}^{*})\), is obvious. In the latter case, the order-of-magnitude approximation \(z_{\rm cr}({\rm He}^{*})\) is larger than \(z_{\rm br}({\rm He}^{*})\) by about \(27\,\%\), which is perfectly acceptable given the cursory nature of the approximation. A table of values of \(z_{\rm cr}\) for all elements with \(1\leq Z\leq 120\) is presented in Fig. 6. Peak values are observed for alkali metals, which typically display a very large static polarizability.
For the case studied here, we have \(z_{\rm cr}>z_{\rm br}\), i.e., the onset of retardation happens a bit earlier than predicted by the approximation \(z\approx z_{\rm cr}\). At \(z=z_{\rm cr}\), we have
\[\exp(-2\alpha\omega_{\rm cr}\,z_{\rm cr})=[\exp(1)]^{-2}=0.135\,, \tag{23}\]
indicating that, for typical excitation frequencies \(\omega\sim\omega_{\rm cr}\) the exponential suppression of the integrand in Eq. (18) is already very substantial at \(z=z_{\rm cr}\). This fact supports the observation that \(z_{\rm br}<z_{\rm cr}\). Still, the approximation \(z_{\rm br}\approx z_{\rm cr}\) remains a good, albeit somewhat cursory, estimate for the transition to the retarded regime.
## IV Conclusions
The world of atomic physics is full of surprises in terms of nonparametric prefactors. An example is the variation of the static polarizability with the atomic species (element number). Parametrically, the static polarizabilities of all atoms are of order \(e^{2}a_{0}^{2}/E_{h}\), where \(e\) is the electron charge, \(a_{0}\) is the Bohr radius, and \(E_{h}\) is the Hartree energy. However, large nonparametric prefactors multiply this estimate, two extreme cases being metastable helium with a static polarizability of \(315\,{\rm a.u.}\), and lithium with a static polarizability of \(164\,{\rm a.u.}\).
As another example in a completely different context, the so-called relativistic Bethe logarithm for \(S\) states of hydrogen, parametrically, was estimated to be of order \(\alpha(Z\alpha)^{4}E_{h}\). Surprising nonparametric prefactors \(\approx-31\) were seen to multiply the parametric estimate, shifting theoretical predictions for the Lamb shift in atomic hydrogen considerably [32; 33].
Within the context of atom-wall interactions, we can expect the nonretarded regime to extend furthest for those atoms with the highest static polarizabilities at the lowest nuclear charge numbers. Indeed, we have demonstrated that the onset of retardation in atom-wall interactions depends quite significantly on the atomic species,
Figure 6: Critical distance \(z_{\rm cr}\) versus nuclear charge number \(Z\), for all elements with \(1\leq Z\leq 120\) given in Ref. [26].
even if, parametrically, the estimate (1) remains valid for all. For simple atomic systems such as hydrogen and ground-state helium, retardation effects set in already at distances of less than \(10\,\mathrm{n}\mathrm{m}\approx 200\,\mathrm{a.u.}\) in atom-wall interactions. This result is consistent with remarks in the text following Eq. (16) of Ref. [34]. The breakdown of the short-range \(1/z^{3}\) approximation happens at distance scales indicated in Eqs. (12), (15) and (22). An explicit estimate, \(z_{\mathrm{cr}}=137\,(\alpha(0)/Z)^{1/2}\,\mathrm{a.u.}\), was given in Eq. (21).
With the exception of lithium (and metastable helium), the critical distance for the onset of retardation effects does not exceed \(600\,\mathrm{a.u.}\), as shown in Fig. 6. An exceptional example is provided by metastable helium where \(z_{\mathrm{cr}}\) assumes the exceptionally large value of \(1720\,\mathrm{a.u.}\), in view of an exceptionally large static polarizability of \(315.63\) atomic units. However, the actual breakdown distance for metastable helium is smaller, namely, \(979\,\mathrm{a.u.}\) and \(1336\,\mathrm{a.u.}\) for interactions with silicon and gold, respectively (the latter being described by a simple plasma model).
We can thus confirm that comparatively large \(z_{\mathrm{br}}\) can be expected for metastable helium, especially for interactions with very good conductors, an extreme example being provided by the plasma model (14) for gold. However, even in this extreme case, the nonretarded regime is limited to about \(1350\,\mathrm{a.u.}\). We conclude that the short-range approximation, for atom-wall interactions, breaks down much earlier than for solid-solid interactions [9], and provide estimate for all elements in the periodic table (see Fig. 6).
###### Acknowledgements.
Helpful conversations with M. DeKieviet and C. Moore are gratefully acknowledged. T. D. and U. D. J. were supported by NSF grant PHY-2110294. C. A. U. acknowledges support from NSF grant DMR-2149082.
## Appendix A Definition of Distance Ranges
The designations of "short-range" and "long-range" asymptotics crucially depend on the point of view. Because the designations are not always consistent, we here present a mini-review of this issue.
Zaremba and Kohn [35] define "close range" to be the range of a few atomic radii, commensurate with their aim to study the transition from physisorption to the van der Waals regime; the latter is understood as the "long-range regime" in Ref. [35].
On the other hand, Antezza _et al._[6] define the "long-range regime" as the limit of very large separations far beyond the validity of van der Waals and Casimir-Polder interactions. This limit is characterized by a very-long range nonretarded tail proportional to \(1/z^{3}\), which is due to effects described by thermal field theory (contributions of the first Matsubara frequency) and vanishes at zero temperature. The numerical coefficient of this extreme \(1/z^{3}\) long-range tail is very small (see Eq. (17) of Ref. [6]) and we do not consider it here.
Thus, from the viewpoint of Ref. [6], the extreme short range is the regime of less than ten atomic radii, where the discretization of the crystal surface starts to play a role. The short-range regime is the \(1/z^{3}\) nonretarded (van der Waals) range. The \(1/z^{4}\) Casimir-Polder interactions then define the long range regime. This is the viewpoint we also adopt in the present paper.
For completeness, let us also say a few words about the limit of very close approach to the surface. Zaremba and Kohn [35] showed that in this limit the van der Waals interaction becomes \(-C_{3}/(z-z_{0})^{3}\), where \(z_{0}\) is a parameter, of order unity in atomic units, which can be calculated separately or obtained experimentally. For dielectric solids, the position of the reference plane is well approximated by \(z_{0}\approx d/2\), where \(d\) is the distance between layers of the substrate [36; 34].
In other investigations [36; 37], van-der-Waals corrected density-functional theory (DFT) is used in order to calculate the adsorption energies of atoms on surfaces (e.g. rare gases on noble metals). The quadrupole correction is routinely taken into account in this procedure (see Table III of Ref. [36] and Ref. [7]), and the van der Waals energy is added to the contact energy at the equilibrium position of the atom in the immediate vicinity of the surface, the latter being calculated with the use of DFT (see Table III of Ref. [36], and also Ref. [38] for a general discussion of van-der-Waals corrected DFT). This procedure is consistent with remarks made after Eq. (39) of Ref. [35], where the authors stress that their approach should be valid for the region of physisorption (i.e., for the range in between 4 and 7 atomic units).
## Appendix B Intrinsic Silicon
In this appendix, we provide a brief review of the Clausius-Mossotti fits recently employed in Ref. [12] for intrinsic silicon. We also take the opportunity to correct a few typographical errors. From Ref. [12], we recall the Lorentz-Dirac master function, as follows:
\[f(T_{\Delta},\omega)=\sum_{k=1}^{k_{\mathrm{max}}}\frac{a_{k}(\omega_{k}^{2}- \mathrm{i}\gamma_{k}^{\prime}\omega)}{\omega_{k}^{2}-\omega^{2}-\mathrm{i} \omega\gamma_{k}}\,, \tag{35}\]
where \(T_{\Delta}=(T-T_{0})/T_{0}\) and \(T_{0}\) is the room temperature. In Refs. [39] and [5], inspired by the Clausius-Mossotti relation, the dielectric ratio
\[\rho(T_{\Delta},\omega)=\frac{\epsilon(T_{\Delta},\omega)-1}{\epsilon(T_{ \Delta},\omega)+2}\doteq f(T_{\Delta},\omega) \tag{36}\]
was fitted to a functional form corresponding to the master function. That is to say, one fits
\[\epsilon_{\mathrm{CM}}(T_{\Delta},\omega)\doteq\frac{1+2f(T_{\Delta},\omega)}{ 1-f(T_{\Delta},\omega)}\,. \tag{37}\]
We now take the opportunity to correct two unfortunate typographical errors in Ref. [12], Equation (8) of Ref. [12] misses an opening curly parenthesis in the numerator,
\[\rho(T_{\Delta},\omega)=\sum_{k=1}^{k_{\text{max}}}\frac{a_{k}^{\text{CM}}(T_{ \Delta})\left\{\left[\,\omega_{k}^{\text{CM}}(T_{\Delta})\,\right]^{2}-\mathrm{ i}\,\gamma_{k}^{\text{CM}}(T_{\Delta})\,\omega\right\}}{\left[\,\omega_{k}^{ \text{CM}}(T_{\Delta})\,\right]^{2}-\omega^{2}-\mathrm{i}\,\omega\,\gamma_{k}^ {\text{CM}}(T_{\Delta})}, \tag{30}\]
while Eq. (9) of Ref. [12] has a typographical in the last term of the numerator; one needs to replace \(\omega^{4}\rightarrow\left[\omega_{k}^{\text{CM}}(T_{\Delta})\right]^{4}\),
\[\text{Re}[\rho_{\text{CM}}(T_{\Delta},\omega)]=\sum_{k=1}^{k_{ \text{max}}}a_{k}^{\text{CM}}(T_{\Delta})\] \[\times\frac{\omega^{2}\left[\,\gamma_{k}^{\text{CM}}(T_{\Delta}) \,\gamma_{k}^{\text{CM}}(T_{\Delta})-[\omega_{k}^{\text{CM}}(T_{\Delta})]^{2 }\,\right]+[\omega_{k}^{\text{CM}}(T_{\Delta})]^{4}}{(\omega^{2}-[\omega_{k}^{ \text{CM}}(T_{\Delta})]^{2})^{2}+\omega^{2}\,\left[\,\gamma_{k}^{\text{CM}}(T_ {\Delta})\,\right]^{2}}\,, \tag{31}\]
while the imaginary part is
\[\text{Im}[\rho_{\text{CM}}(T_{\Delta},\omega)]=\sum_{k=1}^{k_{ \text{max}}}a_{k}^{\text{CM}}(T_{\Delta})\,\omega\] \[\times\frac{\omega^{2}\gamma_{k}^{\text{CM}}(T_{\Delta})+\left\{ \gamma_{k}^{\text{CM}}(T_{\Delta})-\gamma_{k}^{\text{CM}}(T_{\Delta})\right\} \left[\omega_{k}^{\text{CM}}(T_{\Delta})\right]^{2}}{\left\{\omega^{2}-[\omega_ {k}^{\text{CM}}(T_{\Delta})]^{2}\right\}^{2}+\omega^{2}\,\left[\,\gamma_{k}^{ \text{CM}}(T_{\Delta})\,\right]^{2}}. \tag{32}\]
We note that the imaginary part of the dielectric function should be strictly positive for real, positive frequencies, due to causality (see Chap. 6 of Ref. [40]). The region near \(\omega\approx 0\) of the CM fit for the imaginary part of \(\text{Im}[\epsilon(\omega)]\) has a positive second derivative. This means that, between the frequency points \(\omega=0\) and \(\omega=\omega_{\text{min}}\), where \(\omega_{\text{min}}\) is the smallest frequency for which the dielectric function has been measured, one has a "gap" where any fitting function runs the risk of "undershooting" the line \(\text{Im}[\epsilon(\omega)]=0\), and the positive second derivative compensates a negative first derivative at \(\omega=0\) to match the points of smallest frequency of the fit.
Indeed, the fitting parameters that were given in Ref. [12] led to spurious negative imaginary parts under some circumstances, but these were so small as to be statistically insignificant (less than 0.2 % of the total \(\epsilon(\omega)\)). Table 1 gives the best CM fitting parameters at room temperature, where some entries were slightly adjusted compared to the values given in Ref. [12] (but still within the error bars of the fit) to ensure overall positivity of \(\text{Im}[\epsilon(\omega)]\). Figure 7 shows \(\epsilon(i\omega)\), comparing the adjusted CM fitting parameters of Table 1 with the original parameters of Ref. [12]. The two curves are essentially on top of each other, with a maximal difference of 0.6 %.
## Appendix C TRK Sum Rule for Metastable States
The TRK sum rule [10; 11] is instrumental in deriving the correct asymptotic form of the dynamic polarizability at large imaginary frequency. It states that the sum over all oscillator strengths is equal to the number of electrons \(Z\) of the atom. According to Eq. (61.1) of Ref. [41], it is valid for an arbitrary (e.g., metastable) reference state \(|\psi_{m}\rangle\),
\[\sum_{n}f_{nm}=Z\,, \tag{33}\]
where \(n\) sums over all quantum numbers of the system (not just the principal ones). In view of the relation
\[\alpha(\mathrm{i}\omega)=\sum_{n}\frac{f_{nm}}{\omega_{nm}^{2}+\omega^{2}}\,, \tag{34}\]
the TRK sum rule determines the asymptotic behavior of the polarizability for large \(\omega\) (the energy difference of the virtual and the reference state is \(\omega_{nm}=\omega_{n}-\omega_{m}\). Here, we present a derivation which, in contrast to Eq. (11.10) of Ref. [42], is valid for a system with an arbitrary number \(Z\) of electrons, and for an arbitrary reference state. The Hamiltonian is
\[H=\sum_{a}\left(\frac{\vec{p}_{a}^{2}}{2}-\frac{Z}{r_{a}}\right)+\sum_{a<b} \frac{1}{r_{ab}}\,, \tag{35}\]
\begin{table}
\begin{tabular}{l c c c c} \hline \(k\) & \(a_{k}\) & \(\omega_{k}\) & \(\gamma_{k}\) & \(\gamma_{k}^{\prime}\) \\ \hline
1 & 0.004943 & 0.1293 & 0.01841 & 0.1306 \\
2 & 0.7709 & 0.3117 & 0.101\({}^{*}\) & 0.0968\({}^{*}\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameters for the CM fit are indicated for the real and for the imaginary parts of the dielectric function for silicon, as given in Eqs. (31) and (32), at room temperature (\(T_{\Delta}=0\) in the notation of Ref. [12]). Here, \(a_{1,2}\) are dimensionless and \(\omega_{1,2}\), \(\gamma_{1,2}\) and \(\gamma_{1,2}^{\prime}\) are in units of \(E_{h}/\hbar\). The values are adapted from Tables 1 and 2 of Ref. [12], with minor adjustments of the entries marked by \({}^{*}\) (see text).
Figure 7: Dielectric function of silicon at room temperature for imaginary frequency argument, \(\epsilon(i\omega)\). The two curves compare the CM fit using the parameters of Ref. [12] and the adjusted parameters of Table 1.
where \(a\) and \(b\) sum over the electrons, \(r_{a}\) is the electron-nucleus distance, and \(r_{ab}\) is the interelectron distance. Indices \(a,b,c,d\in\{1,\ldots,Z\}\) enumerate the bound electrons. The (dipole) oscillator strength for the excitation to the states \(|\psi_{n}\rangle\) is is
\[f_{nm}=\frac{2}{3}\left(E_{n}-E_{m}\right)|\langle\psi_{n}|\sum_{c}\vec{r}_{c} |\psi_{m}\rangle|^{2}\,. \tag{30}\]
The sum over oscillator strengths can be written as follows, in operator notation,
\[\sum_{n}f_{nm}=\frac{2}{3}\left\langle\psi_{n}|\left(\sum_{c}\vec{r}_{c} \right)\,(H-E_{m})\left(\sum_{d}\vec{r}_{d}\right)|\psi_{m}\right\rangle, \tag{31}\]
where the sum over \(n\) includes the continuum. We now make use of the well-known operator identity
\[ABA=\frac{1}{2}A^{2}B+\frac{1}{2}BA^{2}+\frac{1}{2}\left[A,[B,A]\right], \tag{32}\]
where \(A=\sum_{c}\vec{r}_{c}\) and \(B=H-E_{m}\). Because \(|\psi_{m}\rangle\) is an eigenstate of the Hamiltonian, we have \((H-E_{m})|\psi_{m}\rangle=0\) and thus
\[\sum_{n}f_{nm} =\frac{1}{3}\left\langle\psi_{m}\right|\left[\sum_{c}\vec{r}_{c},\left[H-E_{m},\sum_{d}\vec{r}_{d}\right]\right]|\psi_{m}\rangle\] \[=-\frac{\mathrm{i}}{3}\left\langle\psi_{m}\right|\left[\sum_{c} \vec{r}_{c},\sum_{d}\vec{p}_{d}\right]|\psi_{m}\rangle\] \[=-\frac{\mathrm{i}}{3}\,\mathrm{i}\langle\psi_{m}|\sum_{d}[\vec{ r}_{c},\vec{p}_{c}]|\psi_{m}\rangle=Z\,. \tag{33}\]
This derivation shows that the TRK sum rule remains valid for metastable excited states and justifies our parameters for the single-oscillator model of the dynamic polarizability of metastable triplet helium, used in Sec. III. Recently, generalizations of the TRK sum rule suitable for the treatment of dipole recoil terms which occur in recoil-induced contributions to the shake-off probability following beta decay have been discussed in Ref. [43]. Their derivation follows the ideas underlying the above considerations.
|
2310.14005 | Ophthalmic Biomarker Detection Using Ensembled Vision Transformers --
Winning Solution to IEEE SPS VIP Cup 2023 | This report outlines our approach in the IEEE SPS VIP Cup 2023: Ophthalmic
Biomarker Detection competition. Our primary objective in this competition was
to identify biomarkers from Optical Coherence Tomography (OCT) images obtained
from a diverse range of patients. Using robust augmentations and 5-fold
cross-validation, we trained two vision transformer-based models: MaxViT and
EVA-02, and ensembled them at inference time. We find MaxViT's use of
convolution layers followed by strided attention to be better suited for the
detection of local features while EVA-02's use of normal attention mechanism
and knowledge distillation is better for detecting global features. Ours was
the best-performing solution in the competition, achieving a patient-wise F1
score of 0.814 in the first phase and 0.8527 in the second and final phase of
VIP Cup 2023, scoring 3.8% higher than the next-best solution. | H. A. Z. Sameen Shahgir, Khondker Salman Sayeed, Tanjeem Azwad Zaman, Md. Asif Haider, Sheikh Saifur Rahman Jony, M. Sohel Rahman | 2023-10-21T13:27:07Z | http://arxiv.org/abs/2310.14005v1 | Ophthalmic Biomarker Detection Using Ensembled Vision Transformers - Winning Solution to Ieee Sps VIP Cup 2023
###### Abstract
This report outlines our approach in the IEEE SPS VIP Cup 2023: Ophthalmic Biomarker Detection competition. Our primary objective in this competition was to identify biomarkers from Optical Coherence Tomography (OCT) images obtained from a diverse range of patients. Using robust augmentations and 5-fold cross-validation, we trained two vision transformer-based models: MaxViT and EVA-02, and ensembled them at inference time. We find MaxViT's use of convolution layers followed by strided attention to be better suited for the detection of local features while EVA-02's use of normal attention mechanism and knowledge distillation is better for detecting global features. Ours was the best-performing solution in the competition, achieving a patient-wise F1 score of 0.814 in the first phase and 0.8527 in the second and final phase of VIP Cup 2023, scoring 3.8% higher than the next-best solution.
H.A.Z. Sameen Shahgir Khondker Salman Sayeed
M. Asif Haider Sheikh Saifur Rahman Jony\({}^{*}\)
Tanjeem Azwad Zaman
M. Sohel Rahman\({}^{\dagger}\)
Department of Computer Science and Engineering
Banglades University of Engineering and Technology
Footnote †: dagger}\)Supervisor
## 1 Introduction
Accurate ophthalmic biomarker detection using Optical Coherence Tomography (OCT) images has received tremendous attention in contemporary research in ophthalmology, with significant implications in the diagnosis and treatment of eye conditions. In this context, the IEEE SPS VIP Cup 2023 presented a unique opportunity to explore innovative approaches in the field of ophthalmic biomarker detection. This paper outlines our approach to address this issue, focusing on utilizing vision transformer-based models to analyze OCT images.
On this backdrop, to solve this ophthalmic biomarker detection task, we employed two distinct models, the Multi-Axis Vision Transformer (MaxViT) [1] and EVA-02 [2], carefully selected following a systematic exploration, for their respective proficiency in identifying local and global features within the images. The intricate exploration and subsequent integration of these models were pivotal in formulating a solution that not only fulfilled the competition's criteria but also advanced the state-of-the-art of ophthalmic biomarker detection in both understanding and methodology.
## 2 Methodology
### Dataset
The competition provided a rich dataset, OLIVES [3], encompassing 9408 labeled image-biomarker pairs from 96 patients and an additional 78185 unlabeled OCT images, each accompanied by clinical labels. The competition was divided into two phases as follows. In Phase 1 (2), the test dataset consisted of 3871 (250) images from 40 different (167 new) patients.
Each OCT scan segment was labeled to denote the presence or absence of 6 biomarkers, namely Intraretinal Hyperreflective Foci (IRHRF), Partially Attached Vitreous Face (PAVF), Fully Attached Vitreous Face (FAVF), Intraretinal Fluid (IRF), Diffuse Retinal Thickening or Diabetic Macular Edema (DRT/DME) and Vitreous Debris (VD). Depending on the spatial extent, IRHRF and IRF can be loosely grouped as _local_ features, meaning they could be detected by looking at just a subsection of the image. On the other hand, PAVF, FAVF, and VD are _global_ features with DRT/DME falling somewhat in between. We elucidate the rationale behind the dataset partitioning in relation to model architecture in section 3.3.
### Models Considered
We considered multiple variants of ResNet [4] models and Inception [5] models (collectively referred to as Convolution-based Models henceforth).
Inspired by [6], we added Convolutional Block Attention Modules (CBAM) [7] to InceptionResnetV2 (referred to as IRV2_CBAM for brevity). We added three such CBAMs after the Stem, Reduction A, and Reduction B modules of InceptionResnetV2. The improved performance of IRV2_CBAM (to be presented in Section 3) inspired us to move to vision transformer models, including ViT [8], MaxViT [1], and EVA-02 [2].
Our early tests indicated an important role for image dimensions when detecting biomarkers. This observation was corroborated through a consultation with an ophthalmologist, wherein the discussion came up that downsizing images to a resolution of 224x224 pixels might have made it harder to identify these biomarkers. As such, we focused on models pre-trained on larger images. ViT [8], MaxViT [1] and EVA-02 [2] support image resolutions of \(384\times 384\), \(512\times 512\) and \(448\times 448\) respectively. Notably, we could only use the base version of these models due to computational constraints.
### Hyperparameters
We used AdamW[9] optimizer with default initialization and set the initial learning rate to \(3\times 10^{-5}\). We used the Exponential Learning Rate Scheduler, with a weight decay of 0.9. For convolution-based models, we used 128 as the batch size and trained models for 35 epochs, with early stopping based on the best cross-validation F1 score. For transformer-based models, we used the largest possible batch size supported by our hardware, which was 1 for MaxViT and 2 for both EVA-02 and ViT. To account for the small batch size, we set the gradient accumulation steps to 8. We trained all vision transformer models for two epochs. We found all ViT models to be prone to overfitting the training data after 2 epochs.
### Data Augmentation
In Phase 1, we used random greyscale transformation with \(p=0.2\), color jitter with \(p=0.8\), random resized crop with \(scale=(0.7,1)\), random horizontal flip, and finally, normalization with a mean of 0.1706 and a standard deviation of 0.2112. We found 0.7 to be the optimal scale for random resized crop while keeping other augmentations constant. Other augmentation parameters were not optimally tuned.
For Phase 2, we add a random perspective shift augmentation with \(distortion\ scale=0.23\), \(p=1\), and \(fill=255\) to make the training data similar to the Phase 2 evaluation dataset. In both phases, we did not augment the test data beyond resizing and normalization.
### 5-fold Cross Validation
For both phases, we performed 5-fold cross-validation where we partitioned the data into 5 folds with 80% in the train set and 20% on the validation set. On these 5 different folds, we trained our models and ran inference on the test set after every epoch, and combined the confidence scores to obtain the final binary decision for each biomarker.
### Ensembling MaxViT and EVA-02
The complementary strengths of MaxViT and EVA-02 naturally imply that ensembling their outputs has the potential to improve upon their individual performance across all biomarkers. One straightforward way to implement this is by using MaxViT to detect local biomarkers and using EVA-02 for global biomarkers. In this scheme, MaxViT's predictions for global biomarkers are entirely ignored (as well as EVA-02's predictions for local ones). We also apply a finer-grained ensembling scheme, where we average both model's output probabilities. Fig. 1 presents a schematic overview of our overall pipeline. We will refer to this (finer-grained) ensemble as MaxViT-EVA-02.
### Evaluation Metrics
In the domain of medical imaging where severe class imbalance is the norm, the F1 score often is the metric of choice instead of accuracy. For Phase 1 of the competition, to test the generalization ability of solutions, the F1 score was calculated over all the images in the test set. For Phase 2, to measure personalization: how well a model performs on individual patients, patient-wise F1 scores were calculated over images from the same patient and these scores were averages over all patients in the test dataset. More details can be found on the competition website.
Figure 1: Our final training pipeline. For evaluation, all augmentations except Resize were removed.
### Hardware Specification and Environment Setup
For convolution-based models implemented in Tensorflow[10], we used Kaggle TPU VM v3-8 instances paired with 330GB RAM. Due to the limited support of state-of-the-art models on TPU, we mainly used this setup for pilot experiments. For transformer-based models (implemented in PyTorch 2.0.1[11] and 'timm' [12] library with the weights hosted on Hugging Face), we used Kaggle Nvidia P100 GPU instances with 16GB VRAM, 13GB RAM, and 19GB disk space. We used scikit-learn [13] libraries for other auxiliary needs. The runtime of our complete MaxViT pipeline, including training, validation, and inference, was approximately 11 hours, while that of our EVA-02 pipeline was approximately 7 hours.
## 3 Results and Discussions
### Model Selection Results
To establish a baseline, we trained multiple variants of ResNet [4] models and Inception [14] models. We find that model size or model performance on ImageNet dataset [15] are not reliable indicators of its suitability for the task at hand (Table 1). InceptionResnetV2[5] (55.84 M parameters) proved to be the most effective model with an F1 score of 0.686 and the much smaller InceptionV3 (23.83 M parameters) model performed comparably with an F1 score of 0.682 (Table 1).
### 5-fold Cross Validation Results
5-fold cross-validation boosts out Phase-1 test scores substantially. Initial experiments revealed that our best-performing convolution-based model, InceptionResnetV2 consistently scored 0.66 when trained on random 80% splits of the train set. However, using cross-validation, InceptionResnetV2 consistently scored around 0.68. As such, we used cross-validation in all further experiments and in the final submission as well. Individually, MaxViT and EVA-02 models scored 0.68 while with cross-validation they scored 0.71.
### Classification of Biomarkers according to Spatial Extent
Upon reviewing the images, we noticed that biomarkers B1 (IRHRF), B4 (IRF), and B5 (DRT/DME) were _local_, meaning they could be detected by looking at just a subsection of the image. This observation was confirmed by an ophthalmologist, who also mentioned that B5 is somewhat in between local and global.
### Analysis Of Adding CBAM
Adding CBAM [7] to InceptionResnetV2 substantially boosted the F1 score from 0.686 to 0.696 (Table 2) for a negligible increase in the network complexity (i.e., parameter count increased by only 0.37%; not reported in the table). Notably, this boost in performance actually inspired us to move to vision transformer models.
To understand the reason for the improved F1 scores, we calculated the F1 score across biomarker types individually and discovered that CBAM improved the performance on certain biomarkers substantially while showing marginal improvement in others. It even registered a deterioration, albeit only slightly, in one case. Therefore, we hypothesize that the attention module improved the detection of local biomarkers.
### Comparison of Convolution-based and Transformer-based Models
Although adding an attention mechanism in the form of CBAM to InceptionResnet specifically improves the performance on local biomarkers, we find no such correlation when comparing convolution-based models and the purely attention-based ViT [8] architectures. This suggests the need for explicit convolution in addition to attention for optimal biomarker detection.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & Param(M) & ImageNet & Test F1 \\ \hline ConvNextBase & 88.59 & **87.13** & 0.612 \\ Resnet50 & 25.57 & 75.30 & 0.634 \\ Resnet152 & 66.84 & 78.57 & 0.649 \\ Resnet101 & 44.57 & 78.25 & 0.657 \\ EfficientNetV2L & 118.52 & 86.80 & 0.662 \\ InceptionV3 & 23.83 & 78.95 & 0.682 \\ InceptionResnetV2 & 55.84 & 80.46 & **0.686** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of Convolution-based Models. We report the number of model parameters, Top1 Accuracy on the ImageNet [15] dataset collected from PapersWithCode, and F1 score on the Phase 1 Test dataset. All models were evaluated using 5-fold cross-validation.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Biomarker & Type & IRV2 & IRV2\_CBAM & VIT\_BASE \\ \hline IRHRF & L & 0.709 & 0.746 (**+**) & 0.773 **(+**)** \\ PAVF & G & 0.610 & 0.609 & 0.662 **(+)** \\ FAVF & G & 0.837 & 0.841 & 0.869 **(+)** \\ IRF & L & 0.557 & 0.599 **(+)** & 0.552 \\ DRT/DME & L/G & 0.599 & 0.628 **(+)** & 0.594 \\ VD & G & 0.753 & 0.759 & 0.755 \\ \hline Overall & & 0.686 & 0.696 & 0.701 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of InceptionResnetV2 with (IRV2_CBAM) and without (IRV2) CBAM. L (G) in the type column refers to Local (Global). For individual biomarker types, a plus sign in the bracket beside a score indicates significant improvement against the score of the network to its immediate left column. All models were evaluated using 5-fold cross-validation.
### Effectiveness of Convolution and Attention
MaxViT[1] is a vision transformer model composed of multiple MaxViT blocks where each block performs convolution, strided/block attention, and dilated/grid attention. The addition of explicit convolution makes MaxViT ideal for biomarker detection. We achieved an F1 score of 0.718 (Table 3) using the base variant of the MaxViT model, which is a substantial improvement over IRV2_CBAM and ViT_BASE. However, MaxViT does not utilize true attention across all image tokens, which motivated us to test EVA-02 [2], a plain Vision Transformer model that improves upon the standard ViT [8] by using a 1B parameter EVA-CLIP model as its teacher. The parameter counts of MaxViT and EVA-02 are 119.88M and 87.12M respectively. Comparing MaxViT and EVA-02 across the 6 biomarkers, we see that EVA-02 performs noticeably better on global biomarkers despite being smaller between the two. we hypothesize that MaxViT's sparse attention improves the detection of local biomarkers while EVA-02's true attention excels at detecting global features.
### Ensembling Results
While our simple ensembling does boost the test set F1 score to 0.720 (not shown in the table for brevity), the finer-grained ensembling scheme yields an even greater performance with an improved F1 score of 0.724.
### Patient-wise F1 score and Leaderboard Position
Our MaxViT-EVA-02 ensemble pipeline achieved a patient-wise F1 score of 0.814 averaging over 40 patients and 3781 images in the first phase and 0.8527 in the second phase (167 patients and 250 images). Our second phase F1 score is 3.8% higher than the next best solution (0.8215) in the competition as per the competition website.
### Leveraging Unlabelled Training Data
Several strategies were explored to enhance the model's performance on the task using unlabelled data and the accompanying clinical labels. Initially, contrastive learning was applied following [16]; however, it did not yield any improvement, maintaining a score of 0.686. Although looked promising, we were not able to explore contrastive learning using ViT models due to computational constraints. Subsequently, pretraining the InceptionResnetV2 model was attempted, but this approach also failed to enhance the model's performance, with the score remaining stagnant at 0.686. To exploit the availability of two additional clinical labels besides the six target biomarkers, the model was trained to predict all eight labels, anticipating a more refined gradient descent. Unfortunately, this modification did not lead to any improvement, with the score persisting at 0.686. Lastly, the implementation of pseudo-labeling was explored, but it significantly deteriorated the model's performance, plummeting the score to 0.519.
### Analysis of Outlying Patient-wise F1 Scores
In the analysis of cases where the model exhibited a low F1 score in detecting biomarkers from OCT scans, several patterns were observed. Patient 01-002 at week 40 and patient 02-044 at week 0 presented with severe spots, resulting in F1 scores of 0.64 and 0.55, respectively. Moderate spots were identified as the likely cause for the low F1 scores of 0.6 in patients 01-007 at week 100 and 01-049 at week 0. Additionally, patient 01-043 at week 100 exhibited a severe artifact, leading to the lowest F1 score of 0.37. Moderate artifacts were also noted in patients 01-049 and 02-044 at week 100, with F1 scores of 0.6 and 0.52, respectively. However, the likely cause for the low F1 scores observed in patients 01-019, 01-036, and 01-054 at week 100 (F1 scores of 0.51, 0.62, and 0.48) are not immediately evident to non-medical professionals. We leave a more thorough analysis and subsequent pipeline adjustments as future work.
## 4 Conclusion
In this work, we presented the methodology of our final submission to the second phase of IEEE SPS VIP Cup 2023: Ophthalmic Biomarker Detection. We also presented the underlying motivation for pipeline design decisions. We find that Vision Transformer (ViT) models have begun to consistently outperform their Convolutional Neural Network (CNN) counterparts. Furthermore, we find that k-fold cross-validation and model ensembling continue to be effective means to utilize the entire dataset and to improve the generalization of predictions.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Biomarker & Type & MaxViT & EVA-02 & Ensemble \\ \hline IRHRF & L & 0.774 & 0.731 & **0.779** \\ PAVF & G & 0.677 & **0.701** & 0.688 \\ FAVF & G & 0.868 & 0.874 & **0.879** \\ IRF & L & **0.611** & 0.575 & 0.600 \\ DRT/DME & L/G & 0.615 & 0.593 & **0.618** \\ VD & L & 0.764 & 0.779 & **0.782** \\ \hline Overall & Overall & 0.718 & 0.709 & **0.724** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of MaxViT, EVA-02, and their Ensemble for Various Biomarkers. The models have been ensembled by averaging their output probabilities.(L: Local, G: Global)
## 5 Acknowledgement
We would like to extend our sincere gratitude to Dr. S.M. Rezwan Hussain, a distinguished ophthalmologist at the Eye Department, Combined Military Hospital(CMH), Dhaka. Bangladesh, for his invaluable insights and expertise regarding biomarker classification according to their spatial extent.
|
2302.03768 | Catch Me If You Can: Improving Adversaries in Cyber-Security With
Q-Learning Algorithms | The ongoing rise in cyberattacks and the lack of skilled professionals in the
cybersecurity domain to combat these attacks show the need for automated tools
capable of detecting an attack with good performance. Attackers disguise their
actions and launch attacks that consist of multiple actions, which are
difficult to detect. Therefore, improving defensive tools requires their
calibration against a well-trained attacker. In this work, we propose a model
of an attacking agent and environment and evaluate its performance using basic
Q-Learning, Naive Q-learning, and DoubleQ-Learning, all of which are variants
of Q-Learning. The attacking agent is trained with the goal of exfiltrating
data whereby all the hosts in the network have a non-zero detection
probability. Results show that the DoubleQ-Learning agent has the best overall
performance rate by successfully achieving the goal in $70\%$ of the
interactions. | Arti Bandhana, Ondřej Lukáš, Sebastian Garcia, Tomáš Kroupa | 2023-02-07T21:57:59Z | http://arxiv.org/abs/2302.03768v1 | # Catch Me If You Can: Improving Adversaries in Cyber-Security With Q-Learning Algorithms
###### Abstract
The ongoing rise in cyberattacks and the lack of skilled professionals in the cybersecurity domain to combat these attacks show the need for automated tools capable of detecting an attack with good performance. Attackers disguise their actions and launch attacks that consist of multiple actions, which are difficult to detect. Therefore, improving defensive tools requires their calibration against a well-trained attacker. In this work, we propose a model of an attacking agent and environment and evaluate its performance using basic Q-Learning, Naive Q-learning, and DoubleQ-Learning, all of which are variants of Q-Learning. The attacking agent is trained with the goal of exfiltrating data whereby all the hosts in the network have a non-zero detection probability. Results show that the DoubleQ-Learning agent has the best overall performance rate by successfully achieving the goal in 70% of the interactions.
Q-Learning, Reinforcement Learning, MDP, Cybersecurity, Learning Agents, Advanced Persistent Threat.
## 1 Introduction
The risk of cyber attacks is constantly increasing. Attackers continue to become more sophisticated and manage to find new vulnerabilities to exploit, making the role of network defenders skewed and asymmetric. Most attack techniques involve little direct interaction between the attacker and the defender. In attacks such as ransomware (ENISA, 2022), port scanning or cryptocurrency mining, the interaction can be as little as only one action from the attacker. In more complex attacks such as banking trojans or Advanced Persistent Threat (APT) attacks (Drasar et al., 2020), the attacker has to perform a series of steps within the network or target device to be successful while remaining undetected. Such attacks are extremely difficult to detect, yet they are the most impactful. APT attacks are usually long-term, with many decisions typically taken by a human adapting their tactics and techniques to avoid detection and in most cases, the defense mechanisms are not versatile enough to adapt to the behavior of an attacker.
APT attackers can be modeled as agents who pursue their goals while interacting with an environment (target device or network). Most of these interactions are captured mainly by Game theory or Reinforcement Learning (RL) models with the intent of improving defenses in the network. Game-theoretic frameworks are used to provide solutions for optimal defenses (such as honeypot allocation) but RL models are mostly used to improve penetration testing attacks (Durkota et al., 2016; Mitchell and Healy, 2018). LSTM network and Q-Learning techniques are also being applied to predict the attacker's action in APT data sets (Dehghan et al., 2022). However, modeling realistic defenses inevitably requires learning almost optimal decisions for attackers. To the best of our knowledge, there are no studies about modeling APT attacker's behavior with the goal to _improve_ the decisions made by the attacker. Creating a realistic inference model for the attacker requires consideration of factors such as intent, capabilities, objectives, opportunities, and available resources for the attacker (Moskal et al., 2018; Liu et al., 2005). Due to the complexity of these attributes, developing a general framework becomes challenging. To overcome these challenges, RL models are generally applied to train and solve an optimal policy from a defender's perspective; however, we are unaware of a RL model to optimize the actions of an APT attacker.
In this paper, we model both an APT attacker and a network environment to train RL agents that optimize the attack. The goal of the attacker is to exfiltrate
data from a specific server inside a local network to a command and control (C&C) server in the Internet. To find the optimal policy for the attacker, three off-policy RL algorithms are trained: Q-Learning, Naive Q-Learning, and DoubleQ-Learning.
Our results show that the DoubleQ-Learning-based attacker agent is able to exfiltrate data in almost 70% of the interactions. Furthermore, we show that the agent can learn how to plan and execute a multi-stage data exfiltration attack detected less than 40% of the time. From a cybersecurity point of view, it means that a model of an attacker can be learned and improved, and therefore a better model of the defender could be learned in future research.
The main contributions of this paper are:
* a novel model of a decision-making entity (APT attacker) in an adversarial environment;
* implementation of RL algorithms for an attacking agent in a custom environment; and
* an analysis of the impact of APT attacker models on the cybersecurity domain.
The paper is structured as follows. Section 2 provides the motivation and previous work. Section 3 describes the RL environment. Section 4 presents the RL algorithms; Section 5 presents the setup of the experiments; Section 6 presents the results and discusses their impact. The conclusions and future work are contained in Section 7.
## 2 Motivation & Related Work
There are two main sources of motivation for studying the behavioral models of attackers in APT attacks for local networks. First, improving defense mechanisms (algorithms, antivirus systems, etc.) based on the knowledge of past attacks highlights the need to better understand the characteristics of nearly optimal attack behaviors in realistic networks. Second, by creating and training RL models of the attacker's behavior, it is possible to optimize future defense mechanisms and the dynamic properties of such systems.
Game theory and RL (Shiva et al., 2010) have gained traction over the years in modeling attack and defense mechanisms in many domains, including network security. Network security problems are primarily complex and require rational decision-making. Game theory provides mathematical models of strategic interaction among multiple decision makers (players or agents) along with algorithms for finding solutions (equilibria) in such scenarios. The potential benefit of applying game theory to network security is the automation of the exhaustive threat detection process for network administrators. However, real-world cybersecurity models may have limitations with regard to the information observed by players. Typically, the defender's knowledge of the attacker's strategy and decisions is limited (Patil et al., 2018). This leads to games with partial observation or incomplete information, which are extremely difficult to scale to the required size of the problem.
In the area of game theory for security, there has been promising research in honeypot technologies (Anwar and Kamhoua, 2022). The authors designed an optimal approach for honeypot allocation by formulating a two-player zero-sum game between the defender and the attacker, which is played on top of an attack graph. The defender places honeypots on machines, while the attacker selects an attack path through the attack graph, which would lead to the target machine without being detected. In addition to solving an effective strategy for honeypot placement in the network, the authors also experiment with a diversity of honeypot configurations. Diversifying the honeypot configuration ensures that not all honeypots are discovered if one is compromised; however, this adds to the operational cost. To automate response to a cyber attack, (Hammar and Stadler, 2020) investigate methods where strategies evolve without human intervention and do not require domain knowledge. The authors model the cyber interaction as a Markov game and use simulations of self-play where agents interact and update their strategies based on experience from previously played games.
Another promising research direction used Proximal Policy Optimization (PPO) with self-play to solve a stochastic (Markov) two-player game with sequential moves between defender and attacker (Du et al., 2022). The game is played on top of an attack graph, and the authors show that the performance of a PPO policy is better than that of a heuristic policy. The initial results are promising, but the setting used by the authors is limited to the attack graph with five nodes and four edges. By contrast, our work deals only with a single-agent environment.
Attack graphs are helpful, as they can predict the attacker's path depending on the vulnerabilities present in the network. At the same time, defenders can leverage attack graphs to find an effective defense strategy. In particular, (Guo et al., 2021) provides defense solutions through edge blocking in an attack graph constructed in the active directory. Another stream of research focuses on the assistance of attacking tools for better penetration testing or cyber-training, for example, using Deep Q-Learning (Niculae et al., 2020). The authors compare Q-Learning,
Extended Classifier Systems (XCS), and Deep Q-Networks (DQN) to find attacker strategies. To determine the best response for a suspicious user on the network, Chung et al. (2016) compares the variations of Q-Learning with a stochastic game.
## 3 Environment Model
Q-Learning is one of the most widely applied model-free off-policy RL algorithms Jang et al. (2019). It allows agents to learn in domains with Markovian properties and thus can be modeled as a Markov Decision Process (MDP). Sufficient exploration of the environment is done with a \(\epsilon\)-greedy policy. An \(\epsilon\)-greedy method chooses a uniformly random action with probability \(\epsilon\) and greedy action with probability \(1-\epsilon\). The hyperparameter \(\epsilon\) is chosen to balance exploration and exploitation, intending to maximize the cumulative reward.
An MDP is used as the underlying model Sutton and Barto (2018) as the focus is on training a single attacking agent. Such an approach results in the defender being part of the environment. In real-life scenarios, successful detection requires several steps, from placement of the defensive measures, detecting and generating alerts, to evaluating and addressing threats. In this work, the defender is modeled as a stochastic and global part of the environment.
### Network
The computer network used for the definition of the environment represents a small organization with five clients, five servers, and a router that provides Internet; see figure 1. Each host in the network has Internet access. The router is also a firewall that controls which clients from subnetwork 2 can access the servers in subnetwork 1 (corresponding to dotted lines in figure 1). Computers can connect to each other if they are in the same subnetwork.
In the environment, we assume that the attacker has already gained access to one of the clients on the network. Additionally, the attacker knows the address of an external C&C server on the Internet. The attacker's goal is to find and exfiltrate data located in one of the servers in subnetwork 1.
### Defender
The defender in our model is an entity present in all clients/servers simultaneously and it has assigned a probability of detecting the attacker's action. Once the attacker is detected, the episode ends and the environment is reset to the initial state. This is represented by a terminal state in the environment. Given that the defender has full network visibility, there is a probability of detection for each action on all clients and servers.
### Attacker
Attackers usually do not have information about the network and so they must compensate for lack of knowledge by learning through trial and error. We simulate an attacker who has already gained a foothold in subnetwork 1 (figure 1) according to our assumption. This holds for a real-world scenario, as the initial breach can be done in various ways since there are many connected devices on the network, and preventing the initial breach in some ways is extremely hard. Therefore, modeling the attacker entry in our current setup is ignored. The attacker's objective is to find the optimal path to a server in subnetwork 2 containing sensitive data, find and exfiltrate this data, and make it accessible on the web. The available actions are the minimal actions required to complete the goal: find hosts, find services, get access, find data, and exfiltrate. The attacker was modeled as a rational attacker behaving optimally.
### States
A state is an abstract representation of the environment from the attacker's perspective. It contains sev
Figure 1: Network topology with two local subnetworks and a C&C server on the Internet. The solid black lines represent direct network connectivity (such as Ethernet cables). The dotted lines represent logical connections from clients to servers as allowed by the firewall. In the non-randomized experiments, the attacker starts in Client 1. In the experiments with a randomized start, the attacker starts in one of the clients in subnetwork 2. The IP address of the C&C server is always known to the attacker.
eral assets the attacker can use or has discovered with previous actions. Therefore, the state of the environment changes based on the actions of the attacker and the current state. The probabilities \(p(a|s)\) represent the probability of success of the attacker's action \(a\) in a state \(s\) and \(p(detection|s,a)\) represents the probability of detection given the action \(a\) played in the state \(s\). These probabilities (table 1) of success and detection were set based on the expert evaluations of penetration test professionals, where knowledge of the domain was compared and matched with the evaluation of various detection tools for malicious behavior discovery shown in [14].
Success probabilities are based on known tools and techniques. While network issues are the cause of the failure of most actions, in the case of _ExecuteCodeInService_ other problems such as service versions and exploits quality have to be taken into account. Detection probabilities consider the false positives found in real networks with benign traffic by a human player. Some actions, such as _ScanNetwork_ with ARP scan, are highly successful and barely recognized (off-the-shelf state-of-the-art IDS can not detect it [15]). Often, even if these scans are detected, such alerts are dismissed for the sake of limiting the False positives. The same applies to _FindData_ which is performed locally and thus nearly undetectable and _ExfiltrateData_ which when done correctly, is known to be extremely hard to distinguish from benign traffic.
At each time step, the following information is part of the state:
* set of networks the attacker has discovered;
* set of hosts the attacker has discovered;
* set of hosts that the attacker has control of;
* set of services the attacker has discovered in each host; and
* set of data the attacker has discovered in a host.
Having states consisting of assets, we can follow the well-known STRIPS representation originally designed for planning [14]. STRIPS describes transitions in a system as operators, which are applicable if _preconditions_ are met. Originally, the effects of _add_ and _delete_ can be specified for each operator. However, in our approach, we completely omit the delete effect, which results in a _relaxed_ problem representation [1]. Problem relaxation is a commonly used method in a variety of AI areas. Such an approach simplifies the problem of traversing the state space.
### Actions
The attacker's actions follow the subset of techniques for adversary behavior listed in Mitre ATT&CK1. As we are only representing one type of goal in this model, data exfiltration, only the subset of Mitre actions related to data exfiltration are used:
Footnote 1: [https://attack.mitre.org/](https://attack.mitre.org/)
1. active scanning: 1. find computers in the network 2. find services run on the hosts in the network 3. find data in the computer
2. attack service to execute code; and
3. exfiltrate data to the Internet.
The attacker in our model follows a five-step action as represented in Table 2 to reach its goal.
### Rewards
The reward is an incentive that the agent receives with respect to the state action pair. In our model, the reward of the agent is constructed as: \(-1\) for every action taken,\(-50\) if the action is detected, and \(+100\) if the goal state is reached.
The small negative reward per action is intended to motivate the agent to find the shortest path to the goal. The \(+100\) reward for the achievement of the goal allows the attacker to take actions with a higher expected detection probability if they lead to a higher expected reward.
\begin{table}
\begin{tabular}{l l l} \hline \hline Action & \begin{tabular}{l} Success \\ probability \\ \end{tabular} &
\begin{tabular}{l} Detection \\ probability \\ \end{tabular} \\ \hline ScanNetwork & 0.9 & 0.2 \\ FindServices & 0.9 & 0.3 \\ ExecudeCode InService & 0.7 & 0.4 \\ FindData & 0.8 & 0.1 \\ ExfiltrateData & 0.8 & 0.1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Probability of success and detection for each action executed by the attacker in the network.
### Implementation
The representation of a state, as described in section 3.4, allows the modification of the environment _without_ the need to retrain the agent from zero. This differentiates our environment model and offers a higher degree of modularity for various cybersecurity scenarios. Instead of allocating the complete Q-table prior to training, our agents create the Q-values dynamically, saving both memory and time during training.
## 4 Learning Agents
To train and evaluate the attacker's performance, we use Q-Learning [14] and its variants: Naive Q-Learning and Double Q-Learning. Q-Learning is a reinforcement learning algorithm that approximates the optimal state-action value function independently of the policy being followed. It is an off-policy algorithm that separates learning from the current acting policy by updating the Q-value \(Q(s,a)\), which is an indication of how good a state-action pair is. The equation for the Q-value update is:
\[Q(s,a)\coloneqq Q(s,a)+\alpha(R_{t+1}+\gamma V^{t}(s^{\prime})), \tag{1}\]
where \(\alpha\in[0,1]\) is the learning rate and \(\gamma\in[0,1]\) is the discount factor that captures the concept of depreciation. A value closer to 0 means that the current reward is preferred over future rewards.
In Naive Q-Learning, the learning rate is partially allocated to the previous result to combine the knowledge of the past history during learning, the actual immediate reward in the current iteration, and the expected future reward [10]. This leads to the following variation of equation (1):
\[Q(s,a)\coloneqq\alpha Q(s,a)+(1-\alpha)(R_{t+1}+\gamma V^{t}(s^{\prime})) \tag{2}\]
Double Q-Learning [1, 14] proposes learning two Q-functions instead of one. Each Q-function gets the update from the other for the next state. These two Q-functions are an unbiased estimate of the value of the action. The action selection is then performed by averaging or adding the two Q values for each action and then performing \(\epsilon\)-greedy action selection with the resulting Q values. In this paper, action selection is performed by adding the two Q values before performing the \(\epsilon\)-greedy.
\[Q^{A}(s,a)\coloneqq Q^{A}(s,a)+\alpha(R+\gamma Q^{B}(s^{\prime},a^{\prime})-Q ^{A}(s,a)) \tag{3}\]
\[Q^{B}(s,a)\coloneqq Q^{B}(s,a)+\alpha(R+\gamma Q^{A}(s^{\prime},a^{\prime})-Q ^{B}(s,a)) \tag{4}\]
The other two learning agents also use \(\epsilon\)-greedy as the action selection criteria in accordance with the original papers.
## 5 Experiment Setup
_Three different scenarios_ were used to train the learning agents: specific attacker position, random attacker position, and random target server to attack.
In the first scenario, the attacker is placed on client 1 in subnetwork 2 (figure 1). We define a client as an official device on the network used for work and a server as a device that holds data and offers services accessed by the clients. The attacker's goal is to reach the target server, which is specified as server 3 in subnetwork 1; exfiltrate the data from the target server to the C&C server outside the local network.
There are five clients in subnetwork 2, and in reality, any connected device within the network is susceptible to an attack; therefore, for the second scenario, we randomly assign the starting position of the attacker. This was done to compare the performance of the learning agents and see how they adapt to randomness in the starting position. In addition to randomizing the starting position, we also randomized the target server for data exfiltration; which was our third scenario.
For successful achievement of the goal state, at least 5 successful actions had to be performed in all 3 scenarios; however, if the agent exceeds the limit of 25 actions per episode, the interaction is terminated.
The defender in all 3 scenarios is an entity with unlimited visibility and is present in all hosts, that is, every action can be detected with a predefined probability. Additionally, we assume that all services running on the hosts are exploitable and that a connection to the Internet is available on all hosts.
The learning parameter for each algorithm is presented in Table 3. Experiments start with a random attacker which randomly picks an action. The Q-Learning agent and the DoubleQ-Learning agent were trained on a learning rate of 0.3, while the Naive Q-Learning agent was trained on a learning rate of 0.8. The action selection parameter controlled by epsilon was kept at 0.2 for all the agents; however, Double Q-Learning used a linearly decaying \(\epsilon\) from 0.2 to 0.05.
In all experiments, we measured the win rate, the detection rate, and the mean return of the episodes. The win rate represents the percentage of interactions that were successful for the attacker, which is
the number of times the attacker was able to reach the goal state and exfilter the data in 10 000 episodes. The detection rate represents the percentage of interactions that were detected and resulted in the attacker receiving a reward of \(-50\).
## 6 Experimental Results
The following results were obtained in the first scenario of the experiment, when the attacker's position was specified in the network. Table 4 summarizes the performance of the different learning agents. The random attacker, without any knowledge of the network and without any strategy, has a detection rate of 99.58%, while the DoubleQ-Learning attacker had a detection rate of 33%. The Q-learning and Naive Q-Learning agents have similar detection rates.
Randomizing the starting position decreases the win rate and increases the detection rate for all learning agents, as shown in table 5. The Naive Q-Learning agent had the greatest impact on performance due to the randomness of the starting position among all learning agents. The detection rate increased from 40.4% to 50.78%
\begin{table}
\begin{tabular}{l c c c} \hline \hline Algorithm & \begin{tabular}{c} Winning \\ rate (\%) \\ \end{tabular} & \begin{tabular}{c} Detection \\ rate (\%) \\ \end{tabular} &
\begin{tabular}{c} Mean \\ return \\ \end{tabular} \\ \hline Q-Learning & 53.3 & 53 & 23.45 \\ Naive Q-Learning & 61.8 & 44.1 & 36.8 \\ Double Q-Learning & 64.9 & 41.7 & 41.2 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of learning agents in a scenario where the attacker’s starting point and target server were randomized. All algorithms were trained on 10 000 episodes.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Algorithm & \begin{tabular}{c} Winning \\ rate (\%) \\ \end{tabular} & \begin{tabular}{c} Detection \\ rate (\%) \\ \end{tabular} &
\begin{tabular}{c} Mean \\ return \\ \end{tabular} \\ \hline Random & 0.48 & 99.58 & 53.03 \\ Q-Learning & 66.4 & 40.4 & 43.94 \\ Naive Q-Learning & 66.91 & 40.19 & 43.94 \\ Double Q-Learning & 74.0 & 33.0 & 54.61 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison of the learning agents with a fixed attacker starting in _client1_. Q-Learning and Double Q-Learning were trained with 10 000 episodes, while Naive Q-Learning was trained with 5 000 episodes.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Algorithm & \begin{tabular}{c} Winning \\ rate (\%) \\ \end{tabular} & \begin{tabular}{c} Detection \\ rate (\%) \\ \end{tabular} &
\begin{tabular}{c} Mean \\ return \\ \end{tabular} \\ \hline Random & 0.34 & 99.48 & -54.04 \\ Q-Learning & 65.4 & 39.27 & 41.97 \\ Naive Q-Learning & 54.27 & 50.78 & 25.59 \\ Double Q-Learning & 68.9 & 36.8 & 47.58 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of performance for the learning agents in a scenario with randomized attacker’s starting randomized. Q-Learning and Double Q-Learning were trained with 10 000 episodes, while Naive Q-Learning was trained with 5 000 episodes.
Figure 3: Comparison of the winning rate of agents during the learning process in the scenario with defender and randomized starting position.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Algorithm & \begin{tabular}{c} Winning \\ rate (\%) \\ \end{tabular} & \begin{tabular}{c} Detection \\ rate (\%) \\ \end{tabular} &
\begin{tabular}{c} Mean \\ return \\ \end{tabular} \\ \hline Q-Learning & 53.3 & 53 & 23.45 \\ Naive Q-Learning & 61.8 & 44.1 & 36.8 \\ Double Q-Learning & 64.9 & 41.7 & 41.2 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of learning agents in a scenario where the attacker’s starting point and target server were randomized. All algorithms were trained on 10 000 episodes.
Figure 2: Comparison of the mean cumulative reward of agents during the learning process in the scenario with defender and randomized starting position for the attacker.
### Analysis of Results
We compared how agents with varying parameters learned a policy in a network with ten hosts in the presence of a defender with full visibility. The detection probability was nonzero for actions at all clients and servers. When comparing the win rate and the detection rate for all learning agents, it is clear that Double Q-Learning outperforms all other agents in all scenarios. Two Q-functions are trained in such agent but from different episodes which makes the training more robust. A sum of the Q-functions is used during inference. This avoids the overestimation bias of Q-Learning and leads to better training stability even in a noisy environment. The Q-Learning attacker and the Naive Q-Learning attacker have the same performance for the first scenario where the starting point was specified. This is due to the distribution of the learning rate according to equations (1) and (2). A learning rate of 0.8 was used for Naive Q-Learning, which in comparison with the Q-learning gives similar results as the learning rate of 0.2. However, the performance of Naive Q-Learning decreased when the starting position was randomized. This is attributed to the weighting of the update rule of the Q-value, as shown in equation ( 2). When considering negative rewards, the update affects the Q-value more than the standard Q-Learning due to the split update \(\alpha\). Although this can be beneficial in the cases of high positive rewards, the results show that this approach lacks adaptability in the case of the stochastic environment.
Figures 3 and 4 show that DoubleQ-Learning outperforms the other two agents in terms of winning and detection rates. The high variance of the mean returns, as shown in figure 2 is the result of the stochastic environment and the reward distribution described in section 3.6. The graphs also show that even though DoubleQ-Learning performs badly in the beginning, over time as the number of episodes increases and state-action values are updated, it outperforms the other two learning agents. In particular, even if the agent's policy is optimal, it cannot influence the detection and subsequent reward of \(-50\). Therefore, the three agents share similar high variance in mean returns but differ significantly in metrics that focus on reaching the goal, in which the Double Q-Learning shows the most promising results.
Despite using random exploration \(\epsilon\) in the three agents based on Q-Learning, the results from the first and second scenarios show that the environment and the goal are non-trivial and unsolvable for agents performing purely random actions, which reached the goal in fewer than 1% of the cases. For that reason, the Random Agent was excluded from the comparison in figures 2, 3 and 4 and in the _third scenario_.
The results of our experiments show that despite the defender having full visibility of the network, a rational attacker was still able to reach the target and exfiltrate data. From a security perspective, this indicates that the defensive tools in the network need to be improved so as to prevent the attacker's lateral movement in the system.
## 7 Conclusion
In this paper, we propose a Q-Learning-based attacking agent capable of performing data exfiltration.
Our results show that even though the three learning agents can find meaningful policies, Double Q-Learning outperforms the others and provides the most stable training. It reached the goal 70% of the interactions while being undetected in 37%. This shows that despite a globally present defender, a rational attacker could still reach the target.
The initial success and detection probabilities were set based on expert knowledge, however, our results clearly show that there is room for improvement in the detection capability of the defender. Having a high success probability for attacker action highlights the need for a robust defense mechanism that is capable of detecting any stealthy attacker. This provides a foundation for studying and improving attacker techniques to increase defense capability in the network.
Currently, the method is limited to small or medium-sized networks. Although the interaction and world representation model can be easily extended to a more complex setup in size of the network and action space, the scalability and computational feasibility of such extensions have yet to be evaluated.
Therefore, the natural direction for future research
Figure 4: Comparison of the detection rate of agents during the learning process in the scenario with defender and randomized starting position.
is to expand our approach towards larger environments, which will require subsequent scalability testing due to those complex setups. We also plan to incorporate other types of cyber attacks into Mtire taxonomy and model the defender as a rational entity with its own set of actions in the interaction. In addition, we plan to test the performance of our agent in a simulated environment.
Along with increasing the environmental complexity, the problem of more complex goals for the attacker is also in the pipeline resulting in the need for more reconnaissance from the agent.
## Acknowledgments
The authors acknowledge support from the Research Center for Informatics (CZ.02.1.01/0.0/0.0/ 16_019/0000765) and Strategic Support for the Development of Security Research in the Czech Republic 2019-2025 (IMPAKT 1) program, by the Ministry of the Interior of the Czech Republic under No. VJ02010020 - AI-Dojo: Multi-agent testbed for the research and testing of AI-driven cyber security technologies.
|
2304.07680 | Manifold Fitting | While classical data analysis has addressed observations that are real
numbers or elements of a real vector space, at present many statistical
problems of high interest in the sciences address the analysis of data that
consist of more complex objects, taking values in spaces that are naturally not
(Euclidean) vector spaces but which still feature some geometric structure.
Manifold fitting is a long-standing problem, and has finally been addressed in
recent years by Fefferman et al. (2020, 2021a). We develop a method with a
theory guarantee that fits a $d$-dimensional underlying manifold from noisy
observations sampled in the ambient space $\mathbb{R}^D$. The new approach uses
geometric structures to obtain the manifold estimator in the form of image sets
via a two-step mapping approach. We prove that, under certain mild assumptions
and with a sample size $N=\mathcal{O}(\sigma^{(-d+3)})$, these estimators are
true $d$-dimensional smooth manifolds whose estimation error, as measured by
the Hausdorff distance, is bounded by $\mathcal{O}(\sigma^2\log(1/\sigma))$
with high probability. Compared with the existing approaches proposed in
Fefferman et al. (2018, 2021b); Genovese et al. (2014); Yao and Xia (2019), our
method exhibits superior efficiency while attaining very low error rates with a
significantly reduced sample size, which scales polynomially in $\sigma^{-1}$
and exponentially in $d$. Extensive simulations are performed to validate our
theoretical results. Our findings are relevant to various fields involving
high-dimensional data in machine learning. Furthermore, our method opens up new
avenues for existing non-Euclidean statistical methods in the sense that it has
the potential to unify them to analyze data on manifolds in the ambience space
domain. | Zhigang Yao, Jiaji Su, Bingjie Li, Shing-Tung Yau | 2023-04-16T03:17:36Z | http://arxiv.org/abs/2304.07680v2 | # Manifold fitting: an invitation to statistics
###### Abstract
While classical statistics has addressed observations that are real numbers or elements of a real vector space, at present many statistical problems of high interest in the sciences address the analysis of data that consist of more complex objects, taking values in spaces that are naturally not (Euclidean) vector spaces but which still feature some geometric structure. Manifold fitting is a long-standing problem, and has finally been addressed in recent years by Fefferman et. al ([14, 15]). We develop a method with a theory guarantee that fits a \(d\)-dimensional underlying manifold from noisy observations sampled in the ambient space \(\mathbb{R}^{D}\). The new approach uses geometric structures to obtain the manifold estimator in the form of image sets via a two-step mapping approach. We prove that, under certain mild assumptions and with a sample size \(N=\mathcal{O}(\sigma^{-(d+3)})\), these estimators are true \(d\)-dimensional smooth manifolds whose estimation error, as measured by the Hausdorff distance, is bounded by \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\) with high probability. Compared with the existing approaches proposed in [13, 16, 21, 42], our method exhibits superior efficiency while attaining very low error rates with a significantly reduced sample size, which scales polynomially in \(\sigma^{-1}\) and exponentially in \(d\). Extensive simulations are performed to validate our theoretical results. Our findings are relevant to various fields involving high-dimensional data in statistics and machine learning. Furthermore, our method opens up new avenues for existing non-Euclidean statistical methods in the sense that it has the potential to unify them to analyze data on manifolds in the ambience space domain.
\({}^{\dagger}\)ZY thanks Professor Shing-Tung Yau for his intellectual comments and the support from the Center of Mathematical Sciences and Applications (CMSA) at Harvard University. ZY thanks Professor Charles Fefferman for his helpful discussions. Part of the work has been done during the Harvard Conference on Geometry and Statistics, supported by CMSA during Feb 27-March 1, 2023.
_Keywords and phrases:_ Manifold fitting, Convergence, Hausdorff distance, Reach.
To address these problems, various mathematical approaches have been proposed (see [13, 14, 15, 17, 16]). However, many of these methods rely on restrictive assumptions, making it challenging to implement them as efficient algorithms. As the manifold hypothesis continues to be a foundational element in statistical research, the Geometric Whitney Problems, particularly Problem I, merit further exploration and discussion within the statistical community.
The manifold hypothesis posits that high-dimensional data typically lie close to a low-dimensional manifold. The genesis of the manifold hypothesis stems from the observation that numerous physical systems possess a limited number of underlying variables that determine their behavior, even when they display intricate and diverse phenomena in high-dimensional spaces. For instance, while the motion of a body can be expressed as high-dimensional signals, the actual motion signals comprise a low-dimensional manifold, as they are generated by a small number of joint angles and muscle activations. Analogous phenomena arise in diverse areas, such as speech signals, face images, climate models, and fluid turbulence. The manifold hypothesis is thus essential for efficient and accurate high-dimensional data analysis in fields such as computer vision, speech analysis, and medical diagnosis.
In early statistics, one common approach for approximating high-dimensional data was to use a lower-dimensional linear subspace. One widely used technique for identifying the linear subspace of high-dimensional data is Principal Component Analysis (PCA). Specifically, PCA involves computing the eigenvectors of the sample covariance matrix and then employing these eigenvectors to map the data points onto a lower-dimensional space. One of the principal advantages of methods like this is that they can yield a simplified representation of the data, facilitating visualization and analysis. Nevertheless, linear subspaces can only capture linear relationships in the data and may fail to represent non-linear patterns accurately. To address these limitations, it is often necessary to employ more advanced manifold-learning techniques that can better capture non-linear relationships and preserve key information in the data. These algorithms can be grouped into three categories based on their purpose: manifold embedding, manifold denoising, and manifold fitting. The key distinction between them is depicted in Figure 1.
_Manifold embedding_, a technique that aims to find a low-dimensional representation of high-dimensional data sets sampled near an unknown low-dimensional manifold, has gained significant attention and contributed to the development of dimensionality reduction, visualization, and clustering techniques since the beginning of the 21st century. This technique seeks to preserve the distances between points on the manifold. Thus the Euclidean distance between each pair of low-dimensional points is similar to the manifold distance between the corresponding high-dimensional points. Manifold embedding tries to learn a set of points in a low-dimensional space with a similar local or global geometric structure to the manifold data. The resulting low-dimensional representation usually has better aggregation and
Figure 1: Illustrations for (a) manifold embedding, (b) manifold denoising, and (c) manifold fitting.
clearer demarcation between classes. Many scholars have performed various types of research on manifold-embedding algorithms, such as Isometric Mapping ([38]), Locally Linear Embedding ([36, 8]), Laplacian Eigenmaps ([1]), Local Tangent Space Alignment ([44]), and Uniform Manifold Approximation Map ([31]). Although these algorithms achieve useful representations of real-world data, few of them provide theoretical guarantees. Furthermore, these algorithms typically do not consider the geometry of the original manifold or provide any illustration of the smoothness of the embedding points.
_Manifold denoising_ aims to address outliers in data sets distributed along a low-dimensional manifold. Because of disturbances during collection, storage, and transportation, real-world manifold-distributed data often contain noise. Manifold denoising methods are designed to reduce the effect of noise and produce a new set of points closer to the underlying manifold. There are two main approaches to achieving this: feature-based and expectation-based methods. Feature-based methods extract features using techniques such as wavelet transformation ([7, 41]) or neural networks ([30]) and then drop non-informative features to recover denoised points via inverse transformations. However, such methods are typically validated only through simulation studies, lacking theoretical analysis. On the other hand, expectation-based methods can achieve manifold denoising by shifting the local sample mean ([39]) or by fitting a local mean function ([37]). However, these methods lack a solid theoretical basis or require overly restrictive assumptions.
_Manifold fitting_ is a crucial and challenging problem in manifold learning. It aims to reconstruct a smooth manifold that closely approximates the geometry and topology of a hidden low-dimensional manifold, using only a data set that lies on or near it. Unlike manifold embedding or denoising, manifold fitting strongly emphasizes the local and global properties of the approximation. It seeks to ensure that the generated manifold's geometry, particularly its curvature and smoothness, is precise. The application of manifold fitting can significantly enhance data analysis by providing a deeper understanding of data geometry. A key benefit of manifold fitting is its ability to uncover the shape of the hidden manifold by projecting the samples onto the learned manifold. For example, when reproducing the three-dimensional structure of a given protein molecule, the molecule must be photoed from different angles several times via cryo-electron microscopy (cryo-EM). Although the orientation of the molecule is equivalent to the Lie group \(SO(3)\), the cryo-EM images are often buried by a high-dimensional noise because of the scale of the pixels. Manifold fitting helps recover the underlying low-dimensional Lie group of protein-molecule images and infer the structure of the protein from it. In a similar manner, manifold fitting can also be used for light detection and ranging ([25]), as well as wind-direction detection ([6]). In addition, manifold fitting can generate manifold-valued data with a specific distribution. This capability is potentially useful in generative machine-learning models, such as Generative Adversarial Network (GAN, [22]).
### Main Contribution
The main objective of this paper is to address the problem of manifold fitting by developing a smooth manifold estimator based on a set of noisy observations in the ambient space. Our goal is to achieve a state-of-the-art geometric error bound while preserving the geometric properties of the manifold. To this end, we employ the Hausdorff distance to measure the estimation error and reach to quantify the smoothness of manifolds. Further details and definitions of these concepts are provided in Section 2.1.
Specifically, we consider a random vector \(Y\in\mathbb{R}^{D}\) that can be expressed as
\[Y=X+\xi, \tag{1}\]
where \(X\in\mathbb{R}^{D}\) is an unobserved random vector following a distribution \(\omega\) supported on the latent manifold \(\mathcal{M}\), and \(\xi\sim\phi_{\sigma}\) represents the ambient-space observation noise, independent
of \(X\), with a standard deviation \(\sigma\). The distribution of \(Y\) can be viewed as the convolution of \(\omega\) and \(\phi_{\sigma}\), whose density at point \(y\) can be expressed as
\[\nu(y)=\int_{\mathcal{M}}\phi_{\sigma}(y-x)\omega(x)dx. \tag{2}\]
Assume \(\mathcal{Y}=\{y_{i}\}_{i=1}^{N}\subset\mathbb{R}^{D}\) is the collection of observed data points, also in the form of
\[y_{i}=x_{i}+\xi_{i},\quad\text{ for }i=1,\cdots,N, \tag{3}\]
with \((y_{i},x_{i},\xi_{i})\) being \(N\) independent and identical realizations of \((Y,X,\xi)\). Based on \(\mathcal{Y}\), we construct an estimator \(\widehat{\mathcal{M}}\) for \(\mathcal{M}\) and provide theoretical justification for it under the following main assumptions:
* The latent manifold \(\mathcal{M}\) is a compact and twice-differentiable \(d\)-dimensional sub-manifold, embedded in the ambient space \(\mathbb{R}^{D}\). Its volume with respect to the \(d\)-dimensional Hausdorff measure is upper bounded by \(V\), and its reach is lower bounded by a fixed constant \(\tau\).
* The distribution \(\omega\) is a uniform distribution, with respect to the \(d\)-dimensional Hausdorff measure, on \(\mathcal{M}\).
* The noise distribution \(\phi_{\sigma}\) is a Gaussian distribution supported on \(\mathbb{R}^{D}\) with density function \[\phi_{\sigma}(\xi)=(\frac{1}{2\pi\sigma^{2}})^{\frac{D}{2}}\exp{(-\frac{\|\xi \|_{2}^{2}}{2\sigma^{2}})}.\]
* The intrinsic dimension \(d\) and noise standard deviation \(\sigma\) are known.
In general, \(\widehat{\mathcal{M}}\) is constructed by estimating the projection of points. For a point \(y\) in the domain \(\Gamma=\{y:d(y,\mathcal{M})\leq C\sigma\}\), we estimate its projection on \(\mathcal{M}\) in a two-step manner: determining the direction and moving \(y\) in that direction. The estimation has both theoretical and algorithmic contributions. From the theoretical perspective:
* On the population level, given the observation distribution \(\nu\) and the domain \(\Gamma\), we are able to obtain a smoothly bordered set \(\mathcal{S}\in\mathbb{R}^{D}\) such that the Hausdorff distance satisfies \[d_{H}(\mathcal{S},\mathcal{M})<c\sigma^{2}\mathrm{log}(1/\sigma).\]
* On the sample level, given a sample set \(\mathcal{Y}\), with sample size \(N=\mathcal{O}(\sigma^{-(d+3)})\) and \(\sigma\) being sufficiently small, we are able to obtain an estimator \(\widehat{\mathcal{M}}\) as a smooth \(d\)-dimensional manifold such that
* For any point \(y\in\widehat{\mathcal{M}}\), \(d(y,\mathcal{M})\) is less than \(C\sigma^{2}\mathrm{log}(1/\sigma)\);
* For any point \(x\in\mathcal{M}\), \(d(x,\widehat{\mathcal{M}})\) is less than \(C\sigma^{2}\mathrm{log}(1/\sigma)\);
* For any two points \(y_{1}\), \(y_{2}\), we have \(\|y_{1}-y_{2}\|_{2}^{2}/d(y_{2},T_{y_{1}}\widehat{\mathcal{M}})\geq c\sigma\tau\), with probability \(1-C_{1}\exp(-C_{2}\sigma^{-c_{1}})\), for some positive constant \(c\), \(c_{1}\), \(C\), \(C_{1}\), and \(C_{2}\).
In summary, given a set of observed samples, we can provide a smooth \(d\)-dimension manifold \(\widehat{\mathcal{M}}\) which is higher-order closer to \(\mathcal{M}\) than \(\mathcal{Y}\). Meanwhile, the approximate reach of \(\widehat{\mathcal{M}}\) is no less than \(c\sigma\tau\).
In addition to its theoretical contributions, our method has practical benefits for some applications. This paper diverges from previous literature in its motivation, as other works often define output manifolds through the roots or ridge set of a complicated mapping \(f\). In contrast, we estimate the orthogonal projection onto \(\mathcal{M}\) for each point near \(\mathcal{M}\). Compared with previous manifold-fitting methods, our framework offers three notable advantages:
* Our framework yields a definitive solution to the output manifold, which can be calculated in two simple steps without iteration. This results in greater efficiency than existing algorithms.
* Our method requires only noisy samples and does not need any information about the latent manifold, such as its dimension, thereby broadening the applicability of our framework.
* Our framework computes the approximate projection of an observed point onto the hidden manifold, providing a clear relationship between input and output. In comparison, previous algorithms used multiple iterative operations, making it difficult to understand the relationship between input samples and the corresponding outputs.
### Related Works
One main source of manifold fitting would be the Delaunay triangulation [26] from the 1980s. Given a sample set, a Delaunay triangulation is a meshing in which no samples are inside the circumcircle of any triangle in the triangulation. Based on this technique, the early manifold-fitting approaches [5, 2] consider dense samples without noise. In other words, the given data set constitutes \((\epsilon,\delta)\)-net of the hidden manifold. Both [5] and [2] generate a piecewise linear manifold by triangulation that is geometrically and topologically similar to the hidden manifold. However, the generated manifold is not smooth and the noise-free and densely distributed assumption of the given data prevents the algorithm from being widespread.
In recent years, manifold fitting has been more intensively studied and developed, the research including the accommodation of multiple types of noise and sample distributions, as well as the smoothness of the resulting manifolds. Genovese et al. have obtained a sequence of results from the perspective of minimax risk under Hausdorff distance ([19, 20]) with Le Cam's method. Their work starts from [19], where noisy sample points are also modeled as the summation of latent random variables from the hidden manifold and additive noise, but the noise term is assumed to be bounded and perpendicular to the manifold. The optimal minimax estimation rate is lower bounded by \(\mathcal{O}(N^{-2/(2+d)})\) with properly constructed extreme cases, and upper bounded by \(\mathcal{O}((\frac{\log N}{N})^{2/(2+d)})\) with a sieve maximum likelihood estimator (MLE). Hence, they conclude the rate is tight up to logarithmic factors, and the optimal rate of convergence is \(\mathcal{O}(N^{-2/(2+d)})\). This result is impressive since the rate only depends on the intrinsic dimension \(d\) instead of the ambient dimension \(D\). However, the noise assumption is not realistic, and the sieve MLE is not computationally tractable. Their subsequent work [20] considers the noiseless model, clutter noise model, and additive noise model. In the additive model, the noise assumption is relaxed to general Gaussian distributions. They view the distribution of samples as a convolution of a manifold-valued distribution and a distribution of noise in ambient space, and the fitting problem is treated as a deconvolution problem. They find a lower bound for the optimal estimation rate, \(\mathcal{O}(\frac{1}{\log N})\), with the same methodology in [19], and an upper bound as a polynomial of \(\frac{1}{\log N}\) with a standard deconvolution density estimator. Nevertheless, their output is not necessarily a manifold, and they claim that this method requires a known noise distribution, which is also unrealistic. Meanwhile, to guarantee a small minimax risk, the required sample size should be in exponential form, which is unsatisfactory.
Since a consistent estimation of the manifold requires a very large sample size, Genovese et al. avoid this difficulty by studying the ridge of the sample distribution as a proxy [21]. They begin by showing that the Hausdorff distance between the ridge of the kernel density estimator (KDE) and the ridge of the sample density is \(\mathcal{O}_{P}((\frac{\log N}{N})^{2/(D+8)})\), and then prove that the ridge of the sample density is \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\) in the Hausdorff distance with their model. Consequently, the ridge of the KDE density is shown to be an estimator with rate \(\mathcal{O}_{P}((\frac{\log N}{N})^{2/(D+8)})+\mathcal{O}(\sigma^{2}\log(1/ \sigma))\), and they adopt the mean-shift algorithm [34] to estimate it. In two similar works, [4, 32], ridge estimation is implemented by two other approaches with convergence guarantee. While these methods yield favorable results in terms of minimax risk, evaluating the smoothness of their estimators presents a challenge. Despite claims that some methods require only a small sample size, their complex algorithms may prove
impractical even for toy examples. Furthermore, the feasibility of the KDE-based algorithm in high-dimensional cases remains unverified. As noted by [9], kernel-based methodologies which fail to consider the intrinsic geometry of the domain may lead to sub-optimal outcomes, such as convergence rates that are dependent on the ambient dimensionality, \(D\), rather than the intrinsic dimensionality, \(d\). Although [10] introduce a local-covariance-based approach that transforms the global manifold reconstruction problem into a local Gaussian process regression problem, thereby facilitating interpolation of the estimated manifold between fitted data points, their resulting output estimator is still in the form of discrete point sets.
The manifold generated with the above methods may have a very small reach, resulting in small twists and turns that do not align with the local geometry of the hidden manifold. To address this, some new research has aimed to ensure a lower-bounded reach of the output manifold, such as [13], [42] and [16]. Together with [32], all four papers design smooth mappings to capture some spatial properties and depict the output manifold as its root set or ridge. Despite the different techniques used, all these papers provide estimators, which are close to \(\mathcal{M}\) and have a lower-bounded reach, with high probability. Their required sample size depends only on \(\sigma\) and \(d\), which is noteworthy and instructive. The main difference is that [32], [13], and [42] estimate the latent manifold with accuracy \(\mathcal{O}(\sigma)\), measured in terms of Hausdorff distance, while [16] achieves a higher approximation rate \(\mathcal{O}(\sigma^{2})\). However, the method in [16] requires more knowledge of the manifold, which conflicts with the noisy observation assumption, and the restriction of sample size and the immature algorithms for estimating the projection direction hinder the implementation of the idea. On the other hand, obtaining a manifold defined as the ridge or root set of a function requires additional numerical algorithms. These algorithms can be computationally expensive and affect the accuracy of the estimate. A detailed technical comparison of these approaches is provided in Section 1.3 for completeness.
### Detailed review of existing fitting algorithms
This subsection presents a review of the technical details of the previously mentioned work [32, 13, 42, 16]. These papers relax the
Figure 2: A toy example to illustrate the methodologies in [32, 13, 42, 16].
requirement for sample size by exploiting the geometric properties of the data points. For ease of understanding, we introduce some common geometric notations here, while more detailed notations can be found in Section 2.1. For a point \(x\in\mathcal{M}\), \(T_{x}\mathcal{M}\) denotes the tangent space of \(\mathcal{M}\) at \(x\), and \(\Pi_{x}^{\perp}\) is the orthogonal projection matrix onto the normal space of \(\mathcal{M}\) at \(x\). For a point \(y\) off \(\mathcal{M}\), \(y^{*}=\arg\min_{x\in\mathcal{M}}\|y-x\|_{2}\) denotes the projection of \(y\) on \(\mathcal{M}\), and \(\widehat{\Pi}_{y}^{\perp}\) is the estimator of \(\Pi_{y^{*}}^{\perp}\). For an arbitrary matrix \(A\), \(\Pi_{hi}(A)\) represents its projection on the span of the eigenvectors corresponding to the largest \(D-d\) eigenvalues. We use the notation \(\mathcal{B}_{D}(z,r)\) to denote a \(D\)-dimensional ball with center \(z\) and radius \(r\). To be consistent with the papers subsequently referred to, we frequently use upper- and lower-case letters (such as \(c\), \(c_{1}\), \(c_{2}\), \(C\), \(C_{1}\), and \(C_{2}\)) to represent absolute constants. The upper and lower cases represent constants greater or less than one, respectively, and their values may vary from line to line.
An early work without noise.One early work on manifold fitting is [32], which only focuses on the case of noiseless sample \(\mathcal{X}=\{x_{i}\in\mathcal{M}\}_{i=1}^{N}\). To reconstruct an \(\widehat{\mathcal{M}}\) with \(\mathcal{X}\), the authors construct a function \(f(y)\) to approximate the squared distance from an arbitrary point \(y\) to \(\mathcal{M}\), and the ridge set of \(f(y)\) is a proper estimator of \(\mathcal{M}\).
As stated in [32], \(f(y)\) can be estimated by performing local Principal Components Analysis (PCA). The procedure is shown in Fig. 2(a). For an arbitrary point \(y\) close to \(\mathcal{M}\), its \(r\) neighborhood index set is defined as
\[I_{y}=\{i:\|x_{i}-y\|_{2}\leq r\}.\]
For each \(i\in I_{y}\), \(\widehat{\Pi}_{x_{i}}^{\perp}\) can be obtained via local PCA, and the square distance between \(y\) and \(T_{x_{i}}\mathcal{M}\) is approximated by
\[f_{i}(y)=\|\widehat{\Pi}_{x_{i}}^{\perp}(y-x_{i})\|_{2}^{2}.\]
Then, \(f(y)\) is designed as the weighted average of \(f_{i}(y)\)'s; that is,
\[f(y)=\sum_{i\in I_{y}}\alpha_{i}(y)f_{i}(y),\]
with the weights defined as
\[\tilde{\alpha}_{i}(y)=\theta(\frac{\sqrt{f_{i}(y)}}{2r}),\quad\tilde{\alpha}( y)=\sum_{i\in I_{y}}\tilde{\alpha}_{i}(y),\quad\alpha_{i}(y)=\frac{\tilde{ \alpha}_{i}(y)}{\tilde{\alpha}(y)},\]
and \(\theta(t)\) is an indicator function such that \(\theta(t)=1\) for \(t\leq 1/4\) and \(\theta(t)=0\) for \(t\geq 1\).
The estimator \(\widehat{\mathcal{M}}\) is given as the ridge set of \(f(y)\); that is,
\[\widehat{\mathcal{M}}=\{y\in\mathbb{R}^{D}:\;d(y,\mathcal{M})\leq cr,\;\Pi_{ hi}(H_{f}(y))\partial f(y)=0\},\]
where \(H_{f}(y)\) is the Hessian matrix of \(f\) at point \(y\). Such an \(\widehat{\mathcal{M}}\) is claimed to have a reach bounded below by \(cr\) and \(\mathcal{O}(r^{2})\)-close to \(\mathcal{M}\) in terms of Hausdorff distance.
Although this paper does not consider the ambient space noise and relies heavily on a well-estimated projection direction \(\widehat{\Pi}_{x_{i}}^{\perp}\), the idea of approximating the distance function with projection matrices is desirable and provides a good direction for subsequent work.
An attempt with noise.In the follow-up work [13], noise from the ambient space is considered. Similar to [32], the main aim of [13] is to estimate the bias from an arbitrary point to the hidden manifold with local PCA. The collection of all zero-bias points can be interpreted as an estimator for \(\mathcal{M}\).
To construct the bias function \(f(y)\), the authors assume there is a sample set \(\mathcal{Y}_{0}=\{y_{i}\}_{i=1}^{N}\), with the sample size satisfying
\[N/\ln(N)>\frac{CV}{\omega_{min}\beta_{d}(r^{2}/\tau)^{d}},\quad N\leq e^{D},\]
where \(V\) is the volume of \(\mathcal{M}\), \(\beta_{d}\) is the volume of a Euclidean unit ball in \(\mathbb{R}^{d}\), and \(\omega_{min}\) is the lower bound of \(\omega\) on \(\mathcal{M}\). Under such conditions, \(\mathcal{Y}_{0}\) is \(Cr^{2}/\tau\)-close to \(\mathcal{M}\) in Hausdorff distance with probability \(1-N^{-C}\). Then, a subset \(\mathcal{Y}_{1}=\{p_{i}\}\subset\mathcal{Y}_{0}\) is selected greedily as a minimal \(cr/d\)-net of \(\mathcal{Y}_{0}\). For each \(p_{i}\in\mathcal{Y}_{1}\), there exists a \(D\)-dimensional ball \(U_{i}=\mathcal{B}_{D}(p_{i},r)\) and a \(d\)-dimensional ball \(D_{i}=\mathcal{B}_{d}(p_{i},r)\), where \(D_{i}\) can be viewed as a disc cut from \(U_{i}\). In the ideal case, \(D_{i}\) should be parallel to \(T_{p_{i}}\mathcal{M}\). Thus, the authors provide a new algorithm to estimate the basis of \(D_{i}\) with the sample points falling in \(U_{i}\). The basis of \(D_{i}\) leads to an estimator of \(\Pi_{p_{i}}^{\perp}\), which is denoted by \(\widehat{\Pi}_{p_{i}}^{\perp}\).
For \(y\) near \(\mathcal{M}\), let \(I_{y}=\{i:\|p_{i}-y\|_{2}\leq r\}\), and
\[f_{i}(y)=\widehat{\Pi}_{p_{i}}^{\perp}(y-p_{i}),\quad\text{for }i\in I_{y}.\]
Then, \(f(y)\) can be constructed as
\[f(y)=\sum_{i\in I_{y}}\alpha_{i}(y)(\widehat{\Pi}_{y}^{\perp}\widehat{\Pi}_{p _{i}}^{\perp})(y-p_{i}), \tag{1.4}\]
with \(\widehat{\Pi}_{y}^{\perp}=\Pi_{hi}(\sum_{i\in I_{y}}\alpha_{i}(y)\widehat{\Pi }_{p_{i}}^{\perp})\), and the weights defined as
\[\tilde{\alpha}_{i}(y)=(1-\frac{\|y-p_{i}\|_{2}^{2}}{r^{2}})^{d+2},\quad\tilde {\alpha}(y)=\sum_{i\in I_{y}}\tilde{\alpha}_{i}(y),\quad\alpha_{i}(y)=\frac{ \tilde{\alpha}_{i}(y)}{\tilde{\alpha}(y)},\]
for \(y\) satisfying \(\|y-p_{i}\|_{2}\leq r\) and \(0\) otherwise. Subsequently, there is
\[\widehat{\mathcal{M}}=\{y\in\mathbb{R}^{D}:\;d(y,\mathcal{M})\leq cr,\quad f (y)=0\}.\]
By setting \(r=\mathcal{O}(\sigma)\), the authors prove \(\widehat{\mathcal{M}}\) is \(\mathcal{O}(r^{2})=\mathcal{O}(\sigma)\)-close to \(\mathcal{M}\) and its reach is bounded below by \(c\tau\) with probability \(1-N^{-C}\). However, it is notable that the algorithm for disc-orientation estimation is not proved theoretically the paper, and the accuracy of \(f(y)\) is limited by the subsequent successive projections \(\widehat{\Pi}_{y}^{\perp}\widehat{\Pi}_{p_{i}}^{\perp}\) and the lack of accuracy in estimating \(\widehat{\Pi}_{y}^{\perp}\). Moreover, because of the limitation of the sample size \(N\), the estimation error of the manifold has a non-zero lower bound and the practical application is very limited.
A better estimation for noisy data.To address the issues in [13], the authors of [42] propose an improved method that avoids the continuous projections and estimates \(\Pi_{y^{*}}^{\perp}\) better. The authors claim that fitting the manifold is enough to estimate the projection direction and the local mean well, because the manifold can be viewed as a linear subspace locally, and the local sample mean is a good reference point for the hidden manifold. They assume there is a sample set \(\mathcal{Y}=\{y_{i}\}_{i=1}^{N}\). For each \(y_{i}\), \(\widehat{\Pi}_{y_{i}}^{\perp}\) is obtained by local PCA with \(r=\mathcal{O}(\sqrt{\sigma})\). Then, for an arbitrary point \(y\) with \(I_{y}=\{i:\|y_{i}-y\|_{2}\leq r\}\), the bias function can be constructed as
\[f(y)=\widehat{\Pi}_{y}^{\perp}(y-\sum_{i\in I_{y}}\alpha_{i}(y)y_{i}), \tag{1.5}\]
where \(\widehat{\Pi}_{y}^{\perp}=\Pi_{hi}(\sum_{i\in I_{y}}\alpha_{i}(y)\widehat{\Pi }_{y_{i}}^{\perp})\). The weights are defined as
\[\tilde{\alpha}_{i}(y)=(1-\frac{\|y-y_{i}\|_{2}^{2}}{r^{2}})^{\beta},\quad \tilde{\alpha}(y)=\sum_{i\in I_{y}}\tilde{\alpha}_{i}(y),\quad\alpha_{i}(y)= \frac{\tilde{\alpha}_{i}(y)}{\tilde{\alpha}(y)},\]
for \(y\) satisfying \(\|y-y_{i}\|\leq r\) and \(0\) otherwise, with \(\beta\geq 2\) being a fixed integer which guarantees \(f(y)\) to be derivable in the second order. With such a bias function, the output manifold can be given as
\[\widehat{\mathcal{M}}=\{y\in\mathbb{R}^{D}:\;d(y,\mathcal{M})\leq cr,\quad f(y )=0\},\]
which is shown to be \(\mathcal{O}(\sigma)\)-close to \(\mathcal{M}\) in Hausdorff distance and have a reach no less than \(c\tau\) with probability \(1-c\exp(-Cr^{d+2}N)\). Although the theoretical error bound in [42] remains the same as that in [13], the method in [42] vastly simplifies the computational process and outperforms the previous works numerically in many cases.
The necessity of noise reduction and an attempt.Based on the result mentioned above, the error in the manifold fitting can be attributed to two components: sampling bias and learning error, namely
\[d_{H}(\mathcal{M},\widehat{\mathcal{M}})\leq d_{H}(\mathcal{M},\mathcal{Y})+d _{H}(\mathcal{Y},\widehat{\mathcal{M}}),\]
where \(\mathcal{Y}\) is the generic sample set. Usually, the first term can be regarded as \(\mathcal{O}(\sigma)\), as the Gaussian noise will die out within several \(\sigma\), and the second term is bounded by \(Cr^{2}\) with the PCA-based algorithms listed above. The optimal radius of local PCA, which balances the overall estimation error and the computation complexity, should be \(r=\mathcal{O}(\sqrt{\sigma})\), and leads to a fitting error such that
\[d_{H}(\mathcal{M},\widehat{\mathcal{M}})\leq C\sigma.\]
Since the sampling bias \(d_{H}(\mathcal{Y},\mathcal{M})=\mathcal{O}(\sigma)\) prevents us from moving closer to \(\mathcal{M}\), denoising is necessary for a better \(\widehat{\mathcal{M}}\).
On the basis of [13], the same group of authors provides better results in [16] with refined points and a net. They refine the points by constructing a mesh grid on each disc \(D_{i}\). As illustrated in Figure 2(d), each hyper-cylinder of the mesh is much longer in the direction perpendicular to the manifold than parallel. Subsequently, in each hyper-cylinder, a subset of \(\mathcal{Y}_{0}\) is selected with a complicated design, and their average is denoted by \(e_{y}\). The collection of such \(e_{y}\) in all hyper-cylinders is denoted by \(\mathcal{Y}_{1}\), which is shown to be \(Cd\sigma^{2}/\tau\)-close to \(\mathcal{M}\).
The authors take \(\mathcal{Y}_{1}\) as the input data set of [13] to perform subsampling and construct a new group of discs \(\{D^{\prime}_{i}\}\). With the refined points in \(\mathcal{Y}_{1}\) and refined discs \(\{D^{\prime}_{i}\}\), the same function \(f(y)\) will lead to an \(\widehat{\mathcal{M}}\) which is \(\mathcal{O}(\sigma^{2})\)-close to \(\mathcal{M}\) and has a reach no less than \(c\tau\) with probability \(1-N^{-C}\).
To the best of our knowledge, the result presented in [16] constitutes a state-of-the-art error bound for manifold fitting. However, some challenges exist in implementing the method described in that paper:
* The refinement step for \(e_{y}\) involves sampling directly from the latent manifold, which contradicts the initial assumption of noisy data.
* The algorithms for refining points and determining the orientation of discs are only briefly described and may not be directly applied to real-world data.
* The sample-size requirement is similar to that described in [13], further limiting the practical implementation of the algorithm.
### Organization
This paper is organized as follows. Section 2 presents the model settings, assumptions, preliminary results, and mathematical preliminaries. Section 3 introduces a novel contraction direction-estimation method. The workflow and theoretical results of our local contraction methods are included in Section 4, and the output manifold is analyzed in Section 5. Numerical studies are presented in Section 6, to demonstrate the effectiveness of our approach. Finally, Section 7 provides a summary of the key findings and conclusions of our study, as well as several directions for future research.
## 2 Proposed method
In this section, we present some necessary notations and fundamental concepts, then formally state our primary result regarding the fitting of a manifold. Finally, we introduce several lemmas and propositions crucial for further elaboration.
### Notations and important concepts
Throughout this paper, we use both upper- and lower-case \(C\) to represent absolute constants. The distinction between upper and lower-case letters represents the magnitude of the constants, with the former being greater than one and the latter being less than one. The values of these constants may vary from line to line. In our notation, \(x\) represents a point on the latent manifold \(\mathcal{M}\), \(y\) represents a point related to the distribution \(\nu\), and \(z\) represents an arbitrary point in the ambient space. The symbol \(r\) is used to denote the radius in some instances. Capitalized math calligraphy letters, such as \(\mathcal{M}\), \(\mathcal{Y}\), and \(\mathcal{B}_{D}(z,r)\), represent concepts related to sets. This last symbol denotes a \(D\)-dimensional Euclidean ball with center \(z\) and radius \(r\).
The distance between a point \(a\) and a set \(\mathcal{A}\) is represented as \(d(a,\mathcal{A})=\min_{a^{\prime}\in\mathcal{A}}\|a-a^{\prime}\|_{2}\), where \(\|\cdot\|_{2}\) is the Euclidean distance. To measure the distance between two sets, we adopt the _Hausdorff distance_, a commonly used metric in evaluating the accuracy of estimators. This distance will be used to measure the distance between the latent manifold \(\mathcal{M}\) and its estimate \(\widehat{\mathcal{M}}\) throughout this paper. Formally, this metric can be defined as follows:
**Definition 2.1** (Hausdorff distance): _Let \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) be two non-empty subsets of \(\mathbb{R}^{D}\). We define their Hausdorff distance \(d_{H}(\mathcal{A}_{1},\mathcal{A}_{2})\) induced by Euclidean distance as_
\[d_{H}(\mathcal{A}_{1},\mathcal{A}_{2})=\max\{\sup_{a\in\mathcal{A}_{1}}\inf_{b \in\mathcal{A}_{2}}\|a-b\|_{2},\ \sup_{b\in\mathcal{A}_{2}}\inf_{a\in\mathcal{A}_{1}}\|a-b\|_{2}\}.\]
**Remark**: _For any \(\mathcal{A}_{1},\ \mathcal{A}_{2}\subset\mathbb{R}^{D}\), \(d_{H}(\mathcal{A}_{1},\mathcal{A}_{2})<\epsilon\) is equivalent to the fact that, for \(\forall a\in\mathcal{A}_{1}\) and \(\forall b\in\mathcal{A}_{2}\),_
\[d(a,\mathcal{A}_{2})<\epsilon\text{ and }d(b,\mathcal{A}_{1})<\epsilon.\]
In the context of geometry, the Hausdorff distance provides a measure of the proximity between two manifolds. It is commonly acknowledged that a small Hausdorff distance implies a high level of alignment between the two manifolds, with controlled discrepancies.
We also require some basic geometrical concepts related to manifolds, more of which can be found in the supplementary material. Given a point \(x\) in the manifold \(\mathcal{M}\), the tangent space at \(x\), denoted by \(T_{x}\mathcal{M}\), is a \(d\)-dimensional affine space containing all the vectors tangent to \(\mathcal{M}\) at \(x\). To facilitate our analysis, we introduce the projection matrices \(\Pi_{x}^{-}\) and \(\Pi_{x}^{\perp}\), which project any vector \(v\in\mathbb{R}^{D}\) onto the tangent space \(T_{x}\mathcal{M}\) and its normal space, respectively. These two projection matrices are closely related as \(\Pi_{x}^{\perp}=I_{D}-\Pi_{x}^{-}\), where \(I_{D}\) is the identity mapping of \(\mathbb{R}^{D}\). Furthermore, given an arbitrary point \(z\) not in \(\mathcal{M}\), its projection onto the manifold is defined as \(z^{*}=\arg\min_{x\in\mathcal{M}}\|x-z\|_{2}\), and we use \(\widehat{\Pi}_{z}^{\perp}\) and \(\widehat{\Pi}_{z}^{-}\) as estimators for \(\Pi_{z^{*}}^{\perp}\) and \(\Pi_{z^{*}}^{-}\), respectively.
The concept of _Reach_, first introduced by Federer [11], plays a crucial role in measuring the regularity of manifolds embedded in Euclidean space. Reach has proven to be a valuable tool in various applications, including signal processing and machine learning, making it an indispensable concept in the study of manifold models. It can be defined as follows:
**Definition 2.2** (Reach): _Let \(\mathcal{A}\) be a closed subset of \(\mathbb{R}^{D}\). The reach of \(\mathcal{A}\), denoted by \(\operatorname{reach}(\mathcal{A})\), is the largest number \(\tau\) to have the following property: any point at a distance less than \(\tau\) from \(\mathcal{A}\) has a unique nearest point in \(\mathcal{A}\)._
Remark._The value of \({\rm reach}(\mathcal{M})\) can be interpreted as a second-order differential quantity if \(\mathcal{M}\) is treated as a function. Namely, let \(\gamma\) be an arc-length parameterized geodesic of \(\mathcal{M}\); then, according to [33], \(\|\gamma^{\prime\prime}(t)\|_{2}\leq{\rm reach}(\mathcal{M})^{-1}\) for all \(t\)._
For example, the reach of a circle is its radius, and the reach of a linear subspace is infinite. Intuitively, a large reach implies that the manifold is locally close to the tangent space. This phenomenon can be explained by the following lemma in [11]:
**Lemma 2.3**.: _[Federer's reach condition] Let \(\mathcal{M}\) be an embedded sub-manifold of \(\mathbb{R}^{D}\) with reach \(\tau\). Then,_
\[\tau^{-1}=\sup\left\{\frac{2d(b,T_{a}\mathcal{M})}{\|a-b\|_{2}^{2}}|a,b\in \mathcal{M},\ a\neq b\right\}.\]
### Overview of the main results
As stated in the introduction, the fundamental objective of this paper is to develop an estimator \(\widehat{\mathcal{M}}\) for the latent manifold \(\mathcal{M}\), using the available sample set \(\mathcal{Y}\). To this end, we employ a two-step procedure for each \(y\in\Gamma=\{y:d(y,\mathcal{M})\leq C\sigma\}\), involving (i) identification of the contraction direction and (ii) estimation of the contracted point. It should be noted that contraction is distinct from projection, as the former entails movement in a singular direction in normal space.
Determining the contraction direction.To enhance the accuracy of our algorithm, we introduce a novel approach for estimating the direction of \(y^{*}-y\) for each \(y\), instead of estimating the basis of \(T_{y^{*}}\mathcal{M}\).
On the population level, consider a \(D\)-dimensional ball \(\mathcal{B}(y,r_{0})\) with \(r_{0}=C\sigma\). Let
\[\mu_{y}^{\mathbb{B}}=\mathbb{E}_{Y\sim\nu}(Y|Y\in\mathcal{B}_{D}(y,r_{0})).\]
\(\mu_{y}^{\mathbb{B}}-y\) estimates the direction of \(y^{*}-y\) with an error upper bounded by \(C\sigma\sqrt{\log(1/\sigma)}\) (Theorem 3.1). To make the estimation continuous with respect to \(\mathcal{Y}\) and \(y\), we let
\[F(y)=\sum\alpha_{i}(y)y_{i},\]
with the weights \(\alpha_{i}\)'s given in Section 3.2. When the total sample size \(N=C_{1}r_{0}^{-d}\sigma^{-3}\), \(F(y)-y\) estimates the direction of \(y^{*}-y\) with an error upper bounded by \(C\sigma\sqrt{\log(1/\sigma)}\) with probability no less than \(1-C_{1}\exp(-C_{2}\sigma^{-c})\) (Theorem 3.5).
Estimating the contracted point.The estimation of the projection points is discussed in three distinct scenarios in Section 4, the most notable of which is using \(F(y)\) to estimate the contraction direction.
Let \(\widehat{U}\) be the projection matrix onto the direction of \(\mu_{y}^{\mathbb{B}}-y\). Consider a cylinder region
\[\mathbb{V}_{y}=\mathcal{B}_{D-1}(y,r_{1})\times\mathcal{B}_{1}(y,r_{2}),\]
where the second ball is an open interval in the direction of \(\mu_{y}^{\mathbb{B}}-y\), and the first ball is in the complement of it in \(\mathbb{R}^{D}\), with \(r_{1}=c\sigma\) and \(r_{2}=C\sigma\sqrt{\log(1/\sigma)}\). On the population level, let the contracted version of \(y\) be denoted by
\[\mu_{y}^{\mathbb{V}}=y+\widetilde{U}\mathbb{E}_{Y\sim\nu}\left(Y-y|Y\in \mathbb{V}_{y}\right);\]
then, \(\|\mu_{y}^{\mathbb{V}}-y^{*}\|_{2}\leq C\sigma^{2}\sqrt{\log(1/\sigma)}\) (Theorem 4.4). For the sake of continuity, we let
\[\widehat{U}=\frac{(F(y)-y)(F(y)-y)^{T}}{\|(F(y)-y)\|_{2}^{2}},\]
and construct another smooth map
\[G(y)=\sum\beta_{i}(y)y_{i},\]
where the weights \(\beta_{i}\)'s are related to \(\widehat{U}\); their definition can be found in Section 4.3. Then, the distance between \(G(y)\) and \(y^{*}\) is upper bounded by \(C\sigma^{2}{\log(1/\sigma)}\) with probability at least \(1-C_{1}\exp(-C_{2}\sigma^{-c})\) (Theorem 4.5).
_Constructing the manifold estimator._ In Section 5, we propose a variety of methods to construct the manifold estimator for various scenarios. We begin by considering the case where the distribution \(\nu\) is known, and demonstrate that the set
\[\mathcal{S}=\{\mu_{y}^{\mathbb{V}}:y\in\Gamma\}\]
has a Hausdorff distance of \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\) to \(\mathcal{M}\) (Theorem 5.1).
Next, we use the sample set \(\mathcal{Y}\) to obtain an estimated version,
\[\widehat{\mathcal{S}}=\{G(y):y\in\Gamma\},\]
which has an approximate Hausdorff distance to \(\mathcal{M}\) in the order of \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\) with high probability (Theorem 5.2).
Finally, we consider the scenario in which there exists a \(d\)-dimensional preliminary estimation \(\widetilde{\mathcal{M}}\) that is \(\mathcal{O}(\sigma)\) close to \(\mathcal{M}\). In this case, we show that, with high probability, \(G(\widetilde{\mathcal{M}})\) is a \(d\)-dimensional manifold having an approximate Hausdorff error of \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\) and a reach no less than \(c\sigma{\rm reach}(\widetilde{\mathcal{M}})\) (Theorem 5.4).
### Lemmas and propositions.
In this subsection, we present some propositions and lemmas for reference. Their proofs are omitted from the main content and can be found in the supplementary material.
A notable phenomenon when analyzing the distribution in the vicinity of the manifold is the prevalence of quantities contingent upon \(d\) rather than \(D\). This phenomenon is particularly evident in the subsequent lemma and its corollary.
**Lemma 2.4**.: _For any arbitrary point \(z\) such that its neighborhood \(\mathcal{B}_{D}(z,r)\cap\mathcal{M}\neq\varnothing\) with \(r=C\sqrt{2d}\sigma\sqrt{\log(1/\sigma)}\), the probability that \(Y\sim\nu\) falls in \(\mathcal{B}_{D}(z,r)\) is_
\[\mathbb{P}(Y\in\mathcal{B}_{D}(z,r))=cr^{d}\]
_for some small constant \(c\)._
**Corollary 2.4.1**.: _Let \(n\) be the number of observed points that fall in \(\mathcal{B}_{D}(z,r)\). Assume the total sample size is \(N=CD\sigma^{-3}r^{-d}\). Then,_
\[\mathbb{P}(C_{1}D\sigma^{-3}\leq n\leq C_{2}D\sigma^{-3})\geq 1-2\exp\left(-C_{ 3}\sigma^{-3}\right),\]
_for some constant \(C_{1}\), \(C_{2}\), and \(C_{3}\)._
Since the Gaussian distribution can be approximated to vanish within a few standard deviations (\(\sigma\)), adopting a radius that is marginally larger than \(\sigma\) can result in polynomial benefits for local estimation. For instance, when computing the conditional expectation within a ball near the origin, we have the following proposition:
**Proposition 2.5**.: _Let \(\xi\) be a \(D\)-dimensional normal random vector with mean \(0\) and covariance matrix \(\sigma^{2}I_{D}\). Assume there is a \(D\)-dimensional ball \(\mathcal{B}_{D}(z,r)\) centered at point \(z\) with radius \(r=C_{1}\sigma\sqrt{\log(1/\sigma)}\), and \(\|z\|_{2}=C_{2}\sigma\). Then, the truncated version of \(\xi\) satisfies_
\[\|\mathbb{E}(\xi|\xi\in\mathcal{B}_{D}(z,r))\|_{2}\leq C_{3}\sigma^{2},\]
_for some constants \(C_{1}\), \(C_{2}\), and \(C_{3}\)._
Analogously, it is sufficient to focus on a subset of \(\mathcal{M}\) when studying certain local structures. For instance, in analyzing the conditional moments of \(\nu\) within a \(D\)-dimensional ball \(\mathcal{B}_{D}(z,r)\), the submanifold \(\mathcal{M}_{R}=\mathcal{M}\cap\mathcal{B}_{D}(z,R)\) with \(R\gg r\) exerts a significant influence. By incorporating \(\mathcal{M}_{R}\), \(\nu\) can be approximated with
\[\nu_{R}(y)=\int_{\mathcal{M}_{R}}\phi_{\sigma}(y-x)\omega(x)dx.\]
If we normalize them within \(\mathcal{B}_{D}(z,r)\), the two densities \(\tilde{\nu}\) and \(\tilde{\nu}_{R}\) should be close, and it is sufficient to work with \(\tilde{\nu}_{R}(y)\) directly. These can be summarized as the following lemma:
**Lemma 2.6**.: _Let \(\tilde{\nu}(y)\) be the conditional density function within \(\mathcal{B}_{D}(z,r)\), and \(\tilde{\nu}_{R}(y)\) be its estimator based on \(\mathcal{M}_{R}\). By setting_
\[R=r+C_{1}\sigma\sqrt{(d+\eta)\log(1/\sigma)},\]
_we have_
\[|\tilde{\nu}(y)-\tilde{\nu}_{R}(y)|\leq C_{2}\sigma^{\eta}\tilde{\nu}_{R}(y) \tag{2.1}\]
_for some constant \(C_{1}\) and \(C_{2}\)._
**Corollary 2.6.1**.: _If (2.1) holds for all \(y\in\mathcal{B}_{D}(z,r)\), we have_
\[\|\mathbb{E}_{\tilde{\nu}}Y-\mathbb{E}_{\tilde{\nu}_{R}}Y\|_{2}\leq C\sigma^{ \eta}\int_{\mathcal{B}_{D}(z,r)}\|y-z\|_{2}\tilde{\nu}_{R}(y)\,dy\leq Cr\sigma ^{\eta}.\]
## 3 Estimation of contraction direction
This section presents a novel method for estimating the contraction direction and provides an error bound. Our approach is underpinned by the fact that, in the denoising step, the goal is to "push" a point \(z\), which is within a distance of \(\Delta=C\sigma\) to \(\mathcal{M}\), toward its projection on \(\mathcal{M}\), i.e., \(z^{*}\). Therefore, it is sufficient to estimate the direction of \(z^{*}-z\) instead of estimating the entire basis of \(T_{z}\)-\(\mathcal{M}\). To determine this direction, we focus on a ball \(\mathcal{B}_{D}(z,r_{0})\) centered at \(z\) with radius \(r_{0}=C\sigma\) and provide population-level and sample-level estimators.
### Population level
Let the conditional expectation of \(\nu\) within the ball be \(\mu_{z}^{\mathbb{B}}\), namely
\[\mu_{z}^{\mathbb{B}}=\mathbb{E}_{Y\sim\nu}(Y|Y\in\mathcal{B}_{D}(z,r_{0})). \tag{3.1}\]
The accuracy of the vector \(\mu_{z}^{\mathbb{B}}-z\) in estimating the direction of \(z^{*}-z\) is reported in Theorem 3.1. The proof of this theorem is presented in the remainder of this subsection. This result demonstrates that the vector \(\mu_{z}^{\mathbb{B}}-z\) performs well in estimating the contraction direction, providing further support for its use in the denoising step. The proof of lemmas and propositions is omitted here and can be found in the supplementary material.
**Theorem 3.1**.: _For a point \(z\) such that \(d(z,\mathcal{M})=\mathcal{O}(\sigma)\), we can estimate the direction of \(z^{*}-z\) with_
\[\mu_{z}^{\mathbb{B}}-z=\mathbb{E}_{Y\sim\nu}(Y-z|Y\in\mathcal{B}_{D}(z,r_{0})).\]
_The estimation error can be bounded as_
\[\sin\{\Theta\left(\mu_{z}^{\mathbb{B}}-z,\;z^{*}-z\right)\}\leq C\sigma\sqrt{ \log(1/\sigma)}. \tag{3.2}\]
Without loss of generality, we assume that \(z^{*}\) is the origin, \(T_{z^{*}}\)-\(\mathcal{M}\) is the span of the first \(d\) Cartesian-coordinate directions, \(z^{*}-z\) is the \((d+1)\)-th direction, and the remaining directions constitute the complement in \(\mathbb{R}^{D}\). To prove Theorem 3.1, we first provide a sufficient statement for the error bound in (3.2):
**Proposition 3.2**.: _Let \(\mu_{z}^{\mathbb{B}}=(\mu^{(1)},\cdots,\mu^{(D)})\), to show (3.2) is sufficient to show_
\[\left\{\begin{array}{rl}|\Delta-\mu^{(i)}|&\geq c_{1}\sigma,& \mbox{for }i=d+1;\\ |\mu^{(i)}|&\leq c_{2}\sigma^{2}\sqrt{\log(1/\sigma)},&\mbox{for }i\neq d+1. \end{array}\right.\]
To prove Proposition 3.2, we employ a strategy of locally approximating the manifold to the whole and using discs to approximate the local neighborhood of the manifold. Specifically, we use a disc \(\mathbb{D}=T_{z^{*}}\mathcal{M}\cap\mathcal{B}(z,R)\) to approximate \(\mathcal{M}_{R}\) and generalize the result to the entire manifold. The final error bound is achieved by combining the following lemmas:
**Lemma 3.3**.: _Let \(\tilde{\nu}_{R}(y)\) be the conditional density function within \(\mathcal{B}_{D}(z,r_{0})\) induced by \(\mathcal{M}_{R}\), and \(\tilde{\nu}_{\mathbb{D}}(y)\) be its estimator with \(\mathbb{D}\). By setting \(R=r_{0}+C_{1}\sigma\sqrt{\log(1/\sigma)}\), we have_
\[|\tilde{\nu}_{R}(y)-\tilde{\nu}_{\mathbb{D}}(y)|\leq C_{2}\sigma\sqrt{\log(1/ \sigma)}\tilde{\nu}_{\mathbb{D}}(y)\]
_for some constant \(C_{1}\) and \(C_{2}\)._
**Lemma 3.4**.: _Let the conditional expectation of \(Y\sim\tilde{\nu}_{\mathbb{D}}\) within \(\mathcal{B}_{D}(z,r)\) be_
\[\mu_{z,\mathbb{D}}^{\mathbb{B}}=(\mu_{\mathbb{D}}^{(1)},\cdots,\mu_{\mathbb{ D}}^{(D)}).\]
_Then, there is_
\[\left\{\begin{array}{rl}|\Delta-\mu_{\mathbb{D}}^{(i)}|&\geq c \sigma&\mbox{for }i=d+1\\ |\mu_{\mathbb{D}}^{(i)}|&=0,&\mbox{for }i\neq d+1\end{array}\right.,\]
Figure 3: An illustration for estimating the contraction direction.
According to Lemma 2.6 and Lemma 3.3, by setting \(R=r_{0}+C_{1}\sigma\sqrt{\log(1/\sigma)}\), we have
\[|\tilde{\nu}_{R}(z)-\tilde{\nu}_{\mathbb{D}}(z)|\leq C_{2}\sigma\sqrt{\log(1/ \sigma)}\tilde{\nu}_{\mathbb{D}}(z),\]
and thus the conditional expectations within \(\mathcal{B}_{D}(z,r_{0})\) should also be close, namely
\[\|\mu_{z,\mathbb{D}}^{\mathbb{B}}-\mu_{z}^{\mathbb{B}}\|_{2}\leq C\sigma^{2} \sqrt{\log(1/\sigma)},\]
for some constant \(C\).
Therefore, together with Lemma 3.4, the statement in Proposition 3.2 is fulfilled, and hence the proof of Theorem 3.1 is completed.
### Estimation with finite sample
In practice, we typically have access to only the data point collection \(\mathcal{Y}\), which is sampled from the distribution \(\nu(y)\). To construct an estimator for \(\mu_{z}^{\mathbb{B}}\) as defined in (3.1), a natural approach is to use the local average, defined as
\[\tilde{\mu}_{z}^{\mathbb{B}}=\frac{1}{|I_{z}|}\sum_{i\in I_{z}}y_{i},\]
where \(I_{z}\) is the index of \(y_{i}\)'s that lie in \(\mathcal{Y}\cap\mathcal{B}_{D}(z,r_{0})\). Although \(\tilde{\mu}_{z}^{\mathbb{B}}\) converges to \(\mu_{z}^{\mathbb{B}}\) as the size of \(I_{z}\) goes to infinity, it is not a continuous mapping of \(y\) because of the discontinuity introduced by the change in the neighborhood. The discontinuity can adversely affect the smoothness of \(\widehat{\mathcal{M}}\). To address this issue, we need a smooth version of \(\tilde{\mu}_{z}^{\mathbb{B}}\).
Let the local weighted average at point \(z\) be
\[F(z)=\sum_{i}\alpha_{i}(z)y_{i}, \tag{3.3}\]
with the weights being defined as
\[\tilde{\alpha}_{i}(z)=\left\{\begin{aligned} &\left(1-\frac{\|z-y_{i}\|_{2}^{2}}{r_{0}^{ \mathbb{D}}}\right)^{k},\,\|z-y_{i}\|_{2}\leq r_{0};\quad\tilde{\alpha}(z)= \sum_{i\in I_{z}}\tilde{\alpha}_{i}(z),\quad\alpha_{i}(z)=\frac{\tilde{\alpha }_{i}(z)}{\tilde{\alpha}(z)},\\ & 0,\qquad\qquad\mathrm{otherwise};\end{aligned}\right. \tag{3.4}\]
with \(k>2\) being a fixed integer guaranteeing a twice-differentiable smoothness. Similar to \(\mu_{z}^{\mathbb{B}}-z\), the direction of \(F(z)-z\) approximates the direction \(z\) to \(z^{*}\) well:
**Theorem 3.5**: _If the sample size \(N=C_{1}\sigma^{-(d+3)}\), for a point \(z\) such that \(d(z,\mathcal{M})=\mathcal{O}(\sigma)\), \(F(z)\) as defined in (3.3) provides an estimation of the contraction direction, whose error can be bounded by_
\[\sin\{\Theta\left(F(z)-z,\;z^{*}-z\right)\}\leq C_{2}\sigma\sqrt{\log(1/ \sigma)},\]
_with probability at least \(1-C_{3}\exp(-C_{4}\sigma^{-c})\), for some constant \(c\), \(C_{1}\), \(C_{2}\), \(C_{3}\), and \(C_{4}\)._
## 4 Local contraction
This section presents the theoretical results of the local contraction process. Let \(z\) be a point within a distance of \(C\sigma\) to \(\mathcal{M}\), and let \(\mathbb{V}_{z}\) be a neighborhood of \(z\). The conditional expectation of \(\nu\) within \(\mathbb{V}_{z}\) can be viewed as a denoised version of \(z\), namely
\[\mathbb{E}_{Y\sim\nu}\left(Y|Y\in\mathbb{V}_{z}\right).\]
To minimize noise and avoid distortion by the manifold, \(\mathbb{V}_{z}\) should be narrow in the directions tangent to the manifold and broad in the direction perpendicular to it, like inserting a straw into a ball. Thus, determining the orientation of \(\mathbb{V}_{z}\) and the scale in two directions is crucial. In the following sub-sections, we analyze the population-level denoising result for three different orientation settings and provide a smooth estimator for the last case.
### Contraction with known projection direction
In the simplest scenario, we assume the direction of \(T_{z^{*}}\)-\(\mathcal{M}\), i.e., \(\Pi^{\perp}_{z^{*}}\), is known. Then, \(\mathbb{V}_{z}\) can be constructed as the Cartesian product of two balls. Specifically,
\[\mathbb{V}_{z} =\mathcal{B}_{d}(z,r_{1})\times\mathcal{B}_{D-d}(z,r_{2}) \tag{10}\] \[=\Pi^{-}_{z^{*}}\mathcal{B}_{D}(z,r_{1})\times\Pi^{\perp}_{z^{*} }\mathcal{B}_{D}(z,r_{2}),\]
where the first ball is \(d\)-dimensional, lying in \(\mathbb{R}^{d}=T_{z^{*}}\mathcal{M}\), while the second one is in the orthogonal complement of \(\mathbb{R}^{d}\) in \(\mathbb{R}^{D}\) with a radius \(r_{2}\gg r_{1}\). Let \(\mu^{\mathbb{V}}_{z}\) be the denoised point, calculated with the conditional expectation within \(\mathbb{V}_{z}\); precisely,
\[\mu^{\mathbb{V}}_{z}=z+\Pi^{\perp}_{z^{*}}\mathbb{E}_{Y\sim\nu}\left(Y-z|Y\in \mathbb{V}_{z}\right), \tag{11}\]
where \(Y\) is a random vector with density function \(\nu(y)\). The refined point \(\mu^{\mathbb{V}}_{z}\) is much closer to \(\mathcal{M}\). This result can be summarized as the following theorem:
**Theorem 4.1**: _Consider a point \(z\) such that \(d(z,\mathcal{M})<C\sigma\). Let its neighborhood \(\mathbb{V}_{z}\) be defined as (10) with radius_
\[r_{1}=c\sigma\quad\text{and}\quad r_{2}=C\sigma\sqrt{\log(1/\sigma)}.\]
_The refined point \(\mu^{\mathbb{V}}_{z}\) given by (11) satisfies_
\[d(\mu^{\mathbb{V}}_{z},\mathcal{M})\leq C\sigma^{2}\log(1/\sigma),\]
_for some constant \(C\)._
Recall that \(Y=X+\xi\) in our model setting, and \(\Pi^{-}_{z^{*}}\) is the orthogonal projection onto \(T_{z^{*}}\mathcal{M}\). If we analogously write \(z\) as
\[z=z^{*}+(z-z^{*}):=z^{*}+\delta_{z},\]
\(\mu^{\mathbb{V}}_{z}\) in (11) can be decomposed as
\[\mu^{\mathbb{V}}_{z} =z+\Pi^{\perp}_{z^{*}}\mathbb{E}_{Y\sim\nu}\left(Y-z|Y\in\mathbb{ V}_{z}\right) \tag{12}\] \[=z^{*}+\delta_{z}+\Pi^{\perp}_{z^{*}}\mathbb{E}_{\nu}\left((X+\xi )-(z^{*}+\delta_{z})|Y\in\mathbb{V}_{z}\right)\] \[=z^{*}+\Pi^{-}_{z^{*}}\delta_{z}+\mathbb{E}_{\nu}\left(\Pi^{\perp }_{z^{*}}(X-z^{*})|Y\in\mathbb{V}_{z}\right)+\mathbb{E}_{\nu}\left(\Pi^{\perp }_{z^{*}}\xi|Y\in\mathbb{V}_{z}\right).\]
With such an expression, \(\mu^{\mathbb{V}}_{z}-z^{*}\) can be decomposed into three terms. The next step is to show that the norms of these terms are upper bounded by \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\). According to Lemma 2.6, to get a bound in the order of \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\), we only need to consider a local part of \(\mathcal{M}\), i.e., \(\mathcal{M}_{R}\) with \(R=C\sigma\sqrt{\log(1/\sigma)}\), and thus it is safe to assume \(\|X-z\|_{2}\leq C\sigma\sqrt{\log(1/\sigma)}\) for some constant \(C\).
* \(\Pi^{-}_{z^{*}}\delta_{z}\): As \(\delta_{z}\perp T_{z}\mathcal{M}\), we have (13) \[\Pi^{-}_{z^{*}}\delta_{z}=0.\]
* \(\mathbb{E}_{\nu}\left(\Pi^{\perp}_{z^{*}}(X-z^{*})|Y\in\mathbb{V}_{z}\right)\): Since \(z^{*}\) and \(X\) are exactly on \(\mathcal{M}\), from Jensen's inequality and Lemma 2.3 we have \[\left\|\mathbb{E}_{\nu}\left(\Pi^{\perp}_{z^{*}}(X-z^{*})|Y\in \mathbb{V}_{z}\right)\right\|_{2} \leq\mathbb{E}_{\nu}\left(\left\|\Pi^{\perp}_{z^{*}}(X-z^{*}) \right\|_{2}|Y\in\mathbb{V}_{z}\right)\] \[\leq\frac{1}{2\tau}\mathbb{E}_{\nu}\left(\|X-z^{*}\|_{2}^{2}|Y\in \mathbb{V}_{z}\right),\]
where
\[\|X-z^{*}\|_{2}^{2} =\|X-z+z-z^{*}\|_{2}^{2}\] \[\leq\|X-z\|_{2}^{2}+\|z-z^{*}\|_{2}^{2}\] \[\leq C\sigma^{2}\log(1/\sigma).\]
Hence,
\[\left\|\mathbb{E}_{\nu}\left(\Pi_{z^{*}}^{\perp}(X-z^{*})|Y\in\mathbb{V}_{z} \right)\right\|_{2}\leq\frac{C}{\tau}\sigma^{2}\log(1/\sigma). \tag{4.5}\]
* \(\mathbb{E}_{\nu}\left(\Pi_{z^{*}}^{\perp}\xi|Y\in\mathbb{V}_{z}\right)\): Because
\[\mathbb{E}_{\nu}\left(\Pi_{z^{*}}^{\perp}\xi|Y\in\mathbb{V}_{z}\right)= \mathbb{E}_{\omega}\left(\mathbb{E}_{\phi}\left(\Pi_{z^{*}}^{\perp}\xi|X,\;X+ \xi\in\mathbb{V}_{z}\right)\right),\]
we evaluate the inner part \(\mathbb{E}_{\phi}\left(\Pi_{z^{*}}^{\perp}\xi|X,\;X+\xi\in\mathbb{V}_{z}\right)\) first. Assume the origin is transferred to \(X\) as illustrated in Fig. 4. Now,
\[\mathbb{V}_{z}=\mathcal{B}_{d}(\Pi_{z^{*}}^{-}(z-X),r_{1})\times\mathcal{B}_{ D-d}(\Pi_{z^{*}}^{\perp}(z-X),r_{2})\]
and there is a dislocation \(\Delta=\|\Pi_{z^{*}}^{\perp}(z-X)\|_{2}\) in \(\mathbb{R}^{D-d}\), which is bounded by
\[\Delta\leq\|\Pi_{z^{*}}^{\perp}(z-z^{*})\|_{2}+\|\Pi_{z^{*}}^{\perp}(z^{*}-X) \|_{2}\leq C\sigma.\]
Let \(\xi^{\prime}=\Pi_{z^{*}}^{\perp}\xi\); then, according to Proposition 2.5, we have
\[\left\|\mathbb{E}_{\phi}\left(\Pi_{z^{*}}^{\perp}\xi|X,\;X+\xi\in\mathbb{V}_{z }\right)\right\|_{2}=\|\mathbb{E}\left(\xi^{\prime}|\xi^{\prime}\in\mathcal{B} _{D-d}(a_{\Delta},r_{2})\right)\|_{2}\leq C\sigma^{2}, \tag{4.6}\]
where \(a_{\Delta}\) is the projection of \(z-X\) onto \(\mathbb{R}^{D-d}\).
Combining the above result in (4.4), (4.5), and (4.6), for any \(z\) such that \(d(z,\mathcal{M})<C\sigma\), the corresponding \(\mu_{z}^{\mathbb{V}}\) satisfies
\[\|\mu_{z}^{\mathbb{V}}-z^{*}\|_{2}\leq C\sigma^{2}\log(1/\sigma),\]
for some constant \(C\). Thus, the revised point \(\mu_{z}^{\mathbb{V}}\) is \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\)-close to \(\mathcal{M}\)
Figure 4: Illustration for the three parts of the error bound in (4.3). (a) \(\delta_{z}\), perpendicular to \(T_{z^{*}}\mathcal{M}\); (b) Projection of \(X-z^{*}\), in a higher order than the length of \(X-z^{*}\); (c, d) Projection of noise term, in two Cartesian-coordinate systems. A large area is canceled out because of symmetry.
### Contraction with estimated projection direction
Usually, the projection matrix is unknown, but it can be estimated via many statistical methods. Assume \(\widehat{\Pi}^{\perp}_{z^{*}}\) is an estimator for \(\Pi^{\perp}_{z^{*}}\), whose bias is
\[\left\|\widehat{\Pi}^{\perp}_{z^{*}}-\Pi^{\perp}_{z^{*}}\right\|_{F}\leq c\sigma ^{\kappa}. \tag{4.7}\]
Based on this estimation, a similar region \(\widehat{\mathbb{V}}_{z}\) can be defined as
\[\widehat{\mathbb{V}}_{z}=\mathcal{B}_{d}(z,r_{1})\times\mathcal{B}_{D-d}(z,r_ {2}), \tag{4.8}\]
where the first ball \(\mathcal{B}_{d}(z,r_{1})\) is in the span space of \(\widehat{\Pi}^{\perp}_{z^{*}}\) with a radius \(r_{1}=c\sigma\), and the second one is in the span space of \(\widehat{\Pi}^{\perp}_{z^{*}}\) with a radius \(r_{2}=C\sigma\sqrt{\log(1/\sigma)}\). Then, an estimated version of \(\mu^{\mathbb{V}}_{z}\) can be obtained:
\[\widehat{\mu}^{\mathbb{V}}_{z}=z+\widehat{\Pi}^{\perp}_{z^{*}}\mathbb{E}_{Y \sim\nu}\left(Y-z|Y\in\widehat{\mathbb{V}}_{z}\right), \tag{4.9}\]
which is still closer to \(\mathcal{M}\). The error bound can be summarized as the following theorem:
**Theorem 4.2**: _Consider a point \(z\) such that \(d(z,\mathcal{M})<C\sigma\). Let its neighborhood \(\widehat{\mathbb{V}}_{z}\) be defined as in (4.8), and the estimation error of \(\widehat{\Pi}^{\perp}_{z^{*}}\) be bounded as in (4.7). The refined point \(\widehat{\mu}^{\mathbb{V}}_{z}\) given by (4.9) satisfies_
\[d(\widehat{\mu}^{\mathbb{V}}_{z},\mathcal{M})\leq C\sigma^{1+\kappa}\sqrt{ \log(1/\sigma)},\]
_for some constant \(C\)._
Such an estimator \(\widehat{\Pi}^{\perp}_{z^{*}}\) can be obtained via classical dimension-reduction methods such as local PCA. Here we cite an error bound of local PCA estimators and implement the result of Theorem 4.2 in the following remark.
**Lemma 4.3** (Theorem 2.1 in [42]): _For a point \(z\) such that \(d(z,\mathcal{M})<C\sigma\), let \(\widehat{\Pi}^{\perp}_{z}\) be the estimator of \(\Pi^{\perp}_{z^{*}}\), obtained via local PCA with \(r=C\sqrt{\sigma}\). The difference between \(\widehat{\Pi}^{\perp}_{z}\) and \(\Pi^{\perp}_{z^{*}}\) is bounded by_
\[\|\widehat{\Pi}^{\perp}_{z}-\Pi^{\perp}_{z^{*}}\|_{F}\leq C\frac{r}{\tau}\]
_with high probability._
_With the PCA estimator \(\widehat{\Pi}^{\perp}_{z^{*}}\) mentioned above, the distance between \(\widehat{\mu}^{\mathbb{V}}_{z}\) and \(\mathcal{M}\) is bounded by_
\[d(\widehat{\mu}^{\mathbb{V}}_{z},\mathcal{M})\leq C\sigma^{3/2}\sqrt{\log(1/ \sigma)}\]
_with high probability._
### Contraction with estimated contraction direction
In the previous two cases, we attempted to move \(z\) closer to \(z^{*}\) in the direction of \(\Pi^{\perp}_{z^{*}}\). However, instead of estimating the entire projection matrix, finding an estimator in the main direction is sufficient and can be more accurate. Specifically, let the projection matrix onto \(z^{*}-z\) be
\[U=(z^{*}-z)(z^{*}-z)^{T}/\|z^{*}-z\|_{2}^{2},\]
and, according to the discussion in Section 3, there is one estimator
\[\widetilde{U}=(\mu^{\mathbb{B}}_{z}-z)(\mu^{\mathbb{B}}_{z}-z)^{T}/\|\mu^{ \mathbb{B}}_{z}-z\|_{2}^{2},\]
whose error bound of \(\widetilde{U}\) satisfies
\[\|\widetilde{U}-U\|_{F}\leq C\sigma\sqrt{\log(1/\sigma)}. \tag{4.10}\]
A narrow region can be analogously constructed based on \(\widetilde{U}\), namely
\[\widehat{\mathbb{V}}_{z}=\mathcal{B}_{D-1}(z,r_{1})\times\mathcal{B}_{1}(z,r_ {2}), \tag{4.11}\]
where the second ball is actually an interval in the direction of \(\widetilde{U}\) with \(r_{2}=C\sigma\sqrt{\log(1/\sigma)}\), and the first ball is in the span space of the complement of \(\widetilde{U}\) in \(\mathbb{R}^{D}\) with \(r_{1}=c\sigma\). Similarly, \(y\) can be refined by
\[\widehat{\mu}_{z}^{\mathbb{V}}=z+\widetilde{U}\mathbb{E}_{\nu}\left(Y-z|Y\in \widehat{\mathbb{V}}_{z}\right), \tag{4.12}\]
whose distance to \(\mathcal{M}\) can be bounded with the following theorem.
**Theorem 4.4**.: _Consider a point \(z\) such that \(d(z,\mathcal{M})<C\sigma\). Let its neighborhood \(\widehat{\mathbb{V}}_{z}\) be defined as in (4.11), and the estimation error of \(\widetilde{U}\) be bounded as in (4.10). The refined point \(\widehat{\mu}_{z}^{\mathbb{V}}\) given by (4.12) satisfies_
\[\|\widehat{\mu}_{z}^{\mathbb{V}}-z^{*}\|_{2}\leq C\sigma^{2}\log(1/\sigma)\]
_for some constant \(C\)._
For reasons similar to those discussed in Section 3.2, a smooth estimator constructed with finite samples is needed. Recall that the continuous estimator for \(U\) is
\[\widehat{U}=\frac{(F(z)-z)(F(z)-z)^{T}}{\|(F(z)-z)\|_{2}^{2}},\]
whose asymptomatic property is given in Theorem 3.5. For a data point \(y_{i}\), we define
\[u_{i}=\widehat{U}(y_{i}-z),\quad v_{i}=y_{i}-z-u_{i}, \tag{4.13}\]
which can be interpreted as the illustration in Fig. 5. Let the contracted point of \(z\) be
\[G(z)=\sum_{i}\beta_{i}(z)y_{i}, \tag{4.14}\]
Figure 5: Geometrical interpretation of \(u_{i}\) and \(v_{i}\) defined in (4.13): decomposing \(y_{i}-z\) into its components; \(u_{i}\) denotes the projection along \(F(z)-z\), and \(v_{i}\) represents the orthogonal component.
with the weights given by
\[w_{u}(u_{i}) = \left\{\begin{array}{cc}1,&\|u_{i}\|_{2}\leq\frac{r_{2}}{2}\\ \left(1-(\frac{2\|u_{i}\|_{2}-r_{2}}{r_{2}})^{2}\right)^{k},&\|u_{i}\|_{2}\in( \frac{r_{2}}{2},r_{2})\,\\ 0,&otherwise\end{array}\right. \tag{4.15}\] \[w_{v}(v_{i}) = \left\{\begin{array}{cc}\left(1-\frac{\|v_{i}\|_{2}^{2}}{r_{2}^ {2}}\right)^{k},&\|v_{i}\|_{2}\leq r_{1}\\ 0,&otherwise\end{array}\right.,\] \[\beta_{i}(z) = w_{u}(u_{i})w_{v}(v_{i}),\quad\tilde{\beta}(z)=\sum\tilde{\beta }_{i}(z),\quad\beta_{i}(z)=\frac{\tilde{\beta}_{i}(y)}{\tilde{\beta}(z)},\]
with \(k\geq 2\) being a fixed integer. It is clear that \(G\) is a \(C^{2}\)-continuous map from \(\mathbb{R}^{D}\) to \(\mathbb{R}^{D}\). The estimation accuracy of \(G(z)\) is summarized in the following theorem:
**Theorem 4.5**: _If the sample size \(N=C_{1}\sigma^{-(d+3)}\), for a point \(z\) such that \(d(z,\mathcal{M})=\mathcal{O}(\sigma)\), \(G(z)\), as defined in (4.14), provides an estimation of \(z^{*}\), whose error can be bounded by_
\[\|G(z)-z^{*}\|_{2}\leq C_{2}\sigma^{2}\mathrm{log}(1/\sigma)\]
_with probability at least \(1-C_{3}\exp(-C_{4}\sigma^{-c})\), for some constant \(c\), \(C_{1}\), \(C_{2}\), \(C_{3}\), and \(C_{4}\)._
## 5 Fit a smooth manifold
Up to this point, we have explicated the techniques for estimating the contraction direction and executing the contraction process for points proximal to \(\mathcal{M}\). In this section, we synthesize these two procedures to yield the ultimate smooth manifold estimator. The estimator is predicated upon a tubular neighborhood of \(\mathcal{M}\), denoted by \(\Gamma=\{y:d(y,\mathcal{M})\leq C\sigma\}\), and manifests in two distinct incarnations, corresponding to the population and sample levels.
On the population level, we assume the distribution \(\nu(y)\) is known, so that we can calculate all the expectations. As mentioned in the introduction, estimating \(\omega\) or \(\mathcal{M}\) with a known density function in the form of \(\nu=\omega*\phi_{\sigma}\) is closely related to the singular deconvolution problem discussed in [20]. In contrast to their approach, our method uses geometrical structures to generate an estimate in the form of an image set, yielding a similar error bound. Formally, we have:
**Theorem 5.1**: _Assume the density function \(\nu(y)\) and region of interest \(\Gamma\) are given. With the \(\widehat{\mu}_{y}^{\forall}\) defined in (4.12), we let_
\[\mathcal{S}=\{\widehat{\mu}_{y}^{\forall}:\;y\in\Gamma\}.\]
_Then, we have_
\[d_{H}(\mathcal{S},\mathcal{M})\leq C\sigma^{2}\log(1/\sigma)\]
_for some constant \(C\)._
When only the sample set \(\mathcal{Y}\) is available, the function \(G(y)\), as defined in (4.14), can be used as an estimator of \(\widehat{\mu}_{y}^{\forall}\). First, \(G(y)\) provides a good estimate of \(y^{*}\) with high probability. Additionally, by definition, \(G(\cdot)\) is a \(C^{2}\)-continuous mapping in \(\mathbb{R}^{D}\). Hence, similar to the population case, the image set of \(\Gamma\) under the mapping \(G\) also has a good approximation property. Moreover, because of the smoothness of both \(G\) and \(\Gamma\), the output we obtain is also a smooth manifold. Specifically, we have the following theorem:
**Theorem 5.2**.: _Assume the region \(\Gamma\) is given. With the \(G(y)\) defined in (4.14), we let_
\[\widehat{\mathcal{S}}=G(\Gamma)=\{G(y):\;y\in\Gamma\}. \tag{5.1}\]
_Then, \(\widehat{\mathcal{S}}\) is a smooth sub-manifold in \(\mathbb{R}^{D}\), and the following claims simultaneously hold for some constant \(C\) with high probability:_
* _For any_ \(x\in\mathcal{M}\)_,_ \(d(x,\widehat{\mathcal{S}})\leq C\sigma^{2}\log(1/\sigma)\)_;_
* _For any_ \(s\in\widehat{\mathcal{S}}\)_,_ \(d(x,\mathcal{M})\leq C\sigma^{2}\log(1/\sigma)\)_._
The output manifold \(\widehat{\mathcal{S}}\) furnishes a narrow tubular neighborhood of \(\mathcal{M}\). By disregarding any anomalous points situated within a low-probability regime, we establish that the Hausdorff distance separating \(\widehat{\mathcal{S}}\) and \(\mathcal{M}\) scales as \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\). To further refine the intrinsic dimension of the manifold estimator to \(d\), we introduce a partial solution in Theorem 5.3 and a global solution in Theorem 5.4.
**Theorem 5.3**.: _For \(x\in\mathcal{M}\), let \(\widehat{\Pi}_{x}\) be the estimation of \(\Pi_{x}\) as the one defined in (1.5). Then there exists a constant \(c>0\) such that_
\[\widehat{\mathcal{M}}_{x}=\{y\in\Gamma\cap\mathcal{B}_{D}(x,c\tau):\widehat{ \Pi}_{x}^{\perp}(G(y)-y)=0\}\]
_is a \(d\)-dimensional manifold embedded in \(\mathbb{R}^{D}\). Meanwhile, for any point \(y\in\widehat{\mathcal{M}}_{x}\),_
\[d(y,\mathcal{M})\leq C\sigma^{2}\log(1/\sigma)\]
_for some constant \(C\) with high probability._
Theorem 5.3 provides a local solution, by guaranteeing that the function \(\widehat{\Pi}_{x}^{\perp}(G(y)-y)\) has a constant rank \(D-d\) through predetermined regions of interest and a fixed projection matrix. The resulting estimator is a piecewise \(d\)-dimensional manifold, which is more natural and smooth, but requires further manipulations to integrate the piecewise manifolds into an entirely smooth one. To avoid these manipulations, we assume there is a smooth initial manifold \(\widetilde{\mathcal{M}}\) contained by \(\Gamma\). Additionally, since \(G\) is a \(C^{2}\) continuous mapping in \(\Gamma\), we can assume that the Jacobi matrix of \(G\) is bounded by \(L_{G}\) and \(\ell_{G}\), and the Hessian matrix of \(G\) is bounded by \(M_{G}\). Then a global estimator \(\widetilde{\mathcal{M}}\) can be obtained via the following theorem:
**Theorem 5.4**.: _Let \(\widetilde{\mathcal{M}}\subset\Gamma\) be a \(d\)-dimensional manifold with a positive reach \(\tau_{0}\). Suppose that for each point \(x\in\mathcal{M}\), there exists a point \(y\) such that \(y^{*}=x\). Then, the estimator defined by \(\widehat{\mathcal{M}}=G(\widetilde{\mathcal{M}})\) is also a \(d\)-dimensional manifold with the following conditions holding for some constant \(c\) and \(C\) with high probability:_
* _For any point_ \(y\in\widehat{\mathcal{M}}\)_,_ \(d(y,\mathcal{M})\) _is less than_ \(C\sigma^{2}\log(1/\sigma)\)_;_
* _For any point_ \(x\in\mathcal{M}\)_,_ \(d(x,\widehat{\mathcal{M}})\) _is less than_ \(C\sigma^{2}\log(1/\sigma)\)_;_
* _The reach of_ \(\widehat{\mathcal{M}}\) _is larger than a constant_ \(\widehat{\tau}=\min\left\{c\sigma\tau_{0},\;\frac{c\ell_{G}}{M_{G}+L_{G}}\right\}\)_._
Notably, the estimator defined in Theorem 5.4 requires an initial estimate \(\widetilde{\mathcal{M}}\), which can be obtained using the methods proposed in [32, 13, 42, 16]. In this paper, we also provide a defined strategy for reference.
**Proposition 5.5**.: _Let \(\widetilde{\mathcal{M}}\) be a level set such that_
\[\widetilde{\mathcal{M}}=\{y\in\Gamma:\Pi^{*}(F(y)-y)=0\},\]
_where \(\Pi^{*}\) is any arbitrary fixed projection matrix with rank \(D-d\). Then, with high probability, \(\widetilde{\mathcal{M}}\) is a \(d\)-dimensional submanifold embedded in \(\Gamma\), and \(d_{H}(\widetilde{\mathcal{M}},\mathcal{M})\leq C\sigma\)._
In summary, we present two manifold estimators in the form of image sets and one in the form of level set, all satisfying the Hausdorff-distance condition under certain statistical conditions. Among them, the estimator proposed in Theorem 5.2 is computationally simpler and more suitable for scenarios involving sample points, while the other estimators offer stronger theoretical guarantees for the geometric properties. As discussed in the introduction, prior works often employed level sets as manifold estimators, despite their inherent limitations: the existence of solutions to \(f(x)=\mathbf{0}\), where \(f(x)\) maps from \(\mathbb{R}^{D}\) to \(\mathbb{R}^{D}\), is not always evident. Thus the nonemptiness of the level sets is uncertain, requiring additional scrutiny. Furthermore, this approach lacks an explicit solution, making it difficult to obtain the projection of a given point onto \(\widehat{\mathcal{M}}\). Iterative solvers are necessary to approximate the projections, although their convergence remains unproven.
## 6 Numerical study
This section presents a comprehensive numerical investigation of the superior performance of our method (ysl23) in manifold fitting. The experiments are divided into three parts, each showcasing the advantages of ysl23 from different perspectives.
* We comprehensively demonstrate ysl23's effectiveness through various numerical visualizations, performance evaluations on diverse manifolds, and exploration of its asymptotic properties. The experiments confirm that the asymptotic behavior of ysl23 aligns with the main theorems presented in this paper as we increase the number of samples and reduce noise. Through this, we establish the reliability and validity of ysl23.
* We compare ysl23 with three major manifold-fitting methods: yx19 [42], cf18 [13], and km17 [32], on two constant curvature manifolds and one inconstant curvature manifold. Their performance is evaluated using metrics such as the Hausdorff distance, average distance, and running time. The comparisons demonstrate that ysl23 outperforms the other methods in terms of both accuracy and efficiency.
* We apply ysl23 to a particularly challenging class of manifolds, the Calabi-Yau manifolds [3, 43], which have a complex structure and diverse shapes. We demonstrate the effectiveness of ysl23 by fitting Calabi-Yau manifolds and evaluating their performance by comparing the output with the underlying Calabi-Yau manifold. Through these experiments, we show that ysl23 can accurately fit the most complex manifolds, demonstrating its versatility and applicability in challenging scenarios.
To ensure reproducibility, we followed a standardized setup similar to [32]. For each manifold \(\mathcal{M}\), The generation and evaluation of the output manifold are based on the following steps.
1. Independently generate the sample set \(\mathcal{Y}\) with size \(N\) from the distribution \(\nu\) defined in (2), where \(\sigma\) is predefined.
2. Generate another set of initial points \(\mathcal{W}=\{w_{1},...,w_{N_{0}}\}\) near the underlying manifold, satisfying \(\sigma/2\leq d(w_{i},\mathcal{M})\leq 2\sigma\).
3. Project every point in \(\mathcal{W}\) from each tested method to the output manifold, respectively. Denote the projection of \(\mathcal{W}\) as \(\widehat{\mathcal{W}}\).
4. Evaluate the performance of all tested methods via the three measures: * The supremum of the approximation error, \(\max_{j}d(\widehat{w}_{j},\mathcal{M}),\) calculated as an estimation of the Hausdorff distance between \(\widehat{\mathcal{M}}\) and \(\mathcal{M}\). * The average of the approximation error, \(\frac{1}{N_{0}}\sum_{j}d(\widehat{w}_{j},\mathcal{M}),\) calculated as an estimation of the average distance between \(\widehat{\mathcal{M}}\) and \(\mathcal{M}\). * The CPU time of the tested method.
**Implementation and code** The numerical study is conducted on a standard tower workstation with AMD ThreadRipper 3970X @4.5 GHz and 128GB of DDR4-3200mHz RAM. The
operating system is Windows 10 Professional 64 Bit. The simulations are implemented with Matlab R2023a, which is chosen for its ability to perform parallel running conveniently and reliably. The detailed algorithm used in this paper can be found in the supplementary material, and the latest version of Python and Matlab implementation are available at [https://github.com/zhigang-yao/manifold-fitting](https://github.com/zhigang-yao/manifold-fitting).
### Numerical illustrations of ysl23
Three different manifolds, including two constant-curvature manifolds - a circle embedded in \(\mathbb{R}^{2}\) and a sphere embedded in \(\mathbb{R}^{3}\) - and a manifold with negative curvature, namely a torus embedded in \(\mathbb{R}^{3}\), will be tested in this and the next subsection. A visualization of these simulated manifolds is presented in Figure 6.
#### 6.1.1 The fundamental procedure of ysl23
Figure 7 depicts a visualization of ysl23's steps using the circle as the underlying manifold. There are two simple steps in obtaining the final output for a given noisy point \(w\). Firstly, the weighted means of a spherical neighborhood of \(w\) are computed using (3.3), which yields \(F(w)\). The first step captures the crucial information about \(w\), i.e., an approximation of the projected direction onto the underlying manifold. In the second step, the weighted means of a cylinder neighborhood of \(F(w)\) are calculated to obtain the final output \(G(w)\). The long axis of the cylinder is determined by the line connecting \(w\) and \(F(w)\). Notably, ysl23 requires no iteration or knowledge of the underlying manifold's dimension. Furthermore, ysl23 can map a noisy sample point not only approximately on the underlying manifold but also to its projection's proximity on the manifold, as demonstrated in panel (d) of Figure 7. As a summary, the detail of ysl23 can be found in Algorithm 1. We always set the radius parameters as \(r_{0}=r_{1}=5\sigma/\lg(N)\) and \(r_{2}=10\sigma\sqrt{\log(1/\sigma)}/\lg(N)\) in our experiment.
Figure 6: Manifolds employed in the numerical study. Left: a unit circle in \(\mathbb{R}^{2}\); Middle: A unit sphere in \(\mathbb{R}^{3}\); Right: a torus in \(\mathbb{R}^{3}\).
The visualization of ysl23's performance for the circle case is shown in Figure 8, and the result for the sphere and the torus case can be found in the supplementary material. In these tests, we set \(N=5\times 10^{4}\), \(N_{0}=100\) for each case. The closer \(\widehat{\mathcal{W}}\) are to the underlying manifold, the better it works. As can be observed from Figure 8, the output points are significantly closer to the hidden manifold, clearly demonstrating the efficacy of ysl23. Similar phenomena, as shown in the supplementary material, can be observed for both sphere and torus cases.
#### 6.1.2 Asymptotic analysis
To investigate the asymptotic properties of ysl23, we increased \(N\) to simulate the case where it tends to infinity and decreased \(\sigma\) to simulate the case where it tends to zero. Specifically, for the circle case, we considered \(N\in\{3\times 10^{2},3\times 10^{3},3\times 10^{4},3\times 10^{5}\}\), and \(\sigma\in\{0.12,0.1,0.08,0.06,0.04,0.02\}\). We started by fixing \(N=3\times 10^{4}\), \(N_{0}=100\), and testing the performance of ysl23 with the change of \(\sigma\). For each \(\sigma\), we randomly selected 50 different \(\mathcal{W}\) and executed ysl23 on each of them. The Hausdorff distances and average distances between the output manifold and the underlying manifold is shown at the top of Figure 9. It shows that the Hausdorff distance and average distance decrease at a quadratic rate as \(\sigma\) decreases, which matches the upper bound of the error given in Section 5. We also observe that the average distance decreases more rapidly, demonstrating the global stability of ysl23. Similarly, we fixed \(\sigma=0.06\) to test the performance of ysl23 with the change of \(N\). The Hausdorff distances and average distances between the output and hidden manifolds are shown at the bottom of Figure 9. It shows that, as \(N\) increases, the Hausdorff distances
Figure 8: Assessing the performance of ysl23 in fitting the circle (\(N=5\times 10^{4}\), \(N_{0}=100\), \(\sigma=0.06\)): the left panel displays points in \(\mathcal{W}\) surrounding the underlying manifold, while the right panel illustrates the corresponding points in \(\widehat{\mathcal{W}}\).
Figure 7: Visualization of ysl23’s steps: (a) Locating the neighborhood of a noisy observation \(w\). (b) Computing \(F(w)\) defined in (3.3). (c) Identifying the cylindrical neighborhood (points in the black rectangle) of \(w\) based on \(F(w)\). (d) Obtaining the output point \(G(w)\) using (4.14).
and average distances both decrease significantly. This improvement can be attributed to two aspects. Firstly, with the increase of \(N\), we can more accurately estimate the local geometry of the manifold. Secondly, the radius of the neighborhood in ysl23 is set to decrease with the increase of the sample size. Hence, the neighborhood in ysl23 becomes closer to its center point while maintaining a sufficient number of points in the neighborhood. Similar results and phenomena, as shown in the supplementary material, can be observed for both sphere and torus cases.
### Comparison of other manifold fitting methods
We performed ysl23, yx19, cf18, and km17 on the three aforementioned manifolds. The circles and spheres cases were combined since they both have constant curvature. The torus case was separately presented due to its inconstant curvature.
#### 6.2.1 The fitting of the circle and sphere
We set \(N=N_{0}=300\) for the circle, and \(N=N_{0}=1000\) for the sphere. The radius of the neighborhood was set as \(r=2\sqrt{\sigma}\) for yx19, cf18, and km17. Figure 10 displays the fitting results. The black and red dots correspond to \(\widehat{\mathcal{M}}\) and \(\mathcal{M}\), respectively. A higher degree of overlap between these two sets indicates a better fit. The first row presents the complete space for the circle embedded in \(\mathbb{R}^{2}\), while the second row shows the view from the positive \(z\)-axis of the sphere embedded in \(\mathbb{R}^{3}\). Notably, km17 demonstrates inferior performance compared with the other methods. Moreover, the estimated circles by cf18 exhibit two significant gaps, suggesting inaccuracies in the estimator for some local regions. The ysl23, as well as yx19, demonstrates the best performance.
We made an observation of interest when ysl23 successfully mapped the noisy samples to the proximity of the hidden manifold, but the sample distribution on the output manifold was slightly changed. This phenomenon occurred because the number of samples was not sufficient to represent the perturbation of the uniform distribution on the manifold. Because
Figure 9: The asymptotic performance of ysl23 when fitting the circle. The top two figures show how the two distances change with \(\sigma\), while the bottom two figure show how the two distances change with \(N\).
of this, our contraction strategy clustered the output points towards the denser regions on the input points. Fortunately, when the sample size is sufficiently large, ysl23 is able to ensure that the output points are approximately uniformly distributed on \(\widehat{\mathcal{M}}\) (see Figure 22 in the supplementary material).
We repeated each method \(10\) times and evaluated their effectiveness in Figure 11. We find that ysl23 and yx19 achieve slightly better results than cf18 in terms of the Hausdorff
Figure 11: The Hausdorff distance, average distance, and CPU time of fitting a circle (top, \(N=300\), \(\sigma=0.06\)) and a sphere (bottom, \(N=1000\), \(\sigma=0.06\)), using ysl23, yx19, cf18, and km17.
Figure 10: From left to right: the performance of ysl23, yx19, cf18, and km17 when fitting a circle (top, \(N=300\), \(\sigma=0.06\)) and a sphere (bottom, \(N=1000\), \(\sigma=0.06\)).
distance, while all three outperform km17 significantly. When evaluating the average distance, ysl23 and cf18 slightly outperform yx19, while all three show significant improvement over km17. Overall, ysl23 consistently ranks among the top across different metrics. In terms of computing time, ysl23 also stands out, with remarkably lower running times than those of the other three methods. Among them, yx19 is the most efficient, while km17 lags behind significantly.
We compared ysl23 and the well-performing yx19 by incrementally varying \(N\)to explore their performance dependence on it. For the circle case, we selected \(N\in\{3\times 10^{2},1\times 10^{3},3\times 10^{3}\}\), while for the sphere case, we selected \(N\in\{1\times 10^{2},2\times 10^{3},3\times 10^{3}\}\). Results in terms of Hausdorff and average distance and running time are shown in Figure 12. The Hausdorff distance showed a significant decrease for both algorithms as \(N\) increased. However, yx19 remained relatively constant with increasing \(N\) when using the average distance, while ysl23 achieved a significant reduction. Additionally, ysl23 demonstrated a clear advantage in computational efficiency, with significantly shorter running times than yx19. For example, yx19 took over 10 seconds to terminate when \(N\) reached 3000 in the presented examples, while ysl23 was completed in under 0.5 seconds.
#### 6.2.2 The fitting of the torus
We set \(N=10^{3}\) for the torus case. The results, displayed in Figure 13, show that ysl23 outperformed the other three methods in terms of the Hausdorff distance, average distance, and computing time. To evaluate the performance of ysl23 and yx19 on the torus, we set an increasing sample size of \(N\in\{1000,2000,3000\}\) and compared their results. Figure 14 illustrates the results of both algorithms for each \(N\). As \(N\) increased, we observed a reduction in the distance for both algorithms. However, ysl23 consistently achieved a much lower distance than yx19, no matter which metric is used. Furthermore, ysl23 demonstrated a remarkable advantage in computational efficiency, completing the task with a
Figure 12: The Hausdorff distance, average distance, and CPU time of fitting a circle (top, \(\sigma=0.06\)) and a sphere (bottom, \(\sigma=0.06\)) with increasing \(N\), using ysl23 and yx19.
significantly shorter running time than yx19. Specifically, in the presented examples, yx19 took over 10 seconds to terminate when \(N\) reached 3000, while ysl23 finished in under 0.5 seconds.
### Fitting of a Calabi-Yau manifold
Calabi-Yau manifolds [3] are a class of compact, complex Kahler manifolds that possess a vanishing first Chern class. They are highly significant because they are Ricci-flat manifolds, which means that their Ricci curvature is zero at all points, aligning with the universe model of physicists. A simple example of a Calabi-Yau manifold of complex dimension \(3\) is the Fermat quartic:
\[x^{4}+y^{4}+z^{4}+w^{4}+v^{4}=0,\quad(x,y,z,w,v)\in\mathbb{P}^{4}, \tag{6.1}\]
where \(\mathbb{P}^{4}\) is the \(4\)-dimensional complex projective space. To visualize it, we generate low-dimensional projections of the manifold by eliminating variables as in [23], dividing by \(v^{4}\), and setting \(\frac{w^{4}}{v^{4}}\) and \(\frac{z^{4}}{v^{4}}\) to be constant. We then normalize the resulting inhomogeneous equation as
\[x^{4}+y^{4}=1,\quad x,y\in\mathbb{C}. \tag{6.2}\]
The resulting surface is embedded in 4D and can be projected to ordinary 3D space for display. The parametric representation of (6.2) is
\[x(\theta,\,k_{1})=e^{2\pi ik_{1}/n}\cosh(\theta+\zeta i)^{2/n} \tag{6.3}\]
Figure 14: The Hausdorff distance, average distance, and CPU time of fitting a torus (\(\sigma=0.06\)) with increasing \(N\), using ysl23 and yx19.
Figure 13: The Hausdorff distance, average distance, and CPU time of fitting a torus (\(N=1000\), \(\sigma=0.06\)), using ysl23, yx19, cf18, and km17.
\[y(\theta,\zeta,k_{2})=e^{2\pi ik_{2}/n}\sinh(\frac{\theta+\zeta i}{i})^{2/n}, \tag{6.4}\]
where the integer pair \((k_{1},k_{2})\) is selected by \(0<k_{1},k_{2}<3\). Such \(\{(x,y)\}\) can be seen as points in \(\mathbb{R}^{4}\), denoted by \(\{Re(x),Re(y),Im(x),Im(y)\}\). A natural three-dimensional projection is
\[(Re(x),Re(y),\cos(\psi)Im(x)+\sin(\psi)Im(y)),\]
where \(\psi\) is a parameter. The left panel of Figure 15 shows the surface plot of the 3D projection.
We generated a set of points in (6.3) and (6.4) on a uniform grid \((\theta,\zeta)\), where \(\theta\) is a sequence of numbers ranging from \(-1.5\) to \(1.5\) with a step size of \(0.05\) between consecutive values, and \(\zeta\) a sequence of numbers ranging from \(0\) to \(\pi/2\) with a step size of \(1/640\) between consecutive values. In total, the dataset contains \(N=313296\) samples with Gaussian noise added in \(\mathbb{R}^{4}\). As shown in the middle panel of Figure 15, the initial point distribution is not close to the true manifold. However, after running ysl23, the output is significantly closer to the true manifold, as shown in the right panel of Figure 15. This phenomenon indicates that ysl23 performs well in estimating complicated manifolds. It should be noted that we only applied ysl23 to the Calabi-Yau manifold without running other algorithms because the sample size would cause very long running times for other algorithms and would not yield usable results.
Figure 16: The asymptotic performance of ysl23 when fitting the Calabi–Yau manifold. The two columns show how the two distances change with \(\sigma\).
Figure 15: Performance of ysl23 when fitting a Calabi–Yau manifold. The left panel illustrates the shape of a 3D projection of a Calabi–Yau manifold. The middle panel shows some noisy points around the underlying manifold, and the right panel shows the points on the output manifold.
We also executed ysl23 on the Calabi-Yau manifold with different \(\sigma\). Specifically, we tested ysl23 with decreasing \(\sigma\in\{0.03,0.025,0.02,0.015,0.01,0.005\}\). As we decrease \(\sigma\), both the Hausdorff distance and average distance decrease at a quadratic rate, which matches Theorem 5.4. These results further support the effectiveness and reliability of ysl23.
## 7 Conclusion
In this paper, the manifold-fitting problem is investigated by proposing a novel approach to construct a manifold estimator for the latent manifold in the presence of ambient-space noise. Our estimator achieves the best error rate, to our knowledge, with a sample size bounded by a polynomial of the standard deviation of the noise term, and preserves the smoothness of the latent manifold. The performance of the estimator is demonstrated through rigorous theoretical analysis and numerical experiments. Our method provides a reliable and efficient solution to the problem of manifold fitting from noisy observations, with potential applications in various fields, such as computer vision and machine learning.
Our approach uses a two-step local contraction strategy to obtain an output manifold with a significantly smaller error. First, we estimate the direction of contraction for a point around \(\mathcal{M}\) using a local average. Compared with previous methods that estimate the basis of the tangent space, our approach provides a significant advantage in terms of the error rate and facilitates the obtaining of better-contracted points. Next, we construct a hyper-cylinder, and the local average within it is regarded as the contracted point. This point is \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\)-close to \(\mathcal{M}\). Our hyper-cylinder has a length in a higher order of \(\sigma\) than the width, which differs from the approach proposed in [16]. This difference in order allows us to eschew their requirement of directly sampling from \(\mathcal{M}\).
We provide several methods to obtain the estimators of \(\mathcal{M}\). All of these estimators can roughly achieve a Hausdorff distance in the order of \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\), with or without the high probability statement. Unlike in previous work, we achieve the state-of-the-art error bound by reducing the required sample size to \(N=\mathcal{O}(\sigma^{-(d+3)})\). Using image sets to generate estimators, our method is faster and more applicable to larger data sets. We also conduct comprehensive numerical experiments to validate our theoretical results and demonstrate that our algorithm not only achieves higher approximation accuracy but also consumes significantly less time and computational resources than other methods. These simulation results indicate the significant superiority of our approach in fitting the latent manifold, and suggest its potential in various applications.
Overall, our approach has demonstrated promising results in fitting smooth manifolds from ambient space, but nevertheless has some limitations that warrant further investigation. First, our current assumption that the observations are from the convolution of a uniform distribution on the manifold with a homogeneous Gaussian distribution may not capture the full complexity of real-world data. Therefore, future research could explore the effects of relaxing these assumptions. Second, while our theoretical results are promising, there is still scope for optimization because of the application of inequalities in the proof and the choice of weights in the two-step mapping. This limitation arises from the lack of an explicit expression for some integrations with respect to Gaussian distributions. We believe that further research addressing these limitations can lead to significant advancements in manifold-fitting methods, at both the theoretical and applied levels.
To conclude, we discuss potential avenues for further research. In the real world, data often exist on complicated manifolds, such as spheres, tori, and shape spaces, requiring specialized analysis methods. Our manifold-fitting algorithm projects data onto a low-dimensional manifold, allowing the use of other algorithms. Firstly, our approach has wide-ranging implications for research involving the manifold hypothesis. For example, in GAN-based image-to-image translation, images are assumed to lie around a low-dimensional manifold. Incorporating our manifold-fitting method can significantly enhance the performance of the discriminator
and improve the overall GAN model. Secondly, numerous statistical studies concentrate on non-Euclidean data originating from manifolds, including the principal nested spheres [24] and the principal flows [35]. As our method can fit smooth \(d\)-dimensional manifolds from ambient space, it provides a natural framework for generalizing statistical work on manifolds to ambient space. Additionally, our method can also aid in the analysis of Euclidean data by facilitating data clustering and simplifying subsequent objectives. We believe that our approach will inspire further research in these areas.
## Appendix A Mathematical Preliminary
We briefly review the basic concepts of topology and smooth manifolds essential for the study of manifold fitting; for further details, see, for example, [27, 28, 29].
### Topology
#### a.1.1 Topological Space
Let \(X\) be a set. A _topology_ on \(X\) is a collection \(\mathcal{T}\) of subsets of \(S\), called _open subsets_, satisfying the following:
1. \(X\) and \(\varnothing\) are open.
2. The union of any family of open sets is open.
3. The intersection of any finite family of open subsets is open.
A pair \((X,\mathcal{T})\) consisting of a set \(X\) and a topology \(\mathcal{T}\) on \(X\) is called a _topological space_. Usually, when the topology is understood, these details will be omitted, with only the statement that "\(X\) is a topological space".
The most common examples of topological spaces, from which most of our examples of manifolds are built, are presented below.
**Example A.1** (Metric Spaces): _A metric space is a set \(M\) endowed with a distance function (also called a metric) \(d:M\times M\to\mathbb{R}\) (where \(\mathbb{R}\) denotes the set of real numbers) satisfying the following properties for all \(x,y,z\in M\) :_
1. _Positivity:_ \(d(x,y)\geq 0\)_, with equality if and only if_ \(x=y\)_._
2. _Symmetry:_ \(d(x,y)=d(y,x)\)_._
3. _Triangle inequality:_ \(d(x,z)\leq d(x,y)+d(y,z)\)_._
_If \(M\) is a metric space, \(x\in M\), and \(r>0\), the open ball of radius \(\mathbf{r}\) around \(\mathbf{x}\) is the set_
\[B(x,r)=\{y\in M:d(x,y)<r\}.\]
_The metric topology on \(M\) is defined by declaring a subset \(S\subseteq M\) to be open if, for every point \(x\in S\), there is some \(r>0\) such that \(B(x,r)\subseteq S\)._
**Example A.2** (Euclidean Spaces): _For integer \(n\geq 1\), the set \(\mathbb{R}^{n}\) of ordered \(n\)-tuples of real numbers is called \(\mathbf{n}\)-dimensional Euclidean space. We let a point in \(\mathbb{R}^{n}\) be denoted by \(\left(x^{(1)},\cdots,x^{(n)}\right)\) or \(\mathbf{x}\). The numbers \(x^{(i)}\) are called the \(\mathbf{i}\)-th components or coordinates of \(\mathbf{x}\). For \(\mathbf{x}\in\mathbb{R}^{n}\), the Euclidean norm of \(\mathbf{x}\) is the nonnegative real number_
\[\|\mathbf{x}\|_{2}=\sqrt{\left(x^{(1)}\right)^{2}+\cdots+\left(x^{(n)}\right)^{2}},\]
_and, for \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\), the Euclidean distance function is defined by_
\[d(\mathbf{x},\mathbf{y})=\|\mathbf{x}-\mathbf{y}\|_{2}.\]
_This distance function turns \(\mathbb{R}^{n}\) into a complete metric space. The resulting metric topology on \(\mathbb{R}^{n}\) is called the Euclidean topology._
For the purposes of manifold theory, arbitrary topological spaces are too general. To avoid pathological situations arising when there are not enough open subsets of \(X\), we often restrict our attention to _Hausdorff space_.
**Definition A.3** (Hausdorff space): _A topological space \(X\) is said to be a Hausdorff space if, for every pair of distinct points \(p,q\in X\), there exist disjoint open subsets \(U,V\subseteq X\) such that \(p\in U\) and \(q\in V\)._
There are numerous essential concepts in topology concerning _maps_, and these will be introduced next. Let \(X\) and \(Y\) be two topological spaces, and \(F:X\to Y\) be a map between them.
* \(F\) is _continuous_ if, for every open subset \(U\subseteq Y\), the preimage \(F^{-1}(U)\) is open in \(X\).
* If \(F\) is a continuous bijective map with continuous inverse, it is called a _homeomorphism_. If there exists a homeomorphism from \(X\) to \(Y\), we say that \(X\) and \(Y\) are _homeomorphic_.
* A continuous map \(F\) is said to be a _local homeomorphism_ if every point \(p\in X\) has a neighborhood \(U\subseteq X\) such that \(F(U)\) is open in \(Y\) and \(F\) restricts to a homeomorphism from \(U\) to \(F(U)\).
* \(F\) is said to be a _closed map_ if, for each closed subset \(K\subseteq X\), the image set \(F(K)\) is closed in \(Y\), and an _open map_ if, for each open subset \(U\subseteq X\), the image set \(F(U)\) is open in \(Y\). It is a _quotient map_ if it is surjective and \(V\subseteq Y\) is open if and only if \(F^{-1}(V)\) is open.
Furthermore, for a continuous map \(F\), which is either open or closed, the following rules apply:
1. If \(F\) is surjective, it is a _quotient map_.
2. If \(F\) is injective, it is a _topological embedding_.
3. If \(F\) is bijective, it is a _homeomorphism_.
For maps between metric spaces, there are several useful variants of continuity, especially in the case of compact spaces. Assume \((M_{1},d_{1})\) and \((M_{2},d_{2})\) are metric spaces, and \(F:M_{1}\to M_{2}\) is a map. Then, \(F\) is said to be _uniformly continuous_ if, for every \(\epsilon>0\), there exists \(\delta>0\) such that, for all \(x,y\in M_{1},d_{1}(x,y)<\delta\) implies \(d_{2}(F(x),F(y))<\epsilon\). It is said to be _Lipschitz continuous_ if there is a constant \(C\) such that \(d_{2}(F(x),F(y))\leq Cd_{1}(x,y)\) for all \(x,y\in M_{1}\). Any such \(C\) is called a _(globally) Lipschitz constant_ for \(F\). We say that \(F\) is _locally Lipschitz continuous_ if every point \(x\in M_{1}\) has a neighborhood on which \(F\) is Lipschitz continuous.
#### a.1.2 Bases and countability
Suppose \(X\) is merely a set, and \(\mathcal{B}\) is a collection of subsets of \(X\) satisfying the following conditions:
1. \(X=\bigcup_{B\in\mathcal{B}}B\).
2. If \(B_{1},B_{2}\in\mathcal{B}\) and \(x\in B_{1}\cap B_{2}\), then there exists \(B_{3}\in\mathcal{B}\) such that \(x\in B_{3}\subseteq B_{1}\cap B_{2}\).
Then, the collection of all unions of elements of \(\mathcal{B}\) is a topology on \(X\), called the _topology generated by \(\mathcal{B}\)_, and \(\mathcal{B}\) is a _basis_ for this topology.
A set is said to be _countably infinite_ if it admits a bijection with the set of positive integers, and _countable_ if it is finite or countably infinite. A topological space \(X\) is said to be _first-countable_ if there is a countable neighborhood basis at each point, and _second-countable_ if there is a countable basis for its topology. Since a countable basis for \(X\) contains a countable neighborhood basis at each point, second-countability implies first-countability.
#### a.1.3 Subspaces and Products
If \(X\) is a topological space and \(S\subseteq X\) is an arbitrary subset, we define the _subspace topology_ (or _relative topology_) on \(S\) by declaring a subset \(U\subseteq S\) to be open in \(S\) if and only if there exists an open subset \(V\subseteq X\) such that \(U=V\cap S\). A subset of \(S\) that is open or closed in the subspace topology is sometimes said to be _relatively open_ or _relatively closed_ in \(S\), to make it clear that we do not mean open or closed as a subset of \(X\). Any subset of \(X\) endowed with the subspace topology is said to be a _subspace_ of \(X\).
If \(X\) and \(Y\) are topological spaces, a continuous injective map \(F:X\to Y\) is called a _topological embedding_ if it is a homeomorphism onto its image \(F(X)\subseteq Y\) in the subspace topology.
If \(X_{1},\cdots,X_{k}\) are (finitely many) sets, their _Cartesian product_ is the set \(X_{1}\times\cdots\times X_{k}\) consisting of all ordered \(k\)-tuples of the form \((\boldsymbol{x}_{1},\cdots,\boldsymbol{x}_{k})\) with \(\boldsymbol{x}_{i}\in X_{i}\) for each \(i\).
Suppose \(X_{1},\cdots,X_{k}\) are topological spaces. The collection of all subsets of \(X_{1}\times\cdots\times X_{k}\) of the form \(U_{1}\times\cdots\times U_{k}\), where each \(U_{i}\) is open in \(X_{i}\), forms a basis for a topology on \(X_{1}\times\cdots\times X_{k}\), called the _product topology_. Endowed with this topology, a finite product of topological spaces is called a _product space_. Any open subset of the form \(U_{1}\times\cdots\times U_{k}\subseteq X_{1}\times\cdots\times X_{k}\), where each \(U_{i}\) is open in \(X_{i}\), is called a _product open subset_.
#### a.1.4 Connectedness and Compactness
A topological space \(X\) is said to be _disconnected_ if it has two disjoint nonempty open subsets whose union is \(X\), and it is _connected_ otherwise. Equivalently, \(X\) is connected if and only if the only subsets of \(X\) that are both open and closed are \(\varnothing\) and \(X\) itself. If \(X\) is any topological space, a _connected subset_ of \(X\) is a subset that is a connected space when endowed with the subspace topology.
Closely related to connectedness is _path connectedness_. If \(X\) is a topological space and \(p,q\in X\), a _path_ in \(X\) from \(p\) to \(q\) is a continuous map \(f:I\to X\) (where \(I=[0,1]\) ) such that \(f(0)=p\) and \(f(1)=q\). If for every pair of points \(p,q\in X\) there exists a path in \(X\) from \(p\) to \(q\), then \(X\) is said to be _path-connected_.
A topological space \(X\) is said to be _compact_ if every open cover of \(X\) has a finite subcover. A _compact subset_ of a topological space is one that is a compact space in the subspace topology. For example, it is a consequence of the Heine-Borel theorem that a subset of \(\mathbb{R}^{n}\) is compact if and only if it is closed and bounded. We list some of the properties of compactness as follows.
* If \(F:X\to Y\) is continuous and \(X\) is compact, then \(F(X)\) is compact.
* If \(X\) is compact and \(f:X\to\mathbb{R}\) is continuous, then \(f\) is bounded and attains its maximum and minimum values on \(X\).
* Any union of finitely many compact subspaces of \(X\) is compact.
* If \(X\) is Hausdorff and \(K\) and \(L\) are disjoint compact subsets of \(X\), then there exist disjoint open subsets \(U,V\subseteq X\) such that \(K\subseteq U\) and \(L\subseteq V\).
* Every closed subset of a compact space is compact.
* Every compact subset of a Hausdorff space is closed.
* Every compact subset of a metric space is bounded.
* Every finite product of compact spaces is compact.
* Every quotient of a compact space is compact.
### Smooth Manifold
#### a.2.1 Topological Manifolds
A \(\boldsymbol{d}\)-_dimensional topological manifold_ (or simply a _\(\boldsymbol{d}\)-manifold_) is a second-countable Hausdorff topological space that is _locally Euclidean of dimension \(\boldsymbol{d}\)_, which means every point has a neighborhood homeomorphic to an open subset of \(\mathbb{R}^{d}\). Given a \(d\)-manifold \(\mathcal{M}\), a _coordinate chart_ for \(\mathcal{M}\) is a pair \((U,\varphi)\), where \(U\subseteq\mathcal{M}\) is an open set and \(\varphi:U\to\widetilde{U}\) is a homeomorphism from \(U\) to an open subset \(\widetilde{U}\subseteq\mathbb{R}^{d}\). If \(p\in\mathcal{M}\) and \((U,\varphi)\) is a chart such that \(p\in U\), we say that \((U,\varphi)\) is a _chart containing \(p\)_.
On occasion, we may need to consider manifolds with boundaries. A _\(\boldsymbol{d}\)-dimensional topological manifold with boundary_ a is a second-countable Hausdorff topological space in which every point has a neighborhood homeomorphic either to an open subset of \(\mathbb{R}^{d}\) or to an open subset of the half space of \(\mathbb{R}^{d}\). The corresponding concepts are slightly different with the manifolds without boundaries. For consistency, in the following sections a _manifold_ without further qualification is always assumed to be a manifold without a boundary.
#### a.2.2 Smooth Manifolds
Briefly speaking, _smooth manifolds_ are topological manifolds endowed with an extra structure that allows us to differentiate functions and maps. To introduce the smooth structure, we first recall the smoothness of a map \(F:U\to\mathbb{R}^{k}\). When \(U\) is an open subset of \(\mathbb{R}^{d}\), \(F\) is said to be _smooth_ (or \(C^{\infty}\)), and all of its component functions have
continuous partial derivatives of all orders. More generally, when the domain \(U\) is an arbitrary subset of \(\mathbb{R}^{d}\), not necessarily open, \(F\) is said to be smooth if, for each \(x\in U,\,F\) has a smooth extension to a neighborhood of \(x\) in \(\mathbb{R}^{n}\). A _diffeomorphism_ is a bijective smooth map whose inverse is also smooth.
If \(\mathcal{M}\) is a topological \(d\)-manifold, then two coordinate charts \((U,\varphi),(V,\psi)\) for \(\mathcal{M}\) are said to be _smoothly compatible_ if both of the _transition maps_\(\psi\circ\varphi^{-1}\) and \(\varphi\circ\psi^{-1}\) are smooth where they are defined (on \(\varphi(U\cap V)\) and \(\psi(U\cap V)\), respectively). Since these maps are inverses of each other, it follows that both transition maps are in fact diffeomorphisms. An _atlas_ for \(\mathcal{M}\) is a collection of coordinate charts whose domains cover \(\mathcal{M}\). It is called a _smooth atlas_ if any two charts in the atlas are smoothly compatible. A _smooth structure_ on \(\mathcal{M}\) is a smooth atlas that is maximal, which means it is not properly contained in any larger smooth atlas. A smooth manifold is a topological manifold endowed with a specific smooth structure. If \(\mathcal{M}\) is a set, a smooth manifold structure on \(\mathcal{M}\) is a second-countable, Hausdorff, locally Euclidean topology together with a smooth structure, making it a smooth manifold. If \(\mathcal{M}\) is a smooth \(d\)-manifold and \(W\subseteq\mathcal{M}\) is an open subset, then \(W\) has a natural smooth structure consisting of all smooth charts \((U,\varphi)\) for \(\mathcal{M}\) such that \(U\subseteq W\), and so every open subset of a smooth \(d\)-manifold is a smooth \(d\) manifold in a natural way.
Suppose \(\mathcal{M}\) and \(\mathcal{N}\) are smooth manifolds. A map \(F:\mathcal{M}\to\mathcal{N}\) is said to be _smooth_ if, for every \(p\in\mathcal{M}\), there exist smooth charts \((U,\varphi)\) for \(\mathcal{M}\) containing \(p\) and \((V,\psi)\) for \(\mathcal{N}\) containing \(F(p)\) such that \(F(U)\subseteq V\) and the composite map \(\psi\circ F\circ\varphi^{-1}\) is smooth from \(\varphi(U)\) to \(\psi(V)\). In particular, if \(\mathcal{N}\) is an open subset of \(\mathbb{R}^{k}\) or \(\mathbb{R}^{k}_{+}\) with its standard smooth structure, we can take \(\psi\) to be the identity map of \(\mathcal{N}\), and then smoothness of \(F\) simply means that each point of \(\mathcal{M}\) is contained in the domain of a chart \((U,\varphi)\) such that \(F\circ\varphi^{-1}\) is smooth. It is a clear and direct consequence of the definition that identity maps, constant maps, and compositions of smooth maps are all smooth. A map \(F:\mathcal{M}\to\mathcal{N}\) is said to be a _diffeomorphism_ if it is smooth and bijective and \(F^{-1}:\mathcal{N}\to\mathcal{M}\) is also smooth.
We let \(C^{\infty}(\mathcal{M},\mathcal{N})\) denote the set of all smooth maps from \(\mathcal{M}\) to \(\mathcal{N}\), and \(C^{\infty}(\mathcal{M})\) the vector space of all smooth functions from \(\mathcal{M}\) to \(\mathbb{R}\). For every function \(f:M\to\mathbb{R}\) or \(\mathbb{R}^{k}\), we define the support of \(f\), denoted by \(supp\;f\), as the closure of the set \(\{x\in\mathcal{M}:f(x)\neq 0\}\). If \(A\subseteq\mathcal{M}\) is a closed subset and \(U\subseteq\mathcal{M}\) is an open subset containing \(A\), then a _smooth bump function_ for \(A\) supported in \(U\) is a smooth function \(f:\mathcal{M}\to\mathbb{R}\) satisfying \(0\leq f(x)\leq 1\) for all \(x\in M,\,f|_{A}\equiv 1\), and \(supp\;f\subset U\). Such smooth bump functions always exist.
There are various equivalent approaches to define tangent vectors on \(\mathcal{M}\). The most convenient one is via the following definition: for every point \(p\in\mathcal{M}\), a _tangent vector_ at \(p\) is a linear map \(v:C^{\infty}(\mathcal{M})\to\mathbb{R}\) that is a derivation at \(p\), which means that, for all \(f,g\in C^{\infty}(\mathcal{M})\), \(v\) satisfies the product rule
\[v(fg)=f(p)vg+g(p)vf.\]
The set of all tangent vectors at \(p\) is denoted by \(T_{p}\mathcal{M}\) and called the _tangent space_ at \(p\).
Suppose \(\mathcal{M}\) is \(d\)-dimensional and \(\varphi:U\to\widetilde{U}\subseteq\mathbb{R}^{d}\) is a smooth coordinate chart on some open subset \(U\subseteq\mathcal{M}\). Writing the coordinate functions of \(\varphi\) as \(\big{(}x^{(1)},\cdots,x^{(n)}\big{)}\), we define the coordinate vectors \(\partial/\left.\partial x^{(1)}\right|_{p},\cdots,\partial/\left.\partial x^{(n )}\right|_{p}\) by
\[\left.\frac{\partial}{\partial x^{(i)}}\right|_{p}f=\left.\frac{\partial}{ \partial x^{(i)}}\right|_{\varphi(p)}\left(f\circ\varphi^{-1}\right).\]
These vectors form a basis for \(T_{p}\mathcal{M}\), which therefore has dimension \(d\). Thus, once a smooth coordinate chart has been chosen, every tangent vector \(\boldsymbol{v}\in T_{p}M\) can be written uniquely in the form
\[\boldsymbol{v}=v^{(1)}\partial/\left.\partial x^{(1)}\right|_{p}+\cdots+v^{(n) }\partial/\left.\partial x^{(n)}\right|_{p}.\]
If \(F:\mathcal{M}\to\mathcal{N}\) is a smooth map and \(p\) is any point in \(\mathcal{M}\), we define a linear map \(dF_{p}:T_{p}\mathcal{M}\to T_{F(p)}\mathcal{N}\), called the _differential of \(F\) at \(p\)_, with
\[dF_{p}(v)f=v(f\circ F),\quad v\in T_{p}\mathcal{M}.\]
Once we have chosen local coordinates \(\left(x^{(i)}\right)\) for \(\mathcal{M}\) and \(\left(y^{(j)}\right)\) for \(\mathcal{N}\), we find, by unwinding the definitions, that the coordinate representation of the differential map is given by the _Jacobian matrix_ of the coordinate representation of \(F\), which is its matrix of first-order partial derivatives:
\[dF_{p}\left(v^{(i)}\frac{\partial}{\partial x^{(i)}}\bigg{|}_{p}\right)=\left. \frac{\partial\widetilde{F}^{(j)}}{\partial x^{(i)}}(p)v^{(i)}\frac{\partial }{\partial y^{(j)}}\right|_{F(p)}.\]
#### a.2.3 Submanifolds
The theory system of submanifolds is established on the inverse function theorem and its corollaries.
**Theorem A.4** (_Inverse Function Theorem for Manifolds_, Thm. 4.5 of [28]).: _Suppose \(\mathcal{M}\) and \(\mathcal{N}\) are smooth manifolds and \(F:\mathcal{M}\to\mathcal{N}\) is a smooth map. If the linear map \(dF_{p}\) is invertible at some point \(p\in\mathcal{M}\), then there exist connected neighborhoods \(U_{0}\) of \(p\) and \(V_{0}\) of \(F(p)\) such that \(\left.F\right|_{U_{0}}:U_{0}\to V_{0}\) is a diffeomorphism._
The most useful consequence of the inverse function theorem is that a smooth map \(F:\mathcal{M}\to\mathcal{N}\) is said to have _constant rank_ if the linear map \(dF_{p}\) has the same rank at every point \(p\in\mathcal{M}\).
**Theorem A.5** (_Rank Theorem_, Thm. 4.12 of [28]).: _Suppose \(\mathcal{M}\) and \(\mathcal{N}\) are smooth manifolds of dimensions \(m\) and \(n\), respectively, and \(F:\mathcal{M}\to\mathcal{N}\) is a smooth map with constant rank \(r\). For each \(p\in\mathcal{M}\) there exist smooth charts \((U,\varphi)\) for \(\mathcal{M}\) centered at \(p\) and \((V,\psi)\) for \(\mathcal{N}\) centered at \(F(p)\) such that \(F(U)\subseteq V\), in which \(F\) has a coordinate representation of the form_
\[\widetilde{F}\left(x^{(1)},\cdots,x^{(r)},x^{(r+1)},\cdots,x^{(m)}\right)= \left(x^{(1)},\cdots,x^{(r)},0,\cdots,0\right)\]
The most important types of constant-rank maps are listed below. In all of these definitions, \(\mathcal{M}\) and \(\mathcal{N}\) are smooth manifolds, and \(F:\mathcal{M}\to\mathcal{N}\) is a smooth map.
* \(F\) is a _submersion_ if its differential is surjective at each point, or equivalently if it has constant rank equal to \(\dim\mathcal{N}\).
* \(F\) is an _immersion_ if its differential is injective at each point, or equivalently if it has constant rank equal to \(\dim\mathcal{M}\).
* \(F\) is a _local diffeomorphism_ if every point \(p\in\mathcal{M}\) has a neighborhood \(U\) such that \(\left.F\right|_{U}\) is a diffeomorphism onto an open subset of \(\mathcal{N}\), or equivalently if \(F\) is both a submersion and an immersion.
* \(F\) is a _smooth embedding_ if it is an injective immersion that is also a topological embedding (a homeomorphism onto its image, endowed with the subspace topology).
_Remark_ (Prop. 5.5 of [28]).: _If \(\mathcal{M}\) is a smooth manifold, then an embedded submanifold \(\mathcal{N}\subseteq\mathcal{M}\) is properly embedded if and only if it is a closed subset of \(\mathcal{M}\)._
Most submanifolds are presented in the following manner. Suppose \(\Phi:\mathcal{M}\to\mathcal{N}\) is any map. Every subset of the form \(\Phi^{-1}(\{y\})\subseteq\mathcal{M}\) for some \(y\in\mathcal{N}\) is called a _level set_ of \(\Phi\), or the _fiber_ of \(\Phi\) over \(y\). The simpler notation \(\Phi^{-1}(y)\) is also used for a level set when there is no likelihood of ambiguity. Let the _codimension of \(\mathcal{N}\)_ be the difference \(\dim\mathcal{N}-\dim\mathcal{M}\).
**Theorem A.6** (_Constant-Rank Level Set Theorem_, Thm. 5.12 of [28]).: _Suppose \(\mathcal{M}\) and \(\mathcal{N}\) are smooth manifolds, and \(\Phi:\mathcal{M}\to\mathcal{N}\) is a smooth map with constant rank \(r\). Every level set of \(\Phi\) is a properly embedded submanifold of codimension \(r\) in \(\mathcal{M}\)_
**Corollary A.6.1** (_Submersion Level Set Theorem_, Cor. 5.13 of [28]).: _Suppose \(\mathcal{M}\) and \(N\) are smooth manifolds, and \(\Phi:\mathcal{M}\to\mathcal{N}\) is a smooth submersion. Every level set of \(\Phi\) is a properly embedded submanifold of \(\mathcal{M}\), whose codimension is equal to \(\dim N\)._
In fact, a map does not have to be a submersion, or even to have constant rank, for its level sets to be embedded submanifolds. If \(\Phi:\mathcal{M}\to\mathcal{N}\) is a smooth map, a point \(p\in\mathcal{M}\) is called a _regular point_ of \(\Phi\) if the linear map \(d\Phi_{p}:T_{p}\mathcal{M}\to T_{\Phi(p)}\mathcal{N}\) is surjective, and \(p\) is called a _critical point_ of \(\Phi\) if it is not. A point \(c\in\mathcal{N}\) is called a _regular value_ of \(\Phi\) if every point of \(\Phi^{-1}(c)\) is a regular point of \(\Phi\), and a _critical value_ otherwise. A level set \(\Phi^{-1}(c)\) is called a _regular level set_ of \(\Phi\) if \(c\) is a regular value of \(\Phi\).
**Corollary A.6.2** (_Regular Level Set Theorem_, Cor. 5.14 of [28]).: _Let \(\mathcal{M}\) and \(\mathcal{N}\) be smooth manifolds, and let \(\Phi:\mathcal{M}\to\mathcal{N}\) be a smooth map. Every regular level set of \(\Phi\) is a properly embedded submanifold of \(\mathcal{M}\) whose codimension is equal to \(\dim\mathcal{N}\)._
### Riemannian manifold
There are many important geometric concepts in Euclidean space, such as length and angle, which are derived from inner product. To extend these geometric ideas to abstract smooth manifolds, we need a structure that amounts to a smoothly varying choice of inner product on each tangent space.
Let \(\mathcal{M}\) be a smooth manifold. A _Riemannian metric_ on \(\mathcal{M}\) is a collection of inner products, whose element at \(p\mathcal{M}\) is an inner product \(g_{p}:T_{p}\mathcal{M}\times T_{p}\mathcal{M}\to\mathbb{R}\) that varies smoothly with respect to \(p\). A _Riemannian manifold_ is a pair \((\mathcal{M},g)\), where \(\mathcal{M}\) is a smooth manifold and \(g\) is a specific choice of Riemannian metric on \(\mathcal{M}\). If \(\mathcal{M}\) is understood to be endowed with a specific Riemannian metric, a conventional statement often used is "\(\mathcal{M}\) is a Riemannian manifold." In the following sections, we assume \((\mathcal{M},g)\) is an oriented Riemannian \(d\)-manifold.
Another important construction provided by a metric on an oriented manifold is a canonical volume form. For \((\mathcal{M},g)\), there is a unique \(d\)-form \(dV_{g}\) on \(\mathcal{M}\), called the _Riemannian volume form_, characterized by
\[dV_{g}=\sqrt{\det{(g_{ij})}}dx^{(1)}\wedge\cdots\wedge dx^{(d)},\]
where the \(dx^{(i)}\) are \(1\)-forms from any oriented local coordinates. Here, \(\det{(g_{ij})}\) is the absolute value of the determinant of the matrix representation of the metric tensor on the manifold. The Riemannian volume form allows us to integrate functions on an oriented Riemannian manifold. Let \(f\) be a continuous, compactly supported real-valued function on \((\mathcal{M},g)\). Then, \(fdV_{g}\) is a compactly supported \(d\)-form. Therefore, the integral \(\int_{\mathcal{M}}fdV_{g}\) makes sense, and we define it as the _integral of \(\boldsymbol{f}\) over \(\boldsymbol{\mathcal{M}}\)_. Similarly, we can define probability measures on \(\mathcal{M}\), and if \(\mathcal{M}\) is compact, the _volume_ of \(\boldsymbol{\mathcal{M}}\) can be evaluated as
\[\operatorname{Vol}(\mathcal{M})=\int_{\mathcal{M}}dV_{g}=\int_{\mathcal{M}}1dV _{g}.\]
A _curve_ in \(\mathcal{M}\) usually means a _parametrized curve_, namely a continuous map \(\gamma:I\to\mathcal{M}\), where \(I\subseteq\mathbb{R}\) is some interval. To say that \(\gamma\) is a _smooth curve_ is to say that it is smooth as a map from \(I\) to \(M\). A smooth curve \(\gamma:I\to\mathcal{M}\) has a _well-defined velocity_\(\gamma^{\prime}(t)\in T_{\gamma(t)}\mathcal{M}\) for each \(t\in I\). We say that \(\gamma\) is a _regular curve_ if \(\gamma^{\prime}(t)\neq 0\) for \(t\in I\). This implies that the image of \(\gamma\) has no "corners" or "kinks." For brevity, we refer to a _piecewise regular curve segment
\(\gamma:[a,b]\to\mathcal{M}\) as an _admissible curve_, and any partition \((a_{0},\cdots,a_{k})\) such that \(\gamma|_{[a_{i-1},a_{i}]}\) is smooth for each \(i\) as an _admissible partition_ for \(\gamma\). If \(\gamma\) is an admissible curve, we define _the length of \(\boldsymbol{\gamma}\)_ as
\[L_{g}(\gamma)=\int_{a}^{b}\left|\gamma^{\prime}(t)\right|_{g}dt.\]
The _speed_ of \(\gamma\) at any time \(t\in I\) is defined as the scalar \(\left|\gamma^{\prime}(t)\right|\). We say that \(\gamma\) is a _unit-speed curve_ if \(\left|\gamma^{\prime}(t)\right|=1\) for all \(t\), and a _constant-speed curve_ if \(\left|\gamma^{\prime}(t)\right|\) is constant. If \(\gamma:[a,b]\to M\) is a unit-speed admissible curve, then its _arc-length function_ has the simple form \(s(t)=t-a\). For this reason, a unit-speed admissible curve whose parameter interval is of the form \([0,b]\) is said to be _parametrized by arc-length_.
For each pair of points \(p,q\in\mathcal{M}\), we define the _Riemannian distance_ from \(p\) to \(q\), denoted by \(d_{\mathcal{M}}(p,q)\), as the infimum of the lengths of all admissible curves from \(p\) to \(q\). When \(\mathcal{M}\) is connected, we say an admissible curve \(\gamma\) is a _minimizing curve_ if and only if \(L_{g}(\gamma)\) is equal to the distance between its endpoints. A unit-speed minimizing curve is also called a _geodesic_. Thus, we use _geodesic distance_ and Riemannian distance interchangeably.
[Existence and Uniqueness of Geodesics, Thm 4.27 of [29]] For every \(p\in\mathcal{M},w\in T_{p}\mathcal{M}\), and \(t_{0}\in\mathbb{R}\), there exist an open interval \(I\subseteq\mathbb{R}\) containing \(t_{0}\) and a geodesic \(\gamma:I\to\mathcal{M}\) satisfying \(\gamma\left(t_{0}\right)=p\) and \(\gamma^{\prime}\left(t_{0}\right)=w\). Any two such geodesics agree on their common domain.
A geodesic \(\gamma:I\to\mathcal{M}\) is said to be maximal if it cannot be extended to a geodesic on a larger interval. A geodesic segment is a geodesic whose domain is a compact interval. For each \(p\in\mathcal{M}\), the _(restricted) exponential map at \(\boldsymbol{p}\)_, denoted by \(\exp_{p}\), is defined by
\[\exp_{p}(v)=\gamma_{v}(1),\]
where \(v\in T_{p}\mathcal{M}\) and \(\gamma_{v}\) are the unique geodesic with initial location \(\gamma_{v}(0)=p\) and \(\gamma^{\prime}_{v}=v\). The exponential map is a diffeomorphism in a neighborhood of the tangent space. Similarly, we define the _logarithm map_\(\log_{p}\) as the inverse of \(\exp_{p}\). The injectivity radius of \(\mathcal{M}\) at \(p\), denoted by \(\mathrm{inj}(p)\), is the supremum of all \(r>0\) such that \(\exp_{p}\) is a diffeomorphism from \(\mathcal{B}(0,r)\subseteq T_{p}\mathcal{M}\) onto its image.
### Other concepts
[Normal matrices] A matrix square matrix \(A\) is normal when \(AA^{*}=A^{*}A\), where \(A^{*}\) is its conjugate-transpose. This is equivalent to saying that there exists a unitary matrix \(U\) such that \(UAU^{*}\) is diagonal (and the diagonal elements are precisely the eigenvalues of \(A\)). Every Hermitian and every unitary matrix is normal.
[Trace norm] The trace norm is defined for every \(A\) by
\[\|A\|_{F}^{2}:=\mathrm{Tr}\left(AA^{*}\right)=\mathrm{Tr}\left(A^{*}A\right)= \sum_{1\leq i,j\leq n}\left|A_{i,j}\right|^{2}.\]
This is also known as the Frobenius, Schur, or Hilbert-Schmidt norm.
[Principal angles] Suppose \(\mathcal{A}\) and \(\mathcal{B}\) are two vector spaces; we call each
\[\theta_{i}(\mathcal{A},\mathcal{B})=\arccos(\lambda_{i}(\mathcal{A},\mathcal{ B}))\]
the \(i\)-th principal angle between \(\mathcal{A}\) and \(\mathcal{B}\), where \(\lambda_{i}(\mathcal{A},\mathcal{B})\) is the \(i\)-th largest eigenvalue of \(\mathcal{A}^{T}\mathcal{B}\). Let \(\Theta(\mathcal{A},\mathcal{B})\) denote the diagonal matrix whose \(i\)-th diagonal entry is \(\theta_{i}(\mathcal{A},\mathcal{B})\), and let \(\sin\Theta(\mathcal{A},\mathcal{B})\) be performed entrywise, i.e.,
\[\sin\Theta(\mathcal{A},\mathcal{B}):=diag\left(\sin\theta_{i}(\mathcal{A}, \mathcal{B})\right).\]
## Appendix B Proof omitted from the main text
### Some useful lemmas and corollaries
**Lemma B.1** (Chernoff bound).: _The generic Chernoff bound for a random variable \(X\) is attained by applying Markov's inequality to \(e^{tX}\). For every \(t>0\), there is_
\[\mathbb{P}(X\geq a)=\mathbb{P}\left(e^{tX}\geq e^{ta}\right)\leq\frac{\mathbb{ E}\left(e^{tX}\right)}{e^{ta}}.\]
_Since the inequality holds for every \(t>0\), we have_
\[\mathbb{P}(X\geq a)\leq\inf_{t>0}\frac{\mathbb{E}\left(e^{tX}\right)}{e^{ta}}.\]
**Corollary B.1.1**.: _Let \(\xi\sim N(0,\sigma^{2}I_{D})\) be a \(D\)-dimensional normal random vector with mean \(0\) and covariance matrix \(\sigma^{2}I_{D}\). According to the Chernoff bound,_
\[\mathbb{P}(\|\xi\|_{2}\geq t)\leq\left(\frac{t^{2}}{D\sigma^{2}}\exp\left(1- \frac{t^{2}}{D\sigma^{2}}\right)\right)^{D/2}\]
_for \(t\geq\sqrt{D}\sigma\)._
**Corollary B.1.2**.: _Let \(n\sim Bino(N,p)\) be a binomial random variable with size \(N\) and probability \(p\). According to the Chernoff bound,_
\[\mathbb{P}\left(\frac{n}{N}\geq p+\epsilon\right)\leq\exp\left\{-N\mathcal{D} _{KL}\left(p+\epsilon\|p\right)\right\},\]
\[\mathbb{P}\left(\frac{n}{N}\leq p-\epsilon\right)\leq\exp\left\{-N\mathcal{D} _{KL}\left(p-\epsilon\|p\right)\right\},\]
_for \(\epsilon>0\), where_
\[\mathcal{D}_{KL}(a\|b)=a\log(\frac{a}{b})+(1-a)\log(\frac{1-a}{1-b})\]
_denotes the Kullback-Leibler divergence between Bernoulli distributions \(Be(a)\) and \(Be(b)\)._
**Lemma B.2**.: _Assume there is a sequence of observed points \(\{y_{i}\}_{i=1}^{n}\), with a series of weights \(W(y_{1}),\cdots,W(y_{1})\). Let the local moving weighted average be_
\[\widehat{\mu}_{n}=\frac{\sum_{i=1}^{n}W(y_{i})y_{i}}{\sum_{i=1}^{n}W(y_{i})}.\]
_Then, if \(\{y:W(y)>0\}\subset\mathcal{B}_{D}(z,r)\),_
\[\sqrt{n}\left(\widehat{\mu}_{n}-\widehat{\mu}_{w}\right)\overset{d}{\to}N \left(0,\frac{\Sigma}{\mathbb{E}(W)^{2}}\right),\]
_with \(\Sigma\leq r^{2}I_{D}\) and \(\widehat{\mu}_{w}=\mathbb{E}(WY)/\mathbb{E}(W)\)._
Proof.: According to the central limit theorem and the law of large numbers,
\[\frac{\sum_{i=1}^{n}w_{i}}{n}\overset{a.s.}{\to}\mathbb{E}(W),\]
\[\sqrt{n}\left(\frac{\sum_{i=1}^{n}w_{i}y_{i}}{n}-\mathbb{E}(WY)\right) \overset{d}{\to}N(0,\Sigma),\]
where \(\Sigma\leq r^{2}I_{D}\). Thus,
\[\sqrt{n}\left(\widehat{\mu}_{n}-\frac{\mathbb{E}(WY)}{\mathbb{E}(W)}\right) \overset{d}{\rightarrow}N\left(0,\frac{\Sigma}{\mathbb{E}(W)^{2}}\right).\]
**Corollary B.2.1**.: _In the case of \(n=CD\sigma^{-3}\), with \(\sigma\) sufficiently small,_
\[\mathbb{P}(\|\widehat{\mu}_{n}-\widehat{\mu}_{w}\|_{2}\leq c\sigma^{2})\geq 1 -C_{1}\sigma^{c_{1}-1}\exp\left(-C_{2}\sigma^{c_{1}-1}\right),\]
_for some constant \(C_{1}\), \(C_{2}\), and any \(c_{1}\in(0,1)\)._
Proof.: According to Corollary B.1.1, when \(\sigma\) is sufficiently small,
\[\mathbb{P}(\|\widehat{\mu}_{n}-\widehat{\mu}_{w}\|_{2}\leq c\sigma^{2})\geq \mathbb{P}(\frac{r}{\sqrt{n}}\sqrt{\chi}\leq c\sigma^{2})\]
\[\geq 1-\left(\frac{c}{D}\frac{n\sigma^{2}}{\log(1/\sigma)}\exp\left\{1-\frac{ c}{D}\frac{n\sigma^{2}}{\log(1/\sigma)}\right\}\right)^{D/2},\]
for \(n\geq\frac{D}{c}\sigma^{-2}\log(1/\sigma)\). Thus, in the case of \(n=CD\sigma^{-3}\), the probability is close to 1.
### Proof of content in Section 2
#### b.2.1 Proof of Lemma 2.4
Proof.: Recall that, in our model, \(Y=X+\xi\), with \(X\sim\omega(\mathcal{M})\) and \(\xi\sim N(0,\sigma^{2}I_{D})\). We first check the Chernoff bound for the noise term, which is
\[\mathbb{P}(\|\xi\|_{2}\geq c_{1}r) \leq \left(\frac{c_{1}^{2}r^{2}}{D\sigma^{2}}\exp\left\{1-\frac{c_{1}^ {2}r^{2}}{D\sigma^{2}}\right\}\right)^{D/2}\] \[= \left(\frac{c_{1}^{2}C^{2}2d}{D}\log(1/\sigma)\exp\left\{1-\frac {c_{1}^{2}C^{2}2d}{D}\log(1/\sigma)\right\}\right)^{D/2}\] \[= c_{2}\left(\log(1/\sigma)\right)^{D/2}\sigma^{c_{1}^{2}C^{2}d}\] \[\leq c_{2}r^{d},\]
where the first inequality comes from the Chernoff bound, while the last one occurs because \(\sigma\) is sufficiently small.
Then, for \(\mathbb{P}(Y\in\mathcal{B}_{D}(z,r))\), on one hand,
\[\mathbb{P}(Y\in\mathcal{B}_{D}(z,r)) \geq \mathbb{P}(\|\xi\|_{2}\leq c_{1}r)\mathbb{P}(X\in\mathcal{M}\cap \mathcal{B}_{D}(z,(1-c_{1})r))\] \[\geq (1-c_{2}r^{d})\frac{vol(\mathcal{M}\cap\mathcal{B}_{D}(z,(1-c_{1} )r))}{vol(\mathcal{M})}\] \[\geq c_{3}r^{d}.\]
On the other,
\[\mathbb{P}(Y\in\mathcal{B}_{D}(z,r)) = \mathbb{P}(X\in\mathcal{M}\cap\mathcal{B}_{D}(z,C_{2}r),\|Y-z\|_{ 2}\leq r)\] \[+\mathbb{P}(X\notin\mathcal{M}\cap\mathcal{B}_{D}(z,C_{2}r),\|Y-z \|_{2}\leq r)\] \[\leq \mathbb{P}(X\in\mathcal{M}\cap\mathcal{B}_{D}(z,C_{2}r))+\mathbb{ P}(\|\xi\|_{2}\geq(C_{2}-1)r)\] \[\leq \frac{vol(\mathcal{M}\cap\mathcal{B}_{D}(z,(1-c_{1})r))}{vol( \mathcal{M})}+c_{4}r^{d}\] \[\leq c_{5}r^{d}.\]
Therefore, \(\mathbb{P}(Y\in\mathcal{B}_{D}(z,r))=cr^{d}\) for some constant \(c\).
#### b.2.2 Proof of Corollary 2.4.1
Proof.: The number of points \(n\) can be viewed as a binomial random variable with size \(N\) and probability parameter \(p=cr^{d}\). For any \(c_{1}\in(0,1)\), according to Corollary B.1.2,
\[\mathbb{P}\left(\frac{n}{N}\leq(1-c_{1})p\right)\leq \exp\left\{-N\left((1-c_{1})p\log(1-c_{1})\right)\right\}\] \[\times\exp\left\{-N\left((1-(1-c_{1})p)\log(\frac{1-(1-c_{1})p}{ 1-p})\right)\right\}\] \[\leq \exp\left(-C_{1}\sigma^{-3}\right),\]
\[\mathbb{P}\left(\frac{n}{N}\geq(1+c_{1})p\right)\leq \exp\left\{-N\left((1+c_{1})p\log(1+c_{1})\right)\right\}\] \[\times\exp\left\{-N\left((1-(1+c_{1})p)\log(\frac{1-(1+c_{1})p}{ 1-p})\right)\right\}\] \[\leq \exp\left(-C_{2}\sigma^{-3}\right).\]
Therefore,
\[\mathbb{P}(C_{3}D\sigma^{-3}\leq n\leq C_{4}D\sigma^{-3})\geq 1-2\exp\left(-C_ {5}\sigma^{-3}\right).\]
When \(\sigma\) is sufficiently small, the probability will be close to \(1\).
#### b.2.3 Proof of Proposition 2.5
Proof.: Without loss of generality, we adjust the Cartesian-coordinate system such that \(z=(\Delta,0,\cdots,0)\) and \(\xi=(\xi^{(1)},\cdots,\xi^{(D)})\), with \(\Delta=\|z\|_{2}\leq C_{1}\sigma\) for some constant \(C_{1}\). As
Figure 17: Illustration of the integral region in the proof of Proposition 2.5: (a) The region of calculating the conditional expectation \(\mathbb{E}(\xi|\xi\in\mathcal{B}_{D}(\Delta U,r))\), where the two shaded parts cancel each other out; (b) Three multidimensional cubes designed for bounding the expectation.
illustrated in Fig.17, a large part of the calculating region is canceled, and the expectations can be bounded through three integrations on multidimensional cubes. That is,
\[V_{1} = [\Delta-r,\ \Delta+r]\times[-r,\ r]^{D-1},\] \[V_{2} = [\frac{\Delta-r}{\sqrt{D}},\ \frac{r-\Delta}{\sqrt{D}}]^{D},\] \[V_{3} = [\Delta-\frac{r}{\sqrt{D}},\ \Delta+\frac{r}{\sqrt{D}}]\times[- \frac{r}{\sqrt{D}},\ \frac{r}{\sqrt{D}}]^{D-1}.\]
To bound the distance between \(\mathbb{E}(\xi|\xi\in\mathcal{B}_{D}(z,r))\) and the origin, let
\[I_{1} = \int_{V_{1}}|\xi^{(1)}|(2\pi\sigma^{2})^{-\frac{D}{2}}\exp\{- \frac{\|\xi\|_{2}^{2}}{2\sigma^{2}}\}d\xi,\] \[I_{2} = \int_{V_{2}}|\xi^{(1)}|(2\pi\sigma^{2})^{-\frac{D}{2}}\exp\{- \frac{\|\xi\|_{2}^{2}}{2\sigma^{2}}\}d\xi,\] \[I_{3} = \int_{V_{3}}(2\pi\sigma^{2})^{-\frac{D}{2}}\exp\{-\frac{\|\xi\|_ {2}^{2}}{2\sigma^{2}}\}d\xi.\]
Because of the symmetry,
\[\|\mathbb{E}(\xi|\xi\in\mathcal{B}_{D}(z,r))\|_{2} = \frac{\int_{\mathcal{B}_{D}(z,r)}\xi^{(1)}(2\pi\sigma^{2})^{- \frac{D}{2}}\exp\{-\frac{\|\xi\|_{2}^{2}}{2\sigma^{2}}\}d\xi}{\int_{\mathcal{B }_{D}(z,r)}(2\pi\sigma^{2})^{-\frac{D}{2}}\exp\{-\frac{\|\xi\|_{2}^{2}}{2 \sigma^{2}}\}d\xi}\] \[\leq \frac{I_{1}-I_{2}}{I_{3}}.\]
For simplicity, denote
\[2^{D-1}(2\pi\sigma^{2})^{\frac{D}{2}}I_{1} = \int_{\Delta-r}^{\Delta+r}|t|\exp\{-\frac{t^{2}}{2\sigma^{2}}\} dt\left(\int_{0}^{r}\exp\{-\frac{s^{2}}{2\sigma^{2}}\}ds\right)^{D-1}\] \[:= I_{1}^{+}+I_{1}^{-},\] \[2^{D-1}(2\pi\sigma^{2})^{\frac{D}{2}}I_{2} = \int_{\frac{\Delta-r}{\sqrt{D}}}^{\frac{r-\Delta}{\sqrt{D}}}|t| \exp\{-\frac{t^{2}}{2\sigma^{2}}\}dt\left(\int_{0}^{\frac{r-\Delta}{\sqrt{D}} }\exp\{-\frac{s^{2}}{2\sigma^{2}}\}ds\right)^{D-1}\] \[:= I_{2}^{+}+I_{2}^{-}.\]
Meanwhile, let
\[a = \int_{0}^{\frac{r-\Delta}{\sqrt{D}}}t\exp\{-\frac{t^{2}}{2\sigma^{2}} \}dt,\quad\delta_{a}=\int_{\frac{r-\Delta}{\sqrt{D}}}^{r+\Delta}t\exp\{-\frac{ t^{2}}{2\sigma^{2}}\}dt,\] \[b = \int_{0}^{\frac{r-\Delta}{\sqrt{D}}}\exp\{-\frac{s^{2}}{2\sigma^{ 2}}\}ds,\quad\delta_{b}=\int_{\frac{r-\Delta}{\sqrt{D}}}^{r}\exp\{-\frac{s^{2} }{2\sigma^{2}}\}ds.\]
Then, there is
\[a = \sigma^{2}\left(1-\exp\{-\frac{Cr^{2}}{2\sigma^{2}}\}\right)< \sigma^{2},\quad b=\sigma(\Phi(C)-\Phi(0))<C\sigma,\] \[\delta_{a} = \sigma^{2}\left(\exp\left\{-\frac{(\frac{r-\Delta}{\sqrt{D}})^{2} }{2\sigma^{2}}\right\}-\exp\left\{-\frac{(r+\Delta)^{2}}{2\sigma^{2}}\right\} \right)<C\sigma^{3},\]
\[\delta_{b}<(r-\frac{r-\Delta}{\sqrt{D}})\exp\left\{-\frac{(\frac{r-\Delta}{\sqrt{D}}) ^{2}}{2\sigma^{2}}\right\}=C\sigma^{2}\log(1/\sigma)<C\sigma.\]
Furthermore, we can obtain
\[I_{1}^{+}-I_{2}^{+} = (a+\delta_{a})(b+\delta_{b})^{D-1}-ab^{D-1}\] \[= a(b+\delta_{b})^{D-1}-ab^{D-1}+\delta_{a}(b+\delta_{b})^{D-1}\] \[= \delta_{a}(b+\delta_{b})^{D-1}+a((b+\delta_{b})^{D-1}-b^{D-1})\] \[< C\sigma^{D+2}+a\delta_{b}\left\{(b+\delta_{b})^{D-2}+(b+\delta_ {b})^{D-3}b+\cdots+(b+\delta_{b})b^{D-3}+b^{D-2}\right\}\] \[< C\sigma^{D+2}.\]
Thus, \(I_{1}-I_{2}<C\sigma^{-D}(I_{1}^{+}-I_{2}^{+})=C\sigma^{2}\).
Additionally, it is clear that \(I_{3}>C\), and hence,
\[\|\mathbb{E}(\xi|\xi\in\mathcal{B}_{D}(z,r))\|_{2}\leq C\sigma^{2}.\]
#### b.2.4 Proof of Lemma 2.6
Proof.: Let \(\nu_{c}(y)=\int_{\mathcal{M}\setminus\mathcal{M}_{R}}\phi_{\sigma}(y-x) \omega(x)dx\); then, according to the model setting,
\[\nu_{c}(y)=\nu(y)-\nu_{R}(y)\geq 0.\]
The probability measure within \(\mathcal{B}_{D}(z,r)\) is proportional to \(\nu(y)\), which can be expressed as
\[\tilde{\nu}(y) = \frac{\nu(y)}{\int_{\mathcal{B}_{D}(z,r)}\nu(y)dy}\] \[= \frac{\nu_{R}(y)+\nu_{c}(y)}{\int_{\mathcal{B}_{D}(z,r)}\nu_{R}( y)dy+\int_{\mathcal{B}_{D}(z,r)}\nu_{c}(y)dy}\]
for \(y\in\mathcal{B}_{D}(z,r)\). Then, the relative difference between \(\tilde{\nu}(y)\) and \(\tilde{\nu}_{R}(y)\) can be evaluated as
\[|\tilde{\nu}(y)-\tilde{\nu}_{R}(y)| = \left|\frac{\nu_{R}(y)+\nu_{c}(y)}{\int_{\mathcal{B}_{D}(z,r)} \nu_{R}(y)dy+\int_{\mathcal{B}_{D}(z,r)}\nu_{c}(y)dy}-\frac{\nu_{R}(y)}{\int_ {\mathcal{B}_{D}(z,r)}\nu_{R}(y)dy}\right|\] \[\leq \left|\frac{\nu_{c}(y)}{\nu_{R}(y)}-\frac{\int_{\mathcal{B}_{D}( z,r)}\nu_{c}(y)dy}{\int_{\mathcal{B}_{D}(z,r)}\nu_{R}(y)dy}\right|\tilde{\nu}_{R}(y).\]
Let \(R=r+C_{1}\sigma\sqrt{(d+\eta)\log(1/\sigma)}\) and \(R^{\prime}=r+C_{1}\sigma<R\); then,
\[\frac{\nu_{c}(y)}{\nu_{R}(y)} = \frac{\int_{\mathcal{M}\setminus\mathcal{M}_{R}}\phi_{\sigma}(y-x )\omega(x)dy}{\int_{\mathcal{M}_{R}}\phi_{\sigma}(y-x)\omega(x)dy}\] \[\leq \frac{\phi_{\sigma}(R-r)\omega(\mathcal{M})}{\phi_{\sigma}(R^{ \prime}-r)\omega(\mathcal{M}\cap\mathcal{B}_{D}(y,R^{\prime}))}\] \[\leq C\frac{\sigma^{d+\eta}V}{vol(\mathcal{M}\cap\mathcal{B}_{D}( y,R^{\prime}))}\] \[\leq C\frac{\sigma^{d+\eta}}{(R^{\prime})^{d}}.\]
Therefore,
\[|\tilde{\nu}(y)-\tilde{\nu}_{R}(y)|\leq C\sigma^{\eta}\tilde{\nu}_{R}(y).\]
### Proof of content in Section 3
#### b.3.1 Proof of Proposition 3
Recall that \(z^{*}\) is the origin, and \(z-z^{*}\) is the \((d+1)\)-th direction in the Cartesian-coordinate system. Then,
\[\mu_{z}^{\mathbb{B}} =(\mu^{(1)},\cdots,\mu^{(d)},\mu^{(d+1)},\mu^{(d+2)},\cdots,\mu^{ (D)}),\] \[z =(0,\cdots,0,\Delta,0,\cdots,0),\]
where \(\Delta=\|z-z^{*}\|\leq c\sigma\). The angle between \(\mu_{z}^{\mathbb{B}}-z\) and \(z^{*}-z\) can be represented by its sine as follows:
\[\sin^{2}(\Theta(\mu_{z}^{\mathbb{B}}-z,\;z^{*}-z)) =1-\cos^{2}(\Theta(\mu_{z}^{\mathbb{B}}-z,\;z^{*}-z))\] \[=1-\left(\frac{(\mu_{z}^{\mathbb{B}}-z)\cdot(z^{*}-z)}{\|\mu_{z} ^{\mathbb{B}}-z\|_{2}\|z^{*}-z\|_{2}}\right)^{2}\] \[=\frac{\sum_{i\neq d+1}(\mu^{(i)})^{2}}{\sum_{i\neq d+1}(\mu^{(i) })^{2}+(\mu^{(d+1)}-\Delta)^{2}}\]
If
\[\left\{\begin{aligned} |\mu^{(i)}-\Delta|& \geq c_{1}\sigma,&\text{for }i=d+1;\\ |\mu^{(i)}|&\leq c_{2}\sigma^{2}\sqrt{\log(1/\sigma)},& \text{for }i\neq d+1.\end{aligned}\right. \tag{14}\]
then
\[\sin^{2}(\Theta(\mu_{z}^{\mathbb{B}}-z,\;z^{*}-z))\leq\frac{(D-1)C_{2}^{2} \sigma^{4}\log(1/\sigma)}{(D-1)C_{2}^{2}\sigma^{4}\log(1/\sigma)+C_{1}^{2} \sigma^{2}}\leq C\sigma^{2}\log(1/\sigma)\]
for some constant \(C\).
In other words, equation (14) is sufficient for \(\sin(\Theta(\mu_{z}^{\mathbb{B}}-z,\;z^{*}-z))\leq C\sigma\sqrt{\log(1/\sigma)}\).
#### b.3.2 Proof of Lemma 3
To prove Lemma 3, the following propositions are needed.
Assume there is a mapping \(\Psi:\mathbb{D}\to\mathcal{M}_{R}\) that satisfies, for any point \(z=(z_{1},\cdots,z_{d},0,\cdots,0)\in\mathbb{D}\),
\[\Psi(z)=(z_{1},\cdots,z_{d},\psi(z_{1},\cdots,z_{d})).\]
Since the approximation error of the tangent space for the manifold localization is in quadratic order,
\[d(\Psi(z))=(1+g(z))dz\]
with \(|g(z)|<C\|z\|_{2}\).
Let \(\delta_{z}=\Psi(z)-z\) for \(z\in\mathbb{D}\). Then, there is
\[\nu_{R}(y) =\int_{\mathcal{M}_{R}}\phi_{\sigma}(y-x)\omega(x)dx\] \[=\int_{\mathcal{M}_{R}}\phi_{\sigma}(y-\Psi(z))\omega(\Psi(z))d \Psi(z)\] \[=\frac{1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z-\delta_{z})(1+g(z) )dz\] \[=\frac{1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z-\delta_{z})dz+\frac {1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z-\delta_{z})g(z)dz.\]
The difference between \(\nu_{\mathbb{D}}(y)\) and the first term of \(\nu_{R}(y)\) can be expressed as follows:
\[\Big{|}\nu_{\mathbb{D}}(y) -\frac{1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z-\delta_{z})dz\Big{|} \!=\!\Big{|}\frac{1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z)-\phi_{\sigma}(y-z- \delta_{z})dz\Big{|}\] \[\leq\frac{1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z)\left|1-\exp \left\{-\frac{\|x-z-\delta_{z}\|_{2}^{2}-\|x-z\|_{2}^{2}}{2\sigma^{2}}\right\} \right|dz\] \[\leq\frac{1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z)\frac{\|x-z- \delta_{z}\|_{2}^{2}-\|x-z\|_{2}^{2}}{2\sigma^{2}}dz\] \[\leq\frac{1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z)\frac{\|\delta_ {z}\|_{2}^{2}}{2\sigma^{2}}dz\] \[\leq C\sigma\nu_{\mathbb{D}}(y),\]
for some constant \(C\). Moreover, the second term of \(\nu_{R}(y)\) is of higher order:
\[\left|\frac{1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z-\delta_{z})g( z)dz\right| \leq\frac{1}{V}\int_{\mathbb{D}}\phi_{\sigma}(y-z-\delta_{z})\|z \|_{2}dz\] \[\leq C\sigma\sqrt{\log(1/\sigma)}\nu_{\mathbb{D}}(y).\]
#### b.3.3 Proof of Lemma 3.4
Proof.: Assume the manifold can be regard as \(\mathbb{D}=T_{z^{*}}\mathcal{M}\cap\mathcal{B}_{D}(y,R)\) locally with
\[R=r+C\sigma\sqrt{\log(1/\sigma)}\gg r.\]
We still investigate the conditional expectation within \(\mathcal{B}_{D}(z,r)\), where we use \(\nu_{\mathbb{D}}(y)\) to denote the density function of \(y\) and \(\widetilde{\nu}_{\mathbb{D}}(y)\) to denote its normalized version in \(\mathcal{B}_{D}(z,r)\). Similarly, we let \(z^{*}\) be the origin, \(z-z^{*}\) be the \((d+1)\)-th direction, and \(\mu_{z}^{\mathbb{B}}=(\mu^{(1)},\cdots,\mu^{(D)})\). Then, for the \(i\)-th direction,
\[\mu^{(i)}=\int_{\mathcal{B}_{D}(z,r)}y^{(i)}\widetilde{\nu}_{\mathbb{D}}(y)\, dy=\frac{\int_{\mathcal{B}_{D}(z,r)}y^{(i)}\nu_{\mathbb{D}}(y)\,dy}{\int_{ \mathcal{B}_{D}(z,r)}\nu_{\mathbb{D}}(y)\,dy}.\]
We first prove that \(\mu^{(i)}=0\) for \(i\neq d+1\):
Since the ball \(\mathcal{B}_{D}(z,r)\) is centrosymmetric for \(i\neq d+1\), there exists a mapping \(h_{i}\) for each \(i\neq d+1\) such that, for any \(y=(y^{(1)},\cdots,y^{(i)},\cdots,y^{(D)})\in\mathcal{B}_{D}(z,r)\),
\[h_{i}\!:(y^{(1)},\cdots,y^{(i)},\cdots,y^{(D)})\mapsto(y^{(1)},\cdots,-y^{(i) },\cdots,y^{(D)}),\]
and \(h_{i}(y)\in\mathcal{B}_{D}(z,r)\). That is, for all \(y\in\mathcal{B}_{D}(z,r)\), \(h_{i}(y)\) is its mirror with respect to the \(i\)-th direction, and
\[y\in\mathcal{B}_{D}(y_{r}) \Leftrightarrow h_{i}(y)\in\mathcal{B}_{D}(z,r),\quad\text{for }i\neq d+1,\] \[x\in\mathbb{D} \Rightarrow h_{i}(x)\in\mathbb{D},\quad\text{for }i=1,\cdots,d,\] \[x\in\mathbb{D} \Rightarrow h_{i}(x)=x,\quad\text{for }i=d+1,\cdots,D.\]
Let \(\mathcal{B}_{i}^{+}\) and \(\mathcal{B}_{i}^{-}\) be two hemispheres such that
\[\mathcal{B}_{i}^{+}=\left\{y\in\mathcal{B}_{D}(z,r):y^{(i)}>0\right\},\quad \mathcal{B}_{i}^{-}=\left\{y\in\mathcal{B}_{D}(z,r):y^{(i)}<0\right\}.\]
Then,
\[\mu^{(i)} = \int_{\mathcal{B}_{i}^{+}}y^{(i)}\widetilde{\nu}_{\mathbb{D}}(y) dy+\int_{\mathcal{B}_{i}^{-}}y^{(i)}\widetilde{\nu}_{\mathbb{D}}(y)dy\] \[= \int_{\mathcal{B}_{i}^{+}}y^{(i)}\widetilde{\nu}_{\mathbb{D}}(y) dy+\int_{\mathcal{B}_{i}^{+}}(h_{i}(y))^{(i)}\widetilde{\nu}_{\mathbb{D}}(h_{i}(y) )d(h_{i}(y))\] \[= \int_{\mathcal{B}_{i}^{+}}y^{(i)}(\widetilde{\nu}_{\mathbb{D}}(y )-\widetilde{\nu}_{\mathbb{D}}(h_{i}(y)))dy\]
To show \(\mu^{(i)}=0\), it is sufficient to show \(\widetilde{\nu}_{\mathbb{D}}(y)=\widetilde{\nu}_{\mathbb{D}}(h_{i}(y))\) or \(\nu_{\mathbb{D}}(y)=\nu_{\mathbb{D}}(h_{i}(y))\). Recall that
\[\nu_{\mathbb{D}}(y)=\int_{\mathbb{D}}\phi_{\sigma}(y-x)\omega(x)dx,\]
and
\[\|y-x\|_{2}=\|h_{i}(y)-h_{i}(x)\|_{2},\quad\|h_{i}(y)-x\|_{2}=\|y-h_{i}(x)\|_{ 2}.\]
Therefore, for \(i=1,\cdots,d\),
\[\nu_{\mathbb{D}}(h_{i}(y)) = \int_{\mathbb{D}}\phi_{\sigma}(h_{i}(y)-x)\omega(x)dx\] \[= \int_{\mathbb{D}}\phi_{\sigma}(y-h_{i}(x))\omega(h_{i}(x))dh_{i}(x)\] \[= \nu_{\mathbb{D}}(y),\]
and, for \(i=d+2,\cdots,D\),
\[\nu_{\mathbb{D}}(y) = \int_{\mathbb{D}}\phi_{\sigma}(y-x)\omega(x)dx\] \[= \int_{\mathbb{D}}\phi_{\sigma}(h_{i}(y)-h_{i}(x))\omega(h_{i}(x) )dh_{i}(x)\] \[= \int_{\mathbb{D}}\phi_{\sigma}(h_{i}(y)-x)\omega(x)dx\] \[= \nu_{\mathbb{D}}(h_{i}(y)).\]
Thus,
\[\mu^{(i)}=0,\quad\text{for }i\neq d+1.\]
For \(i=d+1\), we need to bound \(|\Delta-\mu^{(d+1)}|\) below:
According to Lemma 2.4 and our model setting,
\[|\Delta-\mu^{(d+1)}| = \left|\frac{\int_{\mathcal{B}_{D}(z,r)}\int_{\mathbb{D}}(\Delta-y^ {(d+1)})\phi_{\sigma}(y-x)\omega(x)\,dx\,dy}{\int_{\mathcal{B}_{D}(z,r)}\nu_{ \mathbb{D}}(y)\,dy}\right|\] \[= Cr^{-d}\left|\int_{\mathcal{B}_{D}(z,r)}\int_{\mathbb{D}}( \Delta-y^{(d+1)})\phi_{\sigma}(y-x)\omega(x)\,dx\,dy\right|.\]
If we express the numerator in the form of elements in the Cartesian coordinates,
\[\left|\int_{\mathcal{B}_{D}(z,r)}\int_{\mathbb{D}}(\Delta-y^{(d+1 )})\phi_{\sigma}(y-x)\omega(x)\,dx\,dy\right|\] \[= \left|\int_{\mathcal{B}_{D}(z,r)}(\Delta-y^{(d+1)})\phi_{\sigma} (y^{(d+1)})\int_{\mathbb{D}}\prod_{j=1}^{d}\phi_{\sigma}(y^{(j)}-x^{(j)}) \omega(x)\,dx\,\prod_{j=d+2}^{D}\phi_{\sigma}(y^{(j)})\,dy\right|\] \[\geq C\left|\int_{\mathcal{B}_{D}(z,r)}(\Delta-y^{(d+1)})\phi_{ \sigma}(y^{(d+1)})\prod_{j=d+2}^{D}\phi_{\sigma}(y^{(j)})\,dy\right|\] \[\geq C\int_{0}^{\Delta}t(\phi_{\sigma}(t-\Delta)-\phi_{\sigma}(t +\Delta))\,dt\,\int_{\sum_{j\neq d+1}(y^{(j)})^{2}\leq r^{2}-\Delta^{2}}\phi_{ \sigma}(y^{(j)})\,dy\] \[:= CI_{1}I_{2},\]
where the last inequality is the result of the cropping of the integral area (similar to that in Fig. 17), while the first inequality stems from the fact that
(B.2) \[\int_{\mathbb{D}}\prod_{j=1}^{d}\phi_{\sigma}(y^{(j)}-x^{(j)}) \omega(x)\,dx \geq \mathbb{P}\left(\|\xi^{\prime}\|_{2}\geq(R-r)|\xi^{\prime}\sim N( 0,\sigma^{2}I_{d})\right)\] \[\geq 1-c\sigma^{C}\approx 1.\]
If we let \(p=(y^{(1)},\cdots,y^{(d)})\) and \(q=(y^{(d+2)},\cdots,y^{(D)})\), with \(\Delta=C_{0}\sigma\) and \(r\geq\Delta+c_{0}\sigma\), we have
\[I_{1} = \int_{-\Delta}^{\Delta}t\phi_{\sigma}(t-\Delta)\,dt=\frac{C_{0} \sqrt{\pi}Erf(C_{0})-2(e^{-C_{0}^{2}}-1)}{\sqrt{2\pi}}\sigma,\] \[I_{2} = \int_{\|p\|^{2}+\|q\|_{2}^{2}\leq r^{2}-\Delta^{2}}\phi_{\sigma} (q)\,dp\,dq\] \[\geq \int_{\|p\|^{2}+\|q\|_{2}^{2}\leq c_{0}^{2}\sigma^{2}}\phi_{ \sigma}(q)\,dp\,dq\] \[= C\sigma^{-(D-d-1)}\int_{0}^{c_{0}\sigma}(c_{0}^{2}\sigma^{2}-s^ {2})^{\frac{d}{2}}s^{D-d-2}\exp\left(-\frac{s^{2}}{2\sigma^{2}}\right)\,ds\] \[\geq c\sigma^{d}.\]
In other words, when \(C_{0}>0\) and \(c_{0}>0\), we have \(I_{1}\geq c\sigma\), \(I_{2}\geq c\sigma^{d}\), and thus
\[|\Delta-\mu^{(d+1)}|\geq Cr^{-d}I_{1}I_{2}\geq c\sigma.\]
Combining all the results above, we have
\[\begin{cases}|\Delta-\mu_{\mathbb{D}}^{(d+1)}|\geq c\sigma\\ |\mu_{\mathbb{D}}^{(i)}|=0,\quad\text{for }i\neq d+1\end{cases}.\]
#### b.3.4 Proof of Theorem 3.5
Proof.: The proof is based on the framework of Lemma B.2 and Corollary 2.4.1. We first provide an estimation of the local sample size, and then show the equivalent property between \(\mu_{z}^{\mathbb{B}}\) and \(\mathbb{E}(W(Y)Y)/\mathbb{E}(W(Y))\).
For simplicity, we let the collection of observation that falls in \(\mathcal{B}_{D}(z,r_{0})\) be \(\{y_{i}\}_{i=1}^{n}\), with size \(n\). According to Corollary 2.4.1, if \(N=Cr_{0}^{-d}\sigma^{-3}\),
\[\mathbb{P}(C_{3}D\sigma^{-3}\leq n\leq C_{4}D\sigma^{-3})\geq 1-2\exp\left(-C_{ 5}\sigma^{-3}\right).\]
In the definition of \(F(z)\), the weight function is constructed as
\[W(y)=\left(1-\frac{\|z-y\|_{2}^{2}}{r_{0}^{2}}\right)^{k}.\]
To obtain the asymptotic distribution, we need to evaluate \(\mathbb{E}(W(Y)Y)\) and \(\mathbb{E}(W(Y))\). Same with the proof of Theorem 3.1, we only need to work on \(\nu_{\mathbb{D}}(y)\) under the same setting of Cartesian coordinates, which means, \(z^{*}\) is the origin, \(z-z^{*}\) is the \((d+1)\)-th direction in the Cartesian-coordinate system, and
\[z=(\underbrace{0,\cdots,0}_{d},\Delta,\underbrace{0,\cdots 0}_{D-d-1}).\]
If we define \(p=(y^{(1)},\cdots,y^{(d)})\), \(t=y^{(d+1)}\), and \(q=(y^{(d+2)},\cdots,y^{(D)})\), and assume \(\eta\in\mathbb{R}^{2k}\) being an auxiliary vector, there is
\[\mathbb{E}(W(Y))=\frac{\int_{\mathcal{B}_{D}(z,r_{0})}\int_{ \mathbb{D}}W(y)\phi_{\sigma}(y-x)\omega(x)\,dx\,dy}{\int_{\mathcal{B}_{D}(z,r _{0})}\int_{\mathbb{D}}\phi_{\sigma}(y-x)\omega(x)\,dx\,dy}\] \[\approx cr_{0}^{-d}\int_{\|p\|_{2}^{2}+(t-\Delta)^{2}+\|q\|_{2}^{2}\leq r _{0}}W(y)\phi_{\sigma}(t-\Delta)\phi_{\sigma}(q)\,dp\,dt\,dq\] \[= cr_{0}^{-(d+2k)}\int_{\|p\|_{2}^{2}+(t-\Delta)^{2}+\|q\|_{2}^{2} \leq r_{0}}(r_{0}^{2}-\|p\|_{2}^{2}-(t-\Delta)^{2}-\|q\|_{2}^{2})^{k}\] \[\times\phi_{\sigma}(t-\Delta)\phi_{\sigma}(q)\,dp\,dt\,dq\] \[= cr_{0}^{-(d+2k)}\int_{\|p\|_{2}^{2}+(t-\Delta)^{2}+\|q\|_{2}^{2} +\|\eta\|_{2}^{2}\leq r_{0}}\phi_{\sigma}(t-\Delta)\phi_{\sigma}(q)\,d\eta\,dp \,dt\,dq\] \[= c.\]
Meanwhile, the \(i\)-th element of \(\mathbb{E}(W(Y)Y)\) can be expressed as
\[(\mathbb{E}(W(Y)Y))^{(i)}=\frac{\int_{\mathcal{B}_{D}(z,r_{0})} \int_{\mathbb{D}}W(y)y^{(i)}\phi_{\sigma}(y-x)\omega(x)\,dx\,dy}{\int_{ \mathcal{B}_{D}(z,r_{0})}\int_{\mathbb{D}}\phi_{\sigma}(y-x)\omega(x)\,dx\,dy}\] \[\approx cr_{0}^{-d}\int_{\|p\|_{2}^{2}+(t-\Delta)^{2}+\|q\|_{2}^{2}\leq r _{0}}W(y)y^{(i)}\phi_{\sigma}(t-\Delta)\phi_{\sigma}(q)\,dp\,dt\,dq\]
\[= c\tau_{0}^{-(d+2k)}\int_{\|p\|_{2}^{2}+(t-\Delta)^{2}+\|q\|_{2}^{2}+ \|\eta\|_{2}^{2}\leq r_{0}}y^{(i)}\phi_{\sigma}(t-\Delta)\phi_{\sigma}(q)\,d \eta\,dp\,dt\,dq\]
where the two approximation marks are the result of (B.2). By introducing the auxiliary vector \(\eta\), these two expectations can be viewed as the analogy of our manifold-fitting model in a higher-dimension case where the dimensionalities of the ambient space and latent manifold are \(D+2k\) and \(d+2k\), respectively.
Hence, let \(\widehat{\mu}_{w}=\mathbb{E}(W(Y)Y)/\mathbb{E}(W(Y))\); then, according to Theorem 3.1,
\[\left\{\begin{aligned} |\widehat{\mu}_{w}^{(d+1)}-\Delta|& \geq c_{1}\sigma\\ |\widehat{\mu}_{w}^{(i)}|&\leq c_{2}\sigma^{2}, \quad\text{for }i\neq d+1\end{aligned}\right..\]
Combining the result with Corollary 2.4.1 and Corollary B.2.1, if the total sample size \(N=Cr_{0}^{-d}\sigma^{-3}\),
\[\mathbb{P}(\|F(z)-\widehat{\mu}_{w}\|_{2}\leq c\sigma^{2})\geq 1-C_{1}\sigma^{ c_{1}-1}\exp\left(-C_{2}\sigma^{c_{1}-1}\right),\]
for some constant \(C_{1}\), \(C_{2}\), and any \(c_{3}\in(0,1)\), and thus
\[\sin\{\Theta\left(F(z)-z,\;z^{*}-z\right)\}\leq C_{1}\sigma\sqrt{\log(1/\sigma)}\]
with probability at least \(1-C_{2}\exp(-C_{3}\sigma^{-c})\), for some constant \(c\), \(C_{1}\), \(C_{2}\), and \(C_{3}\).
### Proof of content in Section 4
#### b.4.1 Proof of Theorem 4.2
Assume \(\widehat{\Pi}_{z^{*}}^{\perp}\) satisfies
\[\|\widehat{\Pi}_{z^{*}}^{\perp}-\Pi_{z^{*}}^{\perp}\|_{F}\leq c\sigma^{\kappa},\]
and the region \(\widehat{\mathbb{V}}_{z}\) is constructed correspondingly. As in the proof of Theorem 4.1, \(\widehat{\mu}_{z}^{\mathbb{V}}\) can be written as
\[\widehat{\mu}_{z}^{\mathbb{V}} = z+\widehat{\Pi}_{z^{*}}^{\perp}\mathbb{E}_{Y\sim\nu}\left(Y-z|Y \in\widehat{\mathbb{V}}_{z}\right)\] \[= z^{*}+\widehat{\Pi}_{z^{*}}^{-}\delta_{z}+\mathbb{E}_{\nu} \left(\widehat{\Pi}_{z^{*}}^{\perp}(X-z^{*})|Y\in\widehat{\mathbb{V}}_{z} \right)+\mathbb{E}_{\nu}\left(\widehat{\Pi}_{z^{*}}^{\perp}\xi|Y\in\widehat{ \mathbb{V}}_{z}\right),\]
which also can be divided into three parts. According to Lemma 2.6, we can assume \(\|X-z^{*}\|_{2}\leq C\sigma\sqrt{\log(1/\sigma)}\) for some constant \(C\). Let \(\delta_{z}=z-z^{*}\), and the three parts can be evaluated as follows:
1. \(\widehat{\Pi}_{z^{*}}^{-}\delta_{z}\): The norm of \(\widehat{\Pi}_{z^{*}}^{-}\delta_{z}\) is upper bounded as \[\|\widehat{\Pi}_{z^{*}}^{-}\delta_{z}\|_{2} = \|\Pi_{z^{*}}^{-}\delta_{z}+(\widehat{\Pi}_{z^{*}}^{-}-\Pi_{z^{*} }^{-})\delta_{z}\|_{2}\] \[\leq \|\Pi_{z^{*}}^{-}\delta_{z}\|_{2}+\|(\widehat{\Pi}_{z^{*}}^{-}- \Pi_{z^{*}}^{-})\|_{F}\|\delta_{z}\|_{2}\] \[\leq 0+c\sigma^{\kappa}\sigma\] \[\leq C\sigma^{1+\kappa},\] for some constant \(C\).
2. \(\mathbb{E}_{\nu}\left(\widehat{\Pi}_{z^{*}}^{\perp}(X-z^{*})|Y\in\widehat{\mathbb{V }}_{z}\right)\): From Jensen's inequality, \[\left\|\mathbb{E}_{\nu}\left(\widehat{\Pi}_{z^{*}}^{\perp}(X-z^{*})|Y\in\widehat{ \mathbb{V}}_{z}\right)\right\|_{2}\leq\mathbb{E}_{\nu}\left(\left\|\widehat{ \Pi}_{z^{*}}^{\perp}(X-z^{*})\right\|_{2}|Y\in\widehat{\mathbb{V}}_{z}\right).\] Since \(z^{*}\) and \(X\) are exactly on \(\mathcal{M}\), according to Lemma 2.3, \[\left\|\widehat{\Pi}_{z^{*}}^{\perp}(X-z^{*})\right\|_{2} =\left\|\Pi_{z^{*}}^{\perp}(X-z^{*})+(\widehat{\Pi}_{z^{*}}^{ \perp}-\Pi_{z^{*}}^{\perp})(X-z^{*})\right\|_{2}\] \[\leq\frac{1}{2\tau}\|X-z^{*}\|_{2}^{2}+\sigma^{*}\|X-z^{*}\|_{2},\] where \[\|X-z^{*}\|^{2} =\|X-z+z-z^{*}\|_{2}^{2}\] \[\leq\|X-z\|_{2}^{2}+\|z-z^{*}\|_{2}^{2}\] \[\leq C\sigma^{2}\log(1/\sigma),\] and thus \[\mathbb{E}_{\nu}\left(\widehat{\Pi}_{z^{*}}^{\perp}(X-z^{*})|Y\in\widehat{ \mathbb{V}}_{z}\right)\leq C\sigma^{1+\kappa}\sqrt{\log(1/\sigma)}.\]
3. \(\mathbb{E}_{\nu}\left(\widehat{\Pi}_{z^{*}}^{\perp}\xi|Y\in\widehat{\mathbb{V }}_{z}\right)\): The dislocation \(\Delta\) can be evaluated as follows: \[\Delta =\|\widehat{\Pi}_{z^{*}}^{\perp}(z-X)\|_{2}\] \[\leq\|\widehat{\Pi}_{z^{*}}^{\perp}(z-z^{*})\|_{2}+\|\widehat{ \Pi}_{z^{*}}^{\perp}(z^{*}-X)\|_{2}\] \[\leq\|z-z^{*}\|_{2}+C\sigma^{1+\kappa}\sqrt{\log(1/\sigma)}\] \[\leq C\sigma.\]
Thus, if we let \(\xi^{\prime}=\widehat{\Pi}_{z^{*}}^{\perp}\xi\), according to Proposition 2.5,
\[\left\|\mathbb{E}_{\nu}\left(\widehat{\Pi}_{z^{*}}^{\perp}\xi|Y \in\widehat{\mathbb{V}}_{z}\right)\right\|_{2} \leq\mathbb{E}_{\nu}\left(\left\|\mathbb{E}_{\phi}\left[\Pi_{z^{ *}}^{\perp}\xi|X,\;X+\xi\in\mathbb{V}_{z}\right]\right\|_{2}\right)\] \[\leq\mathbb{E}_{\nu}\left(\|\mathbb{E}\left(\xi^{\prime}|\xi^{ \prime}\in\mathcal{B}_{D-d}(a_{\Delta},r_{2})\right)\|_{2}\right)\] \[\leq C\sigma^{2}\]
for some constant \(C\).
Therefore,
\[\|\widehat{\mu}_{z}^{\mathbb{V}}-z^{*}\|_{2}\leq C\sigma^{1+\kappa}\sqrt{\log (1/\sigma)},\]
for some constant \(C\), and \(\widehat{\mu}_{z}^{\mathbb{V}}\) is \(\mathcal{O}(\sigma^{1+\kappa}\sqrt{\log(1/\sigma)})\) to \(\mathcal{M}\).
#### b.4.2 Proof of Theorem 4.4
Proof.: Assume \(U\) is the projection matrix onto \(z-z^{*}\), and \(\widetilde{U}\) is its estimation via \(\mu_{z}^{\mathbb{B}}-z\) such that \(\|U-\widetilde{U}\|_{F}\leq C\sigma\sqrt{\log(1/\sigma)}\) for some constant \(C\). Let \(U^{-}\) be the complement in \(\mathbb{R}^{D}\); then, \(\widehat{\mu}_{z}^{\mathbb{V}}\) can be rewritten as follows:
\[\widehat{\mu}_{z}^{\mathbb{V}} =z+\widetilde{U}\mathbb{E}_{\nu}\left(Y-z|Y\in\widehat{\mathbb{V }}_{z}\right)\] \[=z^{*}+\widetilde{U}^{-}\delta_{z}+\mathbb{E}_{\nu}\left( \widetilde{U}(X-z^{*})|Y\in\widehat{\mathbb{V}}_{z}\right)+\mathbb{E}_{\nu} \left(\widetilde{U}\xi|Y\in\widehat{\mathbb{V}}_{z}\right),\]
which also can be divided into three parts. According to Lemma 2.6, we can assume \(\|X-z^{*}\|_{2}\leq C\sigma\sqrt{\log(1/\sigma)}\) for some constant \(C\). Let \(\delta_{z}=z-z^{*}\). The three parts can be evaluated as follows:
1. \(\widetilde{U}^{-}\delta_{z}\): As \(\delta_{z}\) is orthogonal to the base of \(U^{-}\), \[\|\widetilde{U}^{-}\delta_{z}\|_{2}\leq\|U^{-}\delta_{z}\|_{2}+\|U^{-}- \widetilde{U}^{-}\|_{F}\|\delta_{z}\|_{2}\leq C\sigma^{2}\sqrt{\log(1/\sigma)}.\]
2. \(\mathbb{E}_{\nu}\left(\widetilde{U}(X-z^{*})|Y\in\widehat{\mathbb{V}}_{z}\right)\): Using Jensen's inequality, \[\left\|\mathbb{E}_{\nu}\left(\widetilde{U}(X-z^{*})|Y\in\widehat{\mathbb{V}} _{z}\right)\right\|_{2}\leq\mathbb{E}_{\nu}\left(\left\|\widetilde{U}(X-z^{* })\right\|_{2}|Y\in\widehat{\mathbb{V}}_{z}\right).\] Since \(U\) is one direction of \(\Pi_{z^{*}}^{\perp}\), \[\|U(X-z^{*})\|_{2}\leq\|\Pi_{z^{*}}^{\perp}(X-z^{*})\|_{2}.\] As \(z^{*}\) and \(X\) are exactly on \(\mathcal{M}\), according to Lemma 2.3, \[\left\|\widetilde{U}(X-z^{*})\right\| =\left\|U(X-z^{*})+(\widetilde{U}-U)(X-z^{*})\right\|_{2}\] \[\leq\left\|\Pi_{z^{*}}^{\perp}(X-z^{*})\right\|_{2}+\|\widetilde{ U}-U\|_{F}\left\|X-z^{*}\right\|_{2}\] \[\leq\frac{1}{2\tau}\|X-z^{*}\|_{2}^{2}+\sigma\|X-z^{*}\|_{2}\] \[\leq C\sigma^{2}{\log(1/\sigma)}.\]
3. \(\mathbb{E}_{\nu}\left(\widetilde{U}\xi|Y\in\widehat{\mathbb{V}}_{z}\right)\): The dislocation \(\Delta\) can be evaluated as \[\Delta =\|\widetilde{U}(z-X)\|_{2}\] \[\leq\|\widetilde{U}(z-z^{*})\|_{2}+\|\widetilde{U}(z^{*}-X)\|_{2}\] \[\leq\|z-z^{*}\|_{2}+C\sigma^{2}\sqrt{\log(1/\sigma)}\] \[\leq C\sigma.\]
Thus, if we let \(\xi^{\prime}=\widetilde{U}\xi\), according to Proposition 2.5,
\[\left\|\mathbb{E}_{\nu}\left(\widetilde{U}\xi|Y\in\widehat{\mathbb{ V}}_{z}\right)\right\|_{2} \leq\mathbb{E}_{\nu}\left(\left\|\mathbb{E}_{\phi}\left(\widetilde{U} \xi|X,\ X+\xi\in\mathbb{V}_{z}\right)\right\|_{2}\right)\] \[\leq\mathbb{E}_{\omega}\left(\|\mathbb{E}\left(\xi^{\prime}|\xi^ {\prime}\in\mathcal{B}_{D-d}(a_{\Delta},r_{2})\right)\|_{2}\right)\] \[\leq C\sigma^{2}\]
for some constant \(C\).
Therefore,
\[\|\widehat{\mu}_{z}^{\mathbb{V}}-z^{*}\|_{2}\leq C\sigma^{2}{\log(1/\sigma)},\]
for some constant \(C\).
#### b.4.3 Proof of Theorem 4.5
Let \(n\) be the number of samples falling in \(\widehat{\mathbb{V}}_{z}\). According to Lemma B.2,
\[\sqrt{n}\,(G(z)-\widehat{\mu}_{w})\stackrel{{ d}}{{\to}}N(0,\Sigma),\]
where \(\Sigma\leq r^{2}I_{D}\) and
\[\widehat{\mu}_{w} =\frac{\mathbb{E}(\beta(Y)Y)}{\mathbb{E}(\beta(Y))}=\frac{\int y \beta(y)\nu(y)\,dy}{\int\beta(y)\nu(y)\,dy}\] \[=\frac{\int yw_{u}(\widehat{U}(y-z))w_{v}((I_{D}-\widehat{U})(y-z ))\nu(y)\,dy}{\int w_{u}(\widehat{U}(y-z))w_{v}((I_{D}-\widehat{U})(y-z))\nu(y )\,dy}.\]
To obtain the asymptomatic property of \(G(z)\), we need to investigate \(\widehat{\mu}_{w}\) first. For simplicity, we define two more expectations:
\[\mu_{w} =\frac{\int yw_{u}(U(y-z))w_{v}((I_{D}-U)(y-z))\nu(y)\,dy}{\int w_ {u}(U(y-z))w_{v}((I_{D}-U)(y-z))\nu(y)\,dy}\] \[=:\frac{\int y\beta^{*}(y)\nu(y)\,dy}{\int\beta^{*}(y)\nu(y)\,dy},\] \[\mu_{w,\mathbb{D}} =\frac{\int y\beta^{*}(y)\nu_{\mathbb{D}}(y)\,dy}{\int\beta^{*}( y)\nu_{\mathbb{D}}(y)\,dy}.\]
In what follows, we will show that \(\widehat{\mu}_{w}\) is \(\mathcal{O}(\sigma^{2}\log(1/\sigma))\)-close to \(\mathcal{M}\) with high probability. Since \(\|\mu_{w,\mathbb{D}}-\mu_{w}\|_{2}\leq C\sigma^{2}\log(1/\sigma)\), from Lemma 3.3, we only need to show that \(\|\widehat{\mu}_{w}-\mu_{w}\|_{2}\leq C\sigma^{2}\log(1/\sigma)\) with high probability and \(\mu_{w,\mathbb{D}}\) is \(\mathcal{O}(\sigma^{2})\)-close to \(\mathcal{M}\).
Bound of \(\|\widehat{\mu}_{w}-\mu_{w}\|_{2}\):
According to Theorem 3.5, \(\|\widehat{U}-U\|_{F}\leq C_{1}\sigma\sqrt{\log(1/\sigma)}\) with probability at least \(1-C_{2}\exp(-C_{3}\sigma^{-c})\), and the first derivatives of \(w_{u}\) and \(w_{v}\) are both upper bounded by a constant \(C\). We have
\[|\widehat{W}_{u}-W_{u}| =:|w_{u}(\widehat{U}(y-z))-w_{u}(U(y-z))|\] \[\leq C\|\widehat{U}-U\|_{F}\|y-z\|_{2}\] \[\leq C_{4}\sigma^{2}\log(1/\sigma),\]
\[|\widehat{W}_{v}-W_{v}| =:|w_{v}((I_{D}-\widehat{U})(y-z))-w_{v}((I_{D}-U)(y-z))|\] \[\leq C\|\widehat{U}-U\|_{F}\|y-z\|_{2}\] \[\leq C_{5}\sigma^{2}\log(1/\sigma),\]
and thus,
\[|\beta^{*}(y)-\beta(y)| =|W_{u}W_{v}-\widehat{W}_{u}\widehat{W}_{v}|\] \[=|W_{u}W_{v}-W_{u}\widehat{W}_{v}+W_{u}\widehat{W}_{v}-\widehat{W }_{u}\widehat{W}_{v}|\] \[\leq W_{u}|\widehat{W}_{v}-W_{v}|+\widehat{W}_{v}|\widehat{W}_{u} -W_{u}|\] \[\leq C_{6}\sigma^{2}\log(1/\sigma),\]
where the last inequality is the result of both \(W_{u}\) and \(\widehat{W}_{v}\) being in the interval \([0,1]\). Then,
\[\|\widehat{\mu}_{w}-\mu_{w}\|_{2} = \left\|\frac{\int y\beta(y)\nu(y)\,dy}{\int\beta(y)\nu(y)\,dy}- \frac{\int y\beta^{*}(y)\nu(y)\,dy}{\int\beta^{*}(y)\nu(y)\,dy}\right\|_{2}\] \[\leq \left\|\frac{\int y(\beta(y)-\beta^{*}(y))\nu(y)\,dy}{\int\beta(y )\nu(y)\,dy}\right\|_{2}\] \[\leq C_{6}\sigma^{2}\log(1/\sigma)\frac{1+\|\mu_{w}-z^{*}\|_{2}}{ \mathbb{E}(\beta(Y))}\] \[\leq C_{7}\sigma^{2}\log(1/\sigma),\]
with probability at least \(1-C_{2}\exp(-C_{3}\sigma^{-c})\).
Property of \(\mu_{w,\mathbb{D}}\):
As in the proof of Section 3, we let \(z^{*}\) be the origin and \(z-z^{*}\) be the \((d+1)\)-th direction in the Cartesian-coordinate system. We also let \(p=(y^{(1)},\cdots,y^{(d)})\), \(t=y^{(d+1)}\), and \(q=(y^{(d+2)},\cdots,y^{(D)})\). With \(U\) the same as before, let \(\|u\|=\|U(y-z)\|=|t-\Delta|\), \(\|v\|=\|p+q\|_{2}\). Assume \(\mu_{w,\mathbb{D}}=(\mu^{(1)},\cdots,\mu^{(D)})\); then, the \(i\)-th element of \(\mu_{w,\mathbb{D}}\), i.e., \(\mu^{(i)}\), can be expressed as
\[\frac{\int_{\|p\|_{2}^{2}+\|q\|_{2}^{2}\leq r_{1}^{2}}\int_{(t- \Delta)^{2}\leq r_{2}^{2}}y^{(i)}w_{u}(|t-\Delta|)(r_{1}^{2}-\|p\|_{2}^{2}-\| q\|_{2}^{2})^{k}\phi_{\sigma}(t)\phi_{\sigma}(q)\,dt\,dp\,dq}{\int_{\|p\|_{2}^{2}+ \|q\|_{2}^{2}\leq r_{1}^{2}}\int_{(t-\Delta)^{2}\leq r_{2}^{2}}w_{u}(|t-\Delta |)(r_{1}^{2}-\|p\|_{2}^{2}-\|q\|_{2}^{2})^{k}\phi_{\sigma}(t)\phi_{\sigma}(q)\, dt\,dp\,dq}.\]
For \(i\neq d+1\):
\[\mu^{(i)} \approx \frac{\int_{\|p\|_{2}^{2}+\|q\|_{2}^{2}\leq r_{1}^{2}}y^{(i)}(r _{1}^{2}-\|p\|_{2}^{2}-\|q\|_{2}^{2})^{k}\phi_{\sigma}(q)\,dp\,dq}{\int_{\|p\|_ {2}^{2}+\|q\|_{2}^{2}\leq r_{1}^{2}}(r_{1}^{2}-\|p\|_{2}^{2}-\|q\|_{2}^{2})^{ k}\phi_{\sigma}(q)\,dp\,dq}\] \[= \frac{\int_{\|p\|_{2}^{2}+\|q\|_{2}^{2}+\|\eta\|_{2}^{2}\leq r_{1 }^{2}}y^{(i)}\phi_{\sigma}(q)\,d\eta\,dp\,dq}{\int_{\|p\|_{2}^{2}+\|q\|_{2}^{ 2}+\|\eta\|_{2}^{2}\leq r_{1}^{2}}\,\phi_{\sigma}(q)\,d\eta\,dp\,dq}\] \[= 0,\]
where \(\eta\in\mathbb{R}^{2k}\) is an auxiliary vector making the above conditional expectation an analogy of Lemma 3.4 in \(D+2k-1\)-dimensional space.
For \(i=d+1\), we assume \(r_{2}=C\sigma\sqrt{\log(1/\sigma)}>2\Delta\). We have
\[\mu^{(d+1)} \approx \frac{\int_{\Delta-r_{2}}^{\Delta+r_{2}}tw_{u}(|t-\Delta|)\phi_{ \sigma}(t)\,dt}{\int_{\Delta-r_{2}}^{\Delta+r_{2}}w_{u}(|t-\Delta|)\phi_{\sigma }(t)\,dt}\] \[\leq C\int_{\Delta-r_{2}}^{\Delta+r_{2}}tw_{u}(|t-\Delta|)\phi_{ \sigma}(t)\,dt\] \[= C\int_{0}^{\Delta+r_{2}}t[w_{u}(|t-\Delta|)-w_{u}(|t+\Delta|)] \phi_{\sigma}(t)\,dt\] \[= C\int_{r_{2}/2-\Delta}^{\Delta+r_{2}}t[w_{u}(|t-\Delta|)-w_{u}(|t +\Delta|)]\phi_{\sigma}(t)\,dt\]
\[\leq C\int_{r_{2}/2-\Delta}^{\Delta+r_{2}}t\phi_{\sigma}(t)\,dt\] \[\leq C\sigma^{2}.\]
Therefore, \(\|\mu_{w,\mathbb{D}}-z^{*}\|_{2}\leq C\sigma^{2}\).
Combining all the results above, we have
\[\|\widehat{\mu}_{w}-z^{*}\|_{2}\leq\|\widehat{\mu}_{w}-\mu_{w}\|_{2}+\|\mu_{w} -\mu_{w,\mathbb{D}}\|_{2}+\|\mu_{w,\mathbb{D}}-z^{*}\|_{2}\leq C\sigma^{2}\log (1/\sigma),\]
with probability at least \(1-C_{2}\exp(-C_{3}\sigma^{-c})\).
According to Corollary 2.4.1 and Corollary B.2.1, if the sample size \(N=C_{1}r_{1}^{-d}\sigma^{-3}\),
\[\|G(z)-z^{*}\|_{2}\leq C_{2}\sigma^{2}\log(1/\sigma)\]
with probability at least \(1-C_{2}\exp(-C_{3}\sigma^{-c})\), for some constant \(c\), \(C_{1}\), \(C_{2}\), and \(C_{3}\).
### Proof of content in Section 5
#### b.5.1 Proof of Theorem 5.1
Proof.: To show \(d_{H}(\mathcal{S},\mathcal{M})\leq C\sigma^{2}\log(1/\sigma)\) is equivalent to showing that
\[\begin{cases}d(s,\mathcal{M})\leq C\sigma^{2}\log(1/\sigma),\text{ for all }s\in\mathcal{S},\\ d(x,\mathcal{S})\leq C\sigma^{2}\log(1/\sigma),\text{ for all }x\in\mathcal{M}. \end{cases}\]
The first condition is clear. For any \(s\in\mathcal{S}\), there exists a \(y_{s}\in\Gamma\) such that \(s=\widehat{\mu}_{y_{s}}^{\mathbb{V}}\). Then, according to Theorem 4.4,
\[d(s,\mathcal{M})\leq\|\mu_{y_{s}}^{\mathbb{V}}-y_{s}^{*}\|_{2}\leq C\sigma^{2} \log(1/\sigma).\] (B.3)
For the second inequality, let \(x\) be an arbitrary point on \(\mathcal{M}\). Then, there exists a point \(y_{x}\in\Gamma\) such that \(x\) is its projection on \(\mathcal{M}\). Hence, from Theorem 4.4 again,
\[d(x,\mathcal{S})\leq\|x-\mu_{y_{x}}^{\mathbb{V}}\|_{2}\leq C\sigma^{2}\log(1/ \sigma).\] (B.4)
Because (B.3) and (B.4) hold for any \(s\in\mathcal{S}\) and \(x\in\mathcal{M}\), we complete the proof.
#### b.5.2 Proof of Theorem 5.2
Proof.: From the smoothness of \(\Gamma\) and \(G\), it is evident that \(\widehat{\mathcal{S}}\) becomes a smooth manifold.
For any \(s\in\widehat{\mathcal{S}}\), there exists a \(y_{s}\in\Gamma\) such that \(s=G(y_{s})\). Then, according to Theorem 4.5,
\[d(s,\mathcal{M})\leq\|G(y_{s})-y_{s}^{*}\|_{2}\leq C\sigma^{2}\log(1/\sigma),\] (B.5)
with a high probability. For the second inequality, let \(x\) be an arbitrary point on \(\mathcal{M}\). Then, there exists a point \(y_{x}\in\Gamma\) such that \(x\) is its projection on \(\mathcal{M}\). Hence, from Theorem 4.5 again,
\[d(x,\mathcal{S})\leq\|x-G(y_{x})\|_{2}\leq C\sigma^{2}\log(1/\sigma)\] (B.6)
with a high probability. Thus the proof is completed.
#### b.5.3 Proof of Theorem 5.3
Proof.: By fixing the projection matrix \(\widehat{\Pi}_{x}^{\perp}\) within a neighbour, the function defining \(\widehat{\mathcal{M}}_{x}\) is a smooth map with constant rank \(D-d\), and thus, according to the Constant-Rank Level-Set Theorem, \(\widehat{\mathcal{M}}_{x}\) is a properly embedded submanifold of dimension \(d\) in \(\mathbb{R}^{D}\).
To show the distance, let \(y\) be an arbitrary point on \(\widehat{\mathcal{M}}_{x}\). Then there is
\[\widehat{\Pi}_{x}^{\perp}(G(y)-y)=\widehat{\Pi}_{x}^{\perp}(G(y)-y^{*}-(y-y^{* }))=0,\]
where \(y^{*}\) is the projection of \(y\) onto \(\mathcal{M}\). Thus, there is
\[\|\widehat{\Pi}_{x}^{\perp}(y-y^{*})\|_{2} =\|\widehat{\Pi}_{x}^{\perp}(G(y)-y^{*})\|_{2}\] \[\leq\|G(y)-y^{*}\|_{2}\] \[\leq C\sigma^{2}(\log(1/\sigma)),\]
with high probability. Since \(y\in\mathcal{B}_{D}(x,c\tau)\), there exists \(c_{1}\in(0,1)\) such that \(\|\widehat{\Pi}_{x}^{\perp}-\Pi_{y^{*}}^{\perp}\|\leq c_{1}\) with high probability. Hence,
\[\|\widehat{\Pi}_{x}^{\perp}(y-y^{*})\|_{2} \geq\Big{|}\|\Pi_{y^{*}}^{\perp}(y-y^{*})\|_{2}-\|(\Pi_{y^{*}}^{ \perp}-\widehat{\Pi}_{x}^{\perp})(y-y^{*})\|_{2}\Big{|}\] \[\geq|1-c_{1}|\|y-y^{*}\|_{2}\] \[\geq c\|y-y^{*}\|_{2}.\]
Therefore, for any \(y\in\widehat{\mathcal{M}}_{x}\)
\[d(y,\mathcal{M})=\|y-y^{*}\|\leq C\sigma^{2}\log(1/\sigma)\]
with high probability.
#### b.5.4 Proof of Theorem 5.4
Proof.: The proof of (I) and (II) is exactly the same as the proof in Theorem 5.2. To reveal (III), Let \(a,b\in\widehat{\mathcal{M}}\), and \(a\neq b\). When \(\|a-b\|_{2}\geq c\sigma\tau_{0}\), \(\|a-b\|_{2}^{2}/d(b,T_{a}\widehat{\mathcal{M}})\geq c\sigma\tau_{0}\) is clearly true since \(\|a-b\|_{2}\geq d(b,T_{a}\widehat{\mathcal{M}})\). Hence, we assume that \(\|a-b\|_{2}<c\sigma\tau_{0}\). We further denote \(a_{0}=G^{-1}(a)\in\widehat{\mathcal{M}}\) and \(b_{0}=G^{-1}(b)\in\widehat{\mathcal{M}}\).
Let \(J_{G}\) denote the Jacobi matrix of \(G\), then \(J_{G}(a_{0})\) is a linear mapping from \(T_{a_{0}}\widetilde{\mathcal{M}}\) to \(T_{a}\widehat{\mathcal{M}}\). Consider a local chart of \(\Gamma\) at \(T_{a_{0}}\Gamma\), then the natural projection from \(\widehat{\mathcal{M}}\cap\mathcal{B}(a_{0},\|b_{0}-a_{0}\|_{2})\) to \(T_{a_{0}}\widehat{\mathcal{M}}\cap\mathcal{B}(a_{0},\|b_{0}-a_{0}\|_{2})\) is an invertible mapping. Denote the inverse mapping of the natural projection as \(\phi\), and then there exists \(\eta_{b_{0}}\in T_{a_{0}}\widehat{\mathcal{M}}\) such that \(\phi(0)=a_{0}\), and \(\phi(\eta_{b_{0}})=b_{0}\). Since \(\|a-b\|_{2}<c\sigma\tau_{0}\), there exists \(0<c<C\) such that
\[c\|a_{0}-b_{0}\|_{2}\leq\|\eta_{b_{0}}-\eta_{a_{0}}\|_{2}=\|\eta_{b_{0}}\|_{2} \leq C\|a_{0}-b_{0}\|_{2}.\]
Using the Taylor expansion of G at \(a_{0}\), there is
\[d(b,T_{a}\widehat{\mathcal{M}}) \leq\|b-J_{G}(a_{0})\eta_{b_{0}}-G(a_{0})\|_{2}\] \[=\|G(\phi(\eta_{b_{0}}))-J_{G}(a_{0})\eta_{b_{0}}-G(a_{0})\|_{2}\] \[=\|G(\eta_{b_{0}})-J_{G}(a_{0})\eta_{b_{0}}-G((a_{0}))\|_{2}+\|G( \eta_{b_{0}})-G(\phi(\eta_{b_{0}}))\|_{2}\] \[\leq\|H_{G}(z_{1})\|_{2}\|\eta_{b_{0}}\|_{2}^{2}+\|J_{G}(z_{2})\|_{ 2}\|\eta_{b_{0}}-\phi(\eta_{b_{0}})\|_{2}\] \[\leq C(M_{G}+L_{G})\|\eta_{b_{0}}\|_{2}^{2}\] \[\leq C(M_{G}+L_{G})\|a_{0}-b_{0}\|_{2}.\]
Here, \(H_{G}\) is the Hessian matrix of \(G\), \(M_{G}\) and \(L_{G}\) are the upper bound of \(\|H_{G}\|_{2}\) and \(\|J_{G}\|_{2}\). Moreover,
\[\|a_{0}-b_{0}\|_{2}\leq\frac{1}{\ell_{G}}\|G(a_{0})-G(b_{0})\|_{2}=\frac{1}{\ell _{G}}\|G(a_{0})-G(b_{0})\|_{2},\]
where \(\ell_{G}\) is the lower bound of \(J_{G}\). Hence, we have
\[d(b,T_{a}\widehat{\mathcal{M}})\leq C\frac{M_{G}+L_{G}}{\ell_{G}}\|a-b\|_{2}^{2}.\]
Finally, the reach of \(\widehat{\mathcal{M}}\) can be bounded below as
\[\mathrm{reach}(\widehat{\mathcal{M}})\geq\min\left\{c\sigma\tau_{0},\ c\frac{ \ell_{G}}{M_{G}+L_{G}}\right\}.\]
#### b.5.5 Proof of Proposition 5.5
Proof.: Since \(\widetilde{\mathcal{M}}\subset\Gamma\), it is clear that \(d_{H}(\widetilde{\mathcal{M}},\mathcal{M})\leq C\sigma\). In the following section, we show that \(\dim\widetilde{\mathcal{M}}=d\).
Recall that \(F(y)=\sum_{i}\alpha_{i}(y)y_{i}\), with \(\sum_{i}\alpha_{i}(y)=1\). Let \(H(y)=F(y)-y\); then, we have
\[H(y) =\sum_{i}\alpha_{i}(y)y_{i}-y\] \[=\sum_{i}\alpha_{i}(y)(y_{i}-y).\]
According to Lemma 17 and Theorem 18 in [42], for any unit norm direction vector \(v\in\mathbb{R}^{D}\),
\[\|\partial_{v}H(y)-v\|_{2}\leq Cr_{0},\]
with high probability. In the case of \(\sigma\) being sufficiently small, the Jacobian matrix of \(H\), denoted by \(J_{H}\), is full rank. For any fixed arbitrary rank \(D-d\) projection matrix \(\Pi^{*}\),
\[\Pi^{*}H:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D},\]
\[J_{\Pi^{*}H}=\Pi^{*}J_{H}.\]
In other words, \(\Pi^{*}H\) is a smooth map with constant rank \(D-d\), and thus, according to the Constant-Rank Level-Set Theorem, \(\widetilde{\mathcal{M}}=\{y\in\Gamma:\Pi^{*}H(y)=0\}\) is a properly embedded submanifold of co-dimension \(D-d\) in \(\Gamma\). Therefore, \(\dim\widetilde{\mathcal{M}}=d\).
Figure 18: Assessing the performance of ysl23 in fitting the sphere (\(N=5\times 10^{4}\), \(N_{0}=100\), \(\sigma=0.06\)): the left panel displays points in \(\mathcal{W}\) surrounding the underlying manifold, while the right panel illustrates the corresponding points in \(\widehat{\mathcal{W}}\).
Figure 20: The asymptotic performance of ysl23 when fitting a sphere. The top two figures show how the two distance change with \(\sigma\), while the bottom two figures show how the distances change with \(N\).
Figure 21: The asymptotic performance of ysl23 when fitting a torus. The top two diagrams show how the two distance change with \(\sigma\), while the bottom two figures show how the distances change with \(N\).
Figure 22: The performance of ysl23 with increasing \(N\). Top row, from left to right: \(N=3\times 10^{2}\), \(3\times 10^{3}\), \(3\times 10^{4}\), \(3\times 10^{5}\). Middle row, from left to right: \(N=1\times 10^{3}\), \(5\times 10^{3}\), \(2.5\times 10^{4}\), \(1.25\times 10^{5}\). Bottom row, from left to right: \(N=1\times 10^{3}\), \(5\times 10^{3}\), \(2.5\times 10^{4}\), \(1.25\times 10^{5}\). It can be observed that for each example, as the number of samples increases, the distribution of \(\mathcal{W}\) output by ysl23 becomes more uniform.
## Supplementary Material
**Supplementary material for "Manifold Fitting: an Invitation to Statistics"**
(doi: COMPLETED BY THE TYPESETTER;.pdf). We include all materials omitted from the main text.
|
2303.03441 | Safe Importance Sampling in Model Predictive Path Integral Control | We introduce the notion of importance sampling under embedded barrier state
control, titled Safety Controlled Model Predictive Path Integral Control
(SC-MPPI). For robotic systems operating in an environment with multiple
constraints, hard constraints are often encoded utilizing penalty functions
when performing optimization. Alternative schemes utilizing optimization-based
techniques, such as Control Barrier Functions, can be used as a safety filter
to ensure the system does not violate the given hard constraints. In contrast,
this work leverages the principle of a safety filter but applies it during
forward sampling for Model Predictive Path Integral Control. The resulting set
of forward samples can remain safe within the domain of the safety controller,
increasing sample efficiency and allowing for improved exploration of the state
space. We derive this controller through information theoretic principles
analogous to Information Theoretic MPPI. We empirically demonstrate both
superior sample efficiency, exploration, and system performance of SC-MPPI when
compared to Model-Predictive Path Integral Control (MPPI) and Differential
Dynamic Programming (DDP) optimizing the barrier state. | Manan Gandhi, Hassan Almubarak, Evangelos Theodorou | 2023-03-06T19:02:55Z | http://arxiv.org/abs/2303.03441v1 | # Safe Importance Sampling in Model Predictive Path Integral Control
###### Abstract
We introduce the notion of importance sampling under embedded barrier state control, titled Safety Controlled Model Predictive Path Integral Control (SC-MPPI). For robotic systems operating in an environment with multiple constraints, hard constraints are often encoded utilizing penalty functions when performing optimization. Alternative schemes utilizing optimization-based techniques, such as Control Barrier Functions, can be used as a safety filter to ensure the system does not violate the given hard constraints. In contrast, this work leverages the principle of a safety filter but applies it during forward sampling for Model Predictive Path Integral Control. The resulting set of forward samples can remain safe within the domain of the safety controller, increasing sample efficiency and allowing for improved exploration of the state space. We derive this controller through information theoretic principles analogous to Information Theoretic MPPI. We empirically demonstrate both superior sample efficiency, exploration, and system performance of SC-MPPI when compared to Model-Predictive Path Integral Control (MPPI) and Differential Dynamic Programming (DDP) optimizing the barrier state.
## I Introduction
Safety-critical control is a fundamental problem in dynamical systems with many problems in robotics, healthcare, and aviation requiring safe operation. In the field of terrestrial and aerial agility, sampling-based control [22, 11] has been utilized to achieve high performing, aggressive control structures, however hard constraints for these systems is often implemented in terms of a safety filter or as penalty functions in the optimization. In this work we will present SC-MPPI, an algorithm to embed safety into the sampling phase of MPPI.
In the context of safe control, we review some candidate techniques for maintaining the safety of a known system model through feedback. This feedback forms the lyunch pin of SC-MPPI, as the feedback mechanism in question will be applied on each sample individually (see Figure 1). Potential field methods utilize attractive forces for a given goal state and repulsive forces generated by obstacles. The difficulty in potential field methods lies in optimizing the ratio between the two forces, additionally, the method itself is subject to becoming trapped before narrow passages, as demonstrated in Koren and Borenstein [15]. Singletary et al. [18] demonstrate that potential functions are a subset of Control Barrier Functions, whereas Control Barrier Functions can be utilized in a more general fashion as a safety filter. Control Barrier Functions (CBFs) have been widely successful in tasks such as bipedal walking [1, 5] and automatic lane keeping [4], however for complex constraints, the control is the result of an optimization scheme that is computationally expensive to run in real-time thousands of samples. Additionally, a particular structure is required to handle systems with high relative degree [23] and in the context of discrete CBFs, even linear systems with linear constraints can result in a non-convex optimization problem [1]. Even in the realm of sampling-based control, CBFs can offer a useful method to improve sample efficiency, but is held back again due to computational complexity [19]. Embedded barrier state (BaS) methods developed by Almubarak et al. [2] append the state space with the state of a barrier function, then utilize a Lyapunov stability criterion, satisfied by an optimal controller, to ensure safety through stabilization of the appended model. Through similar arguments, Almubarak et al. [3] propose using discrete barrier states in discrete time trajectory optimization
Fig. 1: Quadrotor in a dense obstacle field with visualization of the sampling. Safe samples are shown in blue, and unsafe samples are shown in red. The green sphere is the initial position, the red sphere is the desired final position. The left figure is SC-MPPI shown with obstacles. The right figure is MPPI. Note that MPPI trajectories are closer to the nominal trajectory, while SC-MPPI trajectories project further forward while moving away from the obstacles.
settings and show that the safety embedding technique exhibits great performance improvements over penalty methods which, implicitly, result in similar cost functions. This method is applicable to general nonlinear systems, and can combine a variety of nonlinear constraints at the expense of sensitivity to gradients of the barrier state dynamics, time discretization, and increasing the model's dimension. A great advantage of designing a feedback controller for the embedded system is the fact that the controller now is a function of the barrier. In the proposed work, we utilize discrete embedded barrier state feedback control as a means of safety in sampling, due to relaxation in problem assumptions, computational complexity, and ability to combine a large number of disjoint, unsafe regions.
Safety in sampling based model predictive control has received recent attention from Koch et al. [14] and Zeng et al. [25], due to the high performance capabilities of sampling-based MPC [22]. Additionally, sampling based trajectory optimization shares a close connection with safe model-based reinforcement learning, which can be loosely categorized into two categories: safety within the optimization metric, and safety in sample exploration by Garcia and Fernandez [12]. Safety in importance sampling falls into the latter category, in the same vein as the work by Berkenkamp et al. [7] which utilizes Lyapunov-based stability metrics to explore the policy space, the work by Thananjeyan [20], who utilize sampling to iteratively improve the policy for a nonlinear system. Additionally, there is the work in covariance steering control in combination with MPPI by Yin et al. [24]. In this work, the authors formulate a convex optimization problem to solve for a feedback control which satisfies an upper bound on the covariance of the terminal states. While in CC-MPPI, a feedback controller is utilized to guide trajectory samples away from high cost areas, the mechanism in the proposed work relies of feedback from the barrier state, enabling safety without having a soft cost on obstacles. CC-MPPI is also computationally intensive, with the controller given in [24] running at 13 Hz, due to the complexity of solving the covariance steering problem around the given reference trajectory. In our work, the safety embedded controller is utilized to _both modify the reference trajectory_ (if needed), _and compute feedback gains_ on a barrier state in order to establish safety, all while maintaining _optimization times below 10 ms_.
There is also the use of constrained covariance steering in Tube-Based MPPI by Balci et al. [6], where the authors apply a constrained covariance steering controller as a probabilistic safety filter on top of MPPI. This is again fundamentally different from our work, since safety is not applied during the sampling phase. In the attempt to unify the safety critical control and sampling-based MPC, we develop a new algorithm based on information theoretic MPC that encodes knowledge of the safe controller into the forward sampling. We show **three key contributions** in this work:
1. Derivation of a new control scheme for embedding safety into MPPI.
2. Empirical results of both computational efficiency (real-time performance), and sample efficiency (% of collision free samples).
3. Superior performance of the proposed algorithm versus existing methods in a navigation task in a cluttered environment, with respect to vehicle speed, task completion and final position error.
## II Mathematical Background
For the background of this work, we will first review the problem at hand, and then describe the fundamentals of our choice of safety controller: Differential Dynamic Programming with _Embedded Barrier States_ by Almuabarak et al. [3]. The decision to utilize Embedded Barrier States for safety was due to the flexibility and computational ease of the framework. Alternative safety schemes can be utilized under this same proposed framework, typically with a significant increase in computational complexity. Next we will review the fundamentals of Information Theoretic Path Integral Control, then delve into the proposed algorithm.
Consider the discrete, nonlinear dynamical system:
\[\mathbf{x}_{k+1}=F(k,\mathbf{x}_{k},\mathbf{u}_{k}), \tag{1}\]
where, at a time step \(k\in\mathbb{R}^{+}\), \(\mathbf{x}\in\mathcal{D}\subset\mathbb{R}^{n}\), \(\mathbf{u}\in\mathbb{R}^{m}\), and \(F:(\mathbb{R}^{+}\times\mathbb{R}^{n}\times\mathbb{R}^{m})\rightarrow\mathbb{R }^{n}\). Within the domain \(\mathcal{D}\), we can define a _safe set_\(\mathcal{C}\subset\mathcal{D}\). The goal of the model predictive controller will be to achieve an objective in finite time while keeping the state trajectory, \(\mathbf{x}_{\tau}=\{\mathbf{x}_{0},\mathbf{x}_{1},...,\mathbf{x}_{T}\}\) inside the safe set \(\mathcal{C}\). Specifically, the model predictive control problem considers minimizing the cost functional
\[J(\mathbf{x}_{\tau},\mathbf{u}_{\tau})=\phi(\mathbf{x}_{T})+\sum_{k=0}^{T-1} \big{(}q(\mathbf{x}_{k},k)+\lambda\mathbf{u}^{\text{T}}\Sigma^{-1}\mathbf{u} \big{)}, \tag{2}\]
subject to safety state constraints and control limits with \(\mathbf{u}_{\tau}=\{\mathbf{u}_{0},\mathbf{u}_{1},...,\mathbf{u}_{T-1}\}\). Here \(\phi\) is a terminal cost, and \(q\) is a nonlinear, potentially time varying state cost with a quadratic control cost penalty. \(\lambda\) is known as the inverse temperature and \(\Sigma\in\mathbb{R}^{m\times m}\) is a positive definite control penalty matrix. For the safety state constraints, we consider the superlevel set \(\mathcal{C}\subset\mathbb{R}^{n}\) defined by a continuously differentiable real valued function \(h:\mathcal{D}\subset\mathbb{R}^{n}\rightarrow\mathbb{R}\) such that the set \(\mathcal{C}\), its interior \(\mathcal{C}^{\circ}\), and boundary \(\partial\mathcal{C}\) are defined respectively as
\[\mathcal{C} =\{\mathbf{x}\in\mathcal{D}:h(\mathbf{x})\geq 0\}, \tag{3}\] \[\mathcal{C}^{\circ} =\{\mathbf{x}\in\mathcal{D}:h(\mathbf{x})>0\},\] \[\partial\mathcal{C} =\{\mathbf{x}\in\mathcal{D}:h(\mathbf{x})=0\}.\]
To ensure safety, the safe set \(\mathcal{C}\) needs to be rendered controlled forward invariant, i.e. the system's safety critical states never leave the set \(\mathcal{C}\). The notion of _forward invariance_ of the safe set \(\mathcal{C}\) with respect to the safety-critical dynamical system (1) can be formally defined as follows [8]:
**Definition II.1**: _The set \(\mathcal{C}\subset\mathbb{R}^{n}\) is said to be controlled forward invariant for the dynamical system \(\mathbf{x}_{k+1}=F(k,\mathbf{x}_{k},\mathbf{u}_{k})\), if, for all \(x_{0}\in\mathcal{C}\), there exists a feedback control \(\mathbf{u}_{k}=K_{\text{safe}}(k,\mathbf{x}_{k})\), such that \(\mathbf{x}_{k+1}=F(k,\mathbf{x}_{k},K_{\text{safe}}(k,\mathbf{x}_{k}))\in \mathcal{C}\), for all \(k\in[t_{0},T]\)._
### _Embedded Barrier States_
Consider the undisturbed system in (1) and the safety constraint (3). A smooth scalar valued function \(B:\mathcal{C}^{\circ}\rightarrow\mathbb{R}\) defined over \(h\), \(B\big{(}h(\mathbf{x}_{k})\big{)}\), is a barrier function if \(B\rightarrow\infty\) as \(\mathbf{x}_{k}\rightarrow\partial\mathcal{C}\). In control barrier functions [16, 21, 17, 4], to ensure boundedness of the barrier function, which implies satisfaction of the safety condition \(h(\mathbf{x}_{x})>0\), the barrier's rate of change is required to be decreasing (or not exceed a certain value associated with the barrier function as proposed in [4]). The authors in [2] proposed _barrier states_ (BaS) to transform the safety objective into a performance objective by embedding the state of the barrier into the model of the safety-critical system. In essence, the safe control synthesis is coupled with the control design of other performance objectives. The idea is that the barrier function's rate of change is controlled along the system's state to ensure its boundedness in lieu of enforcing this rate through an inequality hard constraint as in CBFs. In [2], the barrier state embedded model is asymptotically stabilized, which implies safety due to boundedness of the barrier state (see [2] Theorem 3). Next, the authors proposed discrete barrier states (DBaS) with differential dynamic programming to perform safe trajectory optimization [3], which was shown to greatly simplified the problem formulation.
Defining the barrier over the safety condition as \(\beta_{k}:=B\big{(}h(\mathbf{x}_{k})\big{)}\), the DBaS dynamics are defined as
\[F^{\beta}(\mathbf{x}_{k},\beta_{k},\mathbf{u}_{k}) :=\beta_{k+1} \tag{4}\] \[=B\circ h\circ F(k,\mathbf{x}_{k},\mathbf{u}_{k})\] \[-\gamma(\beta_{k}-B\circ h\circ\mathbf{x}_{k}).\]
The additional term \(\gamma(\beta_{k}-B\circ h\circ\mathbf{x}_{k})\), parameterized by \(|\gamma|\leq 1\), is the DBaS pole for the linearized system. This then guarantees a non-vanishing gradient of the barrier, ensuring a non-zero feedback. Notice that this term is essentially zero by definition of the barrier (see the detailed proof of the continuous time case in [2]). Details and a numerical example of the barrier states feedback, as well as how it appears in differential dynamic programming (DDP) are provided in Appendix VI-A. This algorithm is used to generate feedback gains for importance sampling in Section III. For multiple constraints, multiple barrier functions can be added to form a single barrier [3] or multiple barrier states. Then, the barrier state vector \(\beta\in\mathcal{B}\subset\mathbb{R}^{n_{\beta}}\), where \(n_{\beta}\) is the dimensionality of the barrier state vector, is appended to the dynamical model resulting in the safety embedded system:
\[\bar{\mathbf{x}}_{k+1}=\bar{F}(k,\bar{\mathbf{x}}_{k},\mathbf{u}_{k}), \tag{5}\]
where \(\bar{F}=\begin{bmatrix}F(k,\mathbf{x}_{k},\mathbf{u}_{k}),&F^{\beta}\end{bmatrix} ^{\text{T}}\) and \(\bar{\mathbf{x}}=\begin{bmatrix}\mathbf{x},&\beta\end{bmatrix}^{\text{T}}\).
One of the benefits of a safety embedded model is the direct transmission of safety constraint information to the optimal controller (see Appendix VI-A). This prevents two separate algorithms from _fighting_ one another for control bandwidth, i.e. a controller attempting to maximize performance and a safety filter attempting to maximize safety. This comes at a cost of the user having to specify the weighting between task performance and safety, similar to barrier methods in optimization. For the model predictive control problem in this work, the following proposition [3] depicts the safety guarantees provided by the embedded barrier state method.
**Proposition II.1** ([3]): _Under the control sequence \(\mathbf{u}_{\tau}\), the safe set \(\mathcal{C}\) is controlled forward invariant if and only if \(\beta(\mathbf{x}(0))<\infty\Rightarrow\beta_{k}<\infty\)\(\forall k\in[1,T]\)._
For the optimal control problem considered in this work Equation (1), Equation (2), with the constraints in Equation (3), the embedded barrier state paradigm transforms the problem to the following:
\[\min_{\mathbf{u}_{\tau}} \sum_{k=0}^{T-1}\!\big{(}q(\bar{\mathbf{x}}_{k},k)+\mathbf{u}^{ \text{T}}\Sigma^{-1}\mathbf{u}\big{)}+\phi(\bar{\mathbf{x}}_{T})\] (6) subject to \[\bar{\mathbf{x}}_{k+1}=\bar{F}(k,\bar{\mathbf{x}}_{k},\mathbf{u}_{k})\]
This transformation was first proposed in [3] and used in [9] with MPC for safe driving of autonomous vehicles. Note that in this work, we use embedded barrier state DDP to generate safe reference trajectories and safe feedback gains along the reference trajectories. The safe feedback gains are applied on the barrier state \(\beta\) to guide samples for MPPI away from obstacles.
### _Model Predictive Path Integral Control_
We briefly review Free Energy, Relative Entropy, and the connection to model predictive control. Additional details can be found in [22]. First, we define free energy with the following expression:
\[\mathcal{F}(S,\mathbb{P},\mathbf{x}_{0},\lambda) =-\lambda\log\Bigg{[}\mathbb{E}_{\mathbb{P}}(\exp(-\frac{1}{ \lambda}S(V))\Bigg{]}, \tag{7}\] \[S(V,\mathbf{x}_{0}) =\phi(\mathbf{x}_{T})+\sum_{t=0}^{T-1}q(\mathbf{x}_{t}). \tag{8}\]
\(\lambda\) is the inverse temperature. \(S\) is the cost-to-go function, which takes in initial condition \(\mathbf{x}_{0}\) and set of random variables that generate a sequence of controls \(V=\{\mathbf{v}_{0},\mathbf{v}_{1},...,\mathbf{v}_{T-1}\}\), which in turn generate a trajectory \(\mathbf{x}_{\tau}\) evaluated with the terminal and state cost functions \(\phi\) and \(q\) respectively. The probability measure \(\mathbb{P}\) is utilized to sample the controls of the system when computing the free energy.
We can upper bound the free energy using Jenson's Inequality,
\[\text{KL}(\mathbb{Q}||\mathbb{P}) =\mathbb{E}_{\mathbb{Q}}\Big{[}\log(\frac{\text{d}\mathbb{Q}}{ \text{d}\mathbb{P}})\Big{]}, \tag{9}\] \[\mathcal{F}(S,\mathbb{P},\mathbf{x}_{0},\lambda) \leq\mathbb{E}_{\mathbb{Q}}\Big{[}S(V)\Big{]}+\lambda\text{KL}( \mathbb{Q}||\mathbb{P}). \tag{10}\]
Equation (10) now represents an optimization problem. Assume \(\mathbb{Q}^{*}\) is the optimal control distribution, and when the free energy is computed with respect to \(S\) and \(\mathbb{Q}^{*}\), the free energy is minimized. This optimal free energy is upper bounded by the free energy computed from another distribution \(\mathbb{Q}\), summed with the KL-divergence between \(\mathbb{Q}\) and \(\mathbb{Q}^{*}\). In practice, MPPI assumes a form of the optimal distribution \(\mathbb{Q}^{*}\), which cannot be directly sampled from. Instead the KL-divergence term is
utilized as an information theoretic metric that is used to drive a controlled distribution \(\mathbb{Q}\) closer to the optimal:
\[U^{*}=\operatorname*{argmin}_{U}\text{KL}(Q^{*}||Q). \tag{11}\]
The authors in [22] show that the solution to (11) is equivalent to solving \(\mathbf{u}_{t}^{*}=\int q^{*}(V)\mathbf{v}_{t}dV\). Where
\[q^{*}(V)=\frac{1}{Z}\exp\big{(}-\frac{1}{\lambda}S(V,\mathbf{x}_{0})\big{)}p(V)\]
MPPI utilizes iterative importance sampling to approximate samples from the optimal distribution.
\[\mathbf{u}_{t}^{*} =\mathbb{E}_{\mathbb{Q}^{*}}\big{[}\mathbf{v}_{t}\big{]}\] \[=\mathbb{E}_{\mathbb{Q}}\big{[}\mathbf{v}_{t}\frac{\text{d} \mathbb{Q}^{*}}{\text{d}\mathbb{P}}\frac{\text{d}\mathbb{P}}{\text{d}\mathbb{ Q}}\big{]}\] \[=\mathbb{E}_{\mathbb{Q}}\big{[}\mathbf{v}_{t}\exp\big{(}-\frac{1 }{\lambda}S(V,\mathbf{x}_{0})\big{)}\frac{\text{d}\mathbb{P}}{\text{d} \mathbb{Q}}\big{]}\]
**Remark.** At this point, an obvious step would be to implement MPPI with the safety embedded dynamics (5). This results in the Information Theoretic MPPI algorithm applied to the embedded barrier state control problem in Equation (6). The barrier state must be explicitly penalized in the cost function. This formulation allows MPPI to combine the optimization of task performance with safety, however, as we will see in Section IV, some limitations exist. In general, there are scenarios where we lose the ability to explore if the dynamics are too close to an obstacle or in an undesirable region of state space.
In the next section, we go a step further and apply the safe controller to the importance sampling step. This has the effect of adding feedback with respect to the barrier state of the system, pushing samples away from unsafe regions. Additionally, this step circumvents the need to find tuning parameters to weight the cost of the barrier state versus the cost of the trajectory. The burden of solving the safe control problem falls onto a sub-optimization step that is purely focused on safety, while the MPC controller is tuned for performance.
## III Safe Information Theoretic Model Predictive Control
### _Safe Information Theoretic Measure_
In this section, we will re-derive Information Theoretic Model Predictive Control with an alternative definition of the state-to-path cost function. This outline closely follows the derivation from [22]. First, we define the state-to-path-cost function as
\[S(V,\mathbf{x}_{0})=\begin{cases}\phi(\mathbf{x}_{T})+\sum_{t=0}^{T-1}q( \mathbf{x}_{t}),&\mathbf{x}\in\mathcal{C},\\ \infty,&\mathbf{x}\in\mathcal{C}^{\text{C}}.\end{cases} \tag{12}\]
This cost function is applied to the following system,
\[\begin{bmatrix}\mathbf{x}_{k+1}\\ \beta_{k+1}\end{bmatrix}=\begin{bmatrix}F(k,\mathbf{x}_{k},\mathbf{u}_{k}+K_{ \text{BaS}}\cdot\beta_{k})\\ F^{\beta}(\mathbf{x}_{k},\beta_{k},\mathbf{u}_{k}+K_{\text{BaS}}\cdot\beta_{k}) \end{bmatrix}, \tag{13}\]
with _a safety controller applied during the importance sampling_, see Figure 2.
**Remark** The state-to-path-cost function (12) is not a function of \(\beta\), and does not penalize the barrier state in any way. This separation of safety and performance enables the user to first tune an appropriate safety controller for a given tracking task, then design a path cost for the overall problem. The free energy under this state-to-path-cost is potentially infinite if the set of controls \(V\) drives the system into the unsafe region \(\mathcal{C}^{C}\).
Take note that the expectation in (7) is a _conditional expectation_, in this case being conditioned on the initial state \(\mathbf{x}_{0}\). We can split the measure \(\mathbb{P}\) into two disjoint measures, one with control samples, that when combined with an initial condition, result in trajectories that are forward-invariant in \(\mathcal{C}\) and another with trajectories that enter the unsafe region. In other words, the safe measure is parameterized by mean \(\mathbf{u}+K_{\text{BaS}}\cdot\beta\).
\[\mathbb{P}=\mathbb{P}_{S}\cup\mathbb{P}_{U}, \tag{14}\] \[\mathbb{P}_{S}\cap\mathbb{P}_{U}=\emptyset. \tag{15}\]
Using the additivity property of measures, we can then split the free energy into two terms. Using the fact that \(S(V,\mathbf{x}_{0})=\infty\) when \(V\) is sampled from \(\mathbb{P}_{U}\), we see that the unsafe term goes to zero, since \(\exp(-\infty)=0\).
\[\mathcal{F}(S,\mathbb{P},\mathbf{x}_{0},\lambda)=-\lambda\log \left[\mathbb{E}_{\mathbb{P}_{S}}\big{[}\exp(-\frac{1}{\lambda}S(V,\mathbf{x}_{0 }))\big{]}+\right.\] \[\left.\mathbb{E}_{\mathbb{P}_{U}}\underbrace{\left[\exp(-\frac{1} {\lambda}S(V,\mathbf{x}_{0}))\right]}_{\overline{\lambda}S(V,\mathbf{x}_{0} ))}\right]=-\lambda\log\left[\mathbb{E}_{\mathbb{P}_{S}}\big{[}\exp(-\frac{1 }{\lambda}S(V,\mathbf{x}_{0}))\big{]}\right] \tag{16}\]
We can upper bound the free energy \(\mathcal{F}\) using Jenson's Inequality,
\[\mathcal{F} =-\lambda\log\left[\mathbb{E}_{\mathbb{Q}_{S}}\big{[}\exp(-\frac{1 }{\lambda}S(V,\mathbf{x}_{0}))\frac{\text{d}\mathbb{P}_{S}}{\text{d}\mathbb{ Q}_{S}}\big{]}\right]\] \[=-\lambda\log\left[\mathbb{E}_{\mathbb{Q}_{S}}\big{[}\exp(-\frac{1 }{\lambda}S(V,\mathbf{x}_{0}))\frac{\text{d}\mathbb{P}_{S}}{\text{d}\mathbb{ Q}_{S}}\big{]}\right.\] \[+\left.\mathbb{E}_{\mathbb{Q}_{U}}\underbrace{\left[\exp(-\frac{1 }{\lambda}S(V,\mathbf{x}_{0}))\right]}_{\overline{\lambda}S(V,\mathbf{x}_{0} )}\right]\] \[\leq\mathbb{E}_{\mathbb{Q}_{S}}\big{[}S(V,\mathbf{x}_{0})\big{]} +\lambda\mathbb{E}_{\mathbb{Q}_{S}}\big{[}\log\frac{\text{d}\mathbb{Q}_{S}}{ \text{d}\mathbb{P}_{S}}\big{]} \tag{17}\]
Equation (17) now represents a constrained optimization problem with the solution \(\mathbb{Q}^{*}\) achieving the lower bound in the free energy inequality.
**Lemma 1**: _Let \(\frac{\text{d}\mathbb{Q}_{S}^{*}}{\text{d}\mathbb{P}_{S}}=\frac{1}{\eta}\exp(- \frac{1}{\lambda}S(V,\mathbf{x}_{0}))\), with \(\eta=\mathbb{E}_{\mathbb{P}_{S}}\big{[}\exp(-\frac{1}{\lambda}S(V,\mathbf{x}_{0 }))\big{]}\). Then (10) reduces to an equality._
Proof:: \[\mathcal{F} \leq\mathbb{E}_{\mathbb{Q}_{S}}\big{[}S(V,\mathbf{x}_{0})\big{]}+ \lambda\mathbb{E}_{\mathbb{Q}_{S}}\big{[}\log\frac{\text{d}\mathbb{Q}_{S}^{*}}{ \text{d}\mathbb{P}_{S}}\big{]}\] \[=\mathbb{E}_{\mathbb{Q}_{S}}\big{[}S(V,\mathbf{x}_{0})\big{]}+ \lambda\mathbb{E}_{\mathbb{Q}_{S}}\big{[}-\frac{1}{\lambda}S(V,\mathbf{x}_{0}) -\log\eta\big{]}\] \[=\mathbb{E}_{\mathbb{Q}_{S}}\big{[}S(V,\mathbf{x}_{0})\big{]}- \mathbb{E}_{\mathbb{Q}_{S}}\big{[}S(V,\mathbf{x}_{0})\big{]}\] \[-\lambda\mathbb{E}_{\mathbb{Q}_{S}}\big{[}\log\Big{[}\mathbb{E}_ {\mathbb{P}_{S}}\big{[}\exp(-\frac{1}{\lambda}S(V,\mathbf{x}_{0}))\big{]}\Big{]} \big{]}\big{]}\] \[=-\lambda\log\Big{[}\mathbb{E}_{\mathbb{P}_{S}}\big{[}\exp(-\frac {1}{\lambda}S(V,\mathbf{x}_{0}))\big{]}=\mathcal{F}\] (18)
From Lemma 1, we see that as long as our likelihood ratio \(\frac{\text{d}\mathbb{Q}_{S}^{*}}{\text{d}\mathbb{P}_{S}}\) is proportional to \(\exp(-\frac{1}{\lambda}S(V,\mathbf{x}_{0}))\), we can use iterative importance sampling to estimate the optimal control distribution. In this case, the samples are taken from the safe distributions \(\mathbb{P}_{S}\) and \(\mathbb{Q}_{S}\). Now the control objective is the following, and can be solved utilizing the same methods as regular MPPI [22].
\[U^{*}=\operatorname*{argmin}_{U}\mathbb{KL}\left(\mathbb{Q}_{S}^{*}\parallel \mathbb{Q}_{S}\right) \tag{19}\]
### _Safety Controlled Model Predictive Path Integral Control (SC-MPPI)_
In this section we present an algorithm for safety controlled importance sampling in the context of MPPI. This framework is known as SC-MPPI. The differences between this algorithm and traditional MPPI are subtle, but important. The first difference is the computation of the Importance Sampling Sequence \(U=(\mathbf{u}_{0}\ldots\mathbf{u}_{T-1})\). The initial state of the system, along with an initial (potentially unsafe) importance sampling sequence are utilized to compute a safe importance sampling sequence \(U_{S}\), along with any required parameters for the safe feedback controller \(K_{\text{BAS}}\), see Figure 2. In this work, we utilize Discrete Barrier State DDP [3] for the embedded safety controller. Note that this feedback controller is only valid within a domain of the nominal trajectory. The second difference appears in the form of safe sampling, where the embedded safety controller is utilized to perform feedback on the barrier state and the barrier state alone while forward sampling trajectories. The sampling procedure is summarized in Algorithm 1, and the full control algorithm is summarized in Algorithm 2.
```
Given:\(F\), \(q\), \(\phi\), \(\Sigma\), \(R\), \(R_{\text{fb}}\): Dynamics, Cost function parameters; \(T\), \(N\): Sampling parameters; \(\lambda\), \(\beta\), \(\nu\), \(j\): Temperature and control cost smoothing parameter, nominal feedback scale, barrier state index; Input:\(\mathbf{x}_{0}\), \(U_{S}\), \(K_{\text{BAS}}\): Current state, Safe IS sequence, safe feedback controller; for\(n\)\(\leftarrow\)\(1\)to\(N\)do \(\mathbf{x}\leftarrow\mathbf{x}_{0}\); \(S^{n}\gets 0\); Sample \(\mathcal{E}^{n}=\left(\epsilon_{0}^{n}\ldots\epsilon_{T-1}^{n}\right)\), \(\epsilon_{k}^{n}\in\mathcal{N}(0,\Sigma)\); for\(k\)\(\leftarrow\)\(0\)to\(T-1\)do if\(k>0\)then \(S^{n}\)\(\leftarrow\)\(q(\mathbf{x})\); \(S^{n}\)\(\leftarrow\)\(\frac{\lambda(T-1)}{2}(k_{\text{fb}}^{T}R_{\text{fb}}\Sigma^{-1}k_{\text{fb}}+( \mathbf{u}_{k}+2\epsilon_{k}^{n})^{\text{T}}R\Sigma^{-1}\mathbf{u}_{k})\); \(\widetilde{\mathbf{x}}\leftarrow\mathbf{x}\); \(\widetilde{\mathbf{x}}(j)\gets 0\); \(k_{\text{fb}}\leftarrow\nu\cdot K_{\text{BAS}}(\mathbf{x},\widetilde{\mathbf{x}})\); \(\mathbf{x}\gets F\left(\mathbf{x},\mathbf{u}_{k}+\epsilon_{k}^{n}+k_{ \text{fb}}\right)\); \(S^{n}\)\(+\)\(\phi(\mathbf{x})\) return\(\mathbf{S}=\left(S^{0}\ldots S^{N}\right)\), \(\mathcal{E}^{n}\);
```
**Algorithm 1**Safety Controlled Importance Sampler (SCIS)
Fig. 2: The proposed importance sampling scheme, with a red unsafe reference trajectory in (a), a blue corrected safe trajectory shown in (b), and blue safe samples under barrier state feedback shown in (c). Note how the samples attempt to curve around the obstacles.
## IV Results
We now test the proposed algorithm SC-MPPI against _vanilla_ MPPI, under the barrier state dynamics. The algorithms were tested on a Dubins vehicle and a multirotor system in simulation. All experiments were run on an Intel i7-12700K with 32 GB of RAM and a NVIDIA RTX 3080 Ti GPU. The cost functions used in the experiments have the following form:
\[J_{\text{DDP}} =\sum_{k=0}^{T-1}\left(\mathbf{x}_{k}^{\text{T}}Q\mathbf{x}_{k}+ \mathbf{u}_{k}^{\text{T}}R\mathbf{u}_{k}+q_{\beta}\beta_{k}^{2}\right)+ \mathbf{x}_{T}^{\text{T}}\Phi\mathbf{x}_{T}\] \[J_{\text{MPPI}} =\sum_{k=0}^{T-1}\left(\mathbf{x}_{k}^{\text{T}}Q\mathbf{x}_{k}+ \frac{\lambda(1-\alpha)}{2}\mathbf{u}_{k}^{\text{T}}R\Sigma^{-1}\mathbf{u}_{k}\right.\] \[+q_{\beta}\beta_{k}^{2}\right)+\mathbf{x}_{T}^{\text{T}}\Phi \mathbf{x}_{T}\] \[J_{\text{SC-MPPI}} =\sum_{k=0}^{T-1}\left(\mathbf{x}_{k}^{\text{T}}Q\mathbf{x}_{k}+ \frac{\lambda(1-\alpha)}{2}\mathbf{u}_{k}^{\text{T}}R\Sigma^{-1}\mathbf{u}_{k}\right.\] \[+k_{\text{fb}}^{\text{T}}R_{\text{fb}}\Sigma^{-1}k_{\text{fb}} \right)+\mathbf{x}_{T}^{\text{T}}\Phi\mathbf{x}_{T}\]
where \(k_{\text{fb}}=K_{\text{BaS}}\cdot\beta_{k}\). Note, an important aspect of SC-MPPI is the computation of the safe controller along the importance sampling trajectory. MPPI might generate an unsafe reference trajectory and hence using classical barrier functions with DBaS-DDP is not feasible as it requires safe initializations (see [3]). Relaxed barrier functions [13, 10] would allow such scenarios on the other hand. In other words, relaxed barrier functions allow DDP to converge to a solution even if the majority of the reference trajectory was unsafe in addition to ensuring numerical stability.
### _Dubins Vehicle_
As a proof of concept of the proposed algorithm, a Dubins vehicle must navigate a cluttered environment. We set up a _dense_ navigation problem such that the vehicle should narrowly move between the obstacles given its size (the vehicle's radius is \(0.2\) units). The experiment's details and the details of the implementation are provided in subsection VI-B.
Implementation of the proposed algorithm, safety controlled MPPI, is shown in Fig. 3 (a), and vanilla MPPI with barrier in the cost is shown in Fig. 3 (b). To validate the idea of barrier states feedback in importance sampling, we show the two algorithms' samples (512 samples per step) at select time instances in the environment. Safe samples are shown in blue while unsafe samples are shown in red. Note that the environment is purposefully challenging to navigate,and as a result, a small perturbation can result in unsafe samples. As shown in Fig. 3, the proposed algorithm SC-MPPI, results in the samples to be deflected away from the obstacles, due to the barrier state feedback, effectively encouraging safe exploration. On the other hand, vanilla MPPI's samples are agnostic to the constraints and thus are distributed around the nominal trajectory in a parabola-like shape. In addition, it can be observed that vanilla MPPI's samples collide with more obstacles while SC-MPPI's samples stop before the obstacles most of the time. Furthermore, SC-MPPI has more safe samples (blue) than vanilla MPPI.
Next, we provide detailed numerical comparisons between the algorithms for the multirotor example.
### _Multirotor_
The environment for this task has 19 obstacles of various sizes, and the system (with radius \(1.5m\)) must navigate through this dense field, pictured in Fig. 1. Safety violation is determined by collision of the system into any of the obstacles for a single time instant. Task completion is defined as entering within a \(0.5m\) radius of the desired final position. Two experiments are shown here, highlighting differences
Fig. 3: Dubins vehicle samples visualized with safe samples in blue, and unsafe samples in red. The red square is the start position, and the green X is the target location. The top figure is SC-MPPI (with safety feedback on the samples) and the bottom one is MPPI. It can be seen that SC-MPPI samples are deflected, moving away from the obstacles projecting further safe exploration in addition to be close-packed between obstacles.
in performance which emerge from tuning, control variance, and problem horizon for both MPPI and SC-MPPI. We also compare the two algorithms against Discrete Barrier States (DBaS) embedded Model Predictive Differential Dynamic Programming (MPC-DDP). The parameters for each of the controllers are provided in the Appendix (Section VI-C).
**Experiment 1:** For the first experiment, the problem horizon is set to _3 seconds_, with control variances for Model-Predictive Path Integral Control (MPPI) and Safety Controlled Model Predictive Path Integral Control (SC-MPPI) set to \(\sigma=[5.0,5.0,5.0,15.0]\). Trajectories for each of the trials are visualized in Figure 4, with each trajectory colored by velocity. The effect of lower control exploration variance is clear for MPPI, where the trajectories that both maintain safety and complete the task pass almost directly through the dense obstacle field. Under the given tuning parameters, DDP has a high velocity but also high safety violation percentage at \(36.32\%\) (Table I). In contrast, MPPI has a low completion percentage, but an even lower task completion rate when compared to DDP. SC-MPPI outperforms both algorithms in safety, completion, RMS error, and task completion time under this task.
The task completion issues for MPPI are likely due to the lower control exploration variance, as well as the limited time horizon for the problem. MPPI takes more time to find a solution around the obstacles, and does not have enough time enter the completion radius. DDP has large average and maximum speeds, but typically takes a longer path around all the obstacles. Under the same problem, SC-MPPI has a safe sample rate of \(38.54\%\), and MPPI has a safe sample rate of \(37.20\%\), when safe samples are averaged across all trajectories and timesteps. While the difference in the number of safe samples is quite small, the performance margin is quite large. The reasoning behind the low safe sample percentage is likely due to the density of the obstacle course, and the time limitation for the task. Since the task attempts to send the quadrotor through the field in under 3 seconds, most trajectories will impact the obstacles. In Figure 1, we can observe the differences in MPPI and SC-MPPI sampling for the quadrotor for Experiment 1 and directly see the variations in safe versus unsafe samples for the two sampling-based algorithms. The safe underlying controller for SC-MPPI allows the reference trajectories to move closer towards the goal, and the barrier state feedback clearly forces the samples away from the obstacles.
Figure 5 displays the mean and standard deviation of the system from the closest obstacle, as well as to the final target _for all episodes that did not crash_. Clearly, the proposed algorithm SC-MPPI demonstrates the lowest final error and a tight distribution to the final target amongst the trials. MPPI shows a wide variation in regards to the distance to the closest obstacle, with DDP having a very large variation in the distance to the terminal obstacle for the final state. This large variation stems from the fact that DDP could often diverge away from the arena if it could not find a solution, resulting in the distance from the system to any given obstacle becoming quite large.
**Experiment 2:** In this second experiment, the control variance for both MPPI and SC-MPPI was set to be quite large at \(\sigma=[150.0,150,50.0,500.0]\), and the time horizon for the overall problem was set to be _4 seconds_. This high control variance turned out to be a necessity for MPPI to find a solution. We can see the trajectories of this experiment in Figure 6. The effect of the increased time horizon and exploration is immediately apparent in the experiment statistics, given in Table II. Here we can see greatly increased task completion rates for the sampling-based algorithms, and a shift in the performance of safety violation, where MPPI has the fewest crashed trajectories. The task completion and final error still favors SC-MPPI. Interestingly, the safety sample percentage for MPPI is \(45\%\), when averaged across all trajectories and timesteps, whereas for SC-MPPI the safe sample percentage is \(58.43\%\). Here the correlation between safe sample percentage and performance is less clear. Empirically we see that sample efficiency of the system matters less than the system having a "better" nominal trajectory. Since we are only approximating the true free energy with sampling, a safer initialization appears to lead to more performant solutions. In Figure 7, we can see how both DDP and SC-MPPI complete the task faster with a tight distribution, but MPPI as almost able to achieve the same level of RMS error and distance from the nearest obstacle as SC-MPPI. The larger time horizon and larger control variance has a clear benefit for MPPI to complete the navigation task.
## V Conclusion
In this work, we proposed the idea of importance sampling under safety embedded feedback control. This was then utilized to develop the algorithm SC-MPPI. We derived this new algorithm under the principles of information theoretic MPPI and compare it against embedded barrier state MPC-DDP and MPPI. We empirically show that the proposed
\begin{table}
\begin{tabular}{l|c|c|c} & DDP & MPPI & SC-MPPI \\ \hline \hline Compute Time (ms) & \(33.12\pm(4.98)\) & \(\mathbf{3.51\pm(0.13)}\) & \(4.08\pm(0.11)\) \\ \hline Safety Violation \% & \(71.10\%\) & \(\mathbf{0.20\%}\) & \(3.50\%\) \\ \hline Task Completion \% & \(26.10\%\) & \(80.40\%\) & \(\mathbf{9.50\%}\) \\ \hline Completion Time (s) & \(2.55\pm(0.57)\) & \(3.39\pm(0.28)\) & \(\mathbf{2.07\pm(0.33)}\) \\ \hline Position RMSE (m) & \(0.49\pm(0.04)\) & \(0.29\pm(0.10)\) & \(\mathbf{0.24\pm(0.14)}\) \\ \hline Avg Velocity (m/s) & \(\mathbf{5.09\pm(1.17)}\) & \(4.44\pm(0.31)\) & \(4.95\pm(0.39)\) \\ \hline Max Velocity (m/s) & \(\mathbf{18.17\pm(2.46)}\) & \(11.29\pm(1.41)\) & \(15.38\pm(1.78)\) \\ \hline \end{tabular}
\end{table} TABLE II: **Experiment 2** Multirotor Dense Navigation Statistical Trial
\begin{table}
\begin{tabular}{l|c|c|c} & DDP & MPPI & SC-MPPI \\ \hline \hline Compute Time (ms) & \(16.56\pm(1.56)\) & \(\mathbf{3.16\pm(0.06)}\) & \(6.15\pm(0.61)\) \\ \hline Safety Violation \% & \(36.32\%\) & \(1.05\%\) & \(\mathbf{0.84\%}\) \\ \hline Task Completion \% & \(16.63\%\) & \(4.32\%\) & \(\mathbf{68.84\%}\) \\ \hline Completion Time (s) & \(2.30\pm(0.33)\) & \(2.93\pm(0.05)\) & \(\mathbf{2.11\pm(0.15)}\) \\ Position RMSE (m) & \(0.62\pm(0.18)\) & \(0.46\pm(0.03)\) & \(\mathbf{0.14\pm(0.14)}\) \\ \hline Avg Velocity (m/s) & \(\mathbf{7.87\pm(0.97)}\) & \(6.30\pm(0.27)\) & \(6.78\pm(0.44)\) \\ \hline Max Velocity (m/s) & \(\mathbf{18.15\pm(2.20)}\) & \(14.21\pm(1.46)\) & \(17.15\pm(1.67)\) \\ \hline \end{tabular}
\end{table} TABLE I: **Experiment 1** Multirotor Dense Navigation Statistical Trial
algorithm can provide a distinct improvement in system performance, system safety, and control exploration, even with lower control variance. Additionally, the algorithm is shown to be computationally feasible to be run in real time, with our experiments demonstrating optimization times of 4-6 milliseconds. SC-MPPI does require more optimization time overall when compared to MPPI, and the requirement of tuning the additional safety controller can be overkill depending on the problem at hand. We have shown that for difficult, dense navigation tasks, our proposed method can outperform existing techniques. The utilization of a safety controller to improve both the initial importance sampling trajectory for MPPI, as well as maintain the safety of samples moving forward opens the door to further research in safety-critical, sampling-based MPC methods.
|
2310.03579 | Causal Inference in Gene Regulatory Networks with GFlowNet: Towards
Scalability in Large Systems | Understanding causal relationships within Gene Regulatory Networks (GRNs) is
essential for unraveling the gene interactions in cellular processes. However,
causal discovery in GRNs is a challenging problem for multiple reasons
including the existence of cyclic feedback loops and uncertainty that yields
diverse possible causal structures. Previous works in this area either ignore
cyclic dynamics (assume acyclic structure) or struggle with scalability. We
introduce Swift-DynGFN as a novel framework that enhances causal structure
learning in GRNs while addressing scalability concerns. Specifically,
Swift-DynGFN exploits gene-wise independence to boost parallelization and to
lower computational cost. Experiments on real single-cell RNA velocity and
synthetic GRN datasets showcase the advancement in learning causal structure in
GRNs and scalability in larger systems. | Trang Nguyen, Alexander Tong, Kanika Madan, Yoshua Bengio, Dianbo Liu | 2023-10-05T14:59:19Z | http://arxiv.org/abs/2310.03579v1 | # Causal Inference in Gene Regulatory Networks
###### Abstract
Understanding causal relationships within Gene Regulatory Networks (GRNs) is essential for unraveling the gene interactions in cellular processes. However, causal discovery in GRNs is a challenging problem for multiple reasons including the existence of cyclic feedback loops and uncertainty that yields diverse possible causal structures. Previous works in this area either ignore cyclic dynamics (assume acyclic structure) or struggle with scalability. We introduce Swift-DynGFN as a novel framework that enhances causal structure learning in GRNs while addressing scalability concerns. Specifically, Swift-DynGFN exploits gene-wise independence to boost parallelization and to lower computational cost. Experiments on real single-cell RNA velocity and synthetic GRN datasets showcase the advancement in learning causal structure in GRNs and scalability in larger systems.
## 1 Introduction
Gene regulatory networks (GRNs) play a critical role in cell biology, orchestrating a highly intricate interplay of molecular interactions that ultimately govern the behavior of cells (Karlebach and Shamir, 2008, Emmert-Streib et al., 2014). Understanding causality entails deciphering the intricate web of relationships between genes, shedding light on how the activity of one gene can intricately influence the expression or behavior of another (Roy et al., 2013, Ahmed et al., 2020, Chen and Liu, 2022, Li et al., 2023, Murphy, 2001). However, uncovering causality within GRNs faces challenges, including the cyclic feedback loops (Mitrophanov and Groisman, 2008, Atanackovic et al., 2023) and uncertainty that leads to multiple possible causal structures (Atanackovic et al., 2023, Denic et al., 2009, Dehghannasiri et al., 2015). Prior efforts in this domain have encountered two restrictions. Firstly, most previous works formulate the problem as a Directed Acyclic Graph (DAG) (Glymour et al., 2019, Lorch et al., 2021, Deleu et al., 2022, Annadani et al., 2021), thus overlooking the cyclic nature inherent in these networks. Secondly, scalability remains a concern when dealing with large GRNs (Atanackovic et al., 2023, Deleu et al., 2022).
The _Bayesian dynamic structure learning_ framework introduced in Atanackovic et al. (2023) and the incorporation with Generative Flow Networks (GFlowNets) (Bengio et al., 2021, 2023) sheds light on solving the first limitation. Particularly, to model causality within GRNs, Atanackovic et al., 2023 simplifies structure learning in GRNs as a sparse identification problem within a dynamical system, utilizing RNA velocity data to estimate the rate of change of a gene's expression. In the
context of dynamical systems, it is feasible to depict both the causal relations between variables and the system's changing behavior through time. Besides, GFlowNet plays a critical role in the work to model intricate distributions over cyclic structures. However, the scalability is still one restriction of their work on larger systems.
In this study, we propose Swift-DynGFN to continue to navigate the intricacies of causal structures in GRNs and address scalability. Swift-DynGFN improves the architecture of GFlowNet and draws upon the _Bayesian dynamic structure learning_ framework. Notably, we enhance causal structure learning by optimizing variable-wise influence, leveraging predictions from previous variables, and allowing the model to decide on the variable order to be processed. Furthermore, we improve the scalability by parallelizing the prediction process, reducing the trajectory length from \(n^{2}\) to \(n\) (where \(n\) is the number of variables), significantly reducing the time and space requirements.
The main contributions of this work are summarized as (1) Introduce Swift-DynGFN that improves causal structure learning in GRNs and tackles the scalability challenge, (2) Showcase the capability to capture uncertainty and accurately infer causal relationships within GRNs on real single-cell velocity, and (3) Prove the scalability potential of Swift-DynGFN on the synthetic GRN data.
## 2 Preliminaries
### Dynamical systems in Single-cell RNA Velocity
Denote a finite dataset as \(\mathcal{D}\), containing dynamic pairs \((x,dx)\), where \(x\) represents a state in a time-invariant stochastic dynamical system and \(dx\) denotes its time derivative. In estimating the change rate in gene expression by leveraging _RNA velocity_Bergen et al. (2020), \(x\) and \(dx\) align to _gene expression levels_ and _velocity_ of changes in gene expression, respectively. We aim to learn posterior over explanatory graphs, \(G\), that defines the sparsity graph pattern among variables in \(\mathcal{D}\), represented as \(Q(G|\mathcal{D})\).
### Generative Flow Networks
Generative Flow Networks (GFlowNet) constitute a probabilistic framework designed to facilitate the generation of a diverse array of candidates over spaces of discrete objects (Deleu et al., 2022; Bengio et al., 2021, 2023). Particularly, GFlowNet learns a probabilistic strategy for building structured objects, _e.g._ a graph, by generating a sequence of actions that gradually transform a partial object to a complete object by taking a sequence of actions where each action corresponds to adding an edge. Several training objectives have been used (Deleu et al., 2022; Bengio et al., 2021; Malkin et al., 2022; Madan et al., 2022), one of which is the _detailed balance_ objective Bengio et al. (2023).
\[F(s)P_{F}(s^{\prime}|s)=F(s^{\prime})P_{B}(s|s^{\prime}) \tag{1}\]
Detailed Balance (DB) Training ObjectiveDenoting a GFlowNet model parameterized by \(\psi\) that optimizes _forward policy_\(P_{F}(s^{\prime}|s,\psi)\) and _backward policy_\(P_{B}(s^{\prime}|s,\psi)\) corresponding to _Markovian flow_ of a non-terminal state \(F_{\psi}(s)\), the _DB constraint_Bengio et al. (2023) is presented as Equation 1 in transformation \(s\to s^{\prime}\). Subsequently, the _detail balance loss_ is presented in Equation 2 that optimizes the _DB constraint_. With a terminal state \(s_{n}\), a _Reward matching loss_Bengio et al. (2023); Malkin et al. (2022) is added, formulated as \(\mathcal{L}_{R}(s_{n})=(\log(R(s_{n}))-\log(F_{\psi}(s_{n})))^{2}\) with \(R(s_{n})\) implies to the reward obtained at state \(s_{n}\).
\[\mathcal{L}_{DB}(s_{i-1},s_{i})=\bigg{(}\log\frac{F_{\psi}(s)P_{F}(s_{i}|s_{i -1},\psi)}{F_{\psi}(s_{i})P_{B}(s_{i-1}|s_{i},\psi)}\bigg{)}^{2} \tag{2}\]
## 3 Proposed method: Swift-DynGFN
Intuitively, Swift-DynGFN enhances the variable-wise influence in predicting causal structures, raising parallelization for each variable and conducting sequential computing among different variables to alleviate time and space requirements.
Facilitating causal influence, we tackle two questions: (1) **"What has been done?**", means the prediction of the current variable is affected by predictions made on other nodes that ensures variables
causally contribute to others and (2) "**What's next?**", means Swift-DynGFN selects the next node to be processed that harnesses the causal relationships in the precision of predictions for each other.
In terms of computational complexities, Swift-DynGFN design two strategies: (1) the prediction of incoming edges to the current node of interest is parallel, drastically reducing processing time and (2) we cut off the computation decomposition into each variable as observed in prior work (Atanackovic et al., 2023) to avoid parameter outbreak when scaling up the number of nodes.
### Variable-wise causal influence in Swift-DynGFN
Regarding paying attention to predictions made on previous variables, we formulate \(Q(G|D)\) as in Equation 3. Particularly, \(G\in\mathbb{R}^{B\times(n+1)\times(n\times n)}\) denotes states of \(B\) causal structures during \(t\in[0\ldots n]\) steps, which have a shape of \(n\times n\) per graph, where \(n\) is the number of variables. From the implementation point of view, the GFlowNet model takes incoming edges on visited nodes and the current node's index inputs.
\[Q(G|D)=\prod_{i\in 0\ldots n}Q(G_{t=i}|G_{t=i-1},D) \tag{3}\]
To decide the next variable to be processed, a variable's index that has not been visited yet is sampled as an action by the forward policy, along with the incoming edges of the current variable of interest. We employ the forward probability to sample both actions, with a binary mask to mask the set of visited nodes when selecting the next variable. In the first turn (\(t=0\)), only the action of the next variable index is taken into account, whereas incoming edges are not sampled since the node of interest has not been selected yet.
### Parallelization in Swift-DynGFN
Different from prior approaches Atanackovic et al. (2023); Deleu et al. (2022); Bengio et al. (2021); Malkin et al. (2022); Madan et al. (2022) that sample a single edge at a time, we sample all incoming edges of the current node from the same forward probability. Subsequently, the time complexity is dramatically optimized from \(n^{2}\) to \(n\) regarding the length of the prediction trajectory. In addition, we avoid designing a GFlowNet in each node, as it introduces a potential threat to parameter explosion in a larger number of nodes. Instead, we operate a single GFlowNet model shared among variables, reducing the number of parameters required in larger systems.
Figure 1: **Swift-DynGFN Intuition and Computation Flow**. Edges added from the previous turns are gray, and newly added edges are red. The model acknowledges _all_ previously added edges and outputs (1) _all_ incoming edges of the current node and (2) the node for the next turn (node_id). The visited binary mask marks nodes are done. A batch of graphs is predicted in parallel.
### Optimization strategy in Swift-DynGFN
We utilize the _DB objective_ as presented in Section 2.2 with the terminal state defined as \(G_{n}\) and reward as \(R(G_{d})=e^{-||dx_{b}-\widetilde{dx}_{b}||_{2}^{2}+\lambda_{0}||G_{n}||_{0}}\) where \(dx_{b}\) is the ground truth of the input batch \(x_{b}\), and the \(\lambda_{0}||G_{n}||_{0}\) term encourage sparsity of the GRNs.
## 4 Experiments
We conduct experiments to verify two hypotheses: \(\mathcal{H}_{1}\) - Leveraging the causal relationship among genes enhances causal inference in GRN (Section 4.1) and \(\mathcal{H}_{2}\)- Swift-DynGFN reduces time and space complexities, contributing to improvements in large-scale systems (Section 4.2).
We compare our method to DynGFN, DynBCD, and DynDiBS, reported in DynGFN paper Atanackovic et al. (2023). Particularly, besides GFlowNet, Atanackovic et al., 2023 integrates BCD (Cundy et al., 2021) and DiBS (Lorch et al., 2021), which are designed for static systems, to _Bayesian dynamic structure learning_, denoted as DynBCD and DynDiBS. In addition, we employ Bayes-SHD and AUC metrics to evaluate the predicted structures over the true _graphs_.
### Experiment on Single-Cell RNA-velocity Data
DatasetWe investigate the cell cycle dataset of human Fibroblasts (Riba et al., 2022) that contains records of 5000 cells and more than 10,000 genes. Following Atanackovic et al., 2023, we utilize a group of five genes, where Cdc25A activates Cdk1, which, in turn, inhibits Cdc25C, while the Mcm complex is correlated with Cdc25A but does not directly interact with Cdk1 in the cell cycle regulation. With this setting, the GRN system contains 81 admissible causal structures.
Supporting \(\mathcal{H}_{1}\), Table 1 presents the performance and computation details of Swift-DynGFN alongside the reproduction of all baselines, experimented on RNA velocity on a single NVIDIA A100 and 4 CPUs. Overall, our proposed method delivers precise predictions while reducing computational complexities. Regarding prediction quality, Swift-DynGFN outperforms all baselines by a notable margin in both metrics, evident by the \(0.14\) increased AUC and \(0.44\) reduced Bayes-SHD compared to DynGFN. Regarding computational complexities, our best configuration utilizes far fewer parameters and GPU hours for training than DynGFN. Even when configured to match DynGFN's parameter count, our method maintains superior performance and faster training time.
\begin{table}
\begin{tabular}{l|r r|r r} \hline \hline & \multicolumn{4}{c}{**Cellular System - RNA Velocity**} \\
**Methods** & **Bayes-SHD\(\downarrow\)** & **AUC\(\uparrow\)** & **\#Params** & **Duration (h)** \\ \hline DynBCD & **2.79\(\pm\)**0.34 & 0.53\(\pm\)0.07 & 100 & 3.76 \\ DynDiBS & 6.82\(\pm\)0.78 & 0.46\(\pm\)0.03 & 50.0k & 1.02 \\ DynGFN & 3.37\(\pm\)0.51 & 0.59\(\pm\)0.03 & 255.4k & 1.16 \\ Swift-DynGFN \({}_{best}\) & **2.93\(\pm\)**0.21 & **0.73\(\pm\)**0.04 & 87.4k & 0.53 \\ Swift-DynGFN \({}_{large}\) & 3.22\(\pm\)0.36 & 0.69\(\pm\)0.03 & 255.4k & 0.99 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Dynamic causal structure inference in GRN**. Reported scores are mean and std over five seeds. Swift-DynGFN outperforms all baselines and reduces computational complexities.
Figure 2: **Scalability comparison on synthetic data.** Swift-DynGFN shows strong scalability as it avoids complexities outbreak in large-scale systems while producing remarkable performance.
### Experiment on Synthetic Data
DatasetWe adopt the dataset generation using the indeterminacy model from Atanackovic et al., 2023 (more detail is in Appendix C) and create a non-linear dynamical system \(dx=\mathsf{sigmoid}(\mathbf{A}x)\). We design the number of variables to vary from \(20\), \(50\), and \(100\) with a fixed sparsity equal to \(0.9\).
Examining \(\mathcal{H}_{2}\), Figure 2 illustrates the Swift-DynGFN scalability, conducted on 2 NVIDIA A100-80GB GPUs and 8 CPUs. Swift-DynGFN consistently demonstrates robust scalability, requiring fewer computational resources in large-scale systems while delivering considerable performance. On the left, Swift-DynGFN outperforms or is on par with baselines as the number of nodes increases. On the right, Swift-DynGFN avoids computational outbreak, a phenomenon observed in baselines as the system scales up. For instance, with 100 nodes, DynGFN's training duration is five times longer than that of Swift-DynGFN, while DynBCD and DynDiBS struggle with CUDA memory requirements.
## 5 Conclusion
In this study, we proposed Swift-DynGFN that improves dynamical causal structure inference in large-scale systems. We leverage the GFlowNet model to strengthen the causal consideration while simultaneously parallelizing the computation to reduce time and space requirements. Our experiments on GRN and synthetic data illustrate Swift-DynGFN's effectiveness in improving causal inference ability and scalability in dealing with large-scale systems compared to baselines.
Future Works: Potential directions to further improve this study include (1) extending the number of genes to estimate ability in large-scale GRN systems and (2) investigating the causal structure in diverse biological contexts, such as Metabolic Pathways and Immune Response.
## Acknowledgement
We gratefully acknowledge the support received for this research. This research was enabled in part by computational resources provided by Mila. Each member involved in this research is funded by their primary institution.
|
2302.04007 | A NICER look at the jet-like corona of MAXI J1535-571 through type-B
quasi-periodic oscillations | MAXI J1535-571 is a black-hole X-ray binary that in 2017 exhibited a very
bright outburst which reached a peak flux of up to 5 Crab in the 2-20 keV band.
Given the high flux, several X-ray space observatories obtained unprecedented
high signal-to-noise data of key parts of the outburst. In our previous paper
we studied the corona of MAXI J1535-571 in the hard-intermediate state (HIMS)
with Insight-HXMT. In this paper we focus on the study of the corona in the
soft-intermediate state (SIMS) through the spectral-timing analysis of 26 NICER
detections of the type-B quasi-periodic oscillations (QPOs). From simultaneous
fits of the energy, rms and lag spectra of these QPOs with our time-dependent
Comptonization model, we find that in the SIMS the corona size is ~ 6500 km and
vertically extended. We detect a narrow iron line in the energy spectra, which
we interpret to be due to the illumination of the outer part of the accretion
disk by this large corona. We follow the evolution of the corona and the radio
jet during the HIMS-SIMS transition, and find that the jet flux peaks after the
time when the corona extends to its maximum vertical size. The jet flux starts
to decay after the corona contracts vertically towards the black hole. This
behavior points to a connection between the X-ray corona and the radio jet
similar to that seen in other sources. | Yuexin Zhang, Mariano Méndez, Federico García, Diego Altamirano, Tomaso M. Belloni, Kevin Alabarta, Liang Zhang, Candela Bellavita, Divya Rawat, Ruican Ma | 2023-02-08T11:45:58Z | http://arxiv.org/abs/2302.04007v2 | A _Nicer_ look at the jet-like corona of MAXI J1535\(-\)571 through type-B quasi-periodic oscillations
###### Abstract
MAXI J1535\(-\)571 is a black-hole X-ray binary that in 2017 exhibited a very bright outburst which reached a peak flux of up to 5 Crab in the 2-20 keV band. Given the high flux, several X-ray space observatories obtained unprecedented high signal-to-noise data of key parts of the outburst. In our previous paper we studied the corona of MAXI J1535\(-\)571 in the hard-intermediate state (HIMS) with _Insight_-HXMT. In this paper we focus on the study of the corona in the soft-intermediate state (SIMS) through the spectral-timing analysis of 26 _NICER_ detections of the type-B quasi-periodic oscillations (QPOs). From simultaneous fits of the energy, rms and lag spectra of these QPOs with our time-dependent Comptonization model, we find that in the SIMS the corona size is \(\sim 6500\) km and vertically extended. We detect a narrow iron line in the energy spectra, which we interpret to be due to the illumination of the outer part of the accretion disk by this large corona. We follow the evolution of the corona and the radio jet during the HIMS-SIMS transition, and find that the jet flux peaks after the time when the corona extends to its maximum vertical size. The jet flux starts to decay after the corona contracts vertically towards the black hole. This behavior points to a connection between the X-ray corona and the radio jet similar to that seen in other sources.
keywords: accretion, accretion discs - stars: individual: MAXI J1535\(-\)571 - stars: black holes - X-rays: binaries
## 1 Introduction
Black-hole low-mass X-ray binaries (BH LMXBs) consist of a stellar-mass black hole with a low-mass companion star overflowing its Roche lobe, forming an accretion disk (see Belloni et al., 2011; Kalemci et al., 2022, for reviews). During an outburst, a black-hole X-ray transient traces an anticlockwise 'q' path in the hardness-intensity diagram (HID; Homan and Belloni, 2005; Belloni et al., 2005), during which the proportion of the thermal and Comptonized components changes leading to several well-defined spectral states (Belloni et al., 2005). The accretion disk emits thermal emission in the soft X-ray band with the disk spectrum peaking at around 0.2-2.0 keV; a fraction of the soft disk photons are Compton up-scattered in the corona, forming a power-law like component in the energy spectrum in the hard X-ray band up to 100 keV (for reviews, see Done et al., 2007; Gilfanov, 2010). When the outburst starts, the X-ray luminosity of the source increases by several orders of magnitude compared to the quiescent state and the source enters the low-hard state (LHS) with a dominant hard Comptonized component and a relatively weak thermal disk component in the X-ray spectrum (e.g. Sharma et al., 2018; Wang-Ji et al., 2018). As the mass accretion rate increases, the source becomes brighter and quickly goes into the hard-intermediate state (HIMS), the soft-intermediate state (SIMS), the high-soft state (HSS), and sometimes an anomalous state at the highest luminosity (e.g. Mendez and van der Klis, 1997; Belloni et al., 2005; Belloni, 2010; Motta et al., 2012). From the LHS to the HSS, the truncated disk moves closer to the innermost circular stable orbit (ISCO) around the black hole (Esin et al., 1997; but see Reis et al., 2008; Wang-Ji et al., 2018; Miller et al., 2018 for a different interpretation) and the thermal component becomes dominant, while the spectrum of the hard component becomes steeper and weaker (e.g. Sharma et al., 2018; Dong et al., 2022). With the gradual decay of the mass accretion rate, the source evolves back to the LHS and finally returns to the quiescent state. In some so-called failed-transition outbursts the source stays in the LHS and HIMS and never enters the SIMS (Alabarta et al., 2021).
The Comptonized photons in the corona can impinge back onto
the accretion disk, resulting in the reprocessing and Compton back-scattering of the hard photons, which leads to a relativistic reflection component (Fabian et al., 1989; Bambi et al., 2021). The reflection spectrum includes characteristic X-ray emission lines among which the most prominent is the iron \(K_{\alpha}\) line around 6.4-7.0 keV and the Compton hump around 20 keV (Garcia et al., 2014). Recent studies have also modeled the relativistic reflection considering the incident photons coming from a hot blackbody or a returning reflection spectrum due to the strong gravitational bending (e.g. Connors et al., 2020; Garcia et al., 2022; Dauser et al., 2022). A relativistic reflection component can appear through the spectral evolution from the LHS to the HSS (e.g. Wang-Ji et al., 2018; Connors et al., 2020; Dong et al., 2022).
The X-ray light curves of black-hole binaries (BHBs) show variability on time scales from milliseconds to years (see Ingram & Motta, 2019, for a recent review). In the Fourier domain, the power density spectrum (PDS) of the X-ray light curve shows different kinds of variability, e.g. broadband noise and narrow peaks called quasi-periodic oscillations (QPOs; Van der Klis, 1989; Nowak, 2000; Belloni et al., 2002). The QPOs observed in BHBs are classified as low-frequency QPOs (LFQPOs), with central frequencies ranging from mHz to \(\sim\) 30 Hz, and high-frequency QPOs (HFQPOs), with central frequencies \(\gtrsim\) 60 Hz (Belloni & Stella, 2014). Depending on the shape of the broadband noise, the fractional root-mean-square amplitude (hereafter rms), the phase lags of the QPOs and the spectral state of the source, LFQPOs are divided into three classes, namely type A, B, and C (Remillard et al., 2002; Casella et al., 2004, 2005). Type-C QPOs generally appear in the hard states (both LHS and HIMS) with central frequency in the \(\sim\) 0.01-30 Hz range, high rms (up to 20%), strong band-limited noise, and usually second and sub harmonics. Type-B QPOs only appear in the SIMS (Belloni & Motta, 2016), with central frequency below \(\sim\) 6 Hz, low rms (\(\lesssim\) 5%), weak red noise, and sometimes the second and sub harmonics. Type-A QPOs, which are rarely detected, appear in the SIMS and HSS, with central frequencies in the \(\sim\) 6-8 Hz range, very weak rms and no harmonics. In the short-lived SIMS, fast transitions are sometimes observed between the type-B and either other types of QPOs or the disappearance of QPO (e.g. Casella et al., 2004; Zhang et al., 2021).
The Fourier cross spectrum of two simultaneous light curves in different energy bands (subject and reference band) can be used to compute the phase lags as a function of Fourier frequency. It is common to report lags of the signals identified in the PDS extending over a given frequency range, e.g. the broadband noise and the QPOs (Nowak et al., 1999). Hard (positive) lags (Miyamoto et al., 1988) can be produced by propagation of fluctuations of the mass accretion from the outer part towards the inner part of the disk and the corona (e.g. Arevalo & Uttley, 2006; Ingram & van der Klis, 2013). Soft (negative) lags can be due to reverberation or thermalization of hard photons when the corona photons impinge back onto the accretion disk (e.g. Uttley et al., 2014; Karpouzas et al., 2020).
Apart from the X-ray emission, BHBs radio emission from a jet is sometimes prominent, and can be classified into two types depending on the radio spectral index and the morphology of the jet: a small-scale, optically thick, compact jet and an extended, optically thin, transient jet (for a review, see Fender, 2006). The relation between the spectral states and the radio emission indicates the existence of an accretion-ejection coupling. In the LHS and the HIMS, a compact jet is observed, while during the state transition from the HIMS to the SIMS, the compact jet can be quenched for a few days (Fender et al., 2004). In the SIMS, there is no longer a compact jet but a bright transient jet appears with observable discrete relativistic ejecta, while in the HSS, the jet disappears (e.g. Mirabel & Rodriguez, 1994; Corbel et al., 2004; Ingram, 2019). Time variability in the radio band sometimes appears, but much less often than in the X-ray band (e.g. Tetarenko et al., 2019, 2021). Type-B QPOs in the X-ray band are usually thought to be connected to the relativistic transient jet but the exact mechanisms that explain the connection are still unknown (e.g. Fender et al., 2009; Homan et al., 2020; Garcia et al., 2021).
There are still many open questions regarding the the disk-corona-jet evolution, for instance, the disk truncation during the evolution of the black hole transients (Esin et al., 1997), the nature of the corona (the inner hot flow, disk sandwich, or the base of the jet; Galeev et al., 1979; Haardt & Maraschi, 1991; Markoff et al., 2005), and the geometry of the corona and its typical size (Martocchia & Matt, 1996; Lee & Miller, 1998). The geometry of the corona and its connection with the disk and the jet are still to be understood. A universal radio-X-ray correlation in the LHS provides evidence that the corona can be related to the radio jet (e.g. Gallo et al., 2003; Fender et al., 2004). Using data of GRS 1915+105, Mendez et al. (2022) proposed that (part of) the spread in the radio-X-ray correlation (Gallo et al., 2012) could be due to changes of the corona temperature. Studies of the spectral energy distribution (SED) from the radio to the X-ray band shows that the corona emission can originate from a shock acceleration region of tens of gravitational radii (\(R_{g}\)) as the jet base (Markoff et al., 2001; Connors et al., 2019; Cao et al., 2022). The spectral analysis of the reflection component in MAXI J1820+070 suggests that the corona outflows with a higher relativistic velocity as it is closer to the black hole (You et al., 2021). Through X-ray variability studies, the size of the corona shows a continuous evolution during the outburst that could be related to the change of the radio jet emission, suggesting a disk-corona-jet connection (Kara et al., 2019; Wang et al., 2021; Mendez et al., 2022; Zhang et al., 2022).
Many corona models of time variability have been proposed to explain the corona geometry. Using the broadband noise, reltrans models the reverberation lags and measures the corona height assuming a lamppost geometry of the corona (Ingram et al., 2019). Assuming the corona is a wide, low-velocity, wind-like structure, a corona outflow model is proposed to explain the observed correlations of, for instance, the power-law photon index and time lags and the photon index and radio flux (Kylafis et al., 2008; Kylafis & Reig, 2018). The propagation of mass accretion rate fluctuations model assumes that the corona lies in a truncated disk. This model follows the variability (both QPOs and broadband noise) and also fits the time-averaged energy spectrum (e.g. Ingram & van der Klis, 2013; Zdziarski et al., 2021; Kawamura et al., 2022). The JED-SAD model assumes that the hard part of the spectrum comes from a jet-emitting disk (JED) and the soft part from a standard accretion disk (SAD) (Ferreira, 1997; Petrucci et al., 2008; Marcel et al., 2018). The JED-SAD model not only explains the 'q' path in the HID in terms of transitions between accretion modes, but also matches the observed variability like the LFQPOs and the hard-soft lags (Marcel et al., 2019, 2020). The dynamical origin of the LFQPOs can be explained by the Lense-Thirring (L-T) precession of the corona which also restricts the corona region within a truncated disk or a jet base (Stella & Vietri, 1998; Ingram et al., 2009; You et al., 2018; Ma et al., 2021), or instabilities in the disk accretion flow (Tagger & Pellat, 1999).
Recently Karpouzas et al. (2020) and Bellavita et al. (2022) developed a time-dependent Comptonization model called \(\dot{\rm K}\)ompthd that explains the radiative properties (rms and phase lags) of QPOs and measures the corona geometry around the black hole. The \(\dot{\rm vK}\)ompthdk model assumes that the temperatures of the disk and the corona and the rate at which the corona is heated up oscillate coherently at the frequency of the QPO. The unspecified external heating source is required to keep the temperature of the corona
that, otherwise, would cool down very quickly and become undetectable. In this model, a hot spherical corona partially covers the soft disk and scatters the seed photons from the disk into the Comptonized hard photons. Note that we assume that the temperature of the corona is constant with radius (Sunyaev and Titarchuk, 1980). While inverse-Comptonization scattering in the corona may lead to a radial dependence of the corona temperature, this effect is likely negligible, especially at a large radii (Meyer-Hofmeister et al., 2012). Part of the out-going hard photons are observed while the others feedback onto the soft disk, are reprocessed, and finally reach thermal equilibrium with the disk. The steady state energy spectrum produced by vKompthdk is the same as that of nthcomp(Zdziarski et al., 1996; Zycki et al., 1999). The inverse-Compton scattering of the soft photons in the corona results in hard lags while the reprocessing of the hard photons in the disk leads to soft lags, both of which can be reproduced by vKompthdk to predict the size of the corona. We note that the model defines the flux of the feedback hard photons divided by the flux of the total Comptonized photons as an intrinsic feedback fraction, \(\eta_{\rm int}\). Apart from the \(\eta_{\rm int}\), in the model an explicit feedback fraction parameter, \(\eta\), is the flux of the feedback hard photons divided by the flux of the observed soft disk, thus \(\eta\) is in the range of 0-1. The intrinsic feedback fraction indicates the efficiency of hard photons that feedback onto the disk. Combining the measurements of the corona size and the feedback fraction, we can understand to what extent the corona covers the disk and whether the shape of the corona is sphere-like or jet-like (Karpouzas et al., 2021; Garcia et al., 2021; Mendez et al., 2022; Zhang et al., 2022).
MAXI J1535\(-\)571 is an X-ray transient discovered by _MAXI_/GSC and _Swift_/BAT independently when it went into outburst on 2017 September 2 (Kennea et al., 2017; Negoro et al., 2017). The source reached a peak flux of up to 5 Crab in the 2-20 keV band (Nakahira et al., 2017). It has been proposed that MAXI J1535\(-\)571 has a rapidly spinning (\(>\) 0.84) black hole and subtends a high inclination angle (\(>\) 60\({}^{\circ}\)) through the modeling of the relativistic reflection component (Miller et al., 2018; Xu et al., 2018; Dong et al., 2022). Dong et al. (2022) also reported that the corona temperature increases from 18 keV to \(>\) 300 keV as the source evolves from the LHS to the SIMS. In the soft state, the luminosity of MAXI J1535\(-\)571 is near the Eddington luminosity and the structure of the standard disk is likely to become slim (Tao et al., 2018). Timing studies of MAXI J1535\(-\)571 show that through the outburst, different types of LFQPOs appear and the LFQPOs evolve as the spectral state changes (Huang et al., 2018; Stevens et al., 2018). Bhargava et al. (2019) showed a correlation between the frequency of type-C QPOs and the hard photon index, indicating a connection between the timing features and the spectral parameters. From the radio observations of MAXI J1535\(-\)571, the jet is first a compact jet in the HIMS, then quenches during the transition from the HIMS to the SIMS, and finally a transient jet appears in the SIMS (Russell et al., 2019, 2020; Chauhan et al., 2019).
In this paper, we continue our previous study of the corona geometry of MAXI J1535\(-\)571 and the connection between the X-ray corona and the radio jet through the type-C QPOs in the HIMS using _Insight_-HXMT observations (Zhang et al., 2022). We further explore the corona properties the SIMS through the type-B QPOs, which are weak and much less frequently detected, using _NICER_ observations. We fit jointly the rms and phase-lag spectra of the type-B QPO and the time-averaged energy spectra of the source using the latest version of the time-dependent Comptonization model vKompthdk(Bellavita et al., 2022). This paper is organized as follows: In section 2 we describe the data reduction of the _NICER_ observations of MAXI J1535\(-\)571, and explain how we calculate the rms and phase lags of the type-B QPO in different energy bands. We also explain the parameter settings of the model used to fit jointly the rms and phase-lag spectra of the QPO and the time-averaged energy spectra of the source. In section 3 we show the X-ray temporal evolution of MAXI J1535\(-\)571, the rms and the phase-lag spectra of the identified type-B QPO, and the spectral parameters measured from the joint spectral fitting. Finally, in section 4 we discuss our results and compare them with previous studies of type-B QPOs and the corona geometry. In that section, we combine the properties of the corona measured in this work with our previous results and propose a more complete picture of the corona-jet connection during the whole evolution of MAXI J1535\(-\)571 from the HIMS to the SIMS.
## 2 Observations and Data Analysis
The _Neutron Star Interior Composition Explorer_(_NICER_; Gendreau et al., 2016) observed MAXI J1535\(-\)571 from 2017 September 7 to 2019 May 11 for a total of 219 observations. We use the standard _NICER_ reduction routine nicerl2 with CALDB version xti20210707 to process the data. We remove the data of detectors # 14 and # 34 which are affected by episodes of increased electronic noise. We require the pointing direction of the instrument to be less than 0.015\({}^{\circ}\) offset, at least 40\({}^{\circ}\) above the bright Earth limb, and at least 30\({}^{\circ}\) above the Earth limb. For a bright source like MAXI J1535\(-\)571, we apply an undershoot count rate range 0-200 per module (underonly range). We set the column types of the prefilter to be NICERV3 and 3C50 as the recommended background columns. We require each GTI to be longer than 16 s and split the observations into separate orbits.
### Light curve and hardness-intensity diagram
We focus on the 36 observations in the period MJD 58008-58036, i.e. the outburst before the four reflares (Caneo et al., 2020). We use XSELECT to extract light curves at 1-s resolution for each orbit in the 1-3 keV and the 3-10 keV bands. We exclude the data below 1 keV since we find that the interstellar absorption towards the source is very high (see subsection 3.2), and there is no significantly detected emission from the source below that energy. In order to obtain the HID, for each orbit we use the average count rate in the 1-10 keV band and calculate the hardness ratio using the ratio of the average count rate in the 3-10 keV band and the 1-3 keV band.
### Energy spectra
We extract the energy spectra of the source and the background in each orbit using the nibackgen3C50 tool (Remillard et al., 2022), and use nicerraf and nicerrmf to generate the ancillary files and the response files, respectively. We group the spectrum such that we have at least 25 counts per bin and we oversample the intrinsic resolution of the instrument by a factor of 3.
We fit the energy spectra using XSPEC v12.12.1 (Arnaud, 1996). For the time-averaged energy spectrum of MAXI J1535\(-\)571, following Miller et al. (2018) we use the 2.3-10 keV energy band in order to avoid the calibration in the Si band (1.7-2.1 keV) and the Au band (2.2-2.3 keV), but we notice the energy range between 2.1 keV and 2.2 keV. A systematic error of 0.5% is added in quadrature to the model when performing the spectral fitting.
We apply the model TBfeo*(diskbb+vKompthdk+gaussian), hereafter Model 1, to fit the energy spectra. The component TBfeo models the Galactic absorption towards the source with variable
oxygen and iron abundances. We set the cross section and the solar abundance of the ISM using the tables of Verner et al. (1996) and Wilms et al. (2000), respectively. The component di s kbbb(Mitsuda et al., 1984) represents the emission of a multi-temperature optically thick and geometrically thin disk with parameters being the disk temperature, \(kT_{\rm in}\), and a normalization. The time-averaged or steady-state version of the time-dependent Comptonization model vKompthdk(Karpouzas et al., 2020; Bellavita et al., 2022) is the same as nthcomp(Zdziarski et al., 1996; Zycki et al., 1999). The parameters of this component are the seed photon temperature, \(kT_{\rm s}\), the corona temperature, \(kT_{\rm e}\), the photon index, \(\Gamma\), and a normalization. The optical depth, \(\tau\), is a function of \(kT_{\rm e}\) and \(\Gamma\) in vKompthdk:
\[\tau=\sqrt{\frac{9}{4}+\frac{3}{\left(kT_{\rm e}/m_{\rm e}c^{2}\right)\left( \left(\Gamma+1/2\right)^{2}-9/4\right)}}-\frac{3}{2}, \tag{1}\]
where \(m_{\rm e}\) and \(c\) are the mass of electron and the speed of light, respectively, and the Compton \(y\)-_parameter_(Zel'dovich & Shakura, 1969; Shapiro et al., 1976), \(y={\rm max}\left(\tau,\tau^{2}\right)4kT_{\rm e}/(m_{\rm e}c^{2})\), drives the shape of the spectrum. The seed photon temperature, \(kT_{\rm s}\), in vKompthdk is linked to the inner disk temperature, \(kT_{\rm in}\), in di s kbbb. In fact vKompthdk contains four extra parameters that only affect the time-dependent spectrum produced by this model, and describe the corona size, \(L\), the feedback fraction, \(\eta\), the amplitude of the variability of the rate at which the corona is heated by an (unspecified) external source, \(\delta H_{\rm ext}\), and an additive parameter, reflag, that gives the phase lag in the 2-3 keV band1. Note that none of these four parameters changes the steady-state spectrum produced by vKompthdk. These four parameters, plus \(kT_{\rm s}\), \(kT_{\rm e}\), and \(\Gamma\), describe the radiative properties of the QPOs, i.e. the rms and the phase lags. We add a Gaussian component to fit the broad iron \(K_{\alpha}\) emission line in the spectrum. We do not use more complicated reflection models like relxill, since in the energy band of _NICER_ we only detect the iron line feature and we do not have data above 10 keV where the Compton hump appears.
Footnote 1: Since in the data the reference band used to compute the lags is arbitrary, the lags are defined up to an additive constant.
### Power spectrum
We generate the PDS for each orbit in the 1-10 keV energy band and in five narrow bands, 1-2.5, 2.5-3.9, 3.9-5.5, 5.5-7.3, 7.3-10.0 keV bands. The length of each PDS segment is 8.192 s and the Nyquist frequency is 125 Hz. In each orbit we average all the PDS segments, subtract the Poisson noise using the average power from 100 Hz to 125 Hz in the PDS and normalize the PDS to fractional rms amplitude (Belloni & Hasinger, 1990):
\[{\rm rms}=\frac{\sqrt{P(S+B)}}{S}, \tag{2}\]
where \(P\) is the power in Leahy units (Leahy et al., 1983), and \(S\) and \(B\) are, respectively, the source and background count rates. We apply a logarithmic rebin in frequency to the PDS such that the bin size increases by a factor \(\exp(1/100)\) compared to the previous one.
We fit the averaged PDS of each orbit in the 1-10 keV band with Lorentzian functions (Nowak, 2000; Belloni et al., 2002) in the frequency range of 0.1-30 Hz. The parameters of a Lorentzian function are the central frequency, the full width at half maximum (FWHM), and the normalization. We fit the PDS with four Lorentzians representing the QPO fundamental, the second harmonic, and two broad-band noise components. An extra Lorentzian is needed if there is a QPO sub harmonic. All parameters in all the Lorentzians are free, but we freeze the central frequency of one Lorentzian function which fits the broadband noise at 0. If there is no QPO, we reduce the number of Lorentzian functions to one or two to only fit the broadband noise. Using the models that we apply to fit the PDS of each orbit in the 1-10 keV band as baselines, we further fit the PDS of each orbit in the five narrow energy bands (see above). We fix all central frequencies and the FWHMs of all the Lorentzians to the values we obtained for the PDS in the 1-10 keV band and only let the normalizations free to fit the PDS in those five narrow energy bands. We also check that the central frequency and width of those Lorentzians do not change significantly with energy. Finally we take the square root of the normalizations of the Lorentzians that represent the variability components to calculate the rms.
### Phase lag spectrum
We generate and average the Fast Fourier Transformation (FFT) in each orbit using the 1-10 keV energy band as reference band to compute the phase lags. The length of each FFT segment is 8.192 s and the Nyquist frequency is 125 Hz. We also compute the FFT of the data in the energy bands 1-2.5 keV, 2.5-3.9 keV, 3.9-5.5 keV, 5.5-7.3 keV, 7.3-10.0 keV that we use as the subject bands to compute the cross spectrum for each subject band with respect to the reference band.
Considering that the cross spectrum as a function of Fourier frequencies is \(G(f)={\rm Re}\,G(f)+i\,{\rm Im}\,G(f)\), where the phase lag is \(\phi(f)={\rm tan}^{-1}\left({\rm Im}\,G(f)/{\rm Re}\,G(f)\right)\), we can calculate the phase lags of the variability components that we measure in the PDS (Vaughan & Nowak, 1997; Nowak et al., 1999; Ingram, 2019). As in Beirano & Mendez (2022) and Alabatra et al. (2022), we perform a simultaneous fitting of both the real and the imaginary parts of \(G(f)\) using the same Lorentzian components that we used to fit the PDS. We add a constant in the model to fit the correlated part of the signal that arises because the subject bands are always within the 1-10 keV reference band (Ingram, 2019). This constant is not linked between the real and the imaginary parts of the cross spectra. We fix the central frequency and the FWHM of all the Lorentzians at the same values that we get from fitting the PDS. For the real part, we fit the normalizations of the Lorentzians, while for the imaginary part, we define an extra free parameter, \(\phi\), for each Lorentzian so that the normalization of a Lorentzian in the imaginary part is equal to \(\tan\phi\) times the normalization of the corresponding Lorentzian in the real part. We note that this assumes that the phase lag of each component is constant with frequency (Peirano & Mendez, 2022). Using this method, for the energy bands we defined, we compute all the phase lags of all the variability components fitted by the Lorentzian functions.
## 3 Results
### Light curve and hardness-intensity diagram
Fig. 1 shows the light curve (left) and the HID (right) of MAXI J1535\(-\)571 in the time period MJD 58008-58036. Each data point corresponds to one _NICER_ orbit. During the outburst, the source moves in an anticlockwise direction in the HID. From MJD 58008 to 58011, MAXI J1535\(-\)571 stays in the LHS with a count rate \(\sim\) 8000 cts/s and a hardness ratio increasing from \(\sim\) 0.6 to \(\sim\) 0.64. From MJD 58011 to 58017 the count rate increases quickly from \(\sim\) 7000 cts/s to \(\sim\) 18000 cts/s while the hardness ratio increases slowly
from \(\sim 0.63\) to \(\sim 0.64\) with some excursions up to a maximum of 0.66. At this point the source moves to the left in the HID and transits from the LHS to the HIMS (e.g. Belloni et al., 2005). From MJD 58017 to 58024 the count rate gradually decreases to \(\sim 15000\) cts/s, while the hardness ratio decreases from \(\sim 0.62\) to \(\sim 0.56\), indicating a state transition from the HIMS to a softer state. During the final part of the outburst, from MJD 58024 to 58036, the count rate continues decreasing gradually to 10000 cts/s, while the hardness ratio decreases from 0.6 to 0.56.
We mark with blue in Fig. 1 the observations where we detect the type-C QPOs. The data of the type-C QPOs are from Rawat et al. (2023) who performed a systematic study of the type-C QPOs in the _NICER_ observations of MAXI J1535\(-\)571. The observations mark the hard and the hard-intermediate states, and do not include the observations of the soft-intermediate and the soft states where weak QPOs may appear (e.g. Belloni & Stella, 2014, for a review). In the top part of the HID (Fig. 1) we plot in grey observations with no significant QPO in the PDS. The remainder of the orbits, 26 in total, show a QPO with a rather constant frequency of \(\sim 5\)-6 Hz. The information of the 26 orbits is listed in Tab. 11. We plot these observations in Fig. 1 with red. Since these QPOs in the SIMS are weak and their central frequencies are within a small range, to improve the signal-to-noise ratio we average the PDS of the 26 orbits. The left panel of Fig. 2 shows the averaged PDS. Based on the fact that both the QPO fundamental and the second harmonic appear and given the weak rms of the zero-centered broadband noise (0.7%; see Tab. 1), we tentatively identify these QPOs as type-B QPOs.
Since the red points in the HID (the right panel of Fig. 1) extend to a very soft region with hardness ratio less than 0.59, some of the QPOs we detected may be type-A (e.g. Casella et al., 2004). To check whether all our detections are type-B QPOs we divide the observations into two groups with hardness ratio either greater or smaller than 0.59. We find that in both groups the averaged PDS are consistent with the PDS shown in Fig. 2. We therefore conclude that all the QPOs that appear in the observations marked with red in Fig. 1 are type-B QPOs.
### Initial fitting to the time-averaged energy spectra
We first fit all the energy spectra of the 26 orbits with the type-B QPOs separately using the model TBfeeo*(diskbb+vKompthdk+gaussian), as introduced in subsection 2.2. The best-fitting values of \(\Gamma\), \(kT_{\rm in}\) and the iron line profiles in the separate orbits are consistent with being the same within errors and only the normalizations of the Comptonized and the disk components change. Given that \(kT_{\rm c}\) cannot be constrained using the _NICER_ spectra, we fix it to 250 keV.
Since the only parameters that change for the different orbits are the normalizations of the disk and the corona components, to improve the signal-to-noise ratio (SNR) we fit the energy spectra of the 26 orbits together, using the same model TBfeeo*(diskbb+vKompthdk+gaussian) linking all the parameters in each of the 26 data groups, except for the normalizations of diskbb and vKompthdk components. Note that this fitting can also make it more convenient for our later joint fitting in subsection 3.4. This spectral fitting shows that, at this stage, the electron temperature of the corona \(kT_{\rm e}\) still cannot be constrained, therefore we fix it again to 250 keV. We find that the best-fitting hydrogen column density is \(4.2\times 10^{22}\) cm\({}^{-2}\), consistent with previous measurements (Miller et al., 2018). The disk temperature is 1.14 keV, while the corona photon index is 2.84, indicating that the source is in a relatively soft state (Dong et al., 2022). The gaussian used to fit the iron line has a central energy of 6.8 keV with a width of 0.98 keV. For the 26 orbits, the 0.5-10 keV averaged flux of the disk and the Comptonized components is, respectively, \(1.6\times 10^{-7}\) erg cm\({}^{-2}\) s\({}^{-1}\) and \(0.4\times 10^{-7}\) erg cm\({}^{-2}\) s\({}^{-1}\).
### Fractional rms and phase-lag spectra
As shown in the left panel of Fig. 2, we fit the PDS with four Lorentzians representing two broadband noise components, the QPO fundamental, and the second harmonic. Tab. 1 gives the best-fitting parameters of the Lorentzians. The average type-B QPO has a central frequency of \(5.64\pm 0.07\) Hz with a FWHM of \(2.9\pm 0.3\) Hz. The type-B QPO is weak, with an rms of \(1.1\pm 0.05\)%. The second harmonic of the type-B QPO appears at a central frequency of \(10.5\pm 0.5\) Hz, consistent with being twice the central frequency of the QPO fundamental.
We plot the rms spectrum of the QPO in the left panel of Fig. 3. The horizontal error bars are the width of the energy channels. As the energy increases from 1 keV to 10 keV, the rms increases monotoni
Figure 1: Light curve (left) and hardness-intensity diagram (right) of MAXI J1535\(-\)571 during the main outburst from MJD 58008 to 58037. Each point corresponds to a _NICER_ orbit. The grey, red, and blue points indicate observations with, respectively, no QPO, type-B QPOs, and type-C QPOs. The error bars indicating 68% confidence level are smaller than the size of the points. The grey dashed lines connect the data points in time sequence.
cally from \(\sim 1\%\) to \(\sim 6\%\), indicating that the type-B QPO is mainly modulated by the hot corona.
The right panel of Fig. 3 shows the phase-lag spectrum of the type-B QPO. The horizontal error bars are the width of the energy channels. As the energy increases from 1 keV to 7.3 keV, the phase lags decrease from 0.8 rad to -0.7 rad, while above 7.3 keV the phase lags increase slightly to around -0.6 rad, resulting in a minimum in the phase-lag spectrum at around 6-7 keV. The energy where the minimum phase lag appears is consistent with the results of Stevens et al. (2018). The phase lag with respect to the the 1-10 keV reference band crosses the zero line (grey dashed line in the right panel of Fig. 3) from the positive to the negative at an energy of \(\sim 3\) keV.
### Joint fitting of the rms and phase-lag spectra of the QPO and the energy spectra of the source
Similar to what we did in Zhang et al. (2022), we fit jointly the rms and phase-lag spectra of the QPO and the energy spectra of the 26 orbits of MAXI J1535\(-\)571 (the latter are initially fitted in subsection 3.2) using the model TBEto\({}^{\circ}\)(diskbb+vKompthdk+gaussian).
Note that for the fitting of the rms spectrum, vKompthdk is multiplied by a dilution component which is not explicitly written in the total model. The introduction of the dilution model is justified since the time-dependent Comptonization model computes the rms spectrum of a variable corona but the observed rms is reduced by all the non-variable spectral components. Here, we assume that the diskbb and the gaussian components are not variable, and therefore the dilution component is Flux\({}_{\rm Compt}(E)/\)Flux\({}_{\rm Total}(E)\) such that \(\rm rms_{Obs}=\rm rms_{\rm Compt}\) + Flux\({}_{\rm Compt}/\)Flux\({}_{\rm Total}\). (Note that this
\begin{table}
\begin{tabular}{c c c c} \hline Component & Frequency (Hz) & FWHM (Hz) & rms (\%) \\ \hline QPO & \(5.64\pm 0.07\) & \(2.9\pm 0.3\) & \(1.11\pm 0.05\) \\ Harmonic & \(10.5\pm 0.5\) & \(3\pm 2\) & \(0.43\pm 0.08\) \\ BBN1\({}^{*}\) & 0 & \(0.5\pm 0.1\) & \(0.73\pm 0.08\) \\ BBN2\({}^{*}\) & \(0.8\pm 0.1\) & \(2.5\pm 0.2\) & \(1.40\pm 0.07\) \\ \hline \multicolumn{4}{l}{\({}^{*}\) BBN: Broadband noise component.} \\ \end{tabular}
\end{table}
Table 1: The parameters of the Lorentzians used to fit the averaged PDS of MAXI J1535\(-\)571. The error bar indicates the 1-\(\sigma\) confidence level.
Figure 3: Fractional rms amplitude spectrum (left panel) and phase-lag spectrum (right panel) of the type-B QPO of MAXI J1535\(-\)571. The dashed grey line in the right panel marks the zero phase lag. In both panels, the vertical error bars indicates the 68% confidence level, while the horizontal error bars indicate the width of the energy channels.
Figure 2: Left panel: Averaged PDS of MAXI J1535\(-\)571 in the 1–10 keV band for the _NICER_ orbits listed in Tab. A1, with the best-fitting model and the residuals. The black points are the data and the red line is the best-fitting model. The dashed lines from the left to the right show two broadband noise, the type-B QPO fundamental and the second harmonic components. The residuals are the data minus the model divided by the error. Right panel: The QPO central frequency vs. energy for the type-B QPO. The error bars indicate the 68% confidence level.
dilution component does not introduce any new parameters to the fits.)
Initially, when we fit simultaneously the energy spectra of the 26 orbits and the rms and the phase-lag spectra of the QPO, \(kT_{\rm s}\) in vKompthdk is linked to \(kT_{\rm in}\) in diskbb. The fit yields a relatively large \(\chi^{2}=104.0\) using 10 bins for the timing spectra and a corona size of \(\sim 10^{4}\) km. The large corona size indicates that the softer seed photons from the outer part of the disk play an important role in producing the phase lags. Therefore, we let \(kT_{\rm s}\) in vKompthdk linked with \(kT_{\rm in}\) in diskbb to fit the energy spectra of the 26 orbits, but allow \(kT_{\rm s}\) to vary freely when fitting the rms and phase-lag spectra. We show a comparison of the best-fitting results of the cases with \(kT_{\rm s}=kT_{\rm in}\) and \(kT_{\rm s}\) free in Fig. 4. Letting \(kT_{\rm s}\) free gives a significantly better fit, \(\chi^{2}=62.3\) (Tab. 2), with \(kT_{\rm s}=0.59\pm 0.07\) keV, lower than \(kT_{\rm in}=1.139\pm 0.002\) keV; the corona is still relatively large with a size of \(6500\pm 500\) km. In the time-averaged energy spectrum \(kT_{\rm in}\) and \(\Gamma\) are the same as when \(kT_{\rm s}=kT_{\rm in}\). The feedback fraction pegs at the upper boundary 1, which gives \(\eta_{\rm int}=0.33\pm 0.02\). (For an explanation of the difference between the \(\eta\) and \(\eta_{\rm int}\), see section 1.) This means that \(\sim 33\%\) of the corona photons return to the accretion disk where they are thermalized and re-emitted, resulting in a soft lag, while the other 67% of the corona photons are the observed Comptonized photons. The temperature of the inner disk, \(kT_{\rm in}=1.139\pm 0.002\) keV, and the photon index, \(\Gamma=2.91\pm 0.06\), indicate that the source is in a relatively soft state. The spectral state matches well with the fact that the type-B QPO appears in the SIMS. Since we now include the rms and phase-lag spectra of the QPO and let \(kT_{\rm s}\) free in the fits, we try to let \(kT_{\rm e}\) free. From the joint fitting, we measure a hot corona with temperature \(kT_{\rm e}=330\pm 50\) keV, which is consistent with the corona temperature measured by Dong et al. (2022) in the SIMS of MAXI J1535\(-\)571 using the broadband data of _NuSTAR_ and _Insight_-HXMT.
From the values of \(kT_{\rm e}\) and \(\Gamma\) from Model 2, the optical depth of the corona is \(\tau=0.16\) and the Compton \(y\)-_parameter_ is \(y=0.41\), indicating that the system is in the unsaturated Comptonization regime. This is consistent with the assumptions of the time-dependent Comptonization model vKompthdk that we used.
We note that, although a fit with a dual corona (see Garcia et al., 2021) may reduce further the \(\chi^{2}\), we do not explore it here because of the limited number of energy bands in the rms and phase-lag spectra of the QPO and the large number of free parameters that would be involved.
Since the corona is large, it should illuminate the outer part of the disk and produce a narrow iron line due to reflection off cold material there (e.g. Miller et al., 2018). Therefore, we add an extra Gaussian line to Model 1, which then becomes Model 2: TBFeo*(diskbb+vKompthdk+gaussian1+gaussian2). We find that a second Gaussian line centered at \(6.62\pm 0.02\) keV with a width of \(0.12\pm 0.02\) keV fits the data well. The significance of the narrow Gaussian line, measured as the ratio of the line normalization to its error, is \(5\,\sigma\) and an F-test yields a probability of \(1.8\times 10^{-9}\), which indicates that the narrow Gaussian line improves the fit significantly. The final results of the fitting are plotted in Fig. 5 and the parameters are given in Tab. 2.
To test a possible degeneracy of the parameters, we run an MCMC simulation for Model 2, using the Goodman-Weare chain algorithm (Goodman and Weare, 2010). After testing the convergence of the chain, we set the chain length to 240000 with 1200 walkers. We discard the first 120000 steps and record the last 120000 steps. The entire covariance matrix is divided by a factor of 10000 to ensure that the walkers initially sample a large range of the parameter space. Fig. 6 shows the results of the MCMC simulation. The inner disk temperature, \(kT_{\rm in}\), is somewhat covariant with the Galactic absorption, \(N_{\rm H}\), which is expected since the absorption influences mainly the soft part of the spectrum, while the outer disk temperature, \(kT_{\rm s}\), is covariant with the corona temperature, \(kT_{\rm e}\), since these two temperatures play an important role when computing the phase lags (Bellavita et al., 2022). For the corona geometry, the size of the corona may be modulated by the seed photons from the outer disk and the hot corona temperature. The feedback fraction reaches its upper limit and a 3-\(\sigma\) error also constrain the feedback fraction to be larger than 0.9. Since the external heating rate maintains the temperature of the system when describing the timing spectra, \(\delta H_{\rm ext}\) is correlated with the outer disk temperature, \(kT_{\rm s}\), the corona temperature, \(kT_{\rm e}\), and the corona size, \(L\); the latter is always large, between 5600 km and 7200 km.
## 4 Discussion
We have analyzed 36 _NICER_ observations of the black hole candidate MAXI J1535\(-\)571 during the 2017/2018 outburst. We identify a type-B QPO in 26 _NICER_ orbits where the source is in the SIMS, during which the inner disk is relatively hot, \(kT_{\rm in}\sim 1.1\) keV, and the corona photon index is relatively high, \(\Gamma\sim 2.9\). From the simultaneous fit of the energy spectra of the source and the rms and phase-lag spectra of the type-B QPO, we find that the corona has a
\begin{table}
\begin{tabular}{c c c c} \hline Component & Parameter & Model 1 & Model 2 \\ \hline TBfeo & \(N_{\rm H}\) (\(10^{22}\) cm\({}^{-2}\)) & \(4.18\pm 0.08\) & \(4.18\pm 0.08\) \\ & \(A_{\rm O}\) & \(0.44\pm 0.05\) & \(0.52\pm 0.05\) \\ & \(A_{\rm Fe}\) & \(1.74\pm 0.08\) & \(1.51\pm 0.09\) \\ diskbb & \(kT_{\rm in}\) (keV) & \(1.138\pm 0.002\) & \(1.139\pm 0.002\) \\ vKompthdk & \(kT_{\rm s}\) (keV) & \(0.59\pm 0.06\) & \(0.59\pm 0.07\) \\ & \(kT_{\rm s}\) (keV) & \(330\pm 50\) & \(330\pm 30\) \\ & \(\Gamma\) & \(2.91\pm 0.06\) & \(2.88\pm 0.04\) \\ & \(L\) (\(10^{3}\) km) & \(6.5\pm 0.4\) & \(6.5\pm 0.5\) \\ & \(\eta^{\rm a}\) & \(1_{-0.07}\) & \(1_{-0.07}\) \\ & \(\delta H\) & \(0.15\pm 0.02\) & \(0.16\pm 0.02\) \\ gaussian1 & LineE (keV) & \(6.80\pm 0.04\) & \(6.75\pm 0.05\) \\ & \(\sigma\) (keV) & \(0.96\pm 0.04\) & \(1.01\pm 0.05\) \\ & Norm (\(10^{-2}\)) & \(6.2\pm 0.5\) & \(5.9\pm 0.4\) \\ gaussian2 & LineE (keV) & & \(6.62\pm 0.02\) \\ & \(\sigma\) (keV) & & \(0.14\pm 0.02\) \\ & Norm (\(10^{-3}\)) & & \(3.0\pm 0.7\) \\ \hline \(\chi^{2}\)/bin (SSS\({}^{\rm b}\)) & 4137.84 / 4576 & 4098.42 / 4576 \\ \(\chi^{2}\)/bin (rms\({}^{\rm c}\)) & 46.76 / 5 & 45.81 / 5 \\ \(\chi^{2}\)/bin (phase lag) & \(15.57\pm 0.5\) & \(15.47\pm 5\) \\ \(\chi^{2}\)/d.o.f. & 4200.08 / 4520 & 4159.70 / 4517 \\ \hline Disk flux\({}^{\rm c}\) (erg cm\({}^{-2}\) s\({}^{-1}\)) & \(1.577\pm 0.004\times 10^{-7}\) \\ Corona flux\({}^{\rm c}\) (erg cm\({}^{-2}\) s\({}^{-1}\)) & \(0.444\pm 0.003\times 10^{-7}\) \\ \hline \end{tabular}
* We give only the negative error of the feedback fraction, \(\eta\), because the parameter is consistent with 1, the maximum possible value at the 1-\(\sigma\) confidence level.
* Steady-state spectrum.
* Fractional rms spectrum.
* Phase-lag spectrum.
* Unabsorbed flux.
\end{table}
Table 2: Spectral parameters of the joint fitting for MAXI J1535\(-\)571. The error indicates the 1-\(\sigma\) confidence level. See the text for more details about the parameters.
Figure 4: A comparison of the best-fitting model to the rms spectra (upper panels) and phase-lag spectra (lower panels) in the case in which \(kT_{\rm s}=kT_{\rm in}=1.141\) keV (left panels) and \(kT_{\rm s}\) is free (right panels). In the case in which \(kT_{\rm s}=kT_{\rm in},\chi^{2}=104.0\) for 10 bins, while in the case in which \(kT_{\rm s}\) is free, \(\chi^{2}=62.3\) for 10 bins. The data and the best-fitting model are in black and red, respectively.
Figure 5: The joint fitting of MAXI J1535\(-\)571 using _NICER_ data. From top to bottom, the left panels show, respectively, the energy spectra of the 26 orbits, and the rms and phase-lag spectra of the type-B QPO with the best-fitting models. The right panels show the residuals with respect to the best-fitting model.
size \(L=6500\pm 500\) km and a feedback fraction \(\eta=1_{-0.07}\)2. We detect an additional significant narrow iron line with central energy of \(\sim 6.6\) keV and FWHM of \(\sim 0.1\) keV, consistent with a large corona illuminating the outer and cooler parts of the accretion disk.
Footnote 2: This value of the feedback fraction gives \(\eta_{\rm int}=0.33\pm 0.02\) which means that \(\sim 33\%\) of the total Comptonized photons impinge back onto the disk, while the remaining \(67\%\) are the observed Comptonized photons.
### The phase-lag spectrum of type-B QPOs
By systematically studying the type-B QPOs using _RXTE_ data, the lags of type-B QPOs are hard for the BHB systems of low inclination angle, while they are either hard or soft for the BHB systems of high inclination angle (Van den Eijnden et al., 2017; Gao et al., 2017). The hard lags are likely due to inverse-Compton scattering of the soft disk photons in the corona (Kylafis et al., 2008), while the soft lags are likely due to the down-scattering of the hard photons in the disk (Uttley et al., 2014). Stevens & Uttley (2016) studied the hard lags of type-B QPOs in GX 339\(-\)4 using _RXTE_ data and suggested that a corona with a large-scale height (\(\sim 1.8\times 10^{4}\) km) is the jet
Figure 6: MCMC simulation of the spectral parameters of MAXI J1535\(-\)571. The contours in each panel indicate the 1-\(\sigma\), 2-\(\sigma\) and 3-\(\sigma\) confidence ranges. The parameters are the same as those in the Model 2 in Tab. 2.
base. This scenario of a large corona derived from the phase-resolved spectroscopy of the type-B QPO is later modeled by Kylafis et al. (2020) who proposed that the type-B QPO in GX 339\(-\)4 originates from a precessing jet. Note that _RXTE_ is not sensitive to photons below 2-3 keV. Using _NICER_ observations of MAXI J1348\(-\)630, Belloni et al. (2020) found that the phase lags are positive both in the 3-10 keV and the 0.7-2 keV with respect to the reference band at 2-3 keV. The positive phase lags in the 0.7-2 keV band exclude the possibility that the type-B QPOs are due to propagation of mass accretion rate fluctuations which, in this scenario, the phase lags below 2 keV relative to the 2-3 keV band should be negative. Comptonization of the soft photons in the corona naturally explains the positive lags at energy above 3 keV, while Belloni et al. (2020) suggested that the positive lags of the type-B QPO below 2 keV in MAXI J1348\(-\)630 could be due to Compton down-scattering in the corona by using Monte Carlo simulations assuming a flat seed spectrum between 2 and 3 keV. However, because their seed spectrum does not emit below 2 keV, they neglected the effect of the direct emission of the seed source on the phase-lag spectrum. Indeed, the direct emission of a more realistic seed source (e.g. an accretion disc) leads to a flat phase-lag spectrum below \(\sim 3\) keV (Kylafis, priv. comm.; see also Figure 1 in Bellavita et al. 2022), contrary to the observations.
From the phase-lag spectrum (right panel of Fig. 3) in MAXI J1535\(-\)571, the phase lags generally decrease as the energy increases, with the lags being a minimum at around 6 keV. If we take the lowest energy band (1-2.5 keV) as the reference band, in MAXI J1535\(-\)571 all the lags are soft. Stevens et al. (2018) proposed that the soft lags of the type-B QPOs in MAXI J1535\(-\)571 are due to the phase offset between the peaks in the corona emission and the modulation of the disk spectrum. Based on the fitting results of our vKompthdk model, we find that \(33\%\pm 2\%\) of the corona photons return to and are reprocessed in the accretion disk, producing the soft lags. Therefore, the observed soft lags can be explained as the light-crossing time of a large corona illuminating the disk. Note that the minimum in the phase-lag spectrum at \(\sim 6\) keV could be related to the iron line feature, but since the type-B QPO in this source is weak, we do not have good enough resolution to perform a detailed line analysis.
### Comparison of corona models of variability
X-ray variability in the accreting X-ray binaries is generally classified as broadband noise and QPOs (Ingram and Motta, 2019, for a review). The broadband noise at low frequencies usually shows large hard lags that are thought to be due to Comptonization, while at relatively high frequencies it sometimes shows soft lags, which have been proposed to be caused by X-ray reverberation of corona photons reflected off the accretion disk (Uttley et al., 2014). In a lamppost geometry of the corona above a black hole, the model returns calculates the difference in the light-travel time of the photons reflected off the accretion disk relative to the corona photons that travel directly to the observer (Ingram et al., 2019). This model is able to fit the time-lag spectrum of the broadband noise in the black hole X-ray transient MAXI J1820+070, giving a corona height of up to \(\sim 500~{}R_{\rm g}\)(Wang et al., 2021), equivalent to \(\sim 6400\) km for an 8.5-\(M_{\odot}\) black hole (Torres et al., 2020). The lamppost geometry of the corona is a simplification to allow the calculation of the ray tracing in the spacetime around the black hole. On the other hand, this model cannot explain the hard lags observed sometimes in these systems that are therefore assigned to either Comptonization or fluctuation of accretion propagation (Arevalo and Uttley, 2006; Kylafis et al., 2008). Since the soft (reverberation) and the hard (inverse-Compton scattering and mass accretion rate fluctuation) are treated separately, the model is in essence two separate mechanisms that are connected by the fitting procedure.
From the perspective of QPOs that dynamically originate from the L-T precession, Ingram et al. (2016) and Nathan et al. (2022) developed a tomographic model that fits the QPO phase-dependent energy spectrum, explaining the energy shifts of the observed iron \(K_{\alpha}\) emission line in different QPO phases. Using _NICER_ and _NuSTAR_ data of GRS 1915+105 with a QPO frequency at 2.2 Hz, Nathan et al. (2022) measured a thermalization time delay of 70 ms, which is too long since in this long time delay the QPO signal would be washed out. Nathan et al. (2022) attributed this problem to the oversimplification of the precessing corona model. For instance, the authors did not take into account the precessing corona/jet obscuring different disk azimuths, which would result in a variation of the shape of the observed disk spectrum, and they ignored the light-crossing delays which can be of the order of milliseconds. Systematic error in the spectral modeling would potentially affect the measured time lags as well. Nathan et al. (2022) also measured an inner radius, \(R_{\rm in}=1.5R_{\rm g}\), of the accretion disk which is too small to produce the 2.2-Hz QPO predicted by the L-T precession of the corona lying inside a truncated disk (Ingram et al., 2009). It could be that the corona is not horizontally, but more likely vertically, extended in the form of a jet-like structure, namely the outflow (Stevens and Uttley, 2016; Kylafis et al., 2020).
Regardless of the dynamical origin of the QPOs, the time-dependent Comptonization model vKompthdk developed by Karpouzas et al. (2020) and Bellavita et al. (2022) describes both the hard and soft lags by considering inverse-Compton scattering in the corona and thermal reprocessing in the disk. Since the model solves the linearized Kompaneets equation, it provides the distribution of photons in energy at any given time, regardless of the history of these photons via a single mechanism3. This model can successfully fit the steady-state Comptonization spectrum and the rms and phase-lag spectra of QPOs with energies in the 1-100 keV range (Karpouzas et al., 2020; Garcia et al., 2021; Mendez et al., 2022; Garcia et al., 2022; Zhang et al., 2022). A limitation of the model is that even though it considers the thermal reprocessing of hard photons in the accretion disk, it ignores the relativistic reflection that produces the atom floccent emission lines and Compton hump (Garcia et al., 2014). The corona size measured by vKompthdk is generally by compared with the corona size predicted by L-T precession inside the truncated disk (see supplementary Fig. 4 in Mendez et al. 2022 and Fig. 5 in Garcia et al. 2022). If the corona size is large, regardless of the feedback fraction, a big corona is likely to be vertically extended (Mendez et al., 2022; Zhang et al., 2022). A dual-Comptonization model is also used to fit the _NICER_ data of the type-B QPO of MAXI J1348\(-\)630 (Garcia et al., 2021), showing that the inner part of the corona is small (\(L\simeq 400\) km) and spherical or slab, with the hard photons efficiently returning to the accretion disk, while the outer part of the corona is large (\(L\simeq 10^{4}\) km) and jet-like with nearly no hard photon feedbacking onto the accretion disk. The corona geometry, e.g. the size, measured from different models mentioned above has features in common (see below).
Footnote 3: In that respect this model is simpler than the reverberation model that requires two separate mechanisms to explain both the soft and hard lags at different Fourier frequencies.
### Corona geometry
Fast transitions between type-B and other types (A and C) QPOs are often observed (e.g. Casella et al., 2004; Zhang et al., 2021). The detailed spectral-timing analysis of H 1743\(-\)322 indicates that the transition from type-B to type-C QPOs could be explained by the presence of a jet or a vertically extended optically thick Comptonization region (Harikrishna and Sriram, 2022). From a spectral-timing analysis of the _Insight_-HXMT data of MAXI J1348\(-\)630 in the SIMS, Liu et al. (2022) proposed a vertically extended corona that is the base of the jet, and explained the disappearance and appearance of the type-B QPO as the jet being parallel to the BH spin axis or not, due to the Bardeen-Petterson effect. In our study of MAXI J1535\(-\)571 in the SIMS, we notice that the type-B QPOs disappear in some _NICER_ orbits as shown in Fig. 1 which may be explained by the L-T precession of the corona or the jet proposed by Liu et al. (2022).
The modeling of the corona through type-B QPOs in MAXI J1348\(-\)630 shows that the size of the jet-like corona is \(\sim 10^{4}\) km (Garcia et al., 2021). This size is comparable with the corona size of \(6500\pm 500\) km that we find in MAXI J1535\(-\)571. The intrinsic feedback fraction in this work is \(33\%\pm 2\%\), which indicates that the corona should be covering the accretion disk to some extent, as shown in the right panel of Fig. 7. From the fitting results in Tab. 2, the inner disk provides the seed photons and the Comptonized photons contribute mainly to the steady-state energy spectrum, while the outer disk provides relatively cold seed photons and the long light-crossing time of the Comptonized photons contribute mainly to the phase-lag spectrum. In MAXI J1535\(-\)571 we have already measured the corona geometry through type-C QPOs in the HIMS from MJD 58008 to 58017 (Zhang et al., 2022). The left panel of Fig. 7 shows the corona size, intrinsic feedback fraction, and the 9-GHz jet flux density (Russell et al., 2019) using the results both in Zhang et al. (2022) and this work4. Note that we have converted the feedback fraction, \(\eta\), into the intrinsic feedback fraction, \(\eta_{\rm int}\). As discussed in Zhang et al. (2022), the corona on MJD 58017 is a vertically extended, jet-like corona, which expands vertically to its maximal size, L \(\sim 9300\) km, with \(\eta_{\rm int}\sim 28\%\), two days before the transient jet reaches the maximum flux density. After MJD 58017, MAXI J1535\(-\)571 transits into the SIMS. In the SIMS from MJD 58018 to 58024, we measure a corona size \(L\sim 6500\) km with \(\eta_{\rm int}\sim 33\%\). Compared to the corona geometry in the end of the HIMS on MJD 58017, in the SIMS the corona size contracts and the hard photons feedback onto the disk more efficiently, indicating that the corona contracts vertically and expands horizontally. After MJD 58024 no QPO appears in the intermediate and high-soft states and the transient jet flux density gradually decays until it is no longer detected. The change of the morphology of the corona geometry in the SIMS, together with that in the HIMS, and the change of the jet flux density, suggests that the increasing size of the jet-like corona may give rise to the large-scale transient jet ejecta lagging behind the change of the corona size. After the ejection, the corona in the jet base contracts and the transient jet loses its energy source so the observed radio flux density drops. This suggests a connection between the corona and the jet.
Footnote 4: In this work, we improve the fitting model compared to Zhang et al. (2022). For more information about the refined model, see subsection 3.4.
The corona-jet connection has been investigated using wKomp thdk through type-C QPOs in GRS 1915+105 with 12-year _RXTE_ observations (Karpouzas et al., 2021; Mendez et al., 2022; Garcia et al., 2022). The connection between the corona and the jet in the persistent source GRS 1915+105 is similar to that in the BH X-ray transient MAXI J1535\(-\)571. In GRS 1915+105, when the radio emission is weak the corona covers large parts of the accretion disk and the hard photons efficiently feedback onto the disk, while when the radio emission is strong the corona is vertically extended and jet-like, and less hard photons feedback onto the disk (Mendez et al., 2022). The only difference is that, in the SIMS of MAXI J1535\(-\)571, the hard photons feedback onto the disk more efficiently than in the HIMS. The \(\sim 33\%\) intrinsic feedback fraction in the SIMS of MAXI J1535\(-\)571 is similar to the \(\sim 35\%\) intrinsic feedback fraction in the SIMS of MAXI J1348\(-\)630 (Garcia et al., 2021; Bellavita et al., 2022). In fact the feedback fraction of hard photons may either decrease or increase, depending on the balance between the Compton cooling process in the disk and the heating up of the corona (Merloni and Fabian, 2001; Karpouzas et al., 2020).
The evolution of the corona height using the light-crossing delays in reverberation shows quite a similar trend to our measurements using vKompthdk through the QPOs. Wang et al. (2022) studied archival data of 10 black hole candidates with _NICER_ and found
Figure 7: Left panel: The evolution of the corona geometry and the jet flux from the HIMS to the SIMS in MAXI J1535\(-\)571. The black, red, and blue points indicate the corona size, the intrinsic feedback fraction, and the jet flux density, respectively. The blue dashed line is discontinuous since the jet is quenched from MJD 58009 to 58017. The green dashed line indicates the time of the transition from the HIMS to the SIMS. Right panel: A schematic figure of the corona geometry of MAXI J1535\(-\)571 in the SIMS. The inner part of the disk provides the seed photons to the Comptonized emission that dominates the steady-state energy spectrum of the source, while the outer parts of the disk provide the seed photons to the Comptonized emission that dominates the rms and phase-lag spectra of the QPO. See text for more details.
that during the hard to intermediate state transition the soft lags first decrease and then increase (see Fig. 6 in Wang et al., 2022). Assuming a lamppost geometry, the corona height in the hard state decreases from \(\sim 1000\) km to \(\sim 200\) km, in the HIMS it increases monotonically up to 9000 km, and in the SIMS it decreases slightly to \(\sim 6000\) km. Comparing these results with those in the left panel of Fig. 7, we notice a general consistence of the measurements of the corona size. This consistence suggests the possibility to combine reltrans and vKompthdk to measure the corona geometry when reverberation lags and QPOs appear simultaneously.
## Acknowledgements
We thank the referee for constructive comments that helped us improve the quality of this paper. YZ acknowledges support from the China Scholarship Council (CSC 201906100030). MM acknowledges support from the research programme Athena with project number 184.034.002, which is (partly) financed by the Dutch Research Council (NWO). FG is a CONICET researcher. FG acknowledges support by PIP 0113 (CONICET) and PIBAA 1275 (CONICET). This work received financial support from PICT-2017-2865 (ANPCYT). DA acknowledges support from the Royal Society. TMB acknowledges financial contribution from PRIN INAF 2019 n.15. RM acknowledges support from the China Scholarship Council (CSC 202104910402). MM, FG and TMB have benefited from discussions during Team Meetings of the International Space Science Institute (Bern), whose support we acknowledge.
## Data Availability
The X-ray data used in this article are accessible at NASA's High Energy Astrophysics Science Archive Research Center [https://heasarc.gsfc.nasa.gov/](https://heasarc.gsfc.nasa.gov/). The software GHATS for Fourier timing analysis is available at [http://www.brera.inaf.it/utenti/belloni/GHATS_Package/Home.html](http://www.brera.inaf.it/utenti/belloni/GHATS_Package/Home.html). The time-dependent Comptonization model and the generator of the MCMC corner plot are available at the GitHub repositories [https://github.com/candebellavita/vkompt](https://github.com/candebellavita/vkompt) and [https://github.com/garciafederico/pyXspecCorner](https://github.com/garciafederico/pyXspecCorner).
|
2308.11337 | Evaluation of the Speech Resynthesis Capabilities of the VoicePrivacy
Challenge Baseline B1 | Speaker anonymization systems continue to improve their ability to obfuscate
the original speaker characteristics in a speech signal, but often create
processing artifacts and unnatural sounding voices as a tradeoff. Many of those
systems stem from the VoicePrivacy Challenge (VPC) Baseline B1, using a neural
vocoder to synthesize speech from an F0, x-vectors and bottleneck
features-based speech representation. Inspired by this, we investigate the
reproduction capabilities of the aforementioned baseline, to assess how
successful the shared methodology is in synthesizing human-like speech. We use
four objective metrics to measure speech quality, waveform similarity, and F0
similarity. Our findings indicate that both the speech representation and the
vocoder introduces artifacts, causing an unnatural perception. A MUSHRA-like
listening test on 18 subjects corroborate our findings, motivating further
research on the analysis and synthesis components of the VPC Baseline B1. | Ünal Ege Gaznepoglu, Nils Peters | 2023-08-22T10:32:21Z | http://arxiv.org/abs/2308.11337v1 | # Evaluation of the Speech Resynthesis Capabilities
###### Abstract
Speaker anonymization systems continue to improve their ability to obfuscate the original speaker characteristics in a speech signal, but often create processing artifacts and unnatural sounding voices as a tradeoff. Many of those systems stem from the VoicePrivacy Challenge (VPC) Baseline B1, using a neural vocoder to synthesize speech from an F0, x-vectors and bottleneck features-based speech representation. Inspired by this, we investigate the reproduction capabilities of the aforementioned baseline, to assess how successful the shared methodology is in synthesizing human-like speech. We use four objective metrics to measure speech quality, waveform similarity, and F0 similarity. Our findings indicate that both the speech representation and the vocoder introduces artifacts, causing an unnatural perception. A MUSHRA-like listening test on 18 subjects corroborate our findings, motivating further research on the analysis and synthesis components of the VPC Baseline B1.
Unal Ege Gaznepoglu, Nils Peters International Audio Laboratories, Friedrich-Alexander-Universitat Erlangen-Nurnberg, Germany {ege.gaznepoglu, nils.peters} @fau.de
**Index Terms**: speaker anonymization, x-vector, bottleneck features, F0, neural source-filter (NSF), quality evaluation
## 1 Introduction
+
Footnote †: The International Audio Laboratories Erlangen are a joint institution of the University of Erlangen-Nurnberg and Fraunhofer IIS.
Numerous developments in the speech signal processing domain have rendered the collection of speech data as well as its adversarial utilization simpler [1]. As a result, voice privacy is an emerging issue in today's world. Many technical applications either require by law, or would benefit from, a preliminary processing to mitigate the risks to user privacy. In this regard, a VoicePrivacy Challenge (VPC) has been organized to promote the development of voice anonymization systems via the introduced baselines, evaluation metrics and attack models, which are widely adopted by the researchers in the field.
Depending on the downstream task, i.e., the purpose the acquired speech signals shall serve, the anonymization procedure may be expected to preserve the prosody and the naturalness. One such use case is a psychiatric support context where the patients want to stay anonymous [2]. However, the results from the VPC 2020 and 2022 point out that none of the published systems up to date can achieve subjective naturalness scores on par with recorded human speech [3, 4]. Furthermore, our previous work utilizing contrastive systems revealed that using original x-vectors during synthesis surprisingly yields worse utility and an increase in the privacy [5]. Therefore, in this work, we evaluate the speech resynthesis capabilities of the VPC Baseline B1, using metrics from other domains, to understand if the speech representation and synthesis block shared across systems of multiple contestants have any improvement potential.
## 2 Related work
### VPC Baseline B1 and its derivatives
The VoicePrivacy Challenge Baseline B1 has been a source of inspiration to many challenge participants [3, 4]. The system [6], which consists of three feature extractors, an anonymization block, and a neural vocoder, is depicted in Figure 1. The feature extractors and their purposes are outlined in the Table 1.
More than 10 systems are proposed to improve the various aspects of the baseline over the last three years. Majority of these contributions target the anonymization block and keep the speech representation or the vocoder intact. Some however, such as [7], propose alternatives to the bottleneck features. For the speaker embedding, [8] proposes switching to ECAPA and [9] reports increased speaker representation capabilities when both ECAPA and x-vectors are used together.
The neural source-filter (NSF), i.e., the neural vocoder, has also received some attention. The 2022 edition of the challenge included two vocoders (NSF-HiFiGAN and HiFiGAN) that directly predict the waveforms from the speech representation, discarding the acoustic model (AM) that was present in the 2020 baseline. Works such as [9, 10] use the IMS Toucan toolkit that provides a modular neural vocoder and utilize local energy in addition to F0 for further prosody control.
### Evaluation of voice conversion systems
The voice anonymization problem, especially the way VPC framework treats it, has some similarities to the voice conversion problem. An overview on voice conversion mentions intrusive metrics like perceptual evaluation of speech quality
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Feature (purpose)** & **Extractor** & **Properties** \\ \hline F0 (Prosody) & YAAPT & (Nx1), W: \(35\), H: \(10\) \\ BN (Verbal content) & TDNN-F & (Nx256), H: \(10\) \\ X-vector (Identity) & TDNN & (1x512) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Extracted features per utterance. The quantity in parentheses indicates the resulting tensor shape. N: number of frames of an utterance. W: window size (ms), H: hop size (ms)
Figure 1: The VPC 2022 Baseline B1.a/b.
(PESQ) and mel-cepstral distortion (MCD) to evaluate synthesized speech quality [11]. In our study, the availability of the reference signals lets us employ such methods. Recently, torchaudio-SQUIM was proposed to estimate metrics such as PESQ on synthesized speech without needing a reference [12].
### Evaluation of voice anonymization systems
The VPC framework introduced objective and subjective metrics to evaluate different aspects of the anonymized speech signals [13]. The word error rate (WER), whose lower values indicate a better utility, is measured by an automated speech recognition (ASR) system. An automated speaker verification (ASV) system is used to measure the anonymization success, where higher equal error rate (EER) values indicate better anonymization. Prosody retention to a certain extent is ensured by a lower bound on F0 correlation and finally, a gain of voice distinctiveness measures whether the speaker diversity of the input speech datasets are preserved by the anonymization process. However, none of the introduced objective metrics can successfully measure the perceived naturalness thus the challenge organizers have resorted to a subjective evaluation of the utterances [13].
The VPC community uses contrastive systems [5, 7, 14], an idea similar to the ablation studies performed by the machine learning community. A contrastive system is a marginally different configuration of the anonymization system that provides further insights into how different modules thereof contribute to the performance. The cited studies use VPC metrics to assess the privacy versus utility (ASR scenario) tradeoff and also reported that synthesis with original set of features cause an increase to EER as well as to WER, hinting that some artifacts are introduced by the analysis-synthesis pipeline.
To conclude, the existing objective metrics of the VPC do not account for naturalness. The alternative, subjective listening tests, are non-ideal because they are time-consuming and costly. Furthermore, the evaluation methods in the VoicePrivacy literature are not capable of detecting abnormal behaviors in time, which in our opinion is necessary to find what causes unnatural outputs. To go beyond the contrastive system studies with EER and WER, we decided to investigate whether intrusive metrics and their non-intrusive estimates could be exploited.
## 3 Methodology
### Dataset
We use the VPC datasets libri-* and vctk-* for our evaluations. A summary of their content is provided in Tab. 2. These datasets are resynthesized using the systems in Table 3. The system B1b-spk is the same as the 2022 baseline, except it uses the original speaker-level x-vectors for synthesis. The system B1b-utt, using the utterance-level x-vectors for synthesis, imitates the training conditions of the neural vocoders. Both systems were trained using HiFiGAN discriminators. The system joint-hifigan-spk denotes the alternative vocoder (HiFiGAN [15]) provided by the VPC organizers. The system am-nsf-spk is the baseline used in 2020, that features an additional autoregressive AM that converts the speech representation into mel-spectrograms. The system mel-nsf-spk bypasses the AM and performs synthesis using the mel-spectrograms computed from original utterances, also referred to as _copy-synthesis_[16]. We feature both the PyTorch variant (denoted with a suffix -pt) and the C-based implementation utilized in VPC 2020. We also included an anchor equivalent mel-nsf-spk-4k that sets the mel-spectogram values for frequency bands with \(f_{c}>4\)kHz to zero.
A number of pre-processing steps are performed before the evaluation. Systems we evaluate introduce different amounts of delay, so we align the outputs with the references using cross-correlation. Many of the utterances contain silence, as well as some pauses, hence we ran Silero voice activity detection [17] on the references and computed the metrics on the segments with voice. Also, a number of utterances were visually inspected to ensure that the synthesis procedure preserves the loudness, which could bias the evaluation scores [18].
### Objective evaluation metrics
We adopt four different objective metrics to evaluate the resynthesis capabilities. These metrics are all intrusive, meaning that their computation requires access to a reference signal.
#### 3.2.1 Mel-cepstral distortion (MCD)
MCD is used to measure the signal similarity in a perceptual sense. The implementation we use is provided by [19].
#### 3.2.2 Scale-invariant signal-to-noise ratio (SI-SNR)
SI-SNR is used to measure the signal similarity [20]. The reference signal is first projected on the estimated signal, to obtain a scaling coefficient. Then the signal-to-noise ratio is computed. The implementation we use is a NumPy port of [21].
#### 3.2.3 Perceptual evaluation of speech quality (PESQ)
PESQ is an intrusive measure introduced by ITU to predict the subjective speech quality evaluations. We use the implementation in python-pesq[22].
#### 3.2.4 Gross pitch error (GPE)
GPE is a metric for F0 extractor evaluation. In our work, we use it to compare the synthesized F0 to the original. We adopt the definition in (1), also used in a previous work of us [5].
\[\begin{split}\text{GPE:}\quad\frac{\text{num. of frames whose error}>20\%}{\text{num. of correctly identified voiced frames}}\end{split} \tag{1}\]
MCD, SI-SNR and GPE have the advantage that they could be computed on smaller segments.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Subset Name** & **\#F** & **\#M** & **\#Uterances** \\ \hline libri-test-\{enrolls,trials\} & 15 & 15 & 1934 \\ vctk-test-\{enrolls,trials\} & 15 & 15 & 12048 \\ \hline \hline \end{tabular}
\end{table}
Table 2: VPC data subsets [13] utilized in this work. #F, #M: number of unique female/male speakers
\begin{table}
\begin{tabular}{l l c} \hline \hline
**ID** & **X-vector** & **Vocoder** \\ \hline mel-nsf-pt-spk & speaker-level & NSF \\ mel-nsf-spk & speaker-level & (C-based) NSF \\ mel-nsf-spk-4k & speaker-level & (C-based) NSF \\ am-nsf-spk & speaker-level & (C-based) AM + NSF \\ B1b-utt & utterance-level & joint NSF (+HiFiGAN-D) \\ B1b-spk & speaker-level & joint NSF (+HiFiGAN-D) \\ joint-hifigan-spk & speaker-level & joint WiFiGAN \\ \hline \hline \end{tabular}
\end{table}
Table 3: Systems evaluated in this paper. The x-vectors are not anonymized to assess the resynthesis capability. Vocoders are VPC PyTorch implementations, unless noted in the table.
### torchaudio-SQUIM
In addition to the intrusive metrics, we also tested the torchaudio-SQUIM [12], which provides non-intrusive estimates of the intrusive metrics. We use their PESQ prediction and report numbers for all the classes as well as for the reference signals. If these estimates correlate well with their intrusive complements or with user preferences, SQUIM could be also tested for evaluating anonymized speech.
### Subjective listening test
We conducted a MUSHRA-like listening test on 18 subjects of varying listening test experiences, using webMUSHRA software [23]. We randomly picked eight utterances from libri-test and six from vctk-test, (7 male and 7 female speakers, utterance lengths between \([5.5,8]\) seconds), which are available at 1. The users are presented each synthesis output and asked to rate the naturalness using the following prompt, inspired by the VPC subjective test [13].
Footnote 1: [https://audiolabs-erlangen.de/resources/2023-VPC-resynth-eval](https://audiolabs-erlangen.de/resources/2023-VPC-resynth-eval)
You will listen to a series of audio samples, comprising of both original recordings (referred to as reference) and versions that have been resynthesized using different neural vocoders, resulting in varying degrees of artifacts. Your task is to rate the naturalness of each recording.
**Naturalness:** Please judge how much audio degradation you can hear in each file. You need to select a score in the interval \([0,100]\), where higher numbers correspond to a more natural sounding audio, a \(0\) corresponding to'severely degraded' and a \(100\) to 'no degradation at all'. For this score please only consider the sound characteristics and not the content. Also note that the reference contains some background noise. Finally, the deviations from original speaker's voice also count as degradations.
## 4 Results and Discussion
### Objective evaluation
#### 4.1.1 Signal similarity metrics
Figure 2 depicts the SI-SNR and MCD results. We saw no significant differences during visual inspection of the "per-dataset" and "per-gender" distributions. Therefore we display averages over datasets and gender instead.
Copy synthesis, e.g., mel-nsf-spk, outperformed the others, but mel-nsf-pt-spk and mel-nsf-spk, two implementations of the same system, behaved differently. PyTorch copy-synthesis achieved better SI-SNR and MCD. The anchor, i.e., mel-nsf-spk-4, attained comparable SI-SNR but the worst MCD. Other vocoders attained a similar MCD, standing between the copy synthesis and the anchor. We interpret the discrepancy between mel-nsf and synthesis from the representation as a sign of inadequacies of the utilized speech representation, resulting in some further information loss on top of the artifacts due to NSF. am-nsf-spk performed slightly better than other vocoders, indicating the AM contributed to the resynthesis performance.
#### 4.1.2 F0 similarity
In a similar manner, Figure 3 depicts the GPE results. Behavior across female and male recordings are shown this time.
NSF-based systems maintained a certain standard in terms of F0 preservation, due to the source-filter model. HiFiGAN takes some extra liberty whilst synthesizing the signals, thus attained significantly higher GPE and this probably explains why it attained the worst SI-SNR too. An interesting outcome is that female speech has slightly higher GPE for NSF, but HiFiGAN corrupts pitch significantly more for male speakers.
#### 4.1.3 PESQ and torchaudio-SQUIM
Finally, we compare the PESQ computations as well as PESQ estimates by torchaudio-SQUIM in Figure 4.
PESQ with respect to the reference (left) shows a similar, but a more pronounced version of the trend in the SI-SNR plots. PyTorch copy-synthesis, i.e., mel-nsf-pt-spk, attained the best PESQ scores. The anchor performed better than the variants that synthesize from the speech representation (e.g., Blb). am-nsf-spk performed slightly better than other vocoders, again hinting the joint AM-NSF approach introduced in 2022 causing minor degradation. joint-hifigan-spk performed the worst. On metrics that take the perceptual aspects into account, such as MCD and PESQ, Blb-spk performed better than Blb-ut, which imitates the vocoder training scenario. This may indicate an underfit. A number of factors could have caused this, such as an insuffici
Figure 3: F0 similarity evaluation.
Figure 2: Evaluation results for signal similarity metrics.
plexity, lack of augmentation (augmenting x-vectors might help the vocoder to better learn the neighboring relations of the x-vectors) or may simply indicate that the training procedure has been cut off too early. The NSF was trained using L1 loss [6] on the magnitude spectrogram, which could be substituted with a perceptual loss to improve the performance.
Interestingly, the PESQ scores exhibit a greater inter-utterance variance for mel-nsf-spk variants. Additional investigations are required to understand this phenomenon. In particular it is crucial to understand if a confounding variable affects the scores, as previously shown by [18] with PESQ for factors such as loudness and alignment.
Torchaudio-SQUIM estimates of PESQ showed a different behavior. Systems joint-hifigan-spk, Blb-utt and Blb-spk achieved better performance with torchaudio-SQUIM evaluation. The systems have the HiFiGAN discriminators in common, which possibly explains the outcome. Some systems, e.g., am-nsf-spk, showed an unexpected bimodal distribution that is not explained by the gender or the dataset, which needs further investigations. The SQUIM estimates for the reference signals, depicted by the right-most violin plot in the Figure 4, again show a bimodal behavior.
### Subjective listening test
The listening test responses are filtered such that the answers for an utterance, whose reference was rated with less than \(90\) points, are removed. This results in at least \(14\) subjects rating each utterance. The ratings are presented in Figure 5.
Subjects reported that some mel-nsf-spk utterances had a severe muffling effect, often at their beginnings, rendering the part of the utterance completely unintelligible. In contrast, the mel-nsf-pt-spk was reported to suffer from random impulsive artifacts, somewhat like a "sizzling frying pan constantly accompanying the recordings". joint-hifigan-nsf was reported to change the accents, "Americanizing" the voices, and the identity perception was different to what reference or other systems evoked. Otherwise the speech was reported to sound most human-like.
Now turning to the analysis of the gathered scores, most subjects were able to identify the reference stimuli and grade accordingly. Removed answers constitute less than \(10\%\) of the acquired data. The anchor mel-nsf-spk-4k was rated the lowest whereas joint-hifigan-spk was rated the best, except it compromises on the speaker identity. Systems other than joint-hifigan-spk exhibited a higher inter-utterance variance. Notwithstanding the few utterances with unintelligible segments causing a second modality at the bottom, the copy-synthesis, mel-nsf-spk was rated the second best, followed by Blb variants. mel-nsf-pt-spk and am-nsf-spk were rated only slightly better than the anchor.
#### 4.2.1 Predictability of the subjective test scores
Intrusive metrics could not predict the outcome that joint-hifigan-nsf would be perceived the most natural, B1b performing better than am-nsf-spk and mel-nsf-spk outperforming mel-nsf-pt-spk. Only evaluation we ran that anticipated this outcome was torchaudio-SQUIM. We conclude that, the reference being available causes the intrusive evaluation to focus on the differences in signals that our subjects did not consider. Among the objective metrics, MCD was relatively successful.
Comparison of the PyTorch-based mel-nsf-pt-spk and C-based mel-nsf-spk, our subjects rated the latter better. The subjects penalized non-stationary artifacts less. To conclude, even though the objective metrics we utilize in this paper contribute to understanding how the blocks interact, these are not sufficient to explain the subject preferences completely.
### Future work
We think it would be worthwhile to study the effects of using additional speaker embeddings such as ECAPA [24], as multiple systems in the literature utilized it and performed well in the VPC 2022. Part of ECAPA's success comes from a better temporal pooling strategy using attention. However, VPC simply uses temporal averaging to obtain the utterance-level x-vectors, and mere utterance averages to obtain speaker-level x-vectors, so modifications to these aspects are worth investigating.
Some of the metrics we used, e.g., MCD, GPE and SI-SNR allow computation on very small segments, unlike PESQ. The time segments with the reported muffling effect could be
Figure 4: _Evaluation results for PESQ and torchaudio-SQUIM estimate of PESQ._
Figure 5: _The subjective evaluation results._
automatically located with these and further analysis could be conducted. Also for these metrics, different temporal pooling strategies could be experimented with.
## 5 Conclusion
In this paper, we have investigated the reproduction capabilities of the VoicePrivacy Challenge Baseline B1 by utilizing a diverse set of objective evaluation metrics. Our subjective and objective evaluation results indicate that the copy synthesis scores better than the synthesis from representations, likely indicating the speech representation is causing additional information loss and yielding unnatural sounding output. Previous studies found that a more recent speaker embedding could help improve the anonymization performance, and our results hint that it could also improve the synthesized speech quality. In addition, the vocoder training scheme may benefit from a number of changes to bolster its understanding of the speaker embedding space.
The objective metrics we utilize in this work show limited effectiveness in evaluating the system behavior for anonymization, primarily because they are intrusive and no references are available for anonymization, and metrics we evaluated partially align with the listening test subject preferences. Torchaudio-SQUIM's PESQ implementation performed relatively well, and it does not require a reference, so the voice anonymization evaluations may benefit from it.
|
2303.02987 | Leveraging the Existing German Transmission Grid with Dynamic Line
Rating | The integration of large shares of wind and solar power into the power system
benefits from transmission network expansion. However, the construction of new
power lines requires long planning phases and is often delayed by citizen
protests. As a non-invasive alternative, Dynamic Line Rating (DLR) offers the
potential to leverage the existing grid by dynamically adjusting the
transmission line capacities to the prevailing weather conditions. In this
study, we present the first investment model that includes DLR in a large-scale
power system with real-world network data and a high temporal resolution. Using
Germany as an example, we show that a system-wide integration of DLR improves
the integration of existing and additional renewables while reducing grid
congestion. The evolving synergies between DLR and increased wind generation
result in total cost savings of about 3% of all system costs for a scenario
with 80% renewable power production, mainly due to reduced storage and solar
capacity needs. If considering a fully decarbonized electricity system, the
cost savings from DLR amount to up to 5.5% of the system costs, i.e. 4 billion
Euro per year. Our results underscore the importance of a rapid implementation
of DLR in power systems to support the energy transition and relieve grid
congestion. | Philipp Glaum, Fabian Hofmann | 2023-03-06T09:28:39Z | http://arxiv.org/abs/2303.02987v1 | # Leveraging the Existing German Transmission Grid with Dynamic Line Rating
###### Abstract
The integration of large shares of wind and solar power into the power system benefits from transmission network expansion. However, the construction of new power lines requires long planning phases and is often delayed by citizen protests. As an non-invasive alternative, Dynamic Line Rating (DLR) offers the potential to leverage the existing grid by dynamically adjusting the transmission line capacities to the prevailing weather conditions. In this study, we present the first investment model that includes DLR in a large-scale power system with real-world network data and a high temporal resolution. Using Germany as an example, we show that a system-wide integration of DLR improves the integration of existing and additional renewables while reducing grid congestion. The evolving synergies between DLR and increased wind generation result in total cost savings of about 3% of all system costs for a scenario with 80% renewable power production, mainly due to reduced storage and solar capacity needs. If considering a fully decarbonized electricity system, the cost savings from DLR amount to up to 5.5% of the system costs, i.e. 4 billion Euro per year. Our results underscore the importance of a rapid implementation of DLR in power systems to support the energy transition and relieve grid congestion.
keywords: power system modelling, thermal rating, power system planning, renewable energy source, energy transition +
Footnote †: journal: Journal of Renewable Energy
## 1 Introduction
The efficient integration of renewable energy sources into today's power system requires restructuring key components, in particular the transmission system. With evolving production centers at sites with favorable renewable resources and decentralized power generation on house-hold level, electricity is flowing along new paths that are pushing the transmission network to its physical limits. To meet high share renewable targets, an expansion of the transmission system is essential for most countries. However, the process of expanding transmission lines can take 5-10 years and has often faced delays due to administrative issues and protest activities in the past [41; 44; 23]. The situation of lacking transmission capacity is therefore expected to worsen in the future which, in the light of rapid installation of renewable generation, will likely increase transmission system congestion, curtailment and grid instability [18]. In the example of Germany, most of the wind infrastructure built up throughout the last two decades is located in the North while there is significant industrial demand in the South [38]. The current federal government plans to have 100 GW onshore and 30 GW offshore wind power capacity in place by 2030 [20], which means roughly a doubling of today's capacities. Like in other countries, there is therefore a strong incentive to make better use of the existing grid infrastructure in near future.
To mitigate the lack of transmission capacities, several, complementing measures can be taken: (1) large scale implementation of storage facilities to flatten power feed-ins and power demands [10]; (2) usage of alternative energy carrier networks like hydrogen to relieve the electricity grid [54]; (3) leveraging the existing grid infrastructure to increase the transmission line capacity. In the following, we will focus on the last option, which in comparison to (1) and (2) stands out through a fast, low-cost and non-invasive implementation [13; 42].
The transmission of electricity dissipates resistive losses in form of heat. In meshed networks like in Germany, the limiting factor for line capacity is most often the thermal limit, i.e. the maximally allowed temperature of the conductor ensuring a stable transmission. This maximal line capacity is also regarded as thermal capacity. If the transmission line exceeds its thermal capacity, a safe operation of the transmission line is no longer guaranteed due increased line sag and clearance infringements [23].
Traditionally, the thermal capacity of a transmission line is calculated
assuming unfavorable, static weather conditions such as 40\({}^{\circ}\)C ambient temperature and 0.6 m/s wind speed [23]. This is referred to as Static Line Rating (SLR). By design, SLR underestimates the thermal capacity of a transmission line and, when implemented in practice, leads to an underutilization of the transmission infrastructure. On the other hand, Dynamic Line Rating (DLR) calculates the line capacity taking into account prevailing weather conditions. Cold weather and wind cool overheated transmission lines, enabling the thermal capacity to be raised. This results in key benefits in cost-efficiency, congestion reduction and better wind power integration [18]. Today, DLR is applied by the German Transmission System Operators (TSOs) in a few projects and in a simplified form, distinguishing between regional winter and summer capacities [8]. This is set to change according to the Federal Network Agency's network development plan, which calls on TSOs to implement real-time DLR wherever possible [8]. In particular against the backdrop of the current energy crisis, the importance of DLR was pointed out by the Federal Ministry of Economical Affairs and Climate Action [19].
In practice, there are several methods to ensure the thermal capacity for a transmission line is not exceeded [1]:
* **Weather measurement:** Nearby weather stations measure ambient conditions to calculate the theoretical thermal capacity.
* **Conductor temperature measurement:** Sensors along the line measure the real time temperature of the conductor.
* **Tension measurement:** A load cell measures the tension of the line to derive the clearance of the line.
* **Sag measurement:** The security clearance is directly monitored.
Among these, weather based measuring of the thermal capacity is considered the simplest as well as the cheapest method to implement [36]. The assumed investment costs for the implementation of DLR with the different methods vary between 60k and 80k E per km with an expected service life of 30 years [50].
In the literature, the field of energy system modelling has gained increasing importance in recent years. Detailed models of energy systems using real-world input data are used to assess new technologies and/or policies in the context of the energy transition [24; 7; 31; 47].
Despite its expected system benefits, none of these models consider DLR as a transmission line capacity enhancement method. Typically, DLR is studied based on artificial IEEE systems with low spatial and temporal resolution. The publications in [45; 53; 12; 11; 39] perform an operational optimization with DLR of bus systems with 24-118 nodes and a time-horizon of 24 hours. In particular, the studies in [45] presents a mixed-integer optimization model for evaluating the optimal set of line candidates to be upgraded with DLR. A slight expansion is provided in [35] where an investment model of a 118 IEEE bus system using a mixed-integer optimization with benders decomposition is presented.
The literature lacks a comprehensive assessment of DLR based on a detailed investment model with (1) high spatial and temporal resolution, (2) long time-horizon, (3) high renewable penetration and (4) real-world input data. This study addresses this gap. Our study is the first to examine the benefits of DLR considering (1)-(4), all which are necessary to assess the compound system effects of DLR. By focusing on Germany only, we allow for a high quality of the input data and a network representation in its original topology. The workflow, openly available at [26], can be easily extended to other countries.
The theoretical transmission capacity potential with DLR has already been discussed in [50] for Germany and in [14] for Rhineland-Palatinate with high renewable penetration. However, both reports do not include any operational or capacity expansion optimization of the power system. Differing from our first study [25], the following work has improved methodology and extended analysis that uses updated cost assumptions and broadens the scope of scenarios to the year 2035.
The article is structured as follows. Section 2 describes the methodology of the DLR implementation (2.1) and the underlying power system modelling (2.2). Section 3 presents the results of the different analyzed scenarios with limitations outlined in Section 4. A conclusion is presented in Section 5.
## 2 Methodology
### Dynamic Line Rating
The key concept of DLR is based on a dynamic estimation of the maximally allowed electrical current for the conducting material, also referred to as ampacity. The ampacity is set such that the conductor does not surpass the maximally allowed temperature after which impermissible
sag of the line or hardware damage can be expected [36]. There are two widely used standards to calculate the ampacity of a conductor, namely the IEEE and the CIGRE standard [32], [34]. As the IEEE standard is known to be more conservative [51], we choose it for our modeling. The following outlines the basic concept of DLR according to IEEE. For further details refer to [33], [32]. For each conductor, the heat balance equation
\[q_{c}+q_{r}=q_{s}+I^{2}\cdot R(T) \tag{1}\]
relates the heat losses on the left hand side to the heat gains on the right hand side. Convective heat loss \(q_{c}\), which represents cooling by ambient air, depends on ambient temperature, wind speed and angle, conductor material and geometry. The radiated heat loss \(q_{r}\) is the net energy lost through black body radiation. Solar heat gain \(q_{s}\), on the other hand, is caused by solar heat radiating onto the conductor. Finally, the resistive heat gain \(I^{2}\cdot R(T)\) is given for an electrical current \(I\) and temperature-dependent resistance \(R(T)\), where \(T\) is the temperature of the conductor. The latter can be approximated by linearly interpolating from reference resistance \(R_{ref}=R(T_{ref})\) using a material specific temperature coefficient \(\alpha\), i.e.
\[R(T)=R_{ref}\cdot(1+\alpha\cdot(T-T_{ref})) \tag{2}\]
Solving (1) for the electrical current and setting the temperature to its maximally allowed limit \(T_{max}\), yields the ampacity
\[I_{max}=\sqrt{\frac{q_{c}+q_{r}-q_{s}}{R(T_{max})}} \tag{3}\]
which for three-phase electric power transmission operating at voltage level \(V\) leads to a maximally allowed, constant power transfer of
\[P_{max}=\sqrt{3}\cdot I_{max}\cdot V \tag{4}\]
Fig. 1 shows \(P_{max}\) for a single wire of a typical 3-phase transmission line with \(R(T_{max}=80^{\circ}C)=9.39\cdot 10^{-5}\,\Omega/\)m and \(V=380\) kV, as a function of temperature and wind speed for three different wind incidence angles, \(0^{\circ}\), \(45^{\circ}\) and \(90^{\circ}\). The cooling effect at cold temperature with strong perpendicular wind leads to a transmission capacity increase of factor 4-5 compared to conservative conditions with \(40^{\circ}\)C and low wind.
For this study the IEEE standard was implemented in Atlte [27], a Python package used for converting weather data into renewable power
Figure 1: Transmission capacity for different environmental conditions. For three different wind incidence angles (\(0^{\circ}\), \(45^{\circ}\), \(90^{\circ}\)), the maximal transmission capacity of a typical electricity line (\(R(T_{max})\)=\(9.39\cdot 10^{-5}\,\Omega/\)m, \(V=380\) kV) is shown as a function of temperature and wind speed. The cooling effect of a perpendicular, strong, cold wind can lead to capacity increase of factor 4-5 compared to conservative conditions with \(40^{\circ}\)C and low wind.
potentials. Atlite obtains the weather data from the ECMWF Reanalysis v5 (ERA5) dataset providing various weather-related variables in a hourly resolution on a \(0.25^{\circ}\times 0.25^{\circ}\) grid.
The transmission lines are mapped onto the weather grid and their transmission capacity is calculated according to Equation 3. If multiple weather grid cells overlap with a transmission line, the grid cell with the most unfavorable condition provides the ampacity for the entire line. This is illustrated in Fig. 2.
### Power System Modeling
Using the Python package _Python for Power System Analysis_ (PyPSA) [5; 3], the linear ESOM is represented by a set of buses interconnected via transmission lines and complemented by loads, generators and storage facilities. Further, time-dependent electric demand per bus and generation potentials per generator are included for one year with hourly resolution. The generator dispatch and deployment is determined by minimizing the total system cost. The optimization uses the linearized power flow approximation [49; 4] and is subject to systemic constraints (CO\({}_{2}\) budget, limited capacity expansion, etc.).
Figure 2: The figure shows a transmission line intersecting different weather cells with individual wind speeds. The implementation in Atlite calculates different ampacities for each of the overlapping line segments, of which the minimum value determines the ampacity for the entire line.
Data on the transmission grid, power plants, renewable potentials and demand for the model are created with the workflow _PyPSA-EUR_[30; 29]. The representative network of Germany comprises all 256 substations and 333 transmission lines operating at 220 kV and above. Electrical parameters of transmission lines are derived by mapping voltage level of the lines to standard line types given in [46]. The demand data and renewable profile data, provided by Atlite, have an hourly resolution.
For the DLR model, the transmission capacity per line and time step is given by the formulation in Section 2.1, for the SLR model by the standard static transmission capacities. As DLR is calculated with averaged hourly wind speed data, sub-hourly wind speed fluctuations are neglected. This leads to a slightly overestimated \(P_{max}\), which we consequently scale down by an empirically derived factor of 0.95, see A.2 for details. Furthermore, we account for N-1 network security in both SLR and DLR models by restricting the power transmission P per line to \(-0.7\,P_{max}\leq P\leq 0.7\,P_{max}\) leaving a 30% capacity buffer [6].
As already shown in Fig. 1, DLR can easily lead to a capacity increase of more than 100% if the circumstances are beneficial. To ensure network security, TSO's typically limit the relative DLR capacity to 150% [52] of the SLR capacity. On the other hand, the ENTSO-E reports manageable capacity increases of 100% [15]. To account for this uncertainty, we calculate each limit separately for \(P_{max}\) at 130%, 150%, 180%, and 200%. If not mentioned further, no \(P_{max}\) limit is considered.
## 3 Results
The presented work includes three scenarios, which are listed in Table 1. In the first scenario, the operation of the existing network is optimized for 2019 to validate the model and demonstrate the near-term system benefits of DLR. In the second scenario, we run an investment optimization for the year 2030 to show the long-term benefits of DLR. By enforcing an 80% renewable production share, denoted as renewable share, we are able to assess DLR as a supporting measure to reach the German government's target of 80% renewable power supply by 2030 [20]. Beyond that, the German government targets climate neutrality of the power sector by 2035. In our third scenario, we therefore alter the renewable share from 85-100% in steps of 5%. The parameters settings for the study cases can be found in the A.1.
### Optimal Operation for 2019
For quantifying the system benefits of DLR in the existing grid, we choose to model the pre-pandemic year 2019 when operations were relatively average. Electrical load and renewable potentials are derived from historical data of the year 2019 [43], transmission and generation infrastructure are aligned to the state of 2019, using selected data from [28; 48; 55]. To approximate the operation of nuclear and lignite power plants mostly running at base load, we enforce a minimal requirement of operation, based on historical data from the ENTSO-E Transparency Platform [16].
Fig. 3 shows the geographical layout of existing generation and transmission capacities for the 2019 scenario. The circles with its subdivisions show the installed power generation capacity per site and technology with their area being proportional to the capacity. The widths of the transmission lines indicate the installed nominal capacities, while their colors show the relative average change in capacity when applying DLR. From the latter, we see that the changes in average line capacity vary across Germany. In the north, line capacity increases by 60-90%, while in the south it increases by 30-50%. Note that this effect is related to higher average wind speeds in northern Germany.
In addition to location, of particular interest is the time periods over which transmission improvements occur. The upper graph in Fig. 4 shows the number of observations of a given capacity factor, i.e. the total power output relative to the installed capacity, for solar, onshore and offshore wind, when collecting into 40 bins of equal width. For better visualization the y-axis was clipped at 1200 observations. The lower graph shows the average total DLR capacity relative to SLR for the bins as a function of the capacity factor. The shaded areas around the lines correspond to the 95% confidence intervals.
For onshore wind power, the transmission capacity increase correlates
\begin{table}
\begin{tabular}{l l l} Scenario & Type of optimisation & Minimal renewable share \\ \hline
2019 & Operational & – \\
2030 & Operational \& capacity expansion & 80\% \\
2035 & Operational \& capacity expansion & 85\%, 90\%, 95\%, 100\% \\ \hline \end{tabular}
\end{table}
Table 1: Table showing the three regarded scenarios with their type of optimization and their renewable shares.
positively and more or less evenly with the capacity factor. During periods of almost no onshore wind production, the average transmission capacity is close to the SLR transmission capacity. The DLR transmission capacity then rises sharply up to a capacity factor of 0.2. From there, the line rating increases almost linearly finalizing at an overall transmission capacity increase of 75% for periods onshore wind capacity factors close to 1. This correlation reflects the dependency of DLR on the wind speed. As can be seen in the upper graph, most time steps have an onshore wind capacity factor of \(\leq\)0.5. Higher capacity factors only occur in 16% of the time steps.
Figure 3: Capacity layout of the 2019 scenario of the German power system. The circle with its subdivisions are proportionally to the installed generation capacity. The widths of the line indicate their nominal transmission capacity, their color the relative average change when going from SLR to DLR.
For offshore wind power, the correlation between transmission capacity and power production is also positive, but not as steady as for onshore wind power. During periods with low offshore wind production, the transmission capacity increases by roughly 20%. The transmission improvement rises when going to periods with more offshore wind power. After a steep positive trend when reaching periods of highest offshore wind supply, an increase in transmission capacity of almost 70% is observed. The overall distribution of capacity factors is relatively flat and peaks at the maximally allowed capacity factor 0.93, set to account for wake losses [2].
For solar power, the trend is roughly the opposite. During periods of low
Figure 4: Lower graph shows the ratio between total DLR and SLR transmission capacity as a function of the capacity factor. Onshore and offshore wind production correlate positively with the increase in transmission capacity, solar productions reveal a negative correlation. The upper graph shows the number of observations for a given DLR to SLR ratio and the capacity factor.
solar supply, the increase in transmission capacity averages 20%, while it decreases during periods of higher supply. However, the DLR transmission capacity stays above the transmission capacity of SLR. The overall negative correlation results from the fact that on sunny days the cooling effect of temperature and wind tends to be lower. The radiating heating from solar influx plays a marginal role only. In the upper graph, the number of observations was clipped at 1200, leaving out the data point of 4400 time steps without solar supply.
In the operational optimization, these effects prove themselves to be quite impactful. Fig. 5 shows the difference in supply when going from SLR to DLR in the 2019 scenario. Of the 490 TWh total net electricity generation throughout the year, 31% (149 TWh) is provided by fossil power generation (hard coal, lignite and gas) with SLR. This share drops to 28.9% (137 TWh) when implementing DLR, while the share of onshore wind power increases from 24% (116 TWh) to 25% (122 TWh) and of offshore wind power from 4.4% (21.4 TWh) to 5.8% (28.3 TWh). The total generation of the other carriers change marginally. We recall that the installed generation capacities are the same for both scenarios. Solely, from the change in operation, the system saves around 583 mEUR/yr operational expenditure (OPEX) which translates to roughly 7 % of total cost. Note here that the assumed gas price is 27UR
leads to a carbon emission reduction of 10.8 MtCO\({}_{2}\) (6.5%) compared to the SLR model.
The main reason for the shift in generation is the improved transmission capacity during periods when wind power was previously curtailed. Fig. 6 shows the number of congested lines as a function of the total renewable power potential relative to the total load in each time step. The color and size of the dots indicate the renewable curtailment. It stands out that DLR (right panel) leads to a significant decrease in transmission congestion as well as curtailment. While the SLR model (left panel) reveals up to 16 congested lines, maximally 8 lines are congested in the DLR model. In
Figure 6: Number of congested lines for SLR and DLR as a function of the ratio between the renewable generation potential and the load for each hour during the year. A ratio higher than 1 implies an over-supply which is curtailed. The size and color of the scatter points correspond to the amount of curtailment at the regarded hour.
particular, at times with a shortage of renewable energy (Potential / Load \(<\) 1) the SLR system has to curtail 7 times more power due to transmission congestion. This does not hold for the DLR model. The same accounts for times with renewable excess power (Potential / Load \(>\) 1), where power is mainly curtailed due to oversupply in the DLR model and curtailed due to oversupply and congestion in the SLR model.
### Optimal Investment 2030
In the scenario for 2030 the capacity of renewable power plants, gas turbines, batteries and hydrogen infrastructure are optimized. From the existing power plant fleet, only gas power plants with a decommissioning date later than 2030 are included. To align our model with the government's coal-phase out plans [20], we do not consider the use of existing coal and lignite assets. The grid infrastructure is supplemented by grid expansion projects from the TYNDP [17] to be built by 2030. The electrical load time series, originally representing the default year 2013 of PyPSA-Eur [29], is scaled up to meet the predicted total net demand for 2030 of 555 TWh [37]. The supply from renewables, i.e. solar, wind, biomass and hydro facilities, is enforced to meet the 80% target. The cost parameters for 2030 are retrieved from the technology data-base published at [40]. In addition, to properly reflect the future operational costs of fossil generators, we impose a price of 120 EUR per tonne CO\({}_{2}\).
Table 2 shows the optimal capacities of technologies varying across the models, in comparison to today's capacities. Roughly speaking, the optimal solution to meet the 2030 target requires tripli
\begin{table}
\begin{tabular}{l r r r r r} & Today & SLR & DLR & \(\Delta\) & Unit \\ \hline Solar & 64 & 203 & 179.9 & –23 & GW \\ Onshore Wind & 57.7 & 102.1 & 96.4 & –5.7 & GW \\ Offshore Wind & 8 & 9.4 & 13.7 & 4.3 & GW \\ Natural Gas & 32 & 45.9 & 47.6 & 1.7 & GW \\ Battery Discharge & - & 20.9 & 14.2 & –6.7 & GW \\ Battery Storage & - & 133.8 & 88.1 & –45.7 & GWh \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of a selection of capacity totals for today, reported at [9] by the time of writing, and the optimized 2030 scenario with a 80% renewable share with Static Line Rating and Dynamic Line Rating.
wind capacity, and expanding batteries. While solar and onshore wind capacity totals are roughly in alignment the government's target for 2030 (200 GW solar, 100 GW onshore wind), the model expands much less offshore wind power then planned by the government (30 GW) [20]. In both scenarios, no hydrogen facilities are installed, which, as we show later, turn out to be profitable for renewable shares higher than 90%.
On the one hand, the DLR model results in 4.3 GW additional offshore wind expansion and 1.7 GW natural gas expansion. On the other hand, other technologies are expanded significantly less: solar by 23 GW, onshore wind by 5.7 GW, and most importantly battery by a total storage capacity of 45 GWh. This equals to a third of total battery installations in the SLR model.
Fig. 7 shows the optimal spatial distribution of generation and transmission capacities in the SLR and DLR model. In the DLR model, the line capacities are higher as well as more renewable capacities are built in the north of Germany.
Figure 7: Optimized capacities for the 2030 scenario with 80% renewable share.
Together with the changes in the capacity layout, we perceive significant changes in the operation. When implementing DLR, the capacity factor of onshore wind power is lifted from 21% to 22.5% leading to additional supply of 1.5 TWh by onshore wind despite its lower capacity. The capacity factor of offshore wind power increases from 39.1% to 41.5% leading to an additional supply of 17.7 TWh. In contrast, roughly 17.5 TWh less are produced from solar facilities. By reducing the use of storage technologies and the associated losses, the system produces 91 GWh less in total with DLR. To this end, Fig. 8 shows the duration curve of the total congestion throughout the simulation year. We see that through DLR the transmission congestion is significantly reduced. More than three quarters of the year are free from congestion while, for SLR, congestion appears in nearly 4000 hours. Furthermore, the number of maximally congested lines drops from 58 to 14 when going from SLR to DLR.
From these numbers we conclude the following. The implementation of DLR enables the model to better integrate offshore wind power in the system. By leveraging offshore wind potentials, the DLR model profits from relatively steady power feed-in. Improved transmission capacity allows offshore wind to penetrate deeper into the system, significantly reducing the need for solar and short-term storage, as well as onshore wind. Overall, the DLR model is 1.15 bnEUR/yr cheaper than the SLR model. Most of the cost savings are due
Figure 8: Duration curve for number of congested lines in the 2030 study case.
to the lower solar and battery installations.
In most of today's real-world implementations, TSO's set a static upper limit to the DLR capacity, e.g. for network security reasons. In the following, we show that the positive effects DLR are robust against such a limit. Fig. 9 shows the system cost, optimal generation capacity, total curtailment,
Figure 9: Total cost, optimal capacity, curtailment, and relative total cost savings compared to the SLR model as a function of the maximum line rating allowed per line in the DLR model. Already with a conservative cap at 130% of the SLR capacity, strong system benefits are achieved.
and relative total cost savings as a function of the maximum allowed DLR transmission capacity per line. It can be seen that the largest benefits occurs in the first step going from SLR (1.0) to DLR with a rather conservative upper limit of 130% of the SLR capacity. From then onwards, optimal generation capacity and curtailment hardly change and system costs are only slightly reduced. In the light of Fig. 4, these findings can be attributed to two facts: First, only a minority of time steps (\(<20\%\)) reveal a relative DLR transmission capacity increase of more than 50%. Second, even when higher transmission rates are possible, the overall wind power production is so high that the system does not profit from it. With a capacity factor of above 0.5, wind power produced more than 65 GW which is more than the average load of 63.3 GW of the system. Therefore, in most of these times the excess energy is curtailed independently of the DLR transmission capacity limit.
### Optimal Investment 2035
Looking beyond the scope of 2030, we vary the production share of renewable technologies gradually in the SLR and DLR model from 80 to 100%. This variation represents the pathway from 2030 to 2035 in which the German government targets an almost decarbonized power sector [22]. For this scenario, no DLR limits are considered. Note that all the other parameters, including the electrical load and weather year, remain the same as in the 2030 scenario to ensure comparability.
Fig. 10 shows the capacity changes between the SLR and DLR model for all considered renewable shares. Other renewables are not depicted as their capacities are equal for both models. Continuing the trend of the 2030 scenario, we find that the DLR models require less generation capacity for all technologies except offshore wind. The optimal offshore wind capacity in the DLR model increases continuously with the share of renewables, from +4.3 GW at a share of 80% to +14.9 GW at a share of 100%. For onshore wind, we perceive an opposite trend. With increasing renewable share, DLR builds progressively less capacity resulting in a 39 GW reduction. The difference in solar capacities for the DLR first decreases from -23 GW at 80% to -14 GW at 95% compared to the SLR model. Here, the DLR model sees an advantage in integrating more solar into the grid. However, with the strong increase of offshore wind power in the DLR model at a share of 100% renewables, less solar capacity is needed again with -23.8 GW. Regarding fossil generators, DLR builds slightly more capacity with +1.68 GW at 80% to +2.2 GW at 95% production share. However, even though more capacities
are built in the 95% case, the fossil generators of the DLR model generate - 40 GWh less electricity. This indicates that the DLR model only builds more fossil generator capacities because it requires a higher fossil power output for single hours with scarce wind resources. For the 100% renewable share case, in both SLR and DLR model the fossil generation capacity drops to zero due to the full decarbonization of the model without considering carbon capture technologies.
At this point, we may conclude that with increasing share of renewables, the DLR implementation allows trading a relatively small increase in offshore wind power against a high decrease of onshore wind and solar PV capacity. When looking at the storage infrastructure, we see that in general DLR needs less storage capacity than SLR. For battery infrastructure, DLR builds - 45 GWh less capacity at 80% share. However, with increasing renewable share DLR gradually builds more battery capacity which finally leads to a battery capacity increase of +61 GWh at 100% share. As for the hydrogen infrastructure, DLR requires a smaller expansion for all renewable energy shares above 80%, with the difference increasing as the share increases, to -2.35 TWh storage capacity at 100% renewable share.
Figure 10: Change in expanded generation and storage capacity for different shares of renewable production.
In Fig. 11, the cost and grid congestion difference is depicted when going from 80 to 100% renewable share. The cost savings from DLR compared to SLR increase from 1.2 bnEUR/yr at 80% renewable share to 3.9 bnEUR/yr at 100% renewable share, underlying benefits of DLR with higher degree of decarbonization. In terms of numbers of congested lines, DLR similarly shows an increasing benefit in congestion relief with higher renewable shares. Here the relation between renewable share and relieved line congestions is not as steady. From 80 to 95% the benefit of DLR compared to SLR on line congestion relief decrease starting at 2.7 and going up to 3.1 relieved line congestions at 80 and 95%. However, from 95 to 100% renewable share there is a clear increase in congestion relief, with 3.8 relieved lines for DLR compared to SLR at 100% renewable share. One explanation for this is the missing fossil generation fleet which could still be used at 95% renewable share to circumvent transmission bottlenecks.
In the following, we take a closer look at the fully decarbonized power system scenario. Table 3 shows the total capacities for the DLR and the SLR model at 100% renewable share. In addition to Fig. 10, Table 3 puts the total capacities into perspective and compares them to the expansion plans of the German government. It shows that both the DLR and the SLR model tend to build less offshore wind and solar and more onshore wind capacities than planned by the German government.
In addition, the total electricity generation difference between DLR
Figure 11: The top figure shows the cost change from SLR to DLR for different shares of renewable production. The bottom figure depicts the change in averaged congested lines per hour.
and SLR is illustrated in Fig. 12. We see how additional offshore wind generation in the DLR model substitutes solar and onshore wind generation. Despite less renewable capacity in the DLR model (see Table 3), better integration and higher capacity factors of wind power allow for a stable energy supply. In total the DLR model saves 14 TWh in energy production due to reduced storage utilization and associated conversion losses. This shows the advantage of investments in DLR over investments in storage infrastructure in power system encountering grid bottlenecks. To sum up, DLR enhances the grid such that the power system can better integrate
\begin{table}
\begin{tabular}{l r r r r r} & Plan & SLR & DLR & \(\Delta\) & Unit \\ \hline Offshore Wind & 40 & 17.8 & 32.7 & 14.9 & GW \\ Onshore Wind & 157 & 215.3 & 176.2 & –39 & GW \\ Solar & 309 & 244.1 & 220.2 & –23.9 & GW \\ Battery Discharge & - & 23.7 & 20.5 & –3.2 & GW \\ Battery Storage & - & 176.6 & 237.6 & 61.1 & GWh \\ Hydrogen Electrolysis & - & 75.7 & 61.5 & –14.2 & GW \\ Hydrogen Fuel Cell & - & 77.4 & 74.8 & –2.6 & GW \\ Hydrogen Storage & - & 15.5 & 13.1 & –2.4 & TWh \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of a selection of total capacity planned by the German government in 2035 [21; 22] and from the optimized 2035 scenario with a 100% renewable share with Static Line Rating and Dynamic Line Rating. The spatial distribution of capacities in the SLR and DLR model can be found in Fig. A.3
Figure 12: The figure shows the total energy difference when going from SLR to DLR. Negative values mean less generation in the DLR model.
offshore wind while requiring less power generation and storage backup capacities.
## 4 Limitations
The presented power system model reveals a high spatial resolution as well as detailed information about installed capacities and renewable potentials. However, it is not able to represent all the power system dynamics.
The model is isolated from other countries, neglecting cross-border exchanges. In particular, missing cross-border lines lead to an overestimated storage infrastructure capacity expansion and curtailment. Particularly high curtailment can be seen in the north-west of Germany at the substations "Diele & Dorpen West", where much of the curtailed wind power could be exported to the Netherlands if considering cross-borer flows. Another limitation is the modelling of the conventional fleet. We do not consider ramping constraints for coal and gas power plants. Therefore, in our model the operation of coal and gas power plants is more flexible compared to reality. Furthermore, the optimization of the model is kept linear, neglecting power transmission losses as well as complex power flow dynamics. Note that the losses rise quadratically with current and DLR leads to high line loading, and therefore higher transmission losses. For this reason, we run a nonlinear power flow calculation based on the linearly optimized generation dispatch to compare transmission losses of SLR and DLR. We calculate the losses for the models with 100% renewable share assuming they have the highest losses.
The losses in the SLR model account for 1.7% of the total transmission while for the DLR model they account for 3.6%. In absolute numbers the losses represent 166 and 60 TWh for the DLR and SLR model, respectively. When looking at the complex power flow dynamics, the nonlinear power flow calculation converged for 99.99% of all time steps in all regarded scenarios.
## 5 Conclusion
In this paper, we evaluate the effect of DLR on multiple study scenarios. In the first setup, we consider the German power system in a historical scenario for the year 2019 to assess the effect of DLR on the operation of the existing grid. In the second scenario, we model the year 2030 enforcing an 80% renewable share throughout the simulation year in order to assess
the effect of DLR on optimal infrastructure planning. In the third scenario, we extend the second scenario and model the transition from 2030-2035 by gradually varying the renewable share from 80-100%. We show that DLR offers large potentials for system cost savings as well as considerable benefits for transmission system operation.
In the 2019 scenario, we observe operational cost savings of 583 mEUR/yr at a gas price of 27 EUR/MWh through DLR due to better wind power integration and a shift away from fossil energy carriers. The 2030 study case with 80% renewable share shows that an implementation of DLR together with an optimal deployment of renewable generation saves 1.15 bnEUR of annual capital and operational system costs. Compared to the SLR model, the DLR model reduces the need for battery storage, which must be widely deployed, by one third. This reduction is mainly due to a better integration and exploitation of wind power and less transmission congestion. These findings are robust against network security constraints, clipping the maximum power transmission on lines to 130-200% of the nominal capacity. Already with a rather conservative DLR capacity limit of 130%, system cost savings sum up to 1 bnEUR/yr. The benefits increase further when raising the line capacity limit of DLR.
With increasing share in renewable production from 80-100%, we see that the DLR model gradually deploys more offshore wind as well as less onshore wind and solar capacities than the SLR model. At 100% renewable share, the implementation of DLR clearly shifts the optimal production from onshore wind and solar generation to offshore wind power leading to steady power feed-ins and therefore less storage requirements (2.2 TWh). By reducing the operation of storage technologies and their associated losses, the system saves around 14 TWh in energy production and 3.9bnEUR/yr in total cost. In that regard, the estimated costs of a system-wide implementation of DLR of roughly 80mEUR/yr are negligible, assuming DLR cost of 80kEUR per line km, 8% interest rate, 30 years lifetime.
We conclude that given the urgent need for decarbonizing the German power system, DLR is a viable complement to transmission capacity expansion, not only increasing the total welfare through a non-invasive measure but also reducing grid congestion.
## Acknowledgement
We thank Tom Brown for suggesting the project and Rena Kuwahata from Ampacimon for fruitful discussions. This paper was conducted partly for the CoNDyNet2 project, which was supported by the German Federal Ministry of Education and Research under grant number 03EK3055E. Furthermore, we thank Breakthrough Energy for partially funding this work in the project "Hydrogen Integration and Carbon Management in Energy System Models".
## Appendix A
### Study Cases Parameters
\begin{tabular}{l l l} \hline & 2019 & 2030 - 2035 \\ \hline
**Generator capacity** & & \\
**expansion** & no & yes \\
**CO2 Limit** & 222 mil tons & - \\
**CO2 Price** & - & 120 EUR/t \\
**Gas price** & 27EUR/MWh & 27EUR/MWh \\
**Base load of nuclear** & & \\
**and lignite** & yes & no \\
**Renewable generation** & & \\
**constraint** & no & 80\% & - 100\% \\ & & renewable \\ & & production share \\
**Generator infrastructure** & historic capacities & capacities & capacities of 2022 with \\ & for 2019 & 2022 with \\ & & decommissioning \\ & & dates later then 2030 \\
**Electrical load** & 605 TWh & 658 TWh \\ \hline \end{tabular}
### DLR Factor
In the following, we illustrate why the hourly averaged wind speed overestimates the DLR transmission capacity compared to higher resolved data. According to the IEEE standard [33],
\[P_{max}\varpropto I_{max}\underset{\sim}{\propto}\qquad\overset{\propto}{\sim} \sqrt{\overline{v}_{wind}}\]
, where \(\overline{v}_{wind}\) denotes the averaged hourly wind speed. In the following equation,
\[\sqrt{\overline{v}_{wind}}=\sqrt{\frac{1}{n}\sum_{i}^{n}v_{wind,i}}\geq\frac{ 1}{n}\sum_{i}^{n}\sqrt{v_{wind_{i}}}\]
\(v_{wind,i}\) corresponds to an sub-hourly wind speed data point going from \(i\) to \(n\), i.e. from 1 to 6 when regarding 10-minute wind speed data. The Equation shows that the root of the averaged sub-hourly wind speed is greater equal the average of the rooted sub-hourly wind speed.
## Appendix A.3 Scenario Plots
Figure A.2: Difference in capital and operational expenditures when going from SLR to DLR for different renewable shares. Positive costs represent higher expenses of DLR.
Figure A.3: Optimized capacities for the 2035 scenario with 100% renewable share. |
2307.10874 | Energy-consistent discretization of viscous dissipation with application
to natural convection flow | A new energy-consistent discretization of the viscous dissipation function in
incompressible flows is proposed. It is implied by choosing a discretization of
the diffusive terms and a discretization of the local kinetic energy equation
and by requiring that continuous identities like the product rule are mimicked
discretely. The proposed viscous dissipation function has a quadratic, strictly
dissipative form, for both simplified (constant viscosity) stress tensors and
general stress tensors. The proposed expression is not only useful in
evaluating energy budgets in turbulent flows, but also in natural convection
flows, where it appears in the internal energy equation and is responsible for
viscous heating. The viscous dissipation function is such that a consistent
total energy balance is obtained: the 'implied' presence as sink in the kinetic
energy equation is exactly balanced by explicitly adding it as source term in
the internal energy equation.
Numerical experiments of Rayleigh-B\'enard convection (RBC) and
Rayleigh-Taylor instabilities confirm that with the proposed dissipation
function, the energy exchange between kinetic and internal energy is exactly
preserved. The experiments show furthermore that viscous dissipation does not
affect the critical Rayleigh number at which instabilities form, but it does
significantly impact the development of instabilities once they occur.
Consequently, the value of the Nusselt number on the cold plate becomes larger
than on the hot plate, with the difference increasing with increasing Gebhart
number. Finally, 3D simulations of turbulent RBC show that energy balances are
exactly satisfied even for very coarse grids; therefore, we consider that the
proposed discretization forms an excellent starting point for testing sub-grid
scale models. | Benjamin Sanderse, Francesc Xavier Trias | 2023-07-20T13:46:27Z | http://arxiv.org/abs/2307.10874v1 | # Energy-consistent discretization of viscous dissipation with application to natural convection flow
###### Abstract
A new energy-consistent discretization of the viscous dissipation function in incompressible flows is proposed. It is _implied_ by choosing a discretization of the diffusive terms and a discretization of the local kinetic energy equation and by requiring that continuous identities like the product rule are mimicked discretely. The proposed viscous dissipation function has a quadratic, strictly dissipative form, for both simplified (constant viscosity) stress tensors and general stress tensors. The proposed expression is not only useful in evaluating energy budgets in turbulent flows, but also in natural convection flows, where it appears in the internal energy equation and is responsible for viscous heating. The viscous dissipation function is such that a _consistent total energy balance_ is obtained: the 'implied' presence as sink in the kinetic energy equation is exactly balanced by explicitly adding it as source term in the internal energy equation.
Numerical experiments of Rayleigh-Benard convection (RBC) and Rayleigh-Taylor instabilities confirm that with the proposed dissipation function, the energy exchange between kinetic and internal energy is exactly preserved. The experiments show furthermore that viscous dissipation does not affect the critical Rayleigh number at which instabilities form, but it does significantly impact the development of instabilities once they occur. Consequently, the value of the Nusselt number on the cold plate becomes larger than on the hot plate, with the difference increasing with increasing Gebhart number. Finally, 3D simulations of turbulent RBC show that energy balances are exactly satisfied even for very coarse grids; therefore, we consider that the proposed discretization forms an excellent starting point for testing sub-grid scale models.
keywords: viscous dissipation, energy conservation, staggered grid, natural convection, Rayleigh-Benard, Gebhart number
## 1 Introduction and problem description
In this article we study the viscous dissipation function and its role in natural convection flows described by the incompressible Navier-Stokes equations, with buoyancy effects modelled by the Boussinesq approximation [1]. These 'Boussinesq' or 'Oberbeck-Boussinesq' equations have attracted much scientific interest over several decades [2], not only because of their physical relevance, but also of their intriguing mathematical properties. An important test case studied with the Boussinesq system is that of Rayleigh-Benard convection [3], in which a box of fluid is heated from the bottom and cooled from the top, giving rise to convection cells. The Boussinesq equations also describe a (miscible) form of Rayleigh-Taylor instability, which occurs when a heavy (cold) fluid is positioned above a light (warm) fluid.
A common assumption in many incompressible natural convection studies is that the effect of viscous dissipation on the internal energy (effectively on the temperature) is neglected. This assumption is not always valid, for example when considering natural convection in the Earth mantle, when considering highly viscous liquids, when large length scales are involved, or in devices operating at high rotational speed [4; 5; 6; 7; 8; 9; 10; 11]. Of course, when considering compressible flows, e.g. high-speed flows, including heating by viscous dissipation is known to be important, and several benchmarking studies have been performed related to modelling natural convection in the Earth mantle [12; 13]. These studies typically assume infinite Prandtl numbers, and ignore the unsteady and convective terms in the momentum equations. In this paper we will restrict ourselves to the
incompressible situation, for which this effect is less studied. In this incompressible case, Ostrach [11], Gebhart [10] and Turcotte et al. [9] should be explicitly mentioned, being among the first to address the role of viscous dissipation and to introduce next to the well-known Rayleigh and Prandtl numbers another dimensionless quantity, which is known as the dissipation number or the Gebhart number. In addition, we mention the work of Barletta and co-authors [14; 15; 16; 17], who considered the role of viscous dissipation in natural convection in several papers, studying the correct mathematical formulation of the problem and linear stability analysis for different geometries. Turcotte et al. [9] were probably one of the first to perform numerical experiments of incompressible natural convection flows that include viscous dissipation. They performed simulations on coarse grids (\(10\times 10\)) and low Rayleigh numbers (\(\mathrm{Ra}=10^{4},10^{5}\)) for different values of the dissipation number and concluded that Rayleigh-Benard convection was significantly affected when the dissipation number was of order unity.
From an energy perspective, the viscous dissipation source term in the internal energy equation occurs as a sink in the kinetic energy equation, which cancel each other when considering the total energy equation. However, most energy analyses, especially for incompressible flow, focus on the role of the potential energy term and its split into available and background potential energy [18; 19; 20], or on the kinetic energy budget [21]. To the author's knowledge, the role of viscous dissipation in the kinetic energy equation and its numerical treatment for the internal energy equation have not been explored in detail.
In this paper, the main novelty is that we propose a discretization of the viscous dissipation function and apply it to the context of natural convection flow, where it appears as a source term in the internal energy equation. Our discretization is such that we get a correct global energy balance, on continuous, semi-discrete, and fully discrete level. First, on the continuous level, a non-dimensionalization is proposed that makes the internal and kinetic energy scaling consistent. Second, on the semi-discrete level, we propose a discrete dissipation operator, and show that it cannot be chosen freely but is _implied_ by the discretization of the viscous terms in the momentum equations and by the definition of the kinetic energy. This discrete dissipation operator is not only of use in the internal energy equation, but also useful beyond the context of natural convection flows, e.g. when estimating the dissipation of kinetic energy in turbulent flows in a numerical simulation. Third, on the fully discrete level, we propose a time integration method that preserves the total energy balance upon time marching.
The paper is structured as follows. Section 2 introduces the governing equations, energy balances, and new non-dimensionalization. Sections 3 and 4 describe the energy-consistent spatial and temporal discretization. Section 5 describes steady-state results of Rayleigh-Benard convection including viscous dissipation, and section 6 describes energy-conserving simulations of Rayleigh-Taylor instabilities including viscous dissipation. Section 7 shows the effect of viscous dissipation in 3D DNS of Rayleigh-Benard convection.
## 2 Energy-conserving formulation
### Governing equations
The Boussinesq approximation states that density variations are small and can be ignored in all terms of the Navier-Stokes (NS) equations, except in the one pertaining to the gravity term. The NS equations describing conservation of mass and momentum then read
\[\nabla\cdot\mathbf{u} =0, \tag{1}\] \[\rho_{0}\left(\frac{\partial\mathbf{u}}{\partial t}+\nabla\cdot(\mathbf{u }\otimes\mathbf{u})\right) =-\nabla p+\mu\nabla^{2}\mathbf{u}+\rho\mathbf{g}, \tag{2}\]
where \(\mathbf{u}(\mathbf{x},t)\) is the velocity field, \(p(\mathbf{x},t)\) the pressure, \(\mu\) the dynamic viscosity, \(\rho(\mathbf{x},t)\) the density and \(\rho_{0}\) a reference density. Without loss of generality, we consider a two-dimensional domain \(\Omega\), with the gravity vector pointing in the negative \(y\)-direction so that \(\mathbf{g}=-g\mathbf{e}_{y}\). An example of the domain as used in the Rayleigh-Benard problem, including the boundary conditions, is given in figure 1. In the results section we will also consider the Rayleigh-Taylor problem, which has adiabatic boundaries on top and bottom, instead of isothermal as in case of Rayleigh-Benard.
The density \(\rho\) is assumed to vary only with temperature \(T(\mathbf{x},t)\), according to \(\rho(T)=\rho_{0}-\beta\rho_{0}(T-T_{0})\), where \(\beta\) is the isobaric coefficient of thermal expansion (\(\beta=-\frac{1}{\rho}\left(\frac{\delta\rho}{\delta T}\right)_{p}\)). The NS equations are then written as
\[\rho_{0}\left(\frac{\partial\mathbf{u}}{\partial t}+\nabla\cdot(\mathbf{u}\otimes\mathbf{u })\right)=-\nabla p^{\prime}+\mu\nabla^{2}\mathbf{u}-\beta\rho_{0}(T-T_{0})\mathbf{g}, \tag{3}\]
where \(p=p^{\prime}-\rho_{0}\mathrm{g}y\) and \(\nabla p=\nabla p^{\prime}-\rho_{0}\mathrm{g}\mathbf{e}_{y}\).
The equation for the internal energy \(\epsilon_{i}\) describes the temperature evolution according to
\[\frac{\hat{\sigma}}{\partial t}\underbrace{(\rho_{0}cT)}_{\epsilon_{i}}+ \nabla\cdot(\mathbf{u}(\rho_{0}cT))=\Phi+\lambda\nabla^{2}T, \tag{4}\]
where \(\lambda\) is the thermal conductivity and \(c\) equals \(c_{v}\) in case of an ideal gas (the specific heat at constant volume), and equals \(c_{p}-\frac{\partial\hat{\rho}}{\rho}\) for a real gas [22]). The contribution of pressure work to the change in internal energy, \(p\nabla\cdot\mathbf{u}\), has been discarded in equation (3) because of equation (1). The viscous dissipation function
\[\Phi\,:=\mu\|\nabla\mathbf{u}\|^{2}\geq 0, \tag{5}\]
is the key quantity in this work, where \(\|\nabla\mathbf{u}\|^{2}=\nabla\mathbf{u}\,:\,\nabla\mathbf{u}\) (the Frobenius inner product). In 2D and Cartesian coordinates it can be written as
\[\Phi=\mu\left[\left(\frac{\partial u}{\partial x}\right)^{2}+\left(\frac{ \partial u}{\partial y}\right)^{2}+\left(\frac{\partial v}{\partial x}\right) ^{2}+\left(\frac{\partial v}{\partial y}\right)^{2}\right]. \tag{6}\]
**Remark 1**.: _The viscous dissipation expression (5) is only valid if the diffusive terms are written as \(\nabla\cdot\mathbf{\tau}=\mu\nabla^{2}\mathbf{u}\), with \(\mathbf{\tau}=\nabla\mathbf{u}\). This simplified form follows from the more general stress tensor expression_
\[\nabla\cdot\mathbf{\hat{\tau}},\qquad\mathbf{\hat{\tau}}=\mu(\nabla\mathbf{u}+(\nabla\mathbf{ u})^{T}), \tag{7}\]
_by assuming constant \(\mu\) and incompressibility, so that the identity \(\nabla\cdot(\mu\nabla\mathbf{u})^{T}=\mu\nabla(\nabla\cdot\mathbf{u})=0\) holds. For the more general stress tensor \(\hat{\tau}\) we have instead_
\[\dot{\Phi}=\hat{\tau}\,:\,\nabla\mathbf{u}=\mu\left[2\left(\frac{\partial u}{ \partial x}\right)^{2}+2\left(\frac{\partial v}{\partial y}\right)^{2}+\left( \frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)^{2}\right], \tag{8}\]
Figure 1: Problem set-up for Rayleigh-Bénard convection.
which is notably different from expression (6). The dissipation function is therefore linked to the form of the stress tensor used in the momentum equations. In this work we mainly use expression (6), but we will also explain the extension to the more general form (8), see B, equation (B.21). For an alternative approach, see also [23].
### Total energy conservation
Conservation of kinetic energy follows by taking the dot product of equation (3) with \(\mathbf{u}\):
\[\frac{\partial}{\partial t}(\underbrace{\frac{1}{2}\rho_{0}|\mathbf{u}|^{2}}_{e_{ k}})+\nabla\cdot(\frac{1}{2}\rho_{0}|\mathbf{u}|^{2}\mathbf{u})=-\mathbf{u}\cdot\nabla p^{ \prime}+\mu\nabla\cdot(\mathbf{u}\cdot\nabla\mathbf{u})-\mu\|\nabla\mathbf{u}\|^{2}+\beta g \rho_{0}(T-T_{0})\upsilon, \tag{9}\]
where \(\mathbf{g}\cdot\mathbf{u}=-g\upsilon\) and we have used the identity
\[\mathbf{u}\cdot\nabla^{2}\mathbf{u}=-\|\nabla\mathbf{u}\|^{2}+\nabla\cdot(\mathbf{u}\cdot \nabla\mathbf{u}). \tag{10}\]
Upon adding the kinetic and internal energy equations (4) and (9), the viscous dissipation term cancels and we arrive at the equation for the total energy \(e=e_{k}+e_{i}\):
\[\frac{\partial}{\partial t}(e_{k}+e_{i})+\nabla\cdot((e_{k}+e_{i})\mathbf{u})=- \nabla\cdot(p^{\prime}\mathbf{u})+\mu\nabla\cdot(\mathbf{u}\cdot\nabla\mathbf{u})+\beta g \rho_{0}(T-T_{0})\upsilon+\lambda\nabla^{2}T. \tag{11}\]
All terms are in conservative (divergence) form, except the potential energy term. Upon integrating over the domain \(\Omega\) and assuming no-slip conditions \(\mathbf{u}=\mathbf{0}\) on all boundaries, we obtain the global balances
\[\frac{\mathrm{d}E_{k}}{\mathrm{d}t} =-\int_{\Omega}\Phi\mathrm{d}\Omega+\int_{\Omega}\beta g\rho_{0}( T-T_{0})\upsilon\mathrm{d}\Omega, \tag{12}\] \[\frac{\mathrm{d}E_{i}}{\mathrm{d}t} =\int_{\Omega}\Phi\mathrm{d}\Omega+\int_{\partial\Omega}\lambda VT \cdot\mathbf{n}\,\mathrm{d}S,\] (13) \[\frac{\mathrm{d}E}{\mathrm{d}t} =\frac{\mathrm{d}E_{k}}{\mathrm{d}t}+\frac{\mathrm{d}E_{i}}{ \mathrm{d}t}=\int_{\Omega}\beta g\rho_{0}(T-T_{0})\upsilon\mathrm{d}\Omega+ \int_{\partial\Omega}\lambda VT\cdot\mathbf{n}\,\mathrm{d}S, \tag{14}\]
where \(E=\int_{\Omega}e\,\mathrm{d}\Omega=E_{k}+E_{i}\). In case the boundary conditions are adiabatic (\(\nabla T\cdot\mathbf{n}=0\)), the last term in (14) vanishes and the total energy equation expresses that the sum of internal and kinetic energy changes due to the buoyancy flux \(\int_{\Omega}\beta g\rho_{0}(T-T_{0})\upsilon\,\mathrm{d}\Omega\) - this case will be dealt with in the Rayleigh-Taylor set-up in section 6.
In most studies of Rayleigh-Benard convection the dissipation function \(\Phi\) is left out from the internal energy equation (4), while its corresponding counterpart in the momentum equation (\(\mu\nabla^{2}\mathbf{u}\)) is still included. As a consequence, the energy lost in the kinetic energy equation is not balanced by the heat generated in the internal energy equation, so that the total energy equation features a dissipation term, which destroys the global energy balance.
**Remark 2**.: _Equations (12) and (14) feature the buoyancy flux \(f\,\beta g\rho_{0}(T-T_{0})\upsilon\mathrm{d}\Omega\) (stemming from the term \(f\,\rho\mathbf{g}\cdot\mathbf{u}\mathrm{d}\Omega\)). In general compressible fluids, i.e. those that satisfy_
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{u})=0, \tag{15}\]
_one can show that the buoyancy flux can be written as the time derivative of the potential energy \(E_{p}=f\,\rho\mathrm{g}\mathrm{yd}\Omega\) (see [24], section 6.4.2; [7], section 3.8). In that case, one could define \(\dot{E}=E_{k}+E_{i}+E_{p}\) and have a total energy conservation statement of the form [25]:_
\[\frac{\mathrm{d}\dot{E}}{\mathrm{d}t}=\int_{\partial\Omega}\lambda\nabla T \cdot\mathbf{n}\,\mathrm{d}S. \tag{16}\]
_However, in Boussinesq fluids, equation (15) is not satisfied; instead we have_
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{u})=\frac{\partial\rho}{ \partial t}+(\mathbf{u}\cdot\nabla)\rho+\underbrace{\rho\nabla\cdot\mathbf{u}}_{=0}=- \rho_{0}\beta\left[\frac{\partial T}{\partial t}+(\mathbf{u}\cdot\nabla)T\right]. \tag{17}\]
_The right-hand side can be written in terms of the sum of thermal diffusion and viscous dissipation, see equation (4), and is generally nonzero. As a consequence, the time derivative of the potential energy includes not only the buoyancy flux, but also additional terms [18; 19]. Therefore, for Boussinesq fluids additional terms appear in the right-hand side of equation (16) (independent of whether viscous dissipation is included in the internal energy equation). In this paper, the meaning 'energy-consistent' thus refers to the exchange between internal and kinetic energy, and not to the total energy (kinetic + potential + internal), which is not conserved under the Boussinesq approximation._
### Non-dimensionalization
The study of the Rayleigh-Benard convection problem is simplified by introducing dimensionless quantities. As explained in [4], p. 46, three dimensionless groups (or'similarity parameters') are needed to fully describe the problem. An important question that we address here is how the choice of non-dimensionalization changes the total energy equation.
We non-dimensionalize equations (1), (2) and (4) by taking a reference length \(H\) (cavity height), a reference temperature difference \(\Delta T\) (difference between the cold and hot plates), and a reference velocity yet to be specified. From these choices we find the time scale \(H/u_{\rm ref}\) and the pressure scale \(\rho_{0}u_{\rm ref}^{2}\). The non-dimensional quantities are thus
\[\bar{\mathbf{x}}=\frac{\mathbf{x}}{H},\qquad\qquad\tilde{t}=\frac{tu_{\rm ref}}{H}, \qquad\qquad\tilde{\mathbf{u}}=\frac{\mathbf{u}}{u_{\rm ref}},\qquad\qquad\tilde{T}= \frac{T-T_{0}}{\Delta T},\qquad\qquad\tilde{p}^{\prime}=\frac{p^{\prime}}{ \rho_{0}u_{\rm ref}^{2}}, \tag{18}\]
and the non-dimensional equations read
\[\tilde{\nabla}\cdot\hat{\mathbf{u}} =0, \tag{19}\] \[\frac{\partial\hat{\mathbf{u}}}{\partial\tilde{t}}+\tilde{\nabla} \cdot(\hat{\mathbf{u}}\otimes\hat{\mathbf{u}}) =-\tilde{\nabla}\tilde{p}^{\prime}+\frac{\mu}{\rho_{0}u_{\rm ref} H}\tilde{\nabla}^{2}\hat{\mathbf{u}}+\frac{\beta g\Delta TH}{u_{\rm ref}^{2}}\tilde{T}\mathbf{e}_{y},\] (20) \[\frac{\partial\hat{T}}{\partial\tilde{t}}+\tilde{\nabla}\cdot( \hat{\mathbf{u}}\hat{T}) =\frac{\nu u_{\rm ref}}{cH\Delta T}\Phi+\frac{\kappa}{u_{\rm ref }H}\tilde{\nabla}^{2}\hat{T}, \tag{21}\]
where \(\nu=\mu/\rho_{0}\) and \(\kappa=\lambda/(\rho_{0}c)\). The two latter equations are re-written by introducing the parameters \(\alpha_{i}\), \(i=1\,...\,4\), as
\[\frac{\partial\hat{\mathbf{u}}}{\partial\tilde{t}}+\tilde{\nabla} \cdot(\hat{\mathbf{u}}\otimes\hat{\mathbf{u}}) =-\tilde{\nabla}\tilde{p}^{\prime}+\alpha_{1}\tilde{\nabla}^{2} \hat{\mathbf{u}}+\alpha_{2}\tilde{T}\mathbf{e}_{y}, \tag{22}\] \[\frac{\partial\hat{T}}{\partial\tilde{t}}+\tilde{\nabla}\cdot( \hat{\mathbf{u}}\hat{T}) =\alpha_{3}\Phi+\alpha_{4}\tilde{\nabla}^{2}\hat{T}. \tag{23}\]
The \(\alpha_{i}\)'s can be expressed in terms of three dimensionless numbers, being the Rayleigh number \(\rm Ra\), the Prandtl number \(\rm Pr\) and the Gebhart number \(\rm Ge\) (also known as the dissipation number [6]):
\[\rm Ra =\frac{\beta g\Delta TH^{3}}{\nu\kappa}, \tag{24}\] \[\rm Pr =\frac{\nu}{\kappa},\] (25) \[\rm Ge =\frac{\beta gH}{c}. \tag{26}\]
Alternatively, one can employ the Grashof number \(\rm Gr=Ra/\,Pr\)[4]. In table 1 we present three different options for \(u_{\rm ref}\) with the corresponding values of \(\alpha\). Choices I and II are common in literature, see for example [26] for choice I and [15; 1; 27] for choice II. Other choices are also possible, e.g. \(u_{\rm ref}=\beta g\Delta TH^{2}/\nu\)[5], but this choice does not lead to a 'clean' expression in terms of the dimensionless numbers defined above. To our best knowledge, choice III is new and inspired by the form of the total energy equation, as we will explain below.
It is important to realize that the time scales and the velocity fields corresponding to numerical simulations with choices I, II and III are different. The time scales are related as \(\frac{\tilde{t}_{I}}{u_{\rm ref,I}}=\frac{\tilde{t}_{II}}{u_{\rm ref,II}}=\frac {\tilde{t}_{III}}{u_{\rm ref,III}}\), so \(\tilde{t}_{III}=\tilde{t}_{I}/\sqrt{\rm Ge}\)
and \(\tilde{I}_{III}=\tilde{I}_{II}\sqrt{\mathrm{Ra}\,\mathrm{Pr}}/\sqrt{\mathrm{Ge}}\). The velocity fields are related as \(\tilde{\mathbf{u}}_{I}u_{\mathrm{ref},I}=\tilde{\mathbf{u}}_{II}u_{\mathrm{ref},II}= \tilde{\mathbf{u}}_{III}u_{\mathrm{ref},III}\), so that \(\tilde{\mathbf{u}}_{III}=\tilde{\mathbf{u}}_{I}\sqrt{\mathrm{Ge}}\), and \(\tilde{\mathbf{u}}_{III}=\tilde{\mathbf{u}}_{II}\sqrt{\mathrm{Ge}}/\sqrt{\mathrm{Ra} \,\mathrm{Pr}}\). On the other hand, the temperature fields corresponding to each choice are equivalent, and consequently the Nusselt numbers are the same.
To obtain the non-dimensional form of the total energy equation we take the dot product of (22) with \(\tilde{\mathbf{u}}\) and add the internal energy equation (23). In order for the dissipation function of the kinetic energy equation to cancel with the internal energy equation, we require \(\alpha_{1}=\alpha_{3}\). This requirement is satisfied by \(u_{\mathrm{ref}}=\sqrt{c\Delta T}\), i.e. our proposed choice III in table 1. For the other choices (I and II), a weighting of the kinetic and internal energy equations is needed in order to cancel the dissipation function in the non-dimensional total energy equation. The weighting factor depends on the definition of the non-dimensional total energy. First define the dimensionless kinetic and internal energy as
\[\tilde{e}_{k} :=\frac{e_{k}}{\rho_{0}u_{\mathrm{ref}}^{2}}=\frac{\frac{1}{2} \rho_{0}|\mathbf{u}|^{2}}{\rho_{0}u_{\mathrm{ref}}^{2}}=\frac{\frac{1}{2}\rho_{0} u_{\mathrm{ref}}^{2}|\tilde{\mathbf{u}}|^{2}}{\rho_{0}u_{\mathrm{ref}}^{2}}=\frac{1}{2}| \tilde{\mathbf{u}}|^{2}, \tag{27}\] \[\tilde{e}_{i} :=\frac{e_{i}}{\rho_{0}c\Delta T}=\frac{\rho_{0}cT}{\rho_{0}c \Delta T}=\frac{\rho_{0}c\Delta T(\tilde{T}+T_{0}/\Delta T)}{\rho_{0}c\Delta T }=(\tilde{T}+T_{0}/\Delta T), \tag{28}\]
so that
\[e=e_{k}+e_{l}=\rho_{0}u_{\mathrm{ref}}^{2}\tilde{e}_{k}+\rho_{0}c\Delta T \tilde{e}_{l}=\rho_{0}u_{\mathrm{ref}}^{2}\left(\tilde{e}_{k}+\frac{c\Delta T }{u_{\mathrm{ref}}^{2}}\tilde{e}_{l}\right). \tag{29}\]
By _choosing_ the non-dimensional total energy as \(\tilde{e}=e/\rho_{0}u_{\mathrm{ref}}^{2}\), we obtain
\[\tilde{e}=\tilde{e}_{k}+\frac{c\Delta T}{u_{\mathrm{ref}}^{2}}\tilde{e}_{l}= \tilde{e}_{k}+\frac{\alpha_{1}}{\alpha_{3}}\tilde{e}_{i}=\tilde{e}_{k}+\gamma \tilde{e}_{i}. \tag{30}\]
Here \(\gamma=\frac{\alpha_{1}}{\alpha_{3}}\) is the weighting factor, which is reported in table 1 for different choices of \(u_{\mathrm{ref}}\). The global energy balances in non-dimensional form read
\[\frac{\mathrm{d}\tilde{E}_{k}}{\mathrm{d}\tilde{t}} =-\frac{\alpha_{1}}{\Lambda}\int_{\tilde{\Omega}}\Phi\,\mathrm{d} \tilde{\Omega}+\frac{\alpha_{2}}{\Lambda}\int_{\tilde{\Omega}}\tilde{T} \tilde{v}\,\mathrm{d}\tilde{\Omega}, \tag{31}\] \[\frac{\mathrm{d}\tilde{E}_{i}}{\mathrm{d}\tilde{t}} =\frac{\alpha_{3}}{\Lambda}\int_{\tilde{\Omega}}\Phi\,\mathrm{d} \tilde{\Omega}+\frac{\alpha_{4}}{\Lambda}\int_{\tilde{\Omega}}\tilde{\nabla} \tilde{T}\cdot\mathbf{n}\,\mathrm{d}\tilde{S},\] (32) \[\frac{\mathrm{d}\tilde{E}}{\mathrm{d}\tilde{t}} =\frac{\mathrm{d}\tilde{E}_{k}}{\mathrm{d}t}+\gamma\frac{\mathrm{ d}\tilde{E}_{i}}{\mathrm{d}t}=\frac{\alpha_{2}}{\Lambda}\int_{\tilde{\Omega}} \tilde{T}\tilde{v}\,\mathrm{d}\tilde{\Omega}+\frac{\gamma\alpha_{4}}{\Lambda} \int_{\tilde{\Omega}}\tilde{\nabla}\tilde{T}\cdot\mathbf{n}\,\mathrm{d}\tilde{S}, \tag{33}\]
where we define \(\tilde{E}=\frac{1}{\Lambda}\int_{\tilde{\Omega}}\tilde{e}\,\mathrm{d}\tilde{\Omega}\), and \(\Lambda=L/H\) is the aspect ratio of the box.
The choice for a particular reference velocity typically depends on the problem at hand. Choices I and II have the advantage that in case of \(\mathrm{Ge}=0\) (most commonly investigated in literature), one obtains \(\alpha_{3}=0\) and the dissipation terms simply drops from the internal energy equation. However, when \(\mathrm{Ge}\) is small but nonzero, the
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \(u_{\mathrm{ref}}\) & \(\alpha_{1}=\frac{\nu}{u_{\mathrm{ref}}H}\) & \(\alpha_{2}=\frac{ggATH}{u_{\mathrm{ref}}^{2}}\) & \(\alpha_{3}=\frac{\nu u_{\mathrm{ref}}}{c\Delta TH}\) & \(\alpha_{4}=\frac{\nu}{u_{\mathrm{ref}}H}\) & \(\gamma=\frac{\alpha_{1}}{\alpha_{3}}\) \\ \hline I & \(\sqrt{gg\Delta TH}\) & \(\sqrt{\frac{\mathrm{Pr}}{\mathrm{Ra}}}\) & \(1\) & \(\mathrm{Ge}\sqrt{\frac{\mathrm{Pr}}{\mathrm{Ra}}}\) & \(\frac{1}{\sqrt{\mathrm{PrRa}}}\) & \(\frac{1}{\mathrm{Ge}}\) \\ II & \(\frac{x}{H}\) & \(\mathrm{Pr}\) & \(\mathrm{PrRa}\) & \(\frac{\mathrm{Ge}}{\mathrm{Ra}}\) & \(1\) & \(\frac{\mathrm{PrRa}}{\mathrm{Ge}}\) \\ III & \(\sqrt{c\Delta T}\) & \(\sqrt{\frac{\mathrm{PrGe}}{\mathrm{Ra}}}\) & \(\mathrm{Ge}\) & \(\sqrt{\frac{\mathrm{PrGe}}{\mathrm{Ra}}}\) & \(\sqrt{\frac{\mathrm{PrRa}}{\mathrm{Ra}}}\) & \(1\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Different non-dimensional forms resulting from different choices of \(u_{\mathrm{ref}}\).
weight factor \(\gamma\) becomes very large for choices I and II. Choice III does not suffer from this issue, because \(\gamma=1\) independent of Ge. However, choice III has the disadvantage that it does not work in the case \(\mathrm{Ge}=0\), since it leads to \(\alpha_{i}=0\) for all \(i\). In summary: for \(\mathrm{Ge}=0\), choices I and II are preferred; for small but nonzero Ge, choice III is preferred; in other cases, all choices are fine.
### Effect of viscous dissipation on Nusselt number and thermal dissipation
A main quantity of interest in natural convection flows is the Nusselt number Nu and we will investigate how it changes upon including viscous dissipation in the internal energy equation. First, define the average of the sum of convective and conductive fluxes through a horizontal plane \(y=y^{\prime}\) by
\[\overline{F}(y^{\prime}):=\frac{1}{L}\int_{0}^{L}\left(\rho_{0}cTv-\lambda \frac{\partial T}{\partial y}\right)_{(x,y^{\prime})}\mathrm{d}x. \tag{34}\]
Then, the Nusselt number based on \(\overline{F}\) follows as [3]:
\[\mathrm{Nu}(\hat{y}^{\prime}):=\frac{\overline{F}(y^{\prime})}{\lambda \Delta T/H}=\frac{1}{\Lambda}\int_{0}^{\Lambda}\left(\frac{1}{\alpha_{4}} \hat{T}\hat{v}-\frac{\partial\hat{T}}{\partial\hat{y}}\right)_{(x,y^{\prime} )}\mathrm{d}\hat{x}. \tag{35}\]
For steady state or statistically steady state (using a suitable average), and in the absence of viscous dissipation, it is straightforward to show from the internal energy equation that \(\mathrm{Nu}(\hat{y})=\mathrm{Nu}(\hat{y}=0)=\mathrm{Nu}\), which is a constant, independent of \(\hat{y}^{\prime}\)[1; 27]. However, upon including viscous dissipation, this relation no longer holds true and instead the steady internal energy equation yields
\[\alpha_{4}(\mathrm{Nu}(\hat{y}^{\prime})-\mathrm{Nu}(0))=\alpha_{3}\varepsilon _{U}(\hat{y}^{\prime}), \tag{36}\]
where the integrated dissipation function is given by
\[\varepsilon_{U}(\hat{y}^{\prime}):=\frac{1}{\Lambda}\int_{0}^{\hat{y}^{\prime }}\int_{0}^{\Lambda}\tilde{\Phi}\mathrm{d}\hat{x}\mathrm{d}\hat{y}. \tag{37}\]
Equation (36) is an important relation which shows that (taking \(\hat{y}^{\prime}=1\))
\[\alpha_{4}(\mathrm{Nu}(1)-\mathrm{Nu}(0))=\alpha_{3}\varepsilon_{U}(1), \tag{38}\]
so _the Nusselt number of the upper plate is always larger than or equal to the Nusselt number of the lower plate._
A second relation between Nusselt number and viscous dissipation can be obtained from the global kinetic energy balance, equation (31). The second term in the right-hand side of equation (31) can be rewritten with equation (36), following the analysis in [27]:
\[\begin{split}\frac{\alpha_{2}}{\Lambda}\int_{\tilde{\Omega}} \tilde{T}\hat{v}\,\mathrm{d}\tilde{\Omega}&=\frac{\alpha_{2}}{ \Lambda}\int_{0}^{1}\int_{0}^{\Lambda}\tilde{T}\hat{v}\,\mathrm{d}\hat{x} \mathrm{d}\hat{y}=\alpha_{2}\alpha_{4}\int_{0}^{1}\mathrm{Nu}(\hat{y})\, \mathrm{d}\hat{y}+\frac{\alpha_{2}\alpha_{4}}{\Lambda}\int_{0}^{\Lambda}\int_ {0}^{1}\frac{\partial\hat{T}}{\partial\hat{y}}\,\mathrm{d}\hat{y}\mathrm{d} \hat{x}\\ &=\alpha_{2}\alpha_{4}\mathrm{Nu}(0)+\alpha_{2}\alpha_{3}\int_{0 }^{1}\varepsilon_{U}(\hat{y})\,\mathrm{d}\hat{y}+\frac{\alpha_{2}\alpha_{4}}{ \Lambda}\int_{0}^{\Lambda}(\hat{T}(\hat{x},\hat{y}=1)-\hat{T}(\hat{x},\hat{y}= 0))\mathrm{d}\hat{x}\\ &=\alpha_{2}\alpha_{4}(\mathrm{Nu}(0)-1)+\alpha_{2}\alpha_{3} \int_{0}^{1}\epsilon_{U}(\hat{y})\,\mathrm{d}\hat{y}.\end{split} \tag{39}\]
For (statistically) steady flow, this term equals the first term in the right-hand side of equation (31), yielding the second relation between the Nusselt number and the viscous dissipation \(\varepsilon_{U}\)
\[\alpha_{2}\alpha_{4}(\mathrm{Nu}(0)-1)=\alpha_{1}\varepsilon_{U}(1)-\alpha_{2 }\alpha_{3}\int_{0}^{1}\varepsilon_{U}(\hat{y})\,\mathrm{d}\hat{y}. \tag{40}\]
We recognize the well-known equation \(\alpha_{2}\alpha_{4}(\mathrm{Nu}(0)-1)=\alpha_{1}\epsilon_{U}(1)\), see e.g. [1], but with the additional negative term \(-\alpha_{2}\alpha_{3}\int_{0}^{1}\epsilon_{U}(\hat{y})\,\mathrm{d}\hat{y}\).
Lastly, we link the thermal dissipation \(\epsilon_{T}\) to the Nusselt number and the viscous dissipation function. The non-dimensional internal energy equation, equation (23), is multiplied by \(\hat{T}\), and after integrating by parts, using the skew-symmetry of the convective operator, and employing the boundary condition \(\hat{T}(\hat{y}=1)=0\), one obtains
\[\frac{1}{\Lambda}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\tilde{\Omega}}\frac{1}{ 2}\hat{T}^{2}\,\mathrm{d}\tilde{\Omega}=\frac{\alpha_{3}}{\Lambda}\int_{\tilde {\Omega}}\hat{T}\hat{\Phi}\,\mathrm{d}\tilde{\Omega}-\frac{\alpha_{4}}{\Lambda }\int_{0}^{\Lambda}\left(\hat{T}\frac{\partial\hat{T}}{\partial\hat{y}}\right) _{\hat{y}=0}\,\mathrm{d}\hat{x}-\frac{\alpha_{4}}{\Lambda}\int_{\tilde{\Omega }}\|\hat{\nabla}\hat{T}\|^{2}\,\mathrm{d}\tilde{\Omega}. \tag{41}\]
With the boundary condition \(\hat{T}(\hat{y}=0)=1\), and the assumption of (statistically) steady flow, this relation is further simplified to
\[\alpha_{4}\mathrm{Nu}(0)=\alpha_{4}\epsilon_{T}-\frac{\alpha_{3}}{\Lambda} \int_{\tilde{\Omega}}\hat{T}\hat{\Phi}\,\mathrm{d}\tilde{\Omega}, \tag{42}\]
where
\[\epsilon_{T}\,:=\frac{1}{\Lambda}\int_{\tilde{\Omega}}\|\hat{\nabla}\hat{T}\|^ {2}\,\mathrm{d}\tilde{\Omega}. \tag{43}\]
Since \(\hat{T}\geq 0\), \(\tilde{\Phi}\geq 0\), we conclude that _viscous dissipation lowers the Nusselt number of the lower plate_. In absence of viscous dissipation in the internal energy equation, one obtains the familiar relation \(\mathrm{Nu}=\epsilon_{T}\). In combination with equation (38), we obtain for the Nusselt number of the upper plate:
\[\alpha_{4}\mathrm{Nu}(1)=\alpha_{4}\epsilon_{T}+\frac{\alpha_{3}}{\Lambda} \int(1-\hat{T})\hat{\Phi}\,\mathrm{d}\tilde{\Omega}. \tag{44}\]
Assuming that the temperature satisfies \(0\leq\hat{T}\leq 1\), we find that _viscous dissipation increases the Nusselt number of the upper plate_. In other words, the thermal dissipation lies in between the two Nusselt numbers:
\[\mathrm{Nu}(0)\leq\epsilon_{T}\leq\mathrm{Nu}(1). \tag{45}\]
The three relations (38), (40) and (42) are summarized in table 2 and will be confirmed in the numerical experiments in section 5.
## 3 Energy-consistent spatial discretization
### Mass, momentum and kinetic energy equation
To discretize the non-dimensional mass and momentum equations (19) and (22), we use the staggered-grid energy-conserving finite volume method described in [28], extended by including the buoyancy term in the momentum equations. This leads to the following semi-discrete equations:
\[MV_{h}(t) =0, \tag{46}\] \[\Omega_{V}\frac{\mathrm{d}V_{h}(t)}{\mathrm{d}t} =-C_{V}(V_{h}(t))-Gp_{h}(t)+\alpha_{1}D_{V}V_{h}(t)+\alpha_{2}(AT_ {h}(t)+y_{T}). \tag{47}\]
\begin{table}
\begin{tabular}{c c c} \hline \hline origin & without viscous dissipation & with viscous dissipation \\ \hline internal & \(\mathrm{Nu}(1)=\mathrm{Nu}(0)\) & \(\alpha_{4}(\mathrm{Nu}(1)-\mathrm{Nu}(0))=\alpha_{3}\epsilon_{U}(1)\) \\ kinetic & \(\alpha_{2}\alpha_{4}(\mathrm{Nu}(0)-1)=\alpha_{1}\epsilon_{U}(1)\) & \(\alpha_{2}\alpha_{4}(\mathrm{Nu}(0)-1)=\alpha_{1}\epsilon_{U}(1)-\alpha_{2} \alpha_{3}\int_{0}^{1}\epsilon_{U}(\hat{y})\,\mathrm{d}\hat{y}\) \\ internal energy \(\times T\) & \(\mathrm{Nu}(0)=\epsilon_{T}\) & \(\alpha_{4}\mathrm{Nu}(0)=\alpha_{4}\epsilon_{T}-\frac{\alpha_{3}}{\Lambda}\,f _{\tilde{\Omega}}\,\hat{T}\hat{\Phi}\,\mathrm{d}\tilde{\Omega}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Steady-state Nusselt number relations, with and without viscous dissipation.
Here, \(V_{h}\in\mathbb{R}^{N_{V}}\) are the velocity unknowns, \(p_{h}\in\mathbb{R}^{N_{P}}\) the pressure unknowns, and \(T_{h}\in\mathbb{R}^{N_{P}}\) the temperature unknowns; see figure 2 for their positioning. \(M\in\mathbb{R}^{N_{P}\times N_{V}}\) is the discretized divergence operator, \(G=-M^{T}\in\mathbb{R}^{N_{V}\times N_{P}}\) the discretized gradient operator, \(\Omega_{V}\in\mathbb{R}^{N_{V}\times N_{V}}\) a matrix with the'velocity' finite volume sizes on its diagonal, and \(C_{V}\) and \(D_{V}\) constitute central difference approximations of the convective and diffusive terms. \(A\) is a matrix that averages the temperature from the center of the 'temperature' finite volumes to center of the'velocity' finite volumes, and the vector \(y_{T}\) incorporates the nonzero boundary condition for the temperature at the lower plate.
The energy-conserving nature of our finite volume method is crucial in deriving an energy-consistent discretization of viscous dissipation. The energy-conserving property means that, in absence of boundary contributions, the discretized convective and pressure gradient operators do not contribute to the kinetic energy balance: \(V_{h}^{T}C_{V}(V_{h})=0\) and \(V_{h}^{T}Gp_{h}=0\), just like in the continuous case. This is achieved by using a skew-symmetric convection operator and the compatibility between \(M\) and \(G\) via \(G=-M^{T}\). The discrete kinetic energy balance then reads:
\[\frac{\mathrm{d}E_{k,h}}{\mathrm{d}t}=-\alpha_{1}\epsilon_{U,h}+\alpha_{2}V_{ h}^{T}(AT_{h}+y_{T}), \tag{48}\]
where \(E_{k,h}=\frac{1}{2}V_{h}^{T}\Omega_{V}V_{h}\). The global viscous dissipation (i.e. summed over the entire domain) is given by \(\epsilon_{U,h}=\|QV_{h}\|_{2}^{2}>0\), where \(Q\) stems from decomposing the symmetric negative-definite diffusive operator as \(D_{V}=-Q^{T}Q\). Equation (48) is the semi-discrete counterpart of equation (31).
### Proposed viscous dissipation function
Given a discretization that satisfies a discrete kinetic energy balance, the key step is to design a discretization scheme of the internal energy equation (23) which is such that discrete versions of the global balances (13) and (14) are obtained. In particular, the viscous dissipation in the internal energy equation should cancel the viscous dissipation term in the kinetic energy equation, where the latter is fully determined by _the choice of the diffusion operator and the expression for the local kinetic energy_. The choice for the diffusion operator (second-order central differencing) is straightforward. The choice for the expression of the local kinetic energy on a staggered grid is however not obvious. We propose the following definition:
\[k_{i,j}:=\frac{1}{4}u_{i+1/2,j}^{2}+\frac{1}{4}u_{i-1/2,j}^{2}+\frac{1}{4}v_{ i,j+1/2}^{2}+\frac{1}{4}v_{i,j-1/2}^{2}. \tag{49}\]
This choice gives a local kinetic energy equation that is consistent with the continuous equations, as is detailed in B, and consistent with the global energy definition.
The expression for \(\Phi_{h}\) then follows from differentiating the expression for \(k_{ij}\) in time, substituting the momentum equations, and rewriting the terms involving the diffusive operator (see B). The implied
Figure 2: Staggered grid with positioning of unknowns around a pressure volume.
dissipation then follows by constructing a discrete version of (10). As example, we construct the discrete version of \(u\frac{\partial^{2}u}{\partial x^{2}}=-\left(\frac{\partial u}{\partial x}\right) ^{2}+\frac{\partial}{\partial x}\left(u\frac{\partial u}{\partial x}\right)\), being
\[\frac{u_{i+1/2,j}}{\Delta x}\left(\frac{u_{i+3/2,j}-u_{i+1/2,j}} {\Delta x}-\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)=-\frac{1}{2}\left( \frac{u_{i+3/2,j}-u_{i+1/2,j}}{\Delta x}\right)^{2}-\frac{1}{2}\left(\frac{u_{ i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)^{2}\\ +\frac{1}{\Delta x}\left(\frac{1}{2}(u_{i+3/2,j}+u_{i+1/2,j}) \frac{u_{i+3/2,j}-u_{i+1/2,j}}{\Delta x}-\frac{1}{2}(u_{i+1/2,j}+u_{i-1/2,j}) \frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right). \tag{50}\]
The first two terms on the right-hand side contribute to the viscous dissipation function. Repeating this process for the other components \((u\frac{\partial^{2}u}{\partial y^{2}},\frac{\partial^{2}v}{\partial x^{2}}, \frac{\partial^{2}v}{\partial y^{2}})\), as outlined in B.2, yields the following novel expression for the local dissipation function:
\[\boxed{\Phi_{i,j}=\frac{1}{2}\Phi_{i+1/2,j}^{\mu}+\frac{1}{2}\Phi_{i-1/2,j}^{ \mu}+\frac{1}{2}\Phi_{i,j+1/2}^{v}+\frac{1}{2}\Phi_{i,j-1/2}^{v}}, \tag{51}\]
where
\[\Phi_{i+1/2,j}^{\mu}=-\frac{1}{2}\left(\frac{u_{i+3/2,j}-u_{i+1/2,j}}{\Delta x }\right)^{2}-\frac{1}{2}\left(\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)^ {2}-\frac{1}{2}\left(\frac{u_{i+1/2,j+1}-u_{i+1/2,j}}{\Delta y}\right)^{2}- \frac{1}{2}\left(\frac{u_{i+1/2,j}-u_{i+1/2,j-1}}{\Delta y}\right)^{2}, \tag{52}\]
\[\Phi_{i,j+1/2}^{v}=-\frac{1}{2}\left(\frac{v_{i+1,j+1/2}-v_{i,j+1/2}}{\Delta x }\right)^{2}-\frac{1}{2}\left(\frac{v_{i+1,j-1/2}-v_{i,j-1/2}}{\Delta x} \right)^{2}-\frac{1}{2}\left(\frac{v_{i,j+3/2}-v_{i,j+1/2}}{\Delta y}\right)^ {2}-\frac{1}{2}\left(\frac{v_{i,j+1/2}-v_{i,j-1/2}}{\Delta y}\right)^{2}. \tag{53}\]
At boundaries, an adaptation of \(\Phi_{h}\) is required in order to have a discrete equivalent of equation (10). This is detailed in equation (B.15).
Note that \(\Phi_{h}\) is derived based on local energy consideration which upon summation equals the global dissipation, just like equation (37):
\[1^{T}\Omega_{p}\Phi_{h}=\epsilon_{U,h}. \tag{54}\]
### Internal energy equation
Having proposed a consistent expression for \(\Phi_{h}\), the spatial discretization of the internal energy equation (23) reads:
\[\Omega_{p}\frac{\mathrm{d}T_{h}}{\mathrm{d}t}=-C_{T}(V_{h},T_{h})+\alpha_{3} \Omega_{p}\Phi_{h}(V_{h})+\alpha_{4}(D_{T}T_{h}+\hat{y}_{T}), \tag{55}\]
where
\[\begin{split}[C_{T}(V_{h},T_{h})]_{i,j}=&\Delta y \left(u_{i+1/2,j}\frac{1}{2}(T_{i+1,j}+T_{i,j})-u_{i-1/2,j}\frac{1}{2}(T_{i,j}+ T_{i-1,j})\right)+\\ &\Delta x\left(v_{i,j+1/2}\frac{1}{2}(T_{i,j+1}+T_{i,j})-v_{i,j- 1/2}\frac{1}{2}(T_{i,j}+T_{i,j-1})\right)\end{split} \tag{56}\]
is the convection operator. The convection operator has a discrete skew-symmetry property which will be used in the derivation of the thermal dissipation balance in the next subsection. \(D_{T}\) the standard second-order difference stencil with boundary conditions encoded in \(\hat{y}_{T}\).
The total internal energy is given by \(E_{i,h}=1^{T}\Omega_{p}T_{h}\) (simply summing over all finite volumes). Due to the no-slip boundary conditions on the velocity field, the convective operator satisfies \(1^{T}C_{T}(V_{h},T_{h})=0\). The summation over the diffusive operator can be written in terms of the Nusselt numbers (detailed in the next section). The total internal energy equation thus reads
\[\begin{split}\frac{\mathrm{d}E_{i,h}}{\mathrm{d}t}& =\alpha_{3}1^{T}\Omega_{p}\Phi_{h}+\alpha_{4}1^{T}(D_{T}T_{h}+ \hat{y}_{T}),\\ &=\alpha_{3}1^{T}\Omega_{p}\Phi_{h}+\alpha_{4}(\mathrm{Nu}_{H}- \mathrm{Nu}_{C}),\end{split} \tag{57}\]
where in the second line the Nusselt numbers are instantaneous Nusselt numbers. Upon adding the total kinetic energy equation (48), and using property (54), the global energy balance results:
\[\begin{split}\frac{\mathrm{d}E_{h}}{\mathrm{d}t}=\frac{\mathrm{d}E_ {k,h}}{\mathrm{d}t}+\gamma\frac{\mathrm{d}E_{i,h}}{\mathrm{d}t}&= \alpha_{2}V_{h}^{T}(AT_{h}+y_{T})+\gamma\alpha_{4}1^{T}(D_{T}T_{h}+\hat{y}_{T }),\\ &=\alpha_{2}V_{h}^{T}(AT_{h}+y_{T})+\gamma\alpha_{4}(\mathrm{Nu} _{H}-\mathrm{Nu}_{C}),\end{split} \tag{58}\]
which is the semi-discrete counterpart of equation (33). In other words, we have proposed a discrete viscous dissipation function that leads to a correct expression for the total energy equation, namely such that the viscous dissipation from the kinetic and internal energy equations exactly balances. Note that in the case of homogeneous Neumann boundary conditions for the temperature on all boundaries, the last term disappears.
### Discrete global balances and Nusselt number relations
We now derive discrete versions of the Nusselt relations that incorporate the viscous dissipation function, i.e. relations (38) and (42). Our symmetry-preserving spatial discretization is such that _exact_ discrete relations can be derived. It is important to realize that the discrete approximation for the Nusselt number cannot be chosen independently (when the goal is to have exact discrete global balances) but is implicitly defined once the discretization of the diffusive operator is chosen. Consider the discretized global internal energy equation for steady conditions,
\[\alpha_{3}1^{T}\Omega_{p}\Phi_{h}(V_{h})+\alpha_{4}1^{T}(D_{T}T_{h}+\hat{y}_{ T})=0. \tag{59}\]
The second term can be simplified as
\[1^{T}(D_{T}T_{h}+\hat{y}_{T})=-\sum_{i=1}^{N_{x}}\frac{T_{i,1}-T_{H}}{\frac{1} {2}\Delta y}\Delta x+\sum_{i=1}^{N_{x}}\frac{T_{C}-T_{i,N_{y}}}{\frac{1}{2} \Delta y}\Delta x=\mathrm{Nu}_{H}-\mathrm{Nu}_{C}, \tag{60}\]
where the Nusselt numbers on the lower (hot) and upper (cold) plate are defined as
\[\mathrm{Nu}_{H} :=-\sum_{i=1}^{N_{x}}\frac{T_{i,1}-T_{H}}{\frac{1}{2}\Delta y} \Delta x, \tag{61}\] \[\mathrm{Nu}_{C} :=-\sum_{i=1}^{N_{x}}\frac{T_{C}-T_{i,N_{y}}}{\frac{1}{2}\Delta y }\Delta x. \tag{62}\]
This leads to the discrete version of (38):
\[\alpha_{4}(\mathrm{Nu}_{C}-\mathrm{Nu}_{H})=\alpha_{3}1^{T}\Omega_{p}\Phi_{h} (V_{h}). \tag{63}\]
The discrete version of (42) follows by considering the inner product of equation (55) with \(T_{h}^{T}\) instead of \(1^{T}\). An important property of the convective discretization (56) is that
\[T_{h}^{T}C_{T}(V_{h},T_{h})=0,\qquad\forall\,T_{h},\quad\text{if}\quad MV_{h}=0. \tag{64}\]
This property is most easily derived by recognizing that \(C_{h}(V_{h},T_{h})\) can be written in terms of a matrix-vector product \(\mathcal{C}_{T}(V_{h})T_{h}\), where \(\mathcal{C}_{T}(V_{h})\) is skew-symmetric if \(MV_{h}=0\). In addition, the inner product of \(T_{h}\) with the diffusive terms can be written as
\[T_{h}^{T}(D_{T}T_{h}+\hat{y}_{T})=\sum_{i=1}^{N_{x}}\left(-T_{H}\frac{T_{i,1}- T_{H}}{\frac{1}{2}\Delta y}+T_{C}\frac{T_{C}-T_{i,N_{y}}}{\frac{1}{2}\Delta y }\right)\Delta x-\epsilon_{T,h}, \tag{65}\]
where
\[\epsilon_{T,h}\,:=\sum_{i=1}^{N_{x}}\left(\frac{1}{2}\left(\frac{T_{i,1}-T_{H}}{ \frac{1}{2}\Delta y}\right)^{2}+\sum_{j=2}^{N_{y}}\left(\frac{T_{i,j}-T_{i,j-1}} {\Delta y}\right)^{2}+\frac{1}{2}\left(\frac{T_{C}-T_{i,N_{y}}}{\frac{1}{2} \Delta y}\right)^{2}\right)\Delta x\Delta y+\sum_{j=1}^{N_{y}}\sum_{i=2}^{N_{x }}\left(\frac{T_{i,j}-T_{i-1,j}}{\Delta x}\right)^{2}\Delta x\Delta y \tag{66}\]
is the discrete analogue of (41) and equation (65) is the discrete version of \(f\,T\frac{\mathrm{d}^{2}T}{\mathrm{d}y^{2}}=[T\frac{\mathrm{d}T}{\mathrm{d}y}] -f(\frac{\mathrm{d}T}{\mathrm{d}y})^{2}\). With the boundary condition \(T_{H}=1\), \(T_{C}=0\), we get the balance
\[\alpha_{4}\mathrm{Nu}_{H}=\alpha_{4}\epsilon_{T,h}-\alpha_{3}T_{h}^{T}\Omega_ {p}\Phi_{h}(V_{h}), \tag{67}\]
which is the discrete version of equation (42).
## 4 Energy-consistent temporal discretization
The system of equations (46), (47) and (55) needs to be integrated in time with a suitable method in order to preserve a time-discrete version of the global energy balance (58). A common choice is to use an explicit method (e.g. Adams-Bashforth) for the nonlinear convective terms and an implicit method (e.g. Crank-Nicolson) for the (stiff) linear diffusion terms [20; 27; 29], or an explicit method for both convection and diffusion [30; 31]. In such an approach, the temperature equation is typically solved first (given velocity fields at previous time instances), and then the mass and momentum equations are solved with a pressure-correction approach. However, these methods do not preserve the global energy balance as they violate the energy-conserving nature of the nonlinear terms when marching in time [32].
Instead, we show here that the implicit midpoint method can be employed to achieve energy-consistent time integration. The fully discrete system reads:
\[MV_{h}^{n+1/2} =0, \tag{68}\] \[\Omega_{V}\frac{V_{h}^{n+1}-V_{h}^{n}}{\Delta t} =-C_{V}(V_{h}^{n+1/2})-GP_{h}^{n+1/2}+\alpha_{1}D_{V}V_{h}^{n+1/2 }+\alpha_{2}(AT_{h}^{n+1/2}+y_{T}),\] (69) \[\Omega_{p}\frac{T_{h}^{n+1}-T_{h}^{n}}{\Delta t} =-C_{T}(V_{h}^{n+1/2},T_{h}^{n+1/2})+\alpha_{3}\Omega_{p}\Phi(V_{ h}^{n+1/2})+\alpha_{4}(D_{T}T_{h}^{n+1/2}+\hat{y}_{T}). \tag{70}\]
Here \(V_{h}^{n+1/2}=\frac{1}{2}(V_{h}^{n}+V_{h}^{n+1})\) and \(T_{h}^{n+1/2}=\frac{1}{2}(T_{h}^{n}+T_{h}^{n+1})\). Upon multiplying (69) by \((V_{h}^{n+1/2})^{T}\) and (70) by \(1^{T}\), and adding the two resulting equations, we get the discrete energy balance,
\[\frac{E_{h}^{n+1}-E_{h}^{n}}{\Delta t}=\frac{E_{k,h}^{n+1}-E_{k,h}^{n}}{\Delta t }+\gamma\frac{E_{i,h}^{n+1}-E_{i,h}^{n}}{\Delta t}=\alpha_{2}(V_{h}^{n+1/2})^ {T}(AT_{h}^{n+1/2}+y_{T})+\gamma\alpha_{4}1^{T}(D_{T}T_{h}^{n+1/2}+\hat{y}_{T}), \tag{71}\]
which is the fully-discrete counterpart of equations (33) and (58). The derivations hinges again on skew-symmetry of the convection operator \(C_{V}\), the compatibility between \(M\) and \(G\) (\(G=-M^{T}\)), and the consistency requirement on the viscous dissipation function, equation (54).
The system of equations (68) - (70) leads to a large system of nonlinear equations which has a saddle point structure due to the divergence-free constraint. We solve the system in a segregated fashion and iterate at each time step with a standard pressure-correction method until the residual of the entire system is below a prescribed tolerance. We will compare this energy-conserving time integration approach to an explicit one-leg method [30; 31] in section 6.
## 5 Steady state results (Rayleigh-Benard)
The concept of energy consistency is best demonstrated through time-dependent simulations. However, we start with steady-state results in order to validate the spatial discretization method and to get intuition for the effect of the Gebhart number on the Nusselt number. For the results reported here we employ a direct solver that solves the entire coupled non-linear system of equations that arises from spatial discretization. As initial guess we take the following divergence-free velocity field:
\[u(x,y) =-64x^{2}(x-1)^{2}y(y-1)(2y-1), \tag{72}\] \[v(x,y) =64x(x-1)(2x-1)y^{2}(y-1)^{2}, \tag{73}\]
which is inspired by the regularized driven cavity problem [33]. For the temperature we take a random field (between 0 and 1). The idea behind this choice of initial condition is to avoid the non-linear solver to be stuck in the trivial solution (\(\mathbf{u}=0\)). Note that in all simulations in this article, we will set \(\mathrm{Pr}=0.71\) (air), and use non-dimensionalization choice I. Choices II and III give equivalent results apart from scaling factors.
### Grid convergence study for no-dissipation case (\(\mathrm{Ge}=0\))
Figure 2(a) shows the temperature field when viscous dissipation is not included (\(\mathrm{Ge}=0\)). The resulting Nusselt numbers as a function of grid refinement are displayed in Table 3 and indicate excellent agreement with literature [34]. We note that the Nusselt numbers as defined by (61) and (62) are first-order approximations. More accurate approximations can be constructed by including more interior points. We are not using such high-order accurate approximations as they would not satisfy the discrete global balance (63). Note also that we only report \(\mathrm{Nu}_{H}\) since \(\mathrm{Nu}_{\mathrm{C}}=\mathrm{Nu}_{H}\) up to machine precision.
### Grid convergence study for viscous dissipation case (Ge\(>0\))
When including viscous dissipation (\(\mathrm{Ge}>0\)) in the internal energy equation, the flow field changes qualitatively and loses its symmetric nature, as can be observed in figures 2(b)-2(c). The Nusselt numbers at the hot and cold plate start to deviate from each other, their difference being equal to the dissipation function, according to equation (63) (or (38)). This is reported in table 4 and figure 3(a). The critical Rayleigh number that we find from the bifurcation diagram is \(\mathrm{Ra}_{c}\approx 2585\), which is in excellent agreement with the value of 2585.02 reported in literature [35; 36]. It is independent of the value of the Prandtl number, as shown in [35], and also independent of the value of the Gebhart number. This latter fact follows by extending the linear stability analysis in [35] and realizing that the term \(\nabla\mathbf{u}\,:\,\nabla\mathbf{u}\) with \(\mathbf{u}=\mathbf{u}_{0}+\varepsilon\mathbf{u}^{\prime}\) and background state \(\mathbf{u}_{0}=0\) leads to the term \(\varepsilon^{2}\nabla\mathbf{u}^{\prime}\,:\,\nabla\mathbf{u}^{\prime}\), which disappears when gathering terms of \(\mathcal{O}(\varepsilon)\). The results in figure 3(a) show indeed that the bifurcation point is the same for different values of \(\mathrm{Ge}\).
Figure 3(b) shows a different interpretation of the Nusselt number, indicating the relation with the thermal dissipation and viscous dissipation according to equation (67) (or (42)). The results confirm that the thermal dissipation lies in between the Nusselt number of the hot and cold plate.
energy conservation requires that all contributions from boundary terms disappear, which we achieve by prescribing no-slip conditions \(\mathbf{u}=0\) and adiabatic conditions \(\frac{\delta T}{\delta n}=0\) on all boundaries (the pressure does not require boundary conditions). The energy balance then represents a pure exchange of kinetic, internal and potential energy according to
\[\frac{E_{h}^{n+1}-E_{h}^{n}}{\Delta t}=\frac{E_{k,h}^{n+1}-E_{k,h}^{n}}{\Delta t }+\gamma\frac{E_{i,h}^{n+1}-E_{i,h}^{n}}{\Delta t}=\alpha_{2}(V_{h}^{n+1/2})^{ T}(AT_{h}^{n+1/2}+y_{T}). \tag{74}\]
However, with adiabatic boundary conditions we cannot simulate the classic Rayleigh-Benard problem. Instead, we turn to the well-known Rayleigh-Taylor problem, featuring a cold (heavy) fluid on top of a warm (light) fluid. A sketch of the set-up is shown in figure 5. The energy-conserving implicit midpoint ('IM') method detailed in section 4 will be compared to the explicit one-leg ('OL') method commonly used in DNS studies [30; 31] (where we take \(\kappa=\frac{1}{2}\) and a fixed time step).
The domain size is \(1\times 2\), the grid is \(64\times 128\), the time step \(\Delta t=5\cdot 10^{-3}\) and the end time \(T=100\). We consider the case \(\mathrm{Ra}=10^{6}\) and \(\mathrm{Ge}=\{0.1,1\}\). The instability naturally arises due to growth of round-off errors (no perturbation is added in the initial condition). After the initial instability has developed, an asymmetry in the solution appears, triggering a sequence of well-known'mushroom' type plumes: hot plumes rising upward and cold plumes sinking downward (figure 6). Noteworthy are the differences in the development of the instability due to different time integration methods: IM predicts an earlier onset (around \(t=23\)) than OL (around \(t=33\)), and the evolution stays symmetric for a much longer period of time in case of IM. For both methods, the time of onset of instability is insensitive to the value of Ge, just like the bifurcation point in the steady state Rayleigh-Benard simulation was insensitive to the value of Ge. The differences between the methods might be attributed to the absence of artificial dissipation in the IM scheme compared to OL, as well as its more symmetric nature. However, one should note that the problem is chaotic, and similar differences in the solution can also be obtained by adding minute perturbations to the initial condition.
Since there is no driving force and all boundary conditions are homogeneous, viscosity damps the velocity field back to a homogeneous steady state, while at the same time increasing the temperature through dissipation. This increase in temperature is clear from figure 7a, where the average temperature is displayed. Compared to the initial temperature difference \(\Delta T=1\), the relative temperature increase is about \(2\%\) for \(\mathrm{Ge}=0.1\) and more than \(20\%\) for \(\mathrm{Ge}=1\). Note that many existing natural convection models, which ignore the viscous dissipation term, would not predict any temperature increase. With our proposed energy-consistent viscous dissipation function, the temperature increase exactly matches the kinetic energy loss through viscous dissipation. This is confirmed
Figure 5: Problem set-up with initial condition for Rayleigh Taylor problem.
in figure 6(b), which shows the energy error
\[\varepsilon_{E}\,:=\left|\frac{E_{k,h}^{n+1}-E_{k,h}^{n}}{\Delta t}+\gamma\frac{ E_{i,h}^{n+1}-E_{i,h}^{n}}{\Delta t}-\alpha_{2}(V_{h}^{n+1/2})^{T}(AT_{h}^{n+1/2}+y_{T} )\right|. \tag{75}\]
For IM the error remains at the tolerance with which we solve the system of nonlinear equations (\(10^{-12}\)). For OL, the error is initially at a similar level but increases to \(\mathcal{O}(10^{-6})\) when the instability develops (for \(t>30\)). However, one could argue that this advantage is offset by the fact that IM is roughly 4-5\(\times\) as expensive because it requires roughly 4-5 iterations (Poisson solves) per time step, making OL much faster to run. Consequently, OL will be employed for the 3D simulations in the next section. Note that this balance of accuracy versus computational costs depends on the details of the flow problem and might differ in other test cases.
Figure 6: Rayleigh-Taylor temperature fields.
Figure 7: Rayleigh-Taylor results, IM = Implicit Midpoint, OL = One-Leg scheme.
## 7 Energy-conserving simulation of a turbulent flow
As a final test-case, we consider the numerical simulation of an air-filled (\(\mathrm{Pr}=0.71\)) Rayleigh-Benard flow at two different Rayleigh numbers, \(\mathrm{Ra}=10^{8}\) and \(10^{10}\). Direct numerical simulations (DNS) were carried out and analyzed in previous studies [37; 38] without taking into account the viscous dissipation effects (\(\mathrm{Ge}=0\)). Here, the results are extended to \(\mathrm{Ge}=0.1\) and \(\mathrm{Ge}=1\) keeping the same domain size (\(\pi\times 1\times 1\)) and mesh resolution (\(400\times 208\times 208\) for \(\mathrm{Ra}=10^{8}\), and \(1024\times 768\times 768\) for \(\mathrm{Ra}=10^{10}\)). Grids are constructed with a uniform grid spacing in the periodic \(x\)-direction whereas wall-normal points (\(y\) and \(z\) directions) are distributed following a hyperbolic-tangent function as follows (identical for the \(z\)-direction)
\[y_{i}=\frac{1}{2}\left(1+\frac{\tanh\left(\gamma_{y}(2(i-1)/N_{y}-1)\right)}{ \tanh\gamma_{y}}\right),\qquad i=1,...,N_{y}+1, \tag{76}\]
where \(N_{y}\) and \(\gamma_{y}\) are the number of control volumes and the concentration factor in the \(y\)-direction, respectively. In our case, \(\gamma_{y}=\gamma_{z}=1.4\) for \(\mathrm{Ra}=10^{8}\) and \(\gamma_{y}=\gamma_{z}=1.6\) for \(\mathrm{Ra}=10^{10}\). For further details, the reader is referred to our previous works [37; 38].
Instantaneous temperature fields corresponding to the statistically steady state are displayed in Figure 8. As expected, thermal dissipation effects at \(\mathrm{Ge}=1\) lead to a significant increase in the average cavity temperature which is clearly visible for both Rayleigh numbers. As in 2D, the flow symmetry (in average sense) with respect to the mid-height plane is lost for \(\mathrm{Ge}>0\) leading to higher (lower) Nusselt number for the top (bottom) wall. Subsequently, the top (bottom) thermal boundary layer becomes thinner (thicker) with respect to the case at \(\mathrm{Ge}=0\). This implies that mesh resolution requirements in the near-wall region are also asymmetrical; however, in this work, for the sake of simplicity, the grid spacing at the two walls is the same regardless of the Gebhart number.
Figure 8: Instantaneous temperature fields for 3D RBC at different Rayleigh and Gebhart numbers. For a visualization of the 3D time-dependent simulation results, we refer to the supplementary material.
All simulations have been carried out for 500 time-units starting from a zero velocity field and uniformly distributed random temperatures between \(T_{C}\) and \(T_{H}\). As the fluid sets in motion, initially the discrete kinetic energy of the system increases. Then, after a sufficiently long period of time (around 50 time-units) a statistically steady state is reached. This is clearly observed in Figure 9 where the time-evolution of various rate-of-changes of energy are shown. Results correspond to \(\mathrm{Ra}=10^{8}\) and \(\mathrm{Ge}=1\) using a very fine (\(400\times 208\times 208\approx 17.3\)M) and a very coarse mesh. Similar results are obtained for the other tested configurations. As expected, once a statistically steady state is reached, the kinetic energy fluctuates around its mean value and therefore its rate-of-change \(\mathrm{d}E_{k,h}/\mathrm{d}t\) (in red) fluctuates around zero. Only two terms contribute to the global kinetic energy of the system (see equation (48)): the global viscous dissipation, \(\epsilon_{u,h}\) (in yellow), and the contribution of the buoyancy forces given by \(\alpha_{2}V_{h}^{T}(AT_{h}(t)+y_{T})\) (in blue). These two contributions cancel each other on average when a statistically steady state is reached. The former is transferred into internal energy, \(E_{i,h}\), whereas the latter can be viewed as a transfer from potential to kinetic energy. In addition, the total energy of the system is exactly in balance with the buoyancy term and the heat conduction through the top and bottom boundaries (green line), as given by (58), repeated here for convenience:
\[\frac{\mathrm{d}E_{k,h}}{\mathrm{d}t}+\gamma\frac{\mathrm{d}E_{i,h}}{\mathrm{ d}t}-\alpha_{2}V_{h}^{T}(AT_{h}+y_{T})-\gamma\alpha_{4}(\mathrm{Nu}_{H}- \mathrm{Nu}_{C})=0. \tag{77}\]
This proofs that the viscous dissipation function has indeed been discretized correctly, since an imbalance between the viscous dissipation implied by the kinetic energy equation and the explicitly added viscous dissipation in the internal energy equation would otherwise show up. These energy balances are exactly satisfied for any grid, so even for very coarse grids (see Figure 9, right). Notice again that it is important that the Nusselt numbers are evaluated consistently with the discretization of the diffusive terms in the internal energy equation, as explained in Section 3.4.
In addition to these instantaneous balances, we show in Figure 10 that the time averages of the exact relations given in equations (38), (40) and (42) are preserved at the discrete level, similar to what was shown in steady-state in 2D (see Figure 4a). However, here we display _time-averaged_ Nusselt numbers and consider a wide range of meshes. The finest meshes correspond to the DNS simulations shown in Figure 8 whereas coarser and coarser meshes have been generated by reducing the number of grid points in each spatial direction by factors
Figure 9: Time-evolution of the most relevant energy contributions for \(\mathrm{Ge}=1\); (left) Finest grid: \(400\times 208\times 208\approx 17.3\)M; (right) Coarsest grid: \(50\times 26\times 26\approx 0.034\)M.
of approximately \(\sqrt{2}\). Hence, after six successive mesh coarsenings, the total number of grid points is reduced by approximately \(((\sqrt{2})^{6})^{3}=2^{9}=512\). This under-resolution causes a pile-up of (kinetic) energy close to the smallest resolved scales, that leads to higher values of \(\epsilon_{U}\) and, therefore, an increase of both \(\text{Nu}_{H}\) (see equation (40)) and \(\text{Nu}_{C}-\text{Nu}_{H}\) (see equation (38)). Although the solution is surely less accurate at coarse grids, the fact that an energy balance is still satisfied, makes our approach an excellent starting point for developing or testing sub-grid scale models, as the additional dissipation that is introduced by the sub-grid scale model can be exactly quantified.
## 8 Conclusions
In this paper we have proposed a new energy-consistent discretization of the viscous dissipation function. The viscous dissipation function is an important quantity, for example in turbulent flow computations, where it is critical to assess the global energy balances, or in natural convection flows, where it leads to internal heating. This latter case has been the focus of this article, and we have shown that including the viscous dissipation function in the internal energy equation leads to a _consistent total energy balance_: viscous dissipation acts as a sink in the kinetic energy equation and as a source in the internal energy equation, such that the sum of internal and kinetic energy only changes due to buoyancy and thermal diffusion.
Our key result is a new discretization of the _local_ viscous dissipation function that abides by the total energy balance. We have shown that it is determined by two choices, namely the discretization of the diffusive terms in the momentum equations and the expression for the local kinetic energy. The discretization of the diffusive terms is detailed for both general (non-constant viscosity) and simplified (constant viscosity) stress tensor expressions. The proposed expression for the local kinetic energy is such that a discrete local kinetic energy equation is satisfied, and leads to a quadratic, strictly dissipative form of the viscous dissipation function, also for general stress tensors. Near boundaries we have proposed corrections to the viscous dissipation function to keep the dissipative property.
The numerical experiments in 2D and 3D show that viscous dissipation does not affect the critical Rayleigh number at which instabilities form, but it does significantly impact the development of instabilities once they occur, leading to a significant difference between the Nusselt numbers on the cold and hot plates. Moreover, simulations of turbulent Rayleigh-Benard convection have clearly shown that the proposed discretization is stable even for very coarse grids. Namely, the numerical discretization does not interfere with the energy balances and, therefore, we consider that the proposed method is an excellent starting point for testing sub-grid scale models.
Figure 10: Time-averaged Nusselt numbers at lower and upper plate for a set of meshes at \(\text{Ge}=0\), \(\text{Ge}=0.1\) and \(\text{Ge}=1\).
The analysis in this paper has been performed for the classic finite-volume staggered grid method. Extensions to other discretization methods, such as finite differences or finite elements, are in principle possible provided that a discrete local kinetic energy balance mimicking the continuous balance can be identified. Another limitation of this work is the assumption of incompressible flow, which might seem restrictive given the fact that viscous dissipation typically becomes important for compressible flows. However, the idea of discretizing the viscous dissipation term in an energy-consistent manner is also applicable to compressible flows, see e.g. [39; 40], and we expect our work can therefore be extended in that direction.
As mentioned, an important avenue for future work lies in the assessment of subgrid-scale models for turbulent flows, including those driven by buoyancy. For example, in large-eddy simulation, the kinetic energy equation of the resolved scales and of the subgrid-scales features viscous dissipation terms, and the current work provides a basis for proper discrete representations of these terms.
## Data availability statement
The incompressible Navier-Stokes code is available at [https://github.com/bsanderse/INS2D](https://github.com/bsanderse/INS2D) (Matlab version). A Julia version is available from [https://github.com/agdestein/IncompressibleNavierStokes.jl](https://github.com/agdestein/IncompressibleNavierStokes.jl). The data generated in this work is available upon request.
## CRediT
**Benjamin Sanderse**: Conceptualization, Methodology, Writing - Original Draft, Software; **Xavier Trias**: Writing - Original Draft, Writing - Review & Editing, Software
## Acknowledgements
This publication is part of the project "Discretize first, reduce next" (with project number VI.Vidi.193.105) of the research programme NWO Talent Programme Vidi which is (partly) financed by the Dutch Research Council (NWO). F.X.T. is supported by the _Ministerio de Economia y Competitividad_, Spain, RETOtwin project (PDC2021-120970-I00). Turbulent Rayleigh-Benard simulations were carried out on MareNostrum 4 supercomputer at BSC. The authors thankfully acknowledge these institutions.
## Appendix A Forms of the dissipation function
In this appendix we explain why the dissipation function changes depending on which form of the stress tensor is used. The stress tensor for an incompressible fluid with non-constant viscosity is given by
\[\hat{\mathbf{\tau}}=\mu(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}). \tag{10}\]
In the case of constant viscosity \(\mu\), the divergence of the stress tensor can be simplified:
\[\nabla\cdot\hat{\mathbf{\tau}}=\mu\nabla\cdot(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T})=\mu \nabla\cdot\nabla\mathbf{u}+\mu\nabla\cdot(\nabla\mathbf{u})^{T}=\mu\nabla^{2}\mathbf{u}+ \mu\nabla(\nabla\cdot\mathbf{u})=\mu\nabla^{2}\mathbf{u}=:\;\nabla\cdot\mathbf{\tau}, \tag{11}\]
where
\[\mathbf{\tau}=\mu\nabla\mathbf{u}=\hat{\mathbf{\tau}}-\mu(\nabla\mathbf{u})^{T}. \tag{12}\]
Note that \(\mathbf{\tau}\) is not a proper stress tensor, since it is not symmetric. We stress that \(\nabla\cdot\mathbf{\tau}=\nabla\cdot\hat{\mathbf{\tau}}\), even though \(\mathbf{\tau}\neq\hat{\mathbf{\tau}}\).
In the kinetic energy equation the divergence of the stress tensor is multiplied by \(\mathbf{u}\): \(\mathbf{u}\cdot(\nabla\cdot\mathbf{\tau})\). Since \(\nabla\cdot\mathbf{\tau}=\nabla\cdot\hat{\mathbf{\tau}}\), we also have
\[\mathbf{u}\cdot(\nabla\cdot\mathbf{\tau})=\mathbf{u}\cdot(\nabla\cdot\hat{\mathbf{\tau}}). \tag{13}\]
Expanding both the left-hand and right-hand side with a vector identity (note: also valid for non-symmetric \(\mathbf{\tau}\)) gives:
\[\nabla\cdot(\mathbf{\tau}\cdot\mathbf{u})-\mathbf{\tau}\,:\,\nabla\mathbf{u}=\nabla\cdot(\hat {\mathbf{\tau}}\cdot\mathbf{u})-\hat{\mathbf{\tau}}\,:\,\nabla\mathbf{u}. \tag{14}\]
\[\nabla\cdot(\mathbf{\tau}\cdot\mathbf{u})-\Phi=\nabla\cdot(\mathbf{\hat{\tau}}\cdot\mathbf{u})-\Phi. \tag{106}\]
The crucial point is that, even though equation (106) holds, the individual terms are not equal, i.e. \(\Phi\neq\hat{\Phi}\) and \(\nabla\cdot(\mathbf{\tau}\cdot\mathbf{u})\neq\nabla\cdot(\mathbf{\hat{\tau}}\cdot\mathbf{u})\).
This insight can be further clarified by evaluating these expressions in 2D Cartesian coordinates:
\[\nabla\cdot(\mathbf{\hat{\tau}}\cdot\mathbf{u}) =\mu\left[\frac{\partial}{\partial x}\left(2u\frac{\partial u}{ \partial x}\right)+\frac{\partial}{\partial y}\left(u\left(\frac{\partial u}{ \partial y}+\frac{\partial v}{\partial x}\right)\right)+\frac{\partial}{ \partial x}\left(v\left(\frac{\partial u}{\partial y}+\frac{\partial v}{ \partial x}\right)\right)+\frac{\partial}{\partial y}\left(2v\frac{\partial v }{\partial y}\right)\right], \tag{107}\] \[\Phi =\mu\left[2\left(\frac{\partial u}{\partial x}\right)^{2}+\left( \frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)^{2}+2\left( \frac{\partial v}{\partial y}\right)^{2}\right],\] (108) \[\nabla\cdot(\mathbf{\tau}\cdot\mathbf{u}) =\mu\left[\frac{\partial}{\partial x}\left(u\frac{\partial u}{ \partial x}\right)+\frac{\partial}{\partial y}\left(u\frac{\partial u}{ \partial y}\right)+\frac{\partial}{\partial x}\left(v\frac{\partial v}{ \partial x}\right)+\frac{\partial}{\partial y}\left(v\frac{\partial v}{ \partial y}\right)\right],\] (109) \[\Phi =\mu\left[\left(\frac{\partial u}{\partial x}\right)^{2}+\left( \frac{\partial u}{\partial y}\right)^{2}+\left(\frac{\partial v}{\partial x} \right)^{2}+\left(\frac{\partial v}{\partial y}\right)^{2}\right]. \tag{110}\]
Note that in a closed domain (\(\mathbf{u}=0\) on all boundaries), we have the relation
\[\int_{\Omega}\Phi\,\mathrm{d}\Omega=\int_{\Omega}\hat{\Phi}\,\mathrm{d}\Omega. \tag{111}\]
## Appendix B Discrete dissipation operator from local kinetic energy equation
### Momentum equations and choice of local kinetic energy
The energy-conserving discretization presented in equation (47) can be written component-wise as:
\[\frac{\mathrm{d}u_{i+1/2,j}}{\mathrm{d}t} =-\mathrm{conv}^{u}_{i+1/2,j}-\frac{p_{i+1,j}-p_{i,j}}{\Delta x}+ \alpha_{1}\mathrm{diff}^{u}_{i+1/2,j}, \tag{112}\] \[\frac{\mathrm{d}v_{i,j+1/2}}{\mathrm{d}t} =-\mathrm{conv}^{v}_{i+1/2,j}-\frac{p_{i,j+1}-p_{i,j}}{\Delta y}+ \alpha_{1}\mathrm{diff}^{v}_{i,j+1/2}+\alpha_{2}\frac{1}{2}(T_{i,j}+T_{i,j+1}). \tag{113}\]
The convective terms are discretized starting from the divergence form, and due to discrete mass conservation they can be written in skew-symmetric form, which is energy-conserving. These terms are not the main focus of this work and we refer to [30; 32] for details.
The aim here is to find a local kinetic energy equation expression and the exact form of the dissipation terms. The local kinetic energy should be such that it results in the well-known global kinetic energy balance [30] upon integration over the entire domain. This global kinetic energy equation is obtained by taking the inner product of all momentum equations with the full velocity vector \(V_{h}\) (containing \(u_{i+1/2,j}\) and \(v_{i,j+1/2}\) at all locations). This resulting global kinetic energy definition \(\frac{1}{2}V_{h}^{T}\Omega_{h}V_{h}\) still leaves room for the definition of the local kinetic energy.
Our proposal is to choose for the local kinetic energy the definition
\[k_{i,j}\,:=\frac{1}{4}u_{i+1/2,j}^{2}+\frac{1}{4}u_{i-1/2,j}^{2}+\frac{1}{4}v_ {i,j+1/2}^{2}+\frac{1}{4}v_{i,j-1/2}^{2}. \tag{114}\]
Upon differentiating,
\[\frac{\mathrm{d}k_{i,j}}{\mathrm{d}t}=\frac{1}{2}u_{i+1/2,j}\frac{\mathrm{d}u _{i+1/2,j}}{\mathrm{d}t}+\frac{1}{2}u_{i-1/2,j}\frac{\mathrm{d}u_{i-1/2,j}}{ \mathrm{d}t}+\frac{1}{2}v_{i,j+1/2}\frac{\mathrm{d}v_{i+1/2,j}}{\mathrm{d}t}+ \frac{1}{2}v_{i,j-1/2}\frac{\mathrm{d}v_{i,j-1/2}}{\mathrm{d}t}, \tag{115}\]
and substituting the momentum equations, our proposed definition gives a local kinetic energy equation that is consistent with the continuous equations. The stencil of points required to evaluate (115) is shown in figure 11.
The choice (114) is inspired by the fact that it naturally allows a discrete equivalent of \(\mathbf{u}\cdot\nabla p=\nabla\cdot(\mathbf{p}\mathbf{u})-p\nabla\cdot\mathbf{u}\):
\[\frac{1}{2}u_{i+1/2,j}\frac{p_{i+1,j}-p_{i,j}}{\Delta x}+\frac{1}{2}u_ {i-1/2,j}\frac{p_{i,j}-p_{i-1,j}}{\Delta x}+\frac{1}{2}v_{i,j+1/2}\frac{p_{i,j+1} -p_{i,j}}{\Delta y}+\frac{1}{2}v_{i,j-1/2}\frac{p_{i,j}-p_{i,j-1}}{\Delta y}=\\ \frac{u_{i+1/2,j}\frac{1}{2}(p_{i+1,j}+p_{i,j})-\frac{1}{2}u_{i-1/ 2,j}(p_{i,j}+p_{i-1,j})}{\Delta x}+\frac{v_{i,j+1/2}\frac{1}{2}(p_{i,j+1}+p_{i, j})-v_{i,j-1/2}\frac{1}{2}(p_{i,j}+p_{i,j-1})}{\Delta y}\\ -p_{i,j}\underbrace{\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}+ \frac{v_{i,j+1/2}-v_{i,j-1/2}}{\Delta y}}_{\text{div}(u)_{i,j}}.\] (B.5)
Furthermore, choice (B.3) for the local kinetic energy leads to a consistent quadratic dissipation form in the case of a general stress tensor, as will be shown in B.4.
### Diffusion and dissipation
We continue to investigate the dissipation implied by the diffusive term in the momentum equation (B.1) and the kinetic energy choice (B.3). Restricting ourselves momentarily to the term \(\frac{\partial^{2}u}{\partial x^{2}}\), we are looking for a discrete equivalent of the relation
\[u\frac{\partial^{2}u}{\partial x^{2}}=-\left(\frac{\partial u}{\partial x} \right)^{2}+\frac{\partial}{\partial x}\left(u\frac{\partial u}{\partial x} \right).\] (B.6)
This is given by \(u_{i+1/2,j}\cdot\text{diff}_{i+1/2,j}^{\mu}\):
\[\frac{u_{i+1/2,j}}{\Delta x}\left(\frac{u_{i+3/2,j}-u_{i+1/2,j}}{ \Delta x}-\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)=-\frac{1}{2}\left( \frac{u_{i+3/2,j}-u_{i+1/2,j}}{\Delta x}\right)^{2}-\frac{1}{2}\left(\frac{u_ {i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)^{2}\\ +\frac{1}{\Delta x}\left(\frac{1}{2}(u_{i+3/2,j}+u_{i+1/2,j}) \frac{u_{i+3/2,j}-u_{i+1/2,j}}{\Delta x}-\frac{1}{2}(u_{i+1/2,j}+u_{i-1/2,j}) \frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right).\] (B.7)
Equation (B.7) is important because the discrete local dissipation expression is explicitly needed in the internal energy equation.
Figure B.11: Stencil of velocity and pressure points involved in the local kinetic energy equation.
The analysis for the term \(\frac{\partial^{2}u}{\partial y^{2}}\) is completely analogous, and hence we can define the following discrete function that describes the dissipation implied by the discretized diffusion term of the momentum equation for \(u_{i+1/2,j}\):
\[\Phi^{u}_{i+1/2,j}=-\frac{1}{2}\left(\frac{u_{i+3/2,j}-u_{i+1/2,j}} {\Delta x}\right)^{2}-\frac{1}{2}\left(\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x }\right)^{2}\\ -\frac{1}{2}\left(\frac{u_{i+1/2,j+1}-u_{i+1/2,j}}{\Delta y} \right)^{2}-\frac{1}{2}\left(\frac{u_{i+1/2,j}-u_{i+1/2,j-1}}{\Delta y}\right)^ {2}.\] (B.8)
Similarly, the dissipation implied by the discretized diffusion term of the momentum equation for \(v_{i,j+1/2}\) is:
\[\Phi^{v}_{i,j+1/2}=-\frac{1}{2}\left(\frac{v_{i+1,j+1/2}-v_{i,j+1/ 2}}{\Delta x}\right)^{2}-\frac{1}{2}\left(\frac{v_{i+1,j-1/2}-v_{i,j-1/2}}{ \Delta x}\right)^{2}\\ -\frac{1}{2}\left(\frac{v_{i,j+3/2}-v_{i,j+1/2}}{\Delta y} \right)^{2}-\frac{1}{2}\left(\frac{v_{i,j+1/2}-v_{i,j-1/2}}{\Delta y}\right)^ {2}.\] (B.9)
The entire dissipation term appearing in the kinetic energy equation for \(\frac{\mathrm{d}k_{i,j}}{\mathrm{d}t}\) is then
\[\boxed{\Phi_{i,j}=\frac{1}{2}\Phi^{u}_{i+1/2,j}+\frac{1}{2}\Phi^{u}_{i-1/2,j} +\frac{1}{2}\Phi^{v}_{i,j+1/2}+\frac{1}{2}\Phi^{v}_{i,j-1/2}.}\] (B.10)
### Boundary conditions
The analysis in the previous section ignored the effect of boundary conditions. Upon integrating (B.6) over the domain, we get
\[\int u\frac{\partial^{2}u}{\partial x^{2}}\mathrm{d}x=-\int\left(\frac{ \partial u}{\partial x}\right)^{2}\mathrm{d}x+\underbrace{\left[u\frac{ \partial u}{\partial x}\right]}_{\text{boundary term}},\] (B.11)
and the boundary term vanishes in case of homogeneous Dirichlet, homogeneous Neumann or periodic conditions. The discrete version should mimic this behavior.
Consider the case where the solution on the boundary is given by \(u_{1/2,j}=u_{b,j}\) (figure B.12, left). Then the first unknown for which the momentum equation (B.1) is solved is \(u_{3/2,j}\), and equation (B.7) becomes
\[\frac{u_{3/2,j}}{\Delta x}\left(\frac{u_{5/2,j}-u_{3/2,j}}{ \Delta x}-\frac{u_{3/2,j}-u_{b,j}}{\Delta x}\right)=\frac{1}{\Delta x}\left( \frac{1}{2}(u_{5/2,j}+u_{3/2,j})\frac{u_{5/2,j}-u_{3/2,j}}{\Delta x}-\frac{1} {2}(u_{3/2,j}+u_{b,j})\frac{u_{3/2,j}-u_{b,j}}{\Delta x}\right)\\ -\frac{1}{2}\left(\frac{u_{5/2,j}-u_{3/2,j}}{\Delta x}\right)^{2} -\frac{1}{2}\left(\frac{u_{3/2,j}-u_{b,j}}{\Delta x}\right)^{2}.\] (B.12)
Figure B.12: Staggered grid near vertical (left) and horizontal (right) boundary.
In case where \(u_{b,j}=0\), we want the boundary term to vanish, like the term \(u\frac{\partial u}{\partial x}\) in the continuous case. However, when setting \(u_{b,j}=0\), the term that corresponds to \(u\frac{\partial u}{\partial x}\) reads:
\[-\frac{1}{2}(u_{3/2,j}+u_{b,j})\frac{u_{3/2,j}-u_{b,j}}{\Delta x}=-\frac{1}{2} \frac{u_{3/2,j}^{2}}{\Delta x}, \tag{13}\]
and the discrete boundary contribution does _not_ vanish for \(u_{b,j}=0\). This issue is caused by the fact that the finite volumes do not cover the entire domain, because there is no momentum equation to be solved for \(u_{b,j}\) (as it is given by the boundary data). We resolve this issue by splitting instead as
\[-\frac{u_{3/2,j}}{\Delta x}\left(\frac{u_{3/2,j}-u_{b,j}}{\Delta x}\right)=- \underbrace{\frac{u_{b,j}}{\Delta x}\left(\frac{u_{3/2,j}-u_{b,j}}{\Delta x} \right)}_{\text{boundary contribution}}-\underbrace{\frac{u_{3/2,j}-u_{b,j}}{ \Delta x}\left(\frac{u_{3/2,j}-u_{b,j}}{\Delta x}\right)}_{\text{ dissipation contribution}}, \tag{14}\]
so that the contribution to the dissipation function is
\[-\left(\frac{u_{3/2,j}-u_{b},j}{\Delta x}\right)^{2}, \tag{15}\]
instead of \(-\frac{1}{2}\left(\frac{u_{3/2,j}-u_{b,j}}{\Delta x}\right)^{2}\).
For the discretization of \(\frac{\partial^{2}u}{\partial y^{2}}\), we have a different situation, because the solution points are not aligned with the boundary. The first unknown is \(u_{i+1/2,1}\), which is situated at a distance \(\frac{1}{2}\Delta y\) above the lower boundary. In this case we can write
\[\frac{u_{i+1/2,j}}{\Delta y}\left(\frac{u_{i+1/2,j+1}-u_{i+1/2,j}} {\Delta y}-\frac{u_{i+1/2,j}-u_{i+1/2,j-1}}{\Delta y}\right)\overset{j=1}{=} \frac{u_{i+1/2,1}}{\Delta y}\left(\frac{u_{i+1/2,2}-u_{i+1/2,1}}{\Delta y}- \frac{u_{i+1/2,1}-u_{i+1/2,b}}{\frac{1}{2}\Delta y}\right)\\ =\frac{1}{\Delta y}\left(\frac{1}{2}(u_{i+1/2,2}+u_{i+1/2,1}) \frac{u_{i+1/2,2}-u_{i+1/2,1}}{\Delta y}-u_{i+1/2,b}\frac{u_{i+1/2,2}-u_{i+1/2,1}}{\frac{1}{2}\Delta y}\right)\\ -\frac{1}{2}\left(\frac{u_{i+1/2,2}-u_{i+1/2,1}}{\Delta y}\right) ^{2}-\frac{1}{2}\left(\frac{u_{i+1/2,1}-u_{i+1/2,b}}{\frac{1}{2}\Delta y} \right)^{2}, \tag{16}\]
and we have a correct discrete equivalent of the continuous expression, and no correction to \(\Phi\) is needed.
The analysis for the \(v\)-component follows in a similar fashion. A correction is needed in the expression for \(\Phi\) associated to \(\frac{\partial^{2}v}{\partial x^{2}}\), but not for \(\frac{\partial^{2}v}{\partial y^{2}}\).
### Extension to non-constant viscosity: general stress tensor
For the case of non-constant \(\mu\), the discretization of the diffusion terms in the momentum equation changes to
\[\begin{split}\text{diff}_{i+1/2,j}^{\mu}=&\frac{1}{ \Delta x}\left[\left(2\mu_{i+1,j}\frac{u_{i+3/2,j}-u_{i+1/2,j}}{\Delta x} \right)-\left(2\mu_{i,j}\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right) \right]+\\ &\frac{1}{\Delta y}\left[\left(\mu_{i+1/2,j+1/2}\frac{u_{i+1/2,j+ 1}-u_{i+1/2,j}}{\Delta y}\right)-\left(\mu_{i+1/2,j-1/2}\frac{u_{i+1/2,j}-u_{ i+1/2,j-1}}{\Delta y}\right)\right]+\\ &\frac{1}{\Delta y}\left[\left(\mu_{i+1/2,j+1/2}\frac{v_{i+1,j+ 1/2}-v_{i,j+1/2}}{\Delta x}\right)-\left(\mu_{i+1/2,j-1/2}\frac{v_{i+1,j-1/2}-v _{i,j-1/2}}{\Delta x}\right)\right].\end{split} \tag{17}\]
Importantly, we first show that this form reduces to the expression in equation (B.1) for constant \(\mu\). In the continuous equations, this happens because
\[\frac{\partial}{\partial x}\left(2\frac{\partial u}{\partial x}\right)+\frac{ \partial}{\partial y}\left(\frac{\partial u}{\partial y}+\frac{\partial v}{ \partial x}\right)=\frac{\partial}{\partial x}\left(\frac{\partial u}{ \partial x}\right)+\frac{\partial}{\partial y}\left(\frac{\partial u}{ \partial y}\right)+\frac{\partial}{\partial x}\left(\frac{\partial u}{ \partial x}+\frac{\partial v}{\partial y}\right)=\frac{\partial^{2}u}{ \partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}.\] (B.18)
The derivation hinges on the divergence-freeness of \(\mathbf{u}\) and interchanging of differentiation in \(x-\) and \(y\)-directions. Discretely, the same derivation holds, which can be shown by rewriting as follows:
\[\frac{1}{\Delta x}\left[\left(\frac{u_{i+3/2,j}-u_{i+1/2,j}}{ \Delta x}\right)-\left(\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)\right] +\frac{1}{\Delta y}\left[\left(\frac{u_{i+1/2,j+1}-u_{i+1/2,j}}{\Delta y} \right)-\left(\frac{u_{i+1/2,j}-u_{i+1/2,j-1}}{\Delta y}\right)\right]+\] \[\frac{1}{\Delta x}\left[\left(\frac{u_{i+3/2,j}-u_{i+1/2,j}}{ \Delta x}\right)-\left(\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)+\left( \frac{v_{i+1,j+1/2}-v_{i+1,j-1/2}}{\Delta y}\right)-\left(\frac{v_{i,j+1/2}-v _{i,j-1/2}}{\Delta y}\right)\right],\] (B.19)
and the second line evaluates to zero, as it contains the difference of the divergence associated to volumes \((i+1,j)\) and \((i,j)\).
We continue to derive the dissipation function. As explain in Remark 1 and in A, the dissipation function changes when the generic stress tensor for non-constant \(\mu\) is considered. Multiplying (B.17) with \(u_{i+1/2,j}\) and rewriting leads to
\[\hat{\Phi}^{\mu}_{i+1/2,j}=u_{i+1/2,j}\cdot\mathrm{diff}^{\mu}_{i +1/2,j}=\\ \frac{1}{\Delta x}\left(\mu_{i+1,j}(u_{i+3/2,j}+u_{i+1/2,j})\frac {u_{i+3/2,j}-u_{i+1/2,j}}{\Delta x}-\mu_{i,j}(u_{i+1/2,j}+u_{i-1/2,j})\frac{u_ {i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)\\ -\mu_{i+1,j}\left(\frac{u_{i+3/2,j}-u_{i+1/2,j}}{\Delta x} \right)^{2}-\mu_{i,j}\left(\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)^{2} +\\ \frac{1}{\Delta y}\left(\mu_{i+1/2,j+1/2}\frac{u_{i+1/2,j+1}+u_{i +1/2,j}}{2}\left[\frac{u_{i+1/2,j+1}-u_{i+1/2,j}}{\Delta y}+\frac{v_{i+1,j+1/ 2}-v_{i,j+1/2}}{\Delta x}\right]\right.\\ -\mu_{i+1/2,j-1/2}\frac{u_{i+1/2,j}+u_{i+1/2,j-1}}{2}\left[\frac {u_{i+1/2,j}-u_{i+1/2,j-1}}{\Delta y}+\frac{v_{i+1,j-1/2}-v_{i,j-1/2}}{ \Delta x}\right]\right)\\ -\frac{\mu_{i+1/2,j+1/2}}{2}\left(\frac{u_{i+1/2,j+1}-u_{i+1/2,j }}{\Delta y}\right)\left(\frac{u_{i+1/2,j+1}-u_{i+1/2,j}}{\Delta y}+\frac{v_{i +1,j+1/2}-v_{i,j+1/2}}{\Delta x}\right)\\ -\frac{\mu_{i+1/2,j-1/2}}{2}\left(\frac{u_{i+1/2,j}-u_{i+1/2,j-1} }{\Delta y}\right)\left(\frac{u_{i+1/2,j}-u_{i+1/2,j-1}}{\Delta y}+\frac{v_{i +1,j-1/2}-v_{i,j-1/2}}{\Delta x}\right).\] (B.20)
The last two terms are not in quadratic form yet. The quadratic form results upon considering the full kinetic energy expression (B.3), i.e. adding \(\hat{\Phi}^{\mu}_{i-1/2,j}=u_{i-1/2,j}\cdot\mathrm{diff}^{\mu}_{i-1/2,j}\), \(\hat{\Phi}^{v}_{i,j+1/2}=v_{i,j+1/2}\cdot\mathrm{diff}^{\mu}_{i,j+1/2}\) and \(\hat{\Phi}^{v}_{i,j-1/2}=v_{i,j-1/2}\cdot\mathrm{diff}^{\mu}_{i,j-1/2}\). The full dissipation function then reads
\[\hat{\Phi}_{i,j}=\frac{1}{2}\hat{\Phi}^{\mu}_{i+1/2,j}+\frac{1}{2} \hat{\Phi}^{u}_{i-1/2,j}+\frac{1}{2}\hat{\Phi}^{v}_{i,j+1/2}+\frac{1}{2}\hat{ \Phi}^{v}_{i,j-1/2}=\\ -\frac{\mu_{i+1,j}}{2}\left(\frac{u_{i+3/2,j}-u_{i+1/2,j}}{\Delta x }\right)^{2}-\mu_{i,j}\left(\frac{u_{i+1/2,j}-u_{i-1/2,j}}{\Delta x}\right)^{2 }-\frac{\mu_{i-1,j}}{2}\left(\frac{u_{i-1/2,j}-u_{i-3/2,j}}{\Delta x}\right)^{2} \\ -\frac{\mu_{i+1/2,j+1/2}}{4}\left(\frac{u_{i+1/2,j+1}-u_{i+1/2,j}} {\Delta y}+\frac{v_{i+1,j+1/2}-v_{i,j+1/2}}{\Delta x}\right)^{2}-\frac{\mu_{i+1 /2,j-1/2}}{4}\left(\frac{u_{i+1/2,j}-u_{i+1/2,j-1}}{\Delta y}+\frac{v_{i+1,j-1/ 2}-v_{i,j-1/2}}{\Delta x}\right)^{2}\\ -\frac{\mu_{i-1/2,j+1/2}}{4}\left(\frac{u_{i-1/2,j+1}-u_{i-1/2,j} }{\Delta y}+\frac{v_{i,j+1/2}-v_{i-1,j+1/2}}{\Delta x}\right)^{2}-\frac{\mu_{i -1/2,j-1/2}}{4}\left(\frac{u_{i-1/2,j}-u_{i-1/2,j-1}}{\Delta y}+\frac{v_{i,j-1 /2}-v_{i-1,j-1/2}}{\Delta x}\right)^{2}\\ -\frac{\mu_{i,j+1}}{2}\left(\frac{v_{i,j+3/2}-v_{i,j+1/2}}{\Delta y }\right)^{2}-\mu_{i,j}\left(\frac{v_{i,j+1/2}-v_{i,j-1/2}}{\Delta y}\right)^{2}- \frac{\mu_{i,j-1}}{2}\left(\frac{v_{i,j-1/2}-v_{i,j-3/2}}{\Delta y}\right)^{2}.\] (B.21) |
2308.09578 | An AI-Driven VM Threat Prediction Model for Multi-Risks Analysis-Based
Cloud Cybersecurity | Cloud virtualization technology, ingrained with physical resource sharing,
prompts cybersecurity threats on users' virtual machines (VM)s due to the
presence of inevitable vulnerabilities on the offsite servers. Contrary to the
existing works which concentrated on reducing resource sharing and encryption
and decryption of data before transfer for improving cybersecurity which raises
computational cost overhead, the proposed model operates diversely for
efficiently serving the same purpose. This paper proposes a novel Multiple
Risks Analysis based VM Threat Prediction Model (MR-TPM) to secure
computational data and minimize adversary breaches by proactively estimating
the VMs threats. It considers multiple cybersecurity risk factors associated
with the configuration and management of VMs, along with analysis of users'
behaviour. All these threat factors are quantified for the generation of
respective risk score values and fed as input into a machine learning based
classifier to estimate the probability of threat for each VM. The performance
of MR-TPM is evaluated using benchmark Google Cluster and OpenNebula VM threat
traces. The experimental results demonstrate that the proposed model
efficiently computes the cybersecurity risks and learns the VM threat patterns
from historical and live data samples. The deployment of MR-TPM with existing
VM allocation policies reduces cybersecurity threats up to 88.9%. | Deepika Saxena, Ishu Gupta, Rishabh Gupta, Ashutosh Kumar Singh, Xiaoqing Wen | 2023-08-18T14:18:45Z | http://arxiv.org/abs/2308.09578v1 | # An AI-Driven VM Threat Prediction Model for Multi-Risks Analysis-Based Cloud Cybersecurity
###### Abstract
Cloud virtualization technology, ingrained with physical resource sharing, prompts cybersecurity threats on users' virtual machines (VMs) due to the presence of inevitable vulnerabilities on the offsite servers. Contrary to the existing works which concentrated on reducing resource sharing and encryption/decryption of data before transfer for improving cybersecurity which raises computational cost overhead, the proposed model operates diversely for efficiently serving the same purpose. This paper proposes a novel Multiple Risks Analysis based VM Threat Prediction Model (MR-TPM) to secure computational data and minimize adversary breaches by proactively estimating the VMs threats. It considers multiple cybersecurity risk factors associated with the configuration and management of VMs, along with analysis of users' behaviour. All these threat factors are quantified for the generation of respective risk score values and fed as input into a machine learning based classifier to estimate the probability of threat for each VM. The performance of MR-TPM is evaluated using benchmark Google Cluster and OpenNebula VM threat traces. The experimental results demonstrate that the proposed model efficiently computes the cybersecurity risks and learns the VM threat patterns from historical and live data samples. The deployment of MR-TPM with existing VM allocation policies reduces cybersecurity threats up to 88.9%.
Hypervisor vulnerability, Network-cascading, Risk analysis, Side-channel, Unauthorized data access.
## 1 Introduction
CybercIMES are gobbling up the utility of the cloud services for the beneficiaries, including Cloud Service Providers (CSP)s as well as the end users. According to the estimation of Norton Security, in 2023, cybercriminals will be breaching 33 billion records per year [1]. Also, it has been reported that the misconfiguration and mismanagement associated with the virtualization technology at the cloud platform are the topmost causes of leakage of terabytes of sensitive data of millions of cloud users across the world [2]. Though the CSPs employ sharing of physical resources among multiple users in the view of maximizing the revenues [3, 4, 5, 6, 7] the discrepancies and unpatched susceptibilities developed during virtualization, produce misconfigured VMs and hypervisors, expediting the occurrence of cyberattacks. A malicious user may initiate a number of VMs and exploit the misconfigured or vulnerable VMs in multiple ways to impose a threat on a target VM [8, 9]. Moreover, the vulnerability of hypervisor due to misconfigured virtualization, devastates the cybersecurity by acquiescing all the coresident VMs to be compromised effortlessly [10]. The mismanagement during physical resource distribution yields co-residency of vulnerable VMs and malicious user VM appealing security threats such as leakage of user's confidential data, hampering of data, unauthorized access via insecure interfaces, hijacking of accounts, etc. [11, 12, 13, 14, 15]. Therefore, the key challenge for the CSP is: How to minimize the cybersecurity threats due to misconfiguration and mismanagement of shared resources on a cloud platform?
### _Related Work_
The considerable works presented for preserving cybersecurity of computational data via VM allocation have focused on both _defensive strategies_ as well as _preventive strategies_. The defensive strategies include minimization of resource sharing by reducing the number of users per server [16, 11], raising the difficulties for achieving co-residency [17, 18], and eliminating side-channel based cyberthreats [19]. While some other researchers have provided _preventive strategies_ merely by periodic migration of VMs [20, 21]. Levitin et al. [22] have presented a method to resist co-residence data theft attacks and improve service reliability by incorporating threshold voting-based N-version programming (NVP). Wu et al. [23] presented a secure and efficient outsourced K-means clustering (SEOKC) scheme for cloud data protection by applying a fully homomorphic encryption with the ciphertext packing technique to attain parallel computation without any excess cost. This scheme preserves data privacy by furnishing database security, privacy of clustering results, and hidden data access patterns. Zhang et al. [24] presented a double-blind anonymous evaluation-based trust model which allows suitable matching between anonymous users and service providers and employed node checking to detect malicious behaviour. A Previously-Selected-Servers-First (PSSF) policy was proposed in [11] for minimization of exposure of benign VMs to malicious ones. Every server maintained a record of a list of users whose VMs were ever hosted on it. The previously assigned servers that have ever hosted VMs of an old user are considered first for allocation of their new VMs. If no such server exists, then a server with more resource capacity among
the remaining servers, is considered for VM placement. Miao et al. [25] improved PSSF by adding a rule that a new VM should be co-located with the user VM to whom it is already co-resident. A hierarchical correlation model for analyzing and estimating reliability, performance, and power consumption of a cloud service is proposed in [26] to locate common causes of co-located multiple VM failures sharing multicore CPUs.
SEA-LB [16] allocates VMs considering minimum power consumption and side-channel attacks with maximum resource utilization by applying modified genetic algorithm approach. The security is provided by minimizing the number of shared servers at the cost of resource utilization. Saxena et al. [15] presented a security embedded resource allocation (SEDRA) model in which the performance of network traffic and inter-VM links are considered to detect and mitigate VM threats by utilizing a random tree classifier. Han et al. [17] proposed a two-player game-based defence mechanism against side-channel attack, where the potential differences between the attackers' and legal users' behaviour were examined by using clustering and semi-supervised learning techniques for the classification of users. As a result, the attacker's efficiency of achieving co-residency with a target VM raised drastically, thus denying an attack on computational data executing within a VM. A data security risk analysis based VM placement is discussed in [27], where a secure and multi-objective VM allocation is formulated and solved by applying an evolutionary optimization. A Vickrey Clarke-Groves bidding mechanism based defence system was presented in [28] to maximise the difficulty for the adversary to locate the target VM.
### _Our Contributions_
In the light of the aforementioned approaches, it is revealed that rigorous control over VM-centered cybercrimes is still in the infancy stage which marks the need to proactively estimate the intensity of VM threats in real-time. Since, machine learning algorithms are capable of extracting and learning useful patterns from known malicious activities rapidly by profiling devices such as VMs and servers, and understanding regular activities, it can intelligently identify previously unknown forms of malware and help protect VMs from potential attacks. Owing to the effective machine-learning capabilities of Extreme Gradient Boosting (XGB) approach including handling missing values, parallelization, distributed computing, and cache optimization, we have devised an XGB inspired VM threat prediction model. Correspondingly, a **M**ultiple **R**isks Analysis based VM **T**hreat **P**rediction **M**odel (MR-TPM) is proposed that predicts cyberthreats associated with VMs misconfiguration and their insecure allocation at the cloud platform. To the best of the authors' knowledge, such a proactive VM threat prediction model by considering multiple security risk factors for alleviation of cyberthreats, is presented for the first time. The key contributions are fourfold:
* A novel concept of multiple risks analysis based cybersecurity pertaining to VMs, is proposed to maximize the security of computational data executing on VMs. Also, the ill-effects of misconfiguration and insecure VM management are minimized by considering the intended multiple risk factors.
* Quantification and assessment of all the considered security threat factors for the periodic training of the newly developed artificial intelligence (AI) driven VM threat prediction model is introduced.
* Implementation and evaluation of the proposed model using real VM threat traces reveals that MR-TPM predicts threats with precise accuracy and helps to mitigate them before the occurrence.
* Deployment of the proposed model with existing VM placement policies demonstrates its compatability and applicability in improving the security of user data during execution by exploiting and analysing multiple VM risks for threat prediction. Additionally, its workload prediction component helps to optimize resource utilization, power consumption substantially by minimizing the number of active servers.
A bird eye view of the proposed model is shown in Fig. 1, where multiple types of VM security attack factors (\(\{R_{1},R_{2},...,R_{n}\}\in R\)) are gathered, quantified, and analysed to periodically train a machine learning based VM threat estimator for accurate prediction of future threats on VMs.
_Organization_: The paper is structured as follows: Section II discusses the problem formulation. A detailed elaboration of proposed MR-TPM is conferred in Section III. The multiple cyberthreat factors associated with VMs, including user behaviour analysis, configuration-dependent factors, and allocation-dependent factors, are entailed in Section IV, Section V, and Section VI, respectively. The operational design and complexity of MR-TPM is presented in Section VII. The performance evaluation followed by conclusive remarks and the future scope of the proposed work are presented in Section VIII and Section IX, respectively. Table I shows the list of symbols with explanatory terms used throughout the paper.
## 2 Problem Formulation
A cloud datacenter environment is considered where multiple users requests for execution of their workloads or appli
Fig. 1: A bird eye view of MR-TPM
cations. The users can be categorised into legitimate (normal) and malicious (threat-imposing) users. During workload execution, the inter-dependent VMs need to communicate and exchange information to complete the application execution. However, some malicious user VMs may intrude this operation and seek for the security loopholes to exploit various opportunities for launching successful threats to legitimate users' VMs for stealing sensitive information via an unauthorized access. The security of VMs is compromised by exploiting either configuration discrepencies of VMs and associated host servers or insecure allocation and mismanagement of VMs. Accordingly, a problem configuring research assumptions and design goals is formulated in the following subsections.
### _Assumptions_
The assumptions addressing conditions for VM threats and the capabilities of malicious user VMs during workload distribution and execution are as follows:
* Only CSP decides mapping between VMs and servers, and it may or may not have the knowledge of legitimate and malicious VMs.
* Each active VM belongs to one user only. However, the user can have multiple number of VMs over time.
* Malicious user may run one or multiple VMs to exploit means of security escape for imposing a threat on target VM(s). The VM threats can be executed in three ways: one-to-one (one specific malicious VM attacks one target VM), one-to-many (one specific malicious VM attacks multiple target VMs in networking), and many-to-many (group of malicious VMs attack many target VMs).
* VM(s) are migrated either to handle over/under-load on the source server or to protect them from malicious activity only. Otherwise, the VM is assumed to run on the same server until the user terminates it.
### _Problem statement and Design Goals_
Specifically, the problem is to develop a VM threat prediction model which is trained with data samples considering \(n\) probable risk factors addressing security loopholes that estimates VM(s) security threats proactively to improve cybersecurity during cloud workload processing. Based on the aforementioned problem assumptions and statement, the design goals of the proposed model are as follows:
* To develop a machine learning-driven model that will determine VM threats prior to occurrence in real-time. This model must not effect the efficiency of VM management and it must be adaptable and compatable for operation with any VM allocation policies.
* To generate a knowledge database for training of the corresponding VM threat predictor by identifying and computing risk score values for all the probable security factors associated with VM(s).
* To accurately detect security threats on legitimate VM(s) due to presence of malicious VM and vulnerabilities of VM(s) configuration and management.
## 3 Proposed VM Threat Prediction Model
Consider a cluster of \(P\) servers \(\{S_{1},S_{2},...,S_{P}\}\in\) S hosts \(Q\) VMs \(\{V_{1},V_{2},...,V_{Q}\}\in\) V of \(M\) users \(\{U_{1},U_{2},...,U_{M}\}\in\) U. Let \(S_{1}\) hosts \(x\) VMs such that \(\{V_{1}^{1},V_{2}^{1},...,V_{x}^{1}\}\in\) V\({}^{1}\); \(S_{2}\) and \(S_{P}\) host \(y\) VMs \(\{V_{1}^{2},V_{2}^{2},...,V_{y}^{2}\}\in\) V\({}^{2}\) and \(z\) VMs \(\{V_{1}^{P},V_{2}^{P},...,V_{x}^{P}\}\in\) V\({}^{\text{P}}\), respectively, where \(\{\text{V}^{1},\text{V}^{2},...,\text{V}^{P}\}\in\) V and \(\{x\cup y\cup z\}\subseteq Q\). A mapping \(\omega|\omega:\)U\(\times\) V \(\mapsto\) S assigns VMs of each user on a specific server such as \(\omega_{ji}^{k}=1\) iff \(j^{th}\) VM of \(k^{th}\) user is deployed on \(i^{th}\) server. The comprehensive description of the essential blocks and intrinsic information flow of MR-TPM is depicted in Fig 2. The proposed cyberthreat prediction model records and analyses multiple security risk factors \(\{R_{1},R_{2},R_{3},R_{4},R_{5}\}\in R\) associated with a VM configuration, such as VM vulnerability \(\{L_{1},L_{2},...,L_{Q}\}\), server Hypervisor vulnerability \(\{H_{1},H_{2},...,H_{P}\}\); and VM allocation, including Side-channel effect \(\{C_{1},C_{2},...,C_{Q}\}\) and Network cascading effect \(\{N_{1},N_{2},...,N_{Q}\}\); User behaviour \(\{U_{1}^{*},U_{2}^{*},...,U_{M}^{*}\}\); and previous records of VM threats \(\{\Xi_{1},\Xi_{2},...,\Xi_{n}\}\). During time-interval \(\{t_{a},t_{b}\}\in t\), all the security risk factors and threat information are collected and categorized into four classes:
* User behaviour analysis \(\{U_{1}^{*},U_{2}^{*},...,U_{M}^{*}\}\) (Section 4)
* VM Configuration-dependent factors for computation of the scores of VM vulnerability \(\{L_{1},L_{2},...,L_{Q}\}\) and server hypervisor vulnerability scores \(\{H_{1},H_{2},...,H_{P}\}\) (Section 5)
* VM Allocation-dependent factors for assessment of side-channel effects \(\{C_{1},C_{2},...,C_{Q}\}\) and network cascading effects \(\{N_{1},N_{2},...,N_{Q}\}\) (Section 6)
* Records of live threats or malicious actions on VMs for updation of VM threat database
**Definition 1**.: _VM cyberthreat prediction: The mechanism intended for computation and analysis of various security escapes and unpatched discrepancies associated with a VM along with proactive threat estimation, is designated as VM cyberthreat prediction._
MR-TPM proactively estimates the workload information \(\{W_{1}^{P},W_{2}^{P},...,W_{Q}^{P}\in\) W\(\}\) by utilizing a neural network based workload predictor (\(Pr\)), which is periodically trained with the latest and historic resource utilization \(\{RU_{1},RU_{2},...,RU_{Q}\}\) by VMs \(\{V_{1},V_{2},...,V_{Q}\}\) to determine active VMs \(\{\hat{V_{1}},\hat{V_{2}},...,\hat{V_{Q^{*}}},\)\(Q^{*}\subseteq Q\}\) having predicted workload (\(W^{P}\)) \(>\) 0. The prior knowledge of active VMs is utilized for analysis of the placement of VMs during the next
\begin{table}
\begin{tabular}{|l|} \hline \(S\): server, \(V\): VM, \(U\): user, \(P\): number of servers, \(Q\) : number of VMs, \(M\): number of users, \(\omega\): mapping among server, VM, user \(R\): security risk factor, \(L\):VM vulnerability, \(H\): Hypervisor vulnerability \(S\):\(Hyp_{\omega}\): Server’s hypervisor own vulnerability, \(\Xi\): Threat, \(C\): Co-residency effects, \(N\): Network cascading effects, \(Wr\): predicted workload, F: features used for prediction, \(\Xi\): Base Learning as XGB, G: unauthorized access, \(Rq\): job request, H\({}^{\text{f}}\): record of malicious actions, \(P(\Xi)\): probability of threat \(RU\): resource utilization, \(PW\): power consumption, \(M\): migration cost CC: status of VM after migration, G:Status of server after migration \(\text{D}(S_{k},S_{j})\): distance between servers \(S_{k}\) and \(S_{j}\) \\ \hline \end{tabular}
\end{table} TABLE I: Notations with explanatory terms
time interval \(\{t_{a+1},t_{b+1}\}\in t\). The consecutive processes of feature selection (FS) and threat prediction (TP) is performed for active VMs based on the predicted workload information \(\{W_{1}^{p},W_{2}^{p},...,W_{Q}^{p}\}\) for VMs \(\{V_{1},V_{2},...,V_{Q}\}\) of users \(\{U_{1},U_{2},...,U_{M}\}\) during time-interval \(\{t_{a+1},t_{b+1}\}\). The historical database of VM threats is utilized for feature selection, followed by training of online VM threat predictor \(\mathtt{TP}\), which is periodically re-trained with the latest data samples for online VM threat prediction \(\{\Xi_{1}^{p},\Xi_{2}^{p},...,\Xi_{Q_{\ast}}^{p}\}\). Among all the collected and analysed features \(\{L,H,C,N,\mathrm{V,S,U,U^{\ast}},W^{p},\omega\) etc.\(\}\subseteq\mathrm{F}\), only useful features are filtered (i.e., \(\mathrm{F}^{\ast}\)) by applying Recursive Feature Elimination (RFE) to train threat predictor \(\mathtt{TP}\) to estimate VM threats \(\{\Xi_{1}^{p},\Xi_{2}^{p},...,\Xi_{Q_{\ast}}^{p}\}\) with improved accuracy and reduced computation time. The data samples containing selected features \(\{\mathrm{F}^{\ast}_{1},\Xi_{2}^{\ast},...,\mathrm{F}_{s}^{\ast}\}\in\mathrm{F}^ {\ast}\) are split into training samples \(\{\bar{\mathrm{F}}_{1}^{\ast\ast},\bar{\mathrm{F}}_{2}^{\ast\ast},...,\bar{ \mathrm{F}}_{s}^{\ast\ast}\}\in\bar{\mathrm{F}}^{\ast\ast}\) and testing samples \(\{\mathrm{F}_{1}^{\ast\ast},\bar{\mathrm{F}}_{2}^{\ast\ast},...,\mathrm{F}_{s }^{\ast\ast}\}\in\mathrm{F}^{\ast\ast}\) subject to the constraints: (_i_) \(\mathrm{F}^{\ast}=\bar{\mathrm{F}}^{\ast\ast}\cup\mathrm{F}^{\ast\ast}\) (_ii_) \(\bar{\mathrm{F}}^{\ast\ast}\cap\mathrm{F}^{\ast\ast}=\emptyset\) (_iii_) \(\{s^{\ast},s^{\ast\ast}\}\subseteq s\) where \(s\) is total number of data samples. A mapping \(\{\Omega|\Omega:\bar{\mathrm{F}}^{\ast\ast\ast}\times\mathtt{TP}\Rightarrow \mathtt{TP}^{\ast}\}\) trains threat predictor \(\mathtt{TP}\) with \(\bar{\mathrm{F}}^{\ast\ast}\) to generate Trained Predictor (\(\mathtt{TP}^{\ast}\)) during training phase while \(\{\Omega^{\ast}|\Omega^{\ast}:\bar{\mathrm{F}}^{\ast\ast}\times\mathtt{TP}^{ \ast}\Rightarrow\mathtt{TP}^{\ast\ast}\}\) evaluates \(\mathtt{TP}^{\ast}\) with unseen test data \(\mathrm{F}^{\ast\ast}\) to generate Tested Predictor (\(\mathtt{TP}^{\ast\ast}\)) for online VM threat prediction.
The proposed VM threat predictor utilizes an Extreme-Gradient Boosting (XGB) based machine learning algorithm to learn and develop the precise correlations among extracted patterns for accurate prediction of cyberthreats: {\(\Xi_{1}^{p}\), \(\Xi_{2}^{p}\),..., \(\Xi_{Q_{\ast}}^{p}\)}. Let a threat predictor (TP) is composed of \(l\) base learners (i.e., decision trees) \(\mathrm{BL}^{\ast}=\{\mathrm{BL}_{1}^{\ast},\mathrm{BL}_{2}^{\ast},...,\mathrm{ BL}_{l}^{\ast}\}\) and predicts output \(\mathsf{O}^{\ast}=\{\mathsf{O}_{1}^{\ast},\mathsf{O}_{2}^{\ast},...,\mathsf{O}_ {l}^{\ast}\}\) using Eq. (1), where \(\mathrm{F}_{i}\subseteq\mathrm{F}^{\ast}\) such that \(\mathrm{F}\) represents the input vector of size \(s^{\ast}\). During each iteration, decision trees are trained incrementally to reduce prediction errors and the amount of error reduction is computed as _gain_ or _loss term_ using Eq. (2). The expressions \(L(\mathsf{O}^{\ast},\mathsf{O}^{\ast\ast}_{1-1}+\mathrm{BL}_{4}^{\ast}(\mathrm{ F}_{i}))\) and \(\Psi(\mathrm{BL}_{4}^{\ast})\)
Fig. 2: Multiple Risks Analysis based VM Threat Prediction Model (MR-TPM)
are loss term and a regularization term, respectively. Taylor expansion is applied to compute the exact loss for different possible base learners, which updates Eq. (2) to Eq. (3); where \(g_{i}=\partial_{0^{**}_{t-1}}L(\mathsf{O}^{*},\mathsf{O}^{**}_{t-1})\), and \(h_{i}=\partial^{2}_{0^{**}_{t-1}}L(\mathsf{O}^{*},\mathsf{O}^{**}_{t-1})\) are first and second order derivatives of loss function in the gradient, respectively. The term \(\Psi(\mathrm{BL}^{*}_{t})\) is computed using Eq. (4), where \(\gamma\) and \(\lambda\) are \(L_{1}\) and \(L_{2}\) regularisation coefficients, respectively, \(w\) is internal split tree weight and \(K\) is the number of leaves in the tree.
\[\mathsf{O}^{*}=\sum_{z=1}^{l}\mathrm{BL}^{*}_{z}(\mathrm{F}_{i}) \quad\forall i\in\{1,2,...,s^{*}\} \tag{1}\] \[L_{t}=\sum_{i=1}^{s^{*}}L(\mathsf{O}^{*},\mathsf{O}^{**}_{t-1}+ \mathrm{BL}^{*}_{t}(\mathrm{F}_{i}))+\Psi(\mathrm{BL}^{*}_{t})\] (2) \[L_{t}=\sum_{i=1}^{s^{*}}[g_{i}\mathrm{BL}^{*}_{t}(\mathrm{F}_{i })+\frac{1}{2}h_{i}\mathrm{BL}^{*}_{t}(\mathrm{F}_{i})]+\Psi(\mathrm{BL}^{*}_{ t})\] (3) \[\Psi(\mathrm{BL}^{*}_{t})=\gamma K+\frac{1}{2}\lambda||w||^{2} \tag{4}\]
During each time-interval \(\{t_{a},t_{b}\}\in t,a<b\), live selected features \(\tilde{\mathsf{F}}^{**}\) are given as input to the above discussed threat predictor \(\mathrm{T}\mathrm{P}^{**}\) to estimate the status of threat \(\Xi\) for VMs \(\{\tilde{V}_{1},\tilde{V}_{2},...,\tilde{V}^{*}_{Q}\}\) in the next time-interval \(\{t_{a+1},t_{b+1}\}\in t,a<b\). Accordingly, the process of VM-threat handling is performed for the VMs with predicted threat-status (\(\tilde{V}_{i}^{\Xi}>0:i\in[1,2,...Q^{**}],Q^{**}\subseteq Q^{*}\subseteq Q\)) by shifting them to a server where the possibility of threat is least (\(\tilde{V}_{i}^{\Xi}=0\)). A detailed description of VM security risk factors is provided in the subsequent sections.
## 4 User behaviour analysis
Users \(\{U_{1},U_{2},...,U_{M}\}\) submit job requests \(\{Rq_{1},Rq_{2},...,Rq_{M}\}\) during time-interval \(\{t_{a},t_{b}\}\) at the cloud platform as depicted in Fig. 3, where the users are classified into Trusted, Non-trusted and Unknown users. The \(k^{th}\) user \(U_{k}\) behaviour is defined in accordance with the actions of its VMs as follows: _Trusted_: The user behaviour is trusted when the VMs of known user \(U_{k}\) (having historical records of VM resource usage), execute assigned load efficiently without interrupting and interfering with other co-located VMs via an unauthorised access, irrespective of the presence of any vulnerabilities of software or hardware. _Non-trusted_: A user \(U_{k}\) is non-trusted in case of the users VM attempt any kind of cybercrime or malicious activity such as unauthorized data access, data hijacking, data phishing, etc. by leveraging the susceptibilities of cloud virtualization technology. _Unknown_: The new user for which there are no records of any previous VM usage, is considered as unknown user. User behaviour analysis deals with the process of critical monitoring, recording, and examining the traces of their previous VM usage and the inter-relationships among co-resident VMs of different users periodically to interpret or investigate the occurrence of cyberthreats in the presence of intended vulnerabilities of cloud environments. The class of user and the selected VM placement policy are passed to the load balancer, which makes VM management decisions. Accordingly, the VMs are deployed on different servers to compute the users' data \(\{Rq_{1},Rq_{2},...,Rq_{M}\}\). Concurrently, the VM usage traces or type of data access information is collected and passed to 'VM usage database' for examination of the user behaviour.
**Definition 2**.: _VM usage database_ (\(\mathrm{DB}\))_: The historical repository of data values concerning VM usage related attributes such as its ephemeral user ID, CPU, memory, and bandwidth usage, inter-communication links with other VMs, types of authorized access, etc., constitute VM usage database which is utilized for multiple risks computation, training of resource usage predictor, and VM threat predictor._
The new VM of \(k^{th}\) user (\(U_{k}\)) is allocated according to the analysis of \(U_{k}\) behaviour by applying Eq. (5), where \(\Theta_{k}\) represents malicious actions for e.g., unauthorized access executed by \(U_{k}\).
\[U_{k}=\begin{cases}Trusted(0),&If(\Theta_{k}=0)\\ Non-trusted(1),&If(\Theta_{k}>0)\\ Unknown(-1),&\text{otherwise}\end{cases} \tag{5}\]
**Theorem 1**.: _The behaviour of user \(U_{k}^{*}\) having VM \(V_{i}^{k^{*}}\) hosted on server \(S_{j}\) is bounded by \(\Theta\) such that for a given time-period {\(t_{a}\), \(t_{b}\)} and VM usage database (\(\mathrm{DB}\neq\phi\)), if \(\Theta_{k}^{*}=1\), \(U_{k}^{*}\) is non-trusted; otherwise, it is trusted._
Proof.: Let \(\Theta_{ij,k=i^{*},j,k^{*}}\) represents a data access by a user \(U_{k}^{*}\) owning VM \(V_{i^{*}}^{k^{*}}\) to \(k^{th}\) user \(U_{k}\) VM \(V_{i}^{k}\) during time {\(t_{a}\), \(t_{b}\)}, is formulated in Eq. (6):
\[\int_{t_{a}}^{t_{b}}\Theta_{ij,k\Rightarrow i^{*}j,k^{*}}dt=\int_{t_{a}}^{t_{b} }(\omega_{ij}^{k}\times\omega_{i^{*}j}^{k^{*}})\times\uplus_{ij,k\Rightarrow i ^{*}j,k^{*}}^{\eta}dt \tag{6}\]
where, \(\uplus_{ij,k\Rightarrow i^{*}j,k^{*}}^{\eta}\) represents inter-VM relationship between \(V_{i}^{k}\) and \(V_{i^{*}}^{k^{*}},\forall\{i,i^{*}\}\in Q,j\in P\). It is a Boolean value, \(1\) for unauthorised data access (i.e., non-trusty relation) and \(0\) for trusty relation. Assume LA (stated in Eq. (7)) specifies set of authorized inter-VM links for \(i^{th}\) VM of \({k^{*}}^{th}\) user. The inter-VM relationship (\(\uplus_{ij,k\rightarrow i^{*}j,k^{*}}^{\eta}\)) between \(V_{i}^{k}\) and \(V_{i^{*}}^{k^{*}}\) placed on \(j^{th}\) server is evaluated in Eq. (8) which corresponds
Fig. 3: User classification
to the inter-VM links \(\mathds{L}\mathds{A}_{ij,k\Rightarrow i^{*}j,k^{*}}\) between them.
\[\mathds{L}\mathds{A}^{V_{i,k^{*}}}\in\{\mathds{L}\mathds{A}_{1}^{V_{i,k^{*}}}, \mathds{L}\mathds{A}_{n}^{V_{i,k^{*}}},...,\mathds{L}\mathds{A}_{n}^{V_{i,k^{*} }}\} \tag{7}\] \[\uplus_{ij,k\to i^{*}j,k^{*}}^{v^{*}}=\begin{cases}1,&If\quad \mathds{L}\mathds{A}_{ij,k\to i^{*}j,k^{*}}\not\subset\mathds{L}\mathds{A}^{V_ {i,k}}\\ 0,&\text{otherwise}\end{cases} \tag{8}\]
Hence, when the user \(U_{k}^{*}\) has attempted an unauthorised access, the inter-VM relationship parameter \(\uplus_{ij,k\to i^{*}j,k^{*}}^{v^{*}}\) is equal to \(1\) and applying Eq. (8) in Eq. (6), \(\Theta_{k}^{*}=1\) for \(U_{k}^{*}\) is non-trusted, and trusted, otherwise.
**Corollary 1**.: _The user \(U_{k}^{*}\) behaviour is also reflected by the relationship \(\uplus_{ij,k\to S_{j}}^{S}\) between user \(U_{k}^{*}\) and server \(S_{j}\) which is 'non-trusty' for malicious records (\(\mathds{H}^{\ddagger}{}_{j}\)) greater than \(0\), otherwise, it is trusty._
Proof.: Let an unauthorized data access \(\Theta_{ij,k^{*}\Rightarrow S_{j}}\) from \(i^{th}\) VM of \(k^{*th}\) user to server \(S_{j}\) during time \(\{t_{a},t_{b}\}\) is formulated in Eq. (9). The term \(\uplus_{ij,k^{*}}^{S}=\{0,1\}\) signifies a relationship between \(S_{j}\) and \(U_{k}^{*}\), such that it is equals to a Boolean value, \(1\) for an unauthorized data access via malicious hypervisor, and \(0\) otherwise.
\[\int_{t_{a}}^{t_{b}}\Theta_{ij,k\Rightarrow S_{j}}dt=\int_{t_{a}}^{t_{b}} \omega_{ij,k}\times\uplus_{ij,k\to S_{j}}^{S}dt\quad\forall\{i\}\in Q,j\in P \tag{9}\]
Suppose the relation (\(\uplus_{ij,k\to S_{j}}^{S}\)) between user \(U_{k}\) and server \(S_{j}\) is analysed using Eq. (10), where \(\mathds{H}^{\ddagger}{}_{j}\) represents malicious actions records computed using Eq. (11).
\[\uplus_{ij,k\to S_{j}}^{S}=\begin{cases}1,&If(\mathds{H}^{\ddagger}{}_{j}>0 \wedge H_{j}>H_{Thr})\\ 0,&\text{otherwise}\end{cases} \tag{10}\]
\[\mathds{H}^{\ddagger}{}_{j}=\sum\omega_{ij}^{k}\times\uplus_{i^{*}j}^{k^{*}} \times\Theta_{ij,k\Rightarrow i^{*}j,k^{*}} \tag{11}\]
If user \(U^{k^{*}}\) is non-trusty, then \(\Theta_{ij,k\Rightarrow i^{*}j,k^{*}}=1\) (as proved in Theorem 1). Accordingly, the value of the term \(\mathds{H}^{\ddagger}{}_{j}\) is also \(1\). Putting \(\mathds{H}^{\ddagger}{}_{j}\)= 1 in Eq. (10) when \(H_{j}>H_{Thr}\), the value of \(\uplus_{ij,k\to S_{j}}^{S}\) becomes 1. Hence, \(\mathds{H}^{\ddagger}{}_{j}>0\) for a non-trusty behaviour of user \(U^{k^{*}}\).
Further, the total threat information or unauthorized data access \(\Theta_{k}\) for the duration \(\{t_{a}\), \(t_{b}\}\) by \(U^{k}\) is compiled by applying Eq. (12):
\[\int_{t_{a}}^{t_{b}}\Theta_{k}dt=\int_{t_{a}}^{t_{b}}(\Theta_{ij,k\Rightarrow i ^{*}j,k^{*}}+\Theta_{ij,k\Rightarrow S_{j}})dt \tag{12}\]
The Random Forest Classifier (RFC) classifies users \(U_{1},U_{2},...,U_{M}\) on the basis of their future behaviour by utilizing the learning capability of different base learners or decision trees and knowledge driven via extracted correlated patterns from their historical information, where Eq. (5) is evaluated periodically for duration \(\{t_{a},t_{b}\}\in t\). RFC is composed of \(n^{*}\) base learner estimators that produce \(n^{*}\) outcomes and apply majority voting to predict absolute behaviour of user \(U_{k}\).
## 5 Configuration-dependent factors
The vulnerabilities of virtualisation technology and VM security loopholes which are governed by the susceptibilities related to the creation and installation of VMs, including sharing of a common physical machine, hypervisor or guest OS installation, are confined to configuration-dependent risks. MR-TPM considers two configuration dependent security risk factors (\(R_{2},R_{3}\)), including VM vulnerability (\(L\)) [29] and Hypervisor vulnerability (\(H\)) [30]. A malicious user (\(U^{Mal}\) : \(U^{Mal}\subseteq\mathds{U}\)) launches multiple applications (\(A_{p}\), \(A_{q}\),..., \(A_{t}\)) to compromise the target benign VM (\(V^{Ben}:V^{Ben}\subseteq\mathds{V}\)) by achieving co-residency and exploiting VM and hypervisor vulnerabilities, as shown in Fig. 4. The application \(A_{p}\) of \(U^{Mal}\) exploits the hypervisor vulnerability of server \(S_{1}\) (i.e., \(H_{1}>H_{Thr}\)) and compromises multiple VMs. At server \(S_{p}\), the applications \(A_{s}\) and \(A_{t}\) of \(U^{Mal}\) utilize the vulnerability of VM \(V_{2}\) (i.e., \(L_{2}>L_{Thr}\)) to launch the attack and hamper computational data on it. The parameters \(H_{Thr}\) and \(L_{Thr}\) specify threshold values of hypervisor vulnerability and VM vulnerability, respectively. At server \(S_{2}\), both kinds of vulnerabilities are absent, i.e., the threshold values of VM vulnerability as well as hypervisor vulnerability are lesser than their respective threshold values, and all the VMs deployed on it are secured (\(V^{\mathbb{E}}=0\)).
The vulnerable VMs are deprived of standard security features with respect to the operating system, applications like e-mail, web-browsing, and network protocols, and are prone to loose administrative control. Besides this, vulnerable hypervisors of servers leads to hyperjacking where \(U^{Mal}\) can easily gain unauthorized access of hypervisor to compromise all the hosted VMs and the applications running on them. It is typically launched against Type 2-Hypervisors running over a host operating system. A mapping \(\{\varpi|\varpi:A_{m}\times U^{Mal}\times V_{i}\Rightarrow V_{i}^{Mal}\}\) defines malicious VMs such that an \(i^{th}\) VM (\(V_{i}\)) becomes malicious, if it hosts \(m^{th}\) application (\(A_{m}\)) of \(U^{Mal}\). The probability of threat (\(\mathds{P}(\Xi_{i})\)) for \(i^{th}\) VM over time-interval \(\{t_{a},t_{b}\}\) can be defined using Eq. (13),
\[\mathds{P}(\Xi_{i})=\begin{cases}1,&If(L_{i}>L_{Thr}\quad\&\&\quad\omega_{ji}^{k} \cap\omega_{ji^{*}}^{k^{*}}=S_{j})\\ 1,&If(H_{j}>H_{Thr}\quad\&\&\quad\omega_{ji}^{k}\cap\omega_{ji^{*}}^{k^{*}}=S_ {j})\\ 0,&\text{otherwise}\end{cases} \tag{13}\]
where \(t_{a}<t_{b}\) and \(\omega_{ji}^{k}\cap\omega_{ji^{*}}^{k^{*}}=S_{j}\) signifies co-location of \(i^{th}\) VM (\(V_{i}\)) of \(k^{th}\) benign user (\(U^{Ben}|U^{Ben}\subseteq\mathds{U}\)) and \(i^{*th}\) malicious VM (\(V_{i^{*}}\subseteq V^{Mal}\)) of \(k^{*th}\) malicious user \(U^{Mal}\) at \(j^{th}\) server (\(S_{j}\)).
### _VM vulnerability_
The VMs vulnerability score list is generated using vulnerability scanner tools, such as Common Vulnerability Scoring System (CVSS), Nessus and Qualys [30]. The CVSS measures the severity of vulnerabilities of a hardware or software and produces a score in the range [0, 10]. It quantifies the vulnerability risk score (\(L\)) of \(i^{th}\) VM in the range [0, 1] by applying Eq. (14).
\[L_{i}=\frac{V_{i}^{Score}}{10}\quad\forall i\in[1,Q],V^{Score}\in[1,10] \tag{14}\]
### _Hypervisor vulnerability_
The security risk of a hypervisor (\(H\)) depends on its own vulnerability (\(S^{Hyp\_scor}\)) as computed in Eq. (15) by applying CVSS score system and the vulnerability of the VMs hosted on it. The overall vulnerability score of hypervisor \(H_{j}\) is given by Eq. (16), where \(max(L_{i}\times\omega_{ij})\) represents maximum VM vulnerability score (\(L\)) among all VMs hosted on server \(S_{j}\), \(\forall i\in[1,Q],j\in[1,P]\), \(\omega_{ij}=1\) if \(S_{j}\) hosts \(V_{i}\).
\[S^{Hyp\_scor}_{j}=\frac{S^{Score}_{j}}{10}\quad\forall S^{Score} \in[1,10] \tag{15}\] \[\int_{t_{a}}^{t_{b}}H_{j}dt=\int_{t_{a}}^{t_{b}}S^{Hyp\_scor}_{j} (1+max(L_{i}\times\omega_{ij}))dt \tag{16}\]
## 6 Allocation-dependent factors
The cybersecurity risk factors pertaining to the distribution of physical resources and assignment of VMs on physical servers subject to resource availability constraints, characterize allocation-dependent risk factors. The VM security risks due to the Side-channel effect and Network cascading effect depend upon the placement of VMs of different users on available servers (i.e., \(\text{U}\times\text{V}\Rightarrow\text{S}\)). Two VMs \(V_{i}\) and \(V_{j}\) are inter-dependent \(iff(V_{i},V_{j})\in\text{LA}\), where L\(\&\) implies Legal Access subject to the constraints:
* \(V_{i}(\text{L\&})V_{i}\quad\forall_{V_{i}}\in\text{L\&}\),
* \(V_{i}(\text{L\&})V_{j}=V_{j}(\text{L\&})V_{j}\quad\forall_{V_{i},V_{j}}\in \text{L\&}\),
* \(\{V_{i}(\text{L\&})V_{j}\cup V_{j}(\text{L\&})V_{k}\}\Rightarrow V_{i}(\text{ L\&})V_{k}\),
* \(\forall_{V_{i},V_{j},V_{k}}\in\text{L\&}\)
As depicted in Fig. 5, a malicious user \(U^{Mal}\) executes an application at \(V_{1}\) hosted on the server (\(S_{1}\)) having an effective VM vulnerability, i.e., \(L_{1}^{1}>L_{Thr}\), achieves co-residency with one of the inter-dependent VMs (\(\{V_{1},V_{2},...,V_{Z}\}\in\text{L\&}\)), where \(Z\) is the number of inter-dependent VMs. The malicious VM (\(V_{1}^{Mal}\)) successfully launches side-channel threat on vulnerable VM (\(V_{2}\)) and the threat propagates to multiple VMs crossing physical boundaries of network devices using network cascading effect via inter-communication links among VMs: \(\{V_{1},V_{2},...,V_{Z}\}\in\text{L\&}\). The probability of threat (\(\text{P}(\Xi_{i})\)) for \(i^{th}\) VM over time-interval \(\{t_{a},t_{b}\}\) is defined using Eq. (17), where \(\text{C}^{*}_{ii^{*}j}=\omega_{ij}\times\omega_{i^{*}j}=\{0,1\}\) is a Boolean variable which specifies co-location between \(i^{th}\) VM (\(V_{i}\)) and \(i^{*}\) malicious VM (\(V_{i}^{*}\)) at server (\(S_{j}\)).
\[\text{P}(\Xi_{i})=\begin{cases}1,&If((L_{i}>L_{Thr}\lor H_{j}>H_{Thr})\wedge \text{C}^{*}_{ii^{*}j})\\ 1,&If(\Pi_{k=1}^{Z}(\text{C}^{*}_{ki^{*}j^{*}}\times\text{C}^{*}_{ikj}\times L _{k})>L_{Thr})\\ 0,&\text{otherwise}\end{cases} \tag{17}\]
### _Side-channel effect_
Let a malicious VM \(V_{j}^{Mal}\) and benign VM \(V_{i}^{Ben}\) are hosted on server \(S_{k}\). If \(V_{j}^{Mal}\) compromises any VM on server \(S_{k}\), then it can compromise other co-resident VMs eventually. Hence, the survival of \(V_{i}^{Ben}\) depends on its own vulnerability score (\(L_{i}\)) and its co-resident VMs vulnerability score. The side-channel risk score (\(C\)) of \(V_{i}^{Ben}\) during time-interval \(\{t_{a},t_{b}\}\) is calculated as stated in Eq. (18), where \(\omega_{jk}\times\omega_{ik}\) represents co-location of \(i^{th}\) and \(j^{th}\) VM on \(k^{th}\) server, \(\forall i,j\in[1,Q]\), \(k\in[1,P]\).
\[\int_{t_{a}}^{t_{b}}C_{i}dt=\int_{t_{a}}^{t_{b}}1-\Pi_{j=1}^{Q}(1-L_{j}\times \omega_{jk}\times\omega_{ik})dt \tag{18}\]
### _Network cascading effect_
The impact of cascading network connections among VMs on cloud security establishes the network cascading effect. It is computed in terms of network cascading score (\(N\)) respective to VM \(V_{i}\) during time-interval \(\{t_{a},t_{b}\}\) using Eq. (19), where \(V_{i}\) and \(V_{j}\) are connected via legal access network link and hosted on different servers \(S_{k}\) and \(S_{k^{*}}\) such that
Figure 4: VM and hypervisor vulnerability based threats
\(\forall i,j\in[1,Q],i\neq j\). If a malicious VM \(V^{Mal}\) hosted on server \(S_{k^{*}}\), is successful in compromising the VM \(V_{j}\), then it can compromise VM \(V_{i}\) and all other VMs that are connected via common network by exploiting the network paths.
\[\int_{t_{a}}^{t_{b}}N_{i}dt=\int_{t_{a}}^{t_{b}}1-\Pi_{j=1}^{Q}(1-L_{j}\times \omega_{ik}\times\omega_{jk^{*}})dt \tag{19}\]
## 7 Operational Design and Complexity
MR-TPM utilizes values of current state of multiple security attack factors {\(R_{1},R_{2},R_{3},R_{4},R_{5}\)} and three historical databases namely \((i)\) VMs' resource utilization {\(RU_{1}\), \(RU_{2}\),..., \(RU_{Q}\)} \(\in\) RU_db_, \((ii)\) user-records {\(U_{1}\), \(U_{2}\),..., \(V_{M}\)} \(\in\) U_db_ and \((iii)\) VM threats traces {\(\Xi_{1}\), \(\Xi_{2}\),..., \(\Xi_{n}\)} \(\in\) Th_db_. The set of users \(U_{1}\), \(U_{2}\),..., \(U_{M}\), servers \(S_{1}\), \(S_{2}\),..., \(S_{P}\) and \(V_{1}\), \(V_{2}\),..., \(V_{Q}\) are initialized followed by a mapping \(\mathbb{U}\times\mathbb{V}\Rightarrow\mathbb{S}\) among VMs, users and servers. The VMs are allocated to servers using some suitable VM placement strategy, for example, First-Fit Decreasing (FFD), Best-Fit, Greedy, Random-Fit etc. Thereafter, for each consecutive time-intervals {\(t_{a}\), \(t_{b}\)}, current resource utilization of \(V_{1}\), \(V_{2}\),..., \(V_{Q}\) are passed as input into a workload predictor [31] trained with RU_db_ to estimate their resource utilization during next time-interval. The threat status prediction is conducted for the VMs with predicted workload estimation (\(W^{p}>0\)). To predict future threat status of VM \(V_{i}\), values of \(R_{1},R_{2},R_{3},R_{4}\) associated with \(V_{i}\) are assessed by applying Eqs. (14)-(19). The assessment of \(R_{5}\) is done by analysing the behaviour of co-resident users of \(V_{i}\) by using RFC based user classifier trained with U_db_. The current score values of \(R_{1},R_{2},R_{3},R_{4},R_{5}\) are fed as input into threat predictor (\(\mathrm{TP}^{**}\)) trained and tested with Th_db_, to predict the future threat status (\(\widehat{V}_{i}^{\Xi}\)) of \(V_{i}\). Accordingly, the VMs with \(\widehat{V}_{i}^{\Xi}>0\) are migrated to server where \(\widehat{V}_{i}^{\Xi}=0\) by applying Eq. (20). The migration cost is computed using Eq. (21), where D(\(S_{k},S_{j}\)) is the distance or number of hops covered by migrating VM \(V_{mig}\) from source (\(S_{k}\)) to destination server \(S_{j}\), \(\{j,k\in[1,P]\}\), \(V_{mig}\in\) TP\({}_{V}\), WW(\(V_{mig}\)) = \(V_{mig}^{CPU}\times V_{mig}^{Mern}\) is the size of migrating VM, TP\({}_{V}\) is the list of VMs with 'unsafe' status or VMs on overloaded server (\(S_{k}\)). The first term \(\sum\text{CC}_{mig.j}\text{D}(S_{k},S_{j})*\text{WW}(V_{mig})\) signifies network energy consumed during VM migration. The second term \(\sum\text{G}_{j}\text{B}_{j}\) specifies server state transition energy, where if \(i^{th}\) VM is placed at \(j^{th}\) server after migration, then CC\({}_{mig.j}=1\), otherwise \(0\). If \(j^{th}\) server receives one or more VMs after migration, then G\({}_{j}=1\) else it is 0. Similarly, if \(\text{B}_{j}=0\), then \(j^{th}\) server is active before migration, otherwise, \(\text{B}_{j}=\text{E}_{tr}\) where E\({}_{tr}=4260\) Joules which is energy consumed in switching a server from sleep to active state [32, 33].
\[V_{i}{}^{mig\_status}=\begin{cases}1&If(\hat{V}_{i}^{\Xi}>0)\\ 0&\text{otherwise}\end{cases} \tag{20}\] \[\mathcal{M}_{c}=\sum\text{CC}_{mig.j}(\text{D}(S_{k},S_{j})*\text{ WW}(V_{mig}))+\sum\text{G}_{j}\text{B}_{j} \tag{21}\]
The operational summary for proposed work is depicted in Algorithm 1.
```
1 Initialize: \(List_{\text{U}}\), \(List_{\text{V}}\), \(List_{\text{S}}\), \(\omega\);
2 Allocate \(V_{1}\), \(V_{2}\),..., \(V_{Q}\) to \(S_{1}\), \(S_{2}\),..., \(S_{P}\) by defining a mapping \(\mathbb{U}\times\mathbb{V}\Rightarrow\mathbb{S}\) ;
3foreach time-interval {\(t_{a},t_{b}\)}do
4\([V_{i}^{Pred}]\Leftarrow\) Workload Prediction(\(V_{i}\)) \(\forall i\in\{1,2,...Q\}\) ;
5if(\(V_{i}^{Pred}>0\))then
6\([\hat{V}_{i}^{\Xi}>0]\Leftarrow\) Threat Predictor (\(\mathrm{TP}^{**}\));
7if\(\hat{V}_{i}^{\Xi}==\)'unsafe'then
8 Migrate \(V_{i}\) to server \(S_{k}\) such that \(\hat{V}_{i}^{\Xi}==\)'safe';
9 Compute \(\mathcal{M}_{c}\) by applying Eq. (21);
10
11else
12 Keep \(V_{i}\) at same server until user terminates it;
13
14
15
16 end if
17
18else
19 VM threat prediction is not required;
20
21 end if
22
23 end for
```
**Algorithm 1**MR-TPM Operational Summary
The \(List_{\text{U}}\), \(List_{\text{V}}\), \(List_{\text{S}}\), \(\omega\);
```
1 Initialize: \(List_{\text{U}}\), \(List_{\text{V}}\), \(List_{\text{S}}\), \(\omega\);
2 Allocate \(V_{1}\), \(V_{2}\),..., \(V_{Q}\) to \(S_{1}\), \(S_{2}\),..., \(S_{P}\) by defining a mapping \(\mathbb{U}\times\mathbb{V}\Rightarrow\mathbb{S}\) ;
3foreach time-interval {\(t_{a},t_{b}\)}do
4\([V_{i}^{Pred}]\Leftarrow\) Workload Prediction(\(V_{i}\)) \(\forall i\in\{1,2,...Q\}\) ;
5if(\(V_{i}^{Pred}>0\))then
6\([\hat{V}_{i}^{\Xi}>0]\Leftarrow\) Threat Predictor (\(\mathrm{TP}^{**}\));
7if\(\hat{V}_{i}^{\Xi}==\)'unsafe'then
8 Migrate \(V_{i}\) to server \(S_{k}\) such that \(\hat{V}_{i}^{\Xi}==\)'safe';
9 Compute \(\mathcal{M}_{c}\) by applying Eq. (21);
11
12
13 end if
14
15 end for
16
17 end for
```
**Algorithm 2**MR-TPM Operational Summary
The \(List_{\text{U}}\), \(List_{\text{V}}\), \(List_{\text{S}}\), \(\omega\);
Fig. 5: Side channel and Network cascading threats
## 8 Performance Evaluation
### _Experimental Set-up and Implementation_
The simulation experiments are executed on a server machine assembled with two Intel(r) Xeon(r) Silver 4114 CPU with 40 core processor and 2.20 GHz clock speed in Cloud data center simulation framework implemented in Python Jupyter Notebook. The computation machine is deployed with 64-bit Ubuntu 16.04 LTS, having main memory of 128 GB. The datacenter environment is set up with three different types of servers and four types of VMs configuration shown in Tables II and III. The resource features like power consumption (\(P_{max},P_{min}\)), MIPS, RAM and memory are taken from real server configuration; IBM [34] and Dell [35], where \(S_{1}\) is 'ProLiantM110G5XEON3075', \(S_{2}\) is 'IBMX3250Xeonx3480' and \(S_{3}\) is 'IBM3550Xeonx5675'. Furthermore, the experimental VMs configurations are inspired from the VM instances of the Amazon website [36].
### _Datasets and Simulation parameters_
MR-TPM is evaluated using two benchmark VM traces from a publicly available real workload datasets: _OpenNebula Virtual Machine Profiling Dataset_ (ONeb) [37] and _Google Cluster Data_ (GCD) [38]. ONeb provides information gathered by the monitoring system for six VMs over 63 Hours via executing a set of probe programs provided by OpenNebula. It reports VM threats respective to the server status, basic performance indicators, as well as VM status, and resource capacity consumption of server hosting these VMs. The exact values of VM and hypervisor vulnerability scores are not reported in the original VM threat database. Therefore, to prepare VM threat database including attributes: {\(V\_id\)\({}_{i}^{viclim}\), \(S\_id\), \(V\_id\)\({}_{i}^{dMal}\), \(V_{i}^{CPU}\), \(V_{i}^{BW}\), \(V_{i}^{memory}\), \(R_{i}^{score}\), \(L_{i}\), \(H_{i}\), \(C_{i}\), \(N_{i}\), \(V_{i}^{status}\),..., etc.}, the VMs of ONeb dataset that have experienced attacks, are assigned vulnerability score higher than the threshold value of VM threat (which is considered 0.5 for the experiments) and the rest of the risk scores are computed using Eqs. (13)-(19). These VM threats information is learned by the VM threat predictor for estimation of threats on VMs before occurrence.
Also, we have utilized a realistic workload of Google Cluster Data (GCD) 1, which provides resource usage percentage for each job in every five minutes over period of twenty-four hours. GCD contains capacity usage information of resources CPU, memory, disk I/O request information of 672,300 jobs executed on 12,500 servers for the period of 29 days. The VM vulnerability (\(L\)) and server hypervisor vulnerability (\(H\)) are generated in the range [0, 10] during VMs and the server's initialization. Accordingly, the VM threat database reporting traces of attacks on GCD VMs, including attributes {\(V\_id\)\({}_{i}^{viclim}\), \(S\_id\), \(V\_id\)\({}_{i}^{dMal}\), \(V_{i}^{CPU}\), \(V_{i}^{BW}\), \(V_{i}^{memory}\), \(R_{i}^{score}\), \(L_{i}\), \(H_{i}\), \(C_{i}\), \(N_{i}\), \(V_{i}^{status}\),..., etc.} is generated and updated at runtime according to requirement of the proposed model. These datasets do not report user database and hence, we created a user database consisting of {\(U_{id}\), \(Attack_{threshold}\), \(U_{class}\)} and utilized it for user behaviour analysis based on their previous VM usage. The number of users is considered equals to 30% of the number of VMs (i.e., 1200 VMs), who requested varying number and type of VMs over time. Therefore, different number and types of VMs are mapped with user at run-time and according to the risk scores associated to different VMs, threat is defined. Each user can hold VMs with a constraint that at any instance, the total number of VMs requests must not exceed total number of available VMs at the datacenter. The user database is created and updated during runtime. All the experiments are executed for 100 time-intervals of five minutes to analyse the performance of proposed model dynamically, though this period can be extended as per the availability of traces. The security threats are generated depending upon the threshold values for the four types of risks {\(L_{i}\), \(H_{i}\), \(C_{i}\), \(N_{i}\)} associated with a VM and presence of the malicious \(V^{Mal}\). The presence of some malicious user VM (\(V^{Mal}\)) on a server and the risk scores corresponding to \(i^{th}\) VM (\(V_{i}\)) 'greater than equal to' their respective threshold values indicate the high probability of security threat (i.e., \(V_{i}^{\Xi}>0\)).
Footnote 1: [https://github.com/HiPro-IT/CPU-and-Memory-resource-usage-from-Google-Cluster-Data](https://github.com/HiPro-IT/CPU-and-Memory-resource-usage-from-Google-Cluster-Data)
### _VM Cyberthreat Prediction_
The VM threat prediction is performed for the different population of malicious user \(U^{Mal}\), such as 5%, 20%, 50% and 90%. The prediction accuracy achieved during training, testing, and live phase for 5% and 50% \(U^{Mal}\) over period of 500 minutes is shown in Fig. 6 for both the datasets. It can be noticed that prediction accuracy is closer to 98% for all three phases, which is slightly increasing for live cyberthreat detection during each experiment because of the capability of online-learning and re-training of MR-TPM with the passage of time. To provide online training, we perform read/write operation of live VM threats in threat database file dynamically during period of 500 minutes. The Receiver Operator Characteristics (ROC) curve obtained for different experiments using both the datasets are depicted in Fig. 7. ROC curves for the \(U^{Mal}=5\%\) is better than the ROC curves obtained with \(U^{Mal}=50\%\) because of effective learning of true threats in the presence of least number of malicious users. It is observed that the proposed MR-TPM efficiently predicts VM threats for the test as well as live data in all the experiments for both datasets.
Fig. 8 analyses the Actual Threat (AT), Predicted Threat (PT), and Unpredicted Threat (UT) for online VM threat
\begin{table}
\begin{tabular}{l c c c c c c} \hline Server & PE & MIPS & RAM(GB) & Storage(GB) & \(PW_{max}\) & \(PW_{min}\)/\(PW_{id}\) \\ \hline \(S_{1}\) & 2 & 2660 & 4 & 160 & 135 & 93.7 \\ \(S_{2}\) & 4 & 3067 & 8 & 250 & 113 & 42.3 \\ \(S_{3}\) & 12 & 3067 & 16 & 500 & 222 & 85.4 \\ \hline \end{tabular}
\end{table} TABLE II: Server Configuration
\begin{table}
\begin{tabular}{l c c c c} \hline VM type & PE & MIPS & RAM(GB) & Storage(GB) \\ \hline \(v_{type1}\) & 1 & 500 & 0.5 & 40 \\ \(v_{type2}\) & 2 & 1000 & 1 & 60 \\ \(v_{type3}\) & 3 & 1500 & 2 & 80 \\ \(v_{type4}\) & 4 & 2000 & 3 & 100 \\ \hline \end{tabular}
\end{table} TABLE III: VM Configuration
prediction in the presence of 5%, and 50% \(U^{Mal}\) for both datasets. It can be observed that most of the VM threats are predicted correctly where UT is closer to zero and PT is closer to AT, indicating that along with all true threats, some false threats are also predicted. However, the difference between AT and PT is reducing over the time with enhancement of learning capability of VM threat predictor.
The values of precision, recall, F1 measure score, average of mean square error (\(Avg.MSE\)), average of mean absolute error (\(Avg.MAE\)) observed for the different experimental cases of both the datsets, including GCD and ONeb VM traces are shown in Table IV and Table V which are consistently higher than \(0.95\) for each case. The \(Avg.MSE\) and \(Avg.MAE\) values are observed in the range of [\(0.0001-0.0008\)] and [\(0.001-0.010\)], respectively, and the accuracy of prediction is higher than 96% reaching up to 99.71% and 99.25% for the GCD and ONeb, respectively. The reason for such an accurate prediction is the incremental learning of MR-TPM with historical and live VM threat databases in real-time. Fig. 9 shows the changes observed in the various risks scores {\(L\), \(H\), \(N\), \(C\)} of a randomly selected VM among the 1200 VMs under simulation for a period of 500 minutes.
### _Deployment and Comparison_
To further analyse the efficiency of MR-TPM, it is deployed with existing state-of-the-art VM placement (VMP) policies, including Previously Selected Server First (PSSF) [11], Secure and Energy Aware Load Balancing (SEA-LB) [16], Security Embedded Dynamic Resource Allocation (SEDRA) [15] and baseline VMP policies, including First-Fit Decreasing (FFD), Best-Fit (BF), and Random- Fit (RF). All the results shown in Section 8-C are derived with FFD VMP policy.
Table VI compares the average number of VM threats realised without and with MR-TPM (results are shown for the Live phase) integrated together with the above mentioned VMP policies. It can be observed that up to 88.5%, 86.5%, 86.2%, 88.9%, 88.5% and 88.1% threats are reduced with proposed approach over PSSF, SEA-LB, SEDRA, RF, BF and FFD, respectively, for \(U^{Mal}\%=90\) at \(T(min)=500\). The resource utilization of datacenter can be obtained using Eqs. (22), (23), where \(Z\) is the number of resources, \(\omega_{ji}=\{0,1\}\) is mapping between server (\(S_{i}\)) and VM (\(V_{j}\)). Though in formulation, only \(CPU\) and \(Mem\) are considered, it is extendable to any number of resources.
\[RU_{dc}=\int\limits_{t_{1}}^{t_{2}}(\frac{RU_{dc}^{CPU}+RU_{dc}^{Mem}}{|Z| \times\sum_{i=1}^{P}\gamma_{i}})dt \tag{22}\]
\[RU_{dc}^{r}=\sum_{i=1}^{P}\frac{\sum_{j=1}^{Q}\omega_{ji}\times V_{j}^{r}}{S_{ i}^{r}}\quad r\in CPU,Mem,etc. \tag{23}\]
The resource utilization of different VMP integrated with MR-TPM follows the trend: \(FFD\geq SEA-LB\geq SEDRA\geq PSSF\geq BF\geq RF\), as shown in Fig. (a)a.
The power consumption for \(i^{th}\) server can be formulated as \(PW_{i}\) and total power consumption \(PW_{dc}\) during time-interval {\(t_{1}\), \(t_{2}\)} can be computed by applying Eq. (24), where \({PW_{i}}^{max}\), \({PW_{i}}^{min}\) and \({PW_{i}}^{idle}\) are maximum, minimum and idle state power consumption of \(i^{th}\) server.
\[PW_{dc}=\int\limits_{t_{1}}^{t_{2}}\sum_{i=1}^{P}{[{PW_{i}}^{max}-{PW_{i}}^{ min}]RU+{PW_{i}}^{idle}dt} \tag{24}\]
Fig. (b)b shows the comparison of power consumption which is highest (i.e., 109.10 KW) for MR-TPM + PSSF and least (69.29 KW) for MR-TPM + FFD. The average number of active servers is compared in Fig. (c)c, where MR-TPM + FFD and MR-TPM + PSSF operates at the lowest (118) and highest (774) number of active servers, respectively. Both the power consumption as well as the number of active servers follow the trend: \(FFD<BF<SEDRA<SEA-LB<RF<PSSF\). The reason for the resultant trend is that VMs are tightly packed onto servers using FFD, while in the case of others, sharing of servers is minimized for the purpose of security at the cost of the larger number of active servers.
## 9 Conclusions and Future Work
To provide a comprehensive solution for secure workload distribution at cloud datacenter, a novel MR-TPM is proposed which estimates the future threats on user VMs by analysing multiple risk pathways, including VM and hypervisor vulnerabilities, co-residency, network cascading effects and user
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(U^{max}\) & \(time\) & \multicolumn{6}{c|}{Performance metrics} \\ (\%) & (\(min\)) & \(Precision\) & \(Recall\) & \(I^{New}\) & \(Avg.MSE\) & \(Avg.3/AE\) & \(Avg.avg.\) \\ \hline \multirow{4}{*}{5} & 100 & 0.96 & 0.98 & 0.96 & 0.000 & 0.001 & 0.91 & 0.91 \\ \cline{2-8} & 200 & 0.99 & 0.99 & 1.000 & 0.002 & 0.003 & 0.91 & 0.90 \\ \cline{2-8} & 200 & 0.99 & 0.99 & 0.99 & 1.000 & 0.002 & 0.003 & 0.90 \\ \cline{2-8} & 200 & 0.99 & 0.99 & 0.99 & 1.000 & 0.002 & 0.003 & 0.90 \\ \cline{2-8} & 200 & 0.99 & 0.99 & 0.99 & 1.000 & 0.002 & 0.003 & 0.91 \\ \hline \multirow{4}{*}{5} & 100 & 0.99 & 0.97 & 0.97 & 0.900 & 0.002 & 0.003 & 0.90 \\ \cline{2-8} & 200 & 1.00 & 0.99 & 1.000 & 0.003 & 0.003 & 0.90 & 0.90 \\ \cline{2-8} & 200 & 1.00 & 0.99 & 1.00 & 0.002 & 0.003 & 0.91 \\ \cline{2-8} & 200 & 1.00 & 0.99 & 1.00 & 0.002 & 0.003 & 0.91 \\ \cline{2-8} & 200 & 1.00 & 0.99 & 1.00 & 1.000 & 0.002 & 0.003 & 0.91 \\ \cline{2-8} & 200 & 0.99 & 0.98 & 1.00 & 1.000 & 0.002 & 0.014 & 0.90 \\ \hline \multirow{4}{*}{5} & 100 & 0.98 & 0.99 & 0.99 & 0.000 & 0.003 & 0.003 & 0.92 \\ \cline{2-8} & 200 & 0.98 & 0.99 & 0.99 & 0.000 & 0.003 & 0.003 & 0.92 \\ \cline{2-8} & 300 & 1.00 & 1.00 & 1.00 & 0.001 & 0.002 & 0.93 \\ \cline{2-8} & 300 & 1.00 & 0.99 & 1.00 & 0.001 & 0.001 & 0.001 & 0.91 \\ \hline \multirow{4}{*}{90} & 100 & 0.99 & 1.00 & 1.00 & 0.001 & 0.001 & 0.004 & 0.91 \\ \cline{2-8} & 200 & 0.98 & 1.00 & 0.99 & 0.002 & 0.003 & 0.92 & 0.93 \\ \cline{2-8} & 300 & 1.00 & 0.99 & 0.98 & 1.000 & 0.001 & 0.004 & 0.91 \\ \hline \multirow{4}{*}{91} & 100 & 0.99 & 1.00 & 1.00 & 0.001 & 0.004 & 0.004 & 0.91 \\ \cline{2-8} & 200 & 0.99 & 0.98 & 1.00 & 0.001 & 0.003 & 0.004 & 0.92 \\ \cline{1-1} \cline{2-8} & 300 & 0.99 & 0.98 & 1.00 & 0.003 & 0.003 & 0.99 & 0.96 \\ \cline{1-1} \cline{2-8} & 300 & 0.99 & 0.99 & 1.00 & 0.001 & 0.001 & 0.001 & 0.91 \\ \hline \end{tabular}
\end{table} TABLE IV: Performance metrics for GCD VM traces
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(U^{max}\) & \(time\) & \multicolumn{6}{c|}{Performance metrics} \\ (\%) & (\(min\)) & \(Precision\) & \(Recall\) & \(I^{New}\) & \(Avg.MSE\) & \(Avg.3/AE\) & \(Avg.3/AE\) & \(Avg.avg.\) \\ \hline \multirow{4}{*}{5} & 100 & 0.97 & 0.99 & 0.98 & 0.002 & 0.003 & 0.003 & 0.85 \\ \cline{2-8} & 200 & 0.99 & 0.99 & 1.00 &
\begin{table}
\begin{tabular}{|c|c|c||c||c||c||c||c||c||c|c||c|c|c||c|} \hline \multirow{2}{*}{\(U^{Max}\)} & \multirow{2}{*}{\(T\)} & \multicolumn{8}{c||}{Percentage of VM security threats (\(\Xi\))} \\ \cline{3-14} & (\%) & \((min)\) & \multicolumn{1}{c||}{PSSF [11]} & \multicolumn{1}{c||}{SEA-LB [16]} & \multicolumn{1}{c|}{SEBAR [15]} & \multicolumn{1}{c|}{RF} & \multicolumn{1}{c|}{BF} & \multicolumn{1}{c|}{FFD} \\ \cline{3-14} & & W-TF & TF & W-TF & TF & W-TF & TF & W-TF & TF & W-TF & TF & W-TF & TF \\ \hline \hline \multirow{4}{*}{5} & 100 & 116 & 19 & 206 & 13 & 137 & 17 & 276 & 16 & 283 & 19 & 318 & 58 \\ \cline{2-14} & 200 & 203 & 33 & 187 & 27 & 169 & 19 & 296 & 22 & 258 & 17 & 287 & 22 \\ \cline{2-14} & 300 & 270 & 27 & 193 & 19 & 125 & 18 & 226 & 17 & 238 & 16 & 308 & 26 \\ \cline{2-14} & 400 & 216 & 32 & 208 & 18 & 148 & 23 & 298 & 17 & 222 & 18 & 256 & 23 \\ \cline{2-14} & 500 & 223 & 22 & 214 & 21 & 177 & 25 & 196 & 18 & 236 & 17 & 312 & 29 \\ \hline \hline \multirow{4}{*}{20} & 100 & 365 & 26 & 327 & 54 & 244 & 17 & 474 & 19 & 86.7 & 17.4 & 678 & 35 \\ \cline{2-14} & 200 & 376 & 23 & 364 & 68 & 314 & 24 & 494 & 17 & 89.9 & 15.7 & 673 & 42 \\ \cline{2-14} & 300 & 399 & 17 & 397 & 39 & 344 & 28 & 478 & 29 & 87.5 & 14.0 & 579 & 44 \\ \cline{2-14} & 400 & 416 & 31 & 389 & 28 & 297 & 28 & 473 & 26 & 86.4 & 17.2 & 598 & 58 \\ \cline{2-14} & 500 & 402 & 26 & 428 & 49 & 308 & 37 & 501 & 28 & 89.7 & 16.9 & 657 & 39 \\ \hline \hline \multirow{4}{*}{50} & 100 & 537 & 28 & 466 & 37 & 376 & 24 & 638 & 27 & 584 & 27 & 779 & 56 \\ \cline{2-14} & 200 & 556 & 23 & 451 & 39 & 439 & 23 & 627 & 29 & 595 & 24 & 767 & 49 \\ \cline{2-14} & 300 & 536 & 15 & 485 & 44 & 359 & 30 & 692 & 27 & 603 & 38 & 797 & 67 \\ \cline{2-14} & 400 & 547 & 37 & 509 & 35 & 388 & 26 & 701 & 28 & 586 & 27 & 745 & 54 \\ \hline \hline \multirow{4}{*}{90} & 500 & 533 & 41 & 487 & 46 & 373 & 34 & 694 & 25 & 567 & 15 & 779 & 48 \\ \cline{2-14} & 100 & 783 & 57 & 766 & 41 & 676 & 56 & 893 & 65 & 837 & 76 & 958 & 108 \\ \cline{1-1} \cline{2-14} & 200 & 723 & 49 & 748 & 56 & 621 & 43 & 907 & 84 & 878 & 89 & 997 & 99 \\ \cline{1-1} \cline{2-14} & 300 & 792 & 78 & 687 & 75 & 658 & 49 & 958 & 98 & 889 & 94 & 946 & 83 \\ \cline{1-1} \cline{2-14} & 400 & 712 & 62 & 673 & 69 & 684 & 57 & 897 & 87 & 847 & 88 & 984 & 94 \\ \cline{1-1} \cline{2-14} & 500 & 728 & 84 & 678 & 91 & 633 & 87 & 927 & 102 & 837 & 96 & 996 & 118 \\ \hline \end{tabular}
\end{table} TABLE VI: Comparison of number of threats without and with MR-TPM (live phase) deployed with various VMP approaches
Fig. 8: Number of threats (AT:Actual Threats, PT: Predicted Threats, UT: Unpredicted Threats)
Fig. 6: Prediction Accuracy
Fig. 7: ROC Curves
behaviour. The proposed model is periodically trained and retrained with historical and live threat data for accurate prediction of threat on VMs. MRTPM deployed with existing VM allocation policies minimizes multiple risks based VM threats and related adversary breaches. The performance evaluation of the proposed VM threat prediction model supports its efficacy in improving cybersecurity and resource efficiency over the compared approaches. In the future, MR-TPM can be extended with transfer learning to enhance its capabilities of analysing unknown/unseen security threats. Additionally, other possible security risk factors can be quantified and included to improve the prediction approach of cyberthreats further.
## Acknowledgments
The authors would like to thank the University of Aizu, Japan and the National Institute of Technology, Kurukshetra, India for financially supporting the research work.
|
2306.00782 | Design and simulation of a source of cold cadmium for atom
interferometry | We present a novel optimised design for a source of cold atomic cadmium,
compatible with continuous operation and potentially quantum degenerate gas
production. The design is based on spatially segmenting the first and
second-stages of cooling with the the strong dipole-allowed $^1$S$_0$-$^1$P$_1$
transition at 229 nm and the 326 nm $^1$S$_0$-$^3$P$_1$ intercombination
transition, respectively. Cooling at 229 nm operates on an effusive atomic beam
and takes the form of a compact Zeeman slower ($\sim$5 cm) and two-dimensional
magneto-optical trap (MOT), both based on permanent magnets. This design allows
for reduced interaction time with the photoionising 229 nm photons and produces
a slow beam of atoms that can be directly loaded into a three-dimensional MOT
using the intercombination transition. The efficiency of the above process is
estimated across a broad range of experimentally feasible parameters via use of
a Monte Carlo simulation, with loading rates up to 10$^8$ atoms/s into the 326
nm MOT possible with the oven at only 100 $^\circ$C. The prospects for further
cooling in a far-off-resonance optical-dipole trap and atomic launching in a
moving optical lattice are also analysed, especially with reference to the
deployment in a proposed dual-species cadmium-strontium atom interferometer. | Satvika Bandarupally, Jonathan N. Tinsley, Mauro Chiarotti, Nicola Poli | 2023-06-01T15:14:36Z | http://arxiv.org/abs/2306.00782v1 | # Design and simulation of a source of cold cadmium for atom interferometry
###### Abstract
We present a novel optimised design for a source of cold atomic cadmium, compatible with continuous operation and potentially quantum degenerate gas production. The design is based on spatially segmenting the first and second-stages of cooling with the the strong dipole-allowed \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition at 229 nm and the 326 nm \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) intercombination transition, respectively. Cooling at 229 nm operates on an effusive atomic beam and takes the form of a compact Zeeman slower (\(\sim\)5 cm) and two-dimensional magneto-optical trap (MOT), both based on permanent magnets. This design allows for reduced interaction time with the photoionising 229 nm photons and produces a slow beam of atoms that can be directly loaded into a three-dimensional MOT using the intercombination transition. The efficiency of the above process is estimated across a broad range of experimentally feasible parameters via use of a Monte Carlo simulation, with loading rates up to 10\({}^{8}\) atoms/s into the 326 nm MOT possible with the oven at only 100 \({}^{\circ}\)C. The prospects for further cooling in a far-off-resonance optical-dipole trap and atomic launching in a moving optical lattice are also analysed, especially with reference to the deployment in a proposed dual-species cadmium-strontium atom interferometer.
+
Footnote †: Also at CNR-INO, Sesto Fiorentino, Italy; Author to whom correspondence should be addressed: [email protected]
+
Footnote †: Also at CNR-INO, Sesto Fiorentino, Italy; Author to whom correspondence should be addressed: [email protected]
## I Introduction
The production of cold, large and dense samples is an indispensable technique in modern atomic, ionic and molecular physics [1]. It forms the experimental basis of a diverse range of fundamental and applied experiments, including frequency metrology [2], searches for exotic matter and forces [3], and atom interferometry [4]. Techniques for the fast and robust generation of ultracold samples of many species are consequently well established, for example in the atomic domain, Cs, Rb, Sr, Yb and many others. In other cases, source preparation remains difficult, especially for molecules, due to complexities of the energy level structure or the availability of suitable lasers. In particular, there is growing interest in the laser cooling and trapping of alkaline-earth-like metals, such as Cd, Zn and Hg [5; 6; 7; 8], whose deployment has been slowed by the relevant cooling and trapping transitions lying in the challenging ultraviolet regime.
Here we focus on the design and simulation of a high-flux source of atomic cadmium, which is a transition metal possessing two valence-shell electrons and a similar transition structure to alkaline-earth atoms, providing access to narrow-linewidth intercombination transitions, ideal for high-precision metrology, such as optical clocks and atom interferometers [9], and also access to a broad, dipole-allowed transition suitable for rapid cooling of room-temperature atoms to the mK regime (Fig. 1). In comparison to other alkaline-earth and alkaline-earth-like systems, e.g. Sr and Yb, which have been extensively utilised in leading optical clocks [10; 11; 12; 13] and which are being utilised in a raft of next-generation interferometers [14; 15], in Cd these transitions lie in the UV region, enhancing intrinsic measurement sensitivity of clocks and interferometers and dramatically reducing the sensitivity to blackbody radiation, a major systematic error in clocks [16; 17] and also a factor in high-precision atom interferometry [18]. The afforded high scattering rate and low wavelength of \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) dipole-allowed transitions may potentially also benefit single-atom optical tweezer experiments and related quantum simulators [19].
Despite the increasing interest in Cd due to these properties, experimental demonstrations of cold Cd available in the literature are limited to a handful of examples, with demonstrations of magneto-optical traps (MOTs) on the broadband \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition at 229 nm [20; 21] and, more recently, on the narrow 326 nm \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) transition [5]. Other common techniques such as Zeeman slowers and 2D-MOTs, or the use of spatially separated regions for optimal vacuum pressure levels, have not been reported, even though they form the basis of many experiments optimised for fast atom loading or continuous sources [22]. Similarly, the production of quantum degenerate sources of Cd has yet to be reported. All attempts to cool and trap Cd are hampered by the problematic nature of the 229 nm light, which is difficult to produce stably at high continuous-wave powers, damages vacuum components and causes photoionisation of Cd [20]. Very recently, a system to trap atoms without using the \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition light has been reported, with atom numbers in an intercombination transition MOT enhanced by using the 23-MHz-wide \({}^{3}\)P\({}_{2}\)-\({}^{3}\)D\({}_{3}\) transition at 361 nm and two further lasers for optical pumping [23; 24].
In this article, we present the design and simulation of such an optimised system for Cd to be used as the atomic source for an atom interferometer [9], and with a focus on the unique challenges and opportunities of this atom. In particular, we have designed and extensively simulated a system which uses only a minimal amount of 229 nm
light to generate a slow beam of atoms which can be trapped directly and efficiently in a MOT based only on the 326 nm intercombination transition, the basic idea of which is shown in Fig. 2. The design is inspired by recent developments in continuous source production [25; 26] and is centred around two recently demonstrated UV laser systems [27; 28], which have been designed specifically for this purpose, and on a novel effusive atomic beam of Cd [27].
The structure of this article is the following; in Section II we discuss the general design requirements for the cold-atom source apparatus and discuss the relevant properties of Cd in detail; an overview and the basic idea of the source is given in Section III; in Section IV we present the atom-light interaction model used and give details of the numerical simulation; simulation results of the first two-stages of cooling at 229 nm are presented in Section V; and likewise the trapping in a 3D MOT at 326 nm is shown in Section VI; Section VII brings these results together to present a finalised vacuum chamber system and a numerical simulation of the full system; Section VIII presents the design of optical-dipole-trap systems for the further cooling, spatial transfer and launching of the atoms; finally, conclusions and the experimental outlook are reported in Section IX.
## II Cadmium characteristics & atomic source requirements
An ideal cold atom apparatus for quantum experiments should be able to load large numbers of atoms in a vacuum chamber where background gas collisions are negligible over the timescales of the experiment to preserve coherence [29]. Large atom numbers are required to minimise the quantum projection noise (or standard noise limit) [30], which is often the limiting sensitivity factor for atom interferometers [31]. Moreover, the cold atom preparation should be as rapid as possible to enhance sensitivity [32] and minimise frequency aliasing problems arising from the Dick effect [33]. Practically, these requirements often require a high-flux source of atoms which can be efficiently trapped in a science chamber, which is spatially segmented from the source. Finally, we note that the system should be robust, allowing for stable operation over the months and years typically required to perform cold atom experiments.
A simplified energy level diagram with allowed transitions for bosonic cadmium is shown in Fig. 1 and further details on the key \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) and \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) transitions for laser cooling and trapping are given in Table 1, with the values for the same transitions in the more commonly employed Sr and Yb also provided for comparison. In this section, we will discuss these two transitions in detail and highlight how their features guide our cooling and trapping apparatus design, also considering the requirements highlighted above.
The dipole-allowed transition \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) has two very noticeable features: a short wavelength of 229 nm lying in the deep-ultraviolet (DUV) regime; and a very broad natural linewidth (2\(\pi\)\(\times\)91 MHz) [34]. Both these features present technical challenges in implementing an optimised practical realisation of a cold Cd atom source. Ideally, the system would be operated close to the saturation intensity to saturate the cooling and trapping forces, but given the very high saturation intensity of the main broad cooling transition (991 mW/cm\({}^{2}\)), this would require high continuous-wave powers at 229 nm which is problematic for a number of technical and fundamental reasons.
Firstly, the laser sources and the optics in the DUV regime are still under progress and are relatively less developed in comparison to visible or infrared regime. Although UV lasers are constantly improving, the regime below approximately 240 nm remains highly challenging. For example, recent advances using CLBO crystals to generate 2 W of stable power at 261 nm [35] cannot be directly applied due to the phase-matching properties of CLBO, which has a Type-I SHG cut-off wavelength at \(\sim\)237 nm [36]. Instead beta-barium borate (BBO) crystals have to be used, which exhibit greater DUV-induced damage [37], and achieving such powers stably over significant timescales (\(>\)hours) has not currently been demonstrated. Additionally, many optical components are not readily available at this wavelength.
Figure 1: (a) To-scale partial energy level diagram and main transitions for bosonic Cd. These levels and transitions shown have previously been used for cooling and trapping Cd. Also shown is the photoionisation energy and the energy of a 229-nm photon starting from the important \({}^{1}\)P\({}_{1}\) and \({}^{3}\)P\({}_{1}\) states.
For example, while single-mode fibres for the UV have been demonstrated, they are not commercially available and require a complex production procedure for a still limited performance [38; 39].
Moreover, even if W-level power at 229 nm were available, it is not clear that it would be advantageous for a practical experiment with Cd atoms. One issue is that at this short wavelength, Cd atoms are prone to photoionisation from both the \({}^{1}\)P\({}_{1}\) and, potentially, the \({}^{3}\)P\({}_{1}\) states (Fig. 1), which will practically limit the number of atoms which can be loaded into a MOT [20]. For example, the expected loss rate due to photoionisation can be calculated according to \(\Gamma_{\rm ion}=\sigma If/\hbar\omega\), where \(I\) is the beam intensity, \(f\) is the fraction of atoms in the \({}^{1}\)P\({}_{1}\) state and \(\sigma\)\(=2\times 10^{-16}\) cm\({}^{2}\)[20], meaning that for a MOT using 6 beams with \(P\)=150 mW, \(w\)=2 mm (\(I\)=2400 mW/cm\({}^{2}\), \(s\)=2.4) and detuning \(\Delta\)=\(-\Gamma\), we obtain a photoionisation loss rate \(\Gamma_{\rm ion}\) =1.2 kHz which represents a significant loss for the MOT. Furthermore, high-power DUV light is damaging to optical coatings, especially those under vacuum, leading to degradation of performance. The complete damage mechanisms are not fully understood, though there seems to be contributions from both oxygen depletion of the coating material and from various UV-induced mechanisms related to hydrocarbon contamination [40; 41]. Although research into improving optical coatings under vacuum is ongoing [41; 42] and fluoride-based coatings seem to perform much better [43], this problem is not currently solved. Excessive use of 229-nm light would therefore require the system to be regularly opened or purged to recover the coating performance.
Also mentioned in Table 1 is the large linewidth of the 229-nm transition. For the development of the MOT on this transition with high scattering rate, high magnetic field gradients are prescribed. For example, a MOT can be modelled as a damped harmonic oscillator [44] and we estimate that critical damping requires a gradient of \({\rm d}B/{\rm d}z\sim\) 210 G/cm for saturation intensity \(s\)=0.2 and detuning \(\Delta\)=\(-\Gamma\). Such a requirement would rule out the usage of water-cooled magnetic coils not only due to the sheer blockage of optical access, but also the possibility of eddy currents arising from switching the large currents (\(\sim\)100 A).
However, this combination of the short wavelength and broad natural linewidth does prevent significant advantages if utilised correctly. It allows for a very large deceleration force on the atom which can, for example, dramatically shorten the length of the Zeeman slower stage where the atoms from the room temperature are cooled down to tens of m/s over a distance of cm. For example, the computed minimum stopping distance for atoms travelling at a speed \(v\)=290 m/s is only 9 mm (Table 1). Furthermore, the broad transition means that the approximate capture velocity of any MOT can potentially be large at practical values (\(>\)50 m/s), making it easy to capture atoms from e.g. a relatively fast atomic beam.
The Doppler temperature of the \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) is, however, higher than desirable at 2.2 mK, so in any case further cooling is mandatory. Alkaline-earth and alkaline-earth-like systems typically achieve cooling to the necessary \(\mu\)K regime by using a second-stage MOT on the narrow \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) intercombination transition, with the laser frequency modulated to enhance its scattering force [45]. Although this transition is also in the UV regime for Cd at 326 nm, it is considerably less challenging and powers approaching the W-level around this wavelength can be more readily achieved [28; 46; 47].
One intriguing possibility is the direct loading into the intercombination-transition MOT of an atomic beam of Cd, something which is routinely performed for Yb with loading rates of 10\({}^{8}\) atoms/s [48], but is far more challenging for Sr, though possible in a carefully optimised system [26]. As shown in Table 1 the linewidth of the \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) transition of Cd lies between these two cases, suggesting a difficult but achievable process, as also suggested by the similarity in the stopping distance of Cd and Yb (Table 1). The drawback of this technique is that the capture velocity of the MOT will be limited to \(\sim\)5 m/s, so only a very small fraction of room-temperature atoms can captured if used on its own.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Atom & \(\lambda\) (nm) & \(\Gamma/2\pi\) & \(I_{s}\) (mW/cm\({}^{2}\)) & \(T_{D}\) & \(T_{b}\) (\({}^{\circ}\)C) & \(v_{b}\) (m/s) & \(d_{\rm stop}\) (m) \\ \hline \multicolumn{8}{c}{\({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) (First-stage)} \\ \hline Cd & 228.9 & 91 MHz & 991 & 2.2 mK & 100 & 288 & 9\(\times 10^{-3}\) \\ Sr & 460.9 & 32 MHz & 42.5 & 0.8 mK & 280 & 397 & 8\(\times 10^{-2}\) \\ Yb & 398.9 & 28 MHz & 57.7 & 0.7 mK & 240 & 272 & 7\(\times 10^{-2}\) \\ \hline \multicolumn{8}{c}{\({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) (Second-stage)} \\ \hline Cd & 326.1 & 66.6 kHz & 0.252 & 1.6 \(\mu\)K & 100 & 288 & 18 \\ Sr & 689.4 & 7.4 kHz & 0.003 & 180 mK & 280 & 397 & 513 \\ Yb & 555.8 & 182.2 kHz & 0.139 & 4.4 \(\mu\)K & 240 & 272 & 16 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Relevant properties of the first and second-stage cooling transitions for Cd, Sr and Yb. Values for the wavelength (\(\lambda\)), corresponding natural linewidth (\(\Gamma\)), saturation intensity (\(I_{s}\)) and the Doppler temperature (\(T_{D}\)) are reported. Also shown is the stopping distance (\(d_{\rm stop}\)) for atoms at the most probable velocity of a beam (\(v_{b}\) = \(\sqrt{3k_{B}T/m}\)), where the beam temperature (\(T_{b}\)) is set to give approximately the same vapour pressure as Cd at 100 \({}^{\circ}\)C (2.5\(\times 10^{-7}\) mbar).
## III Overview & Idea of the System
The basic idea of system is shown in Fig. 2, inspired by previous systems with other species [25; 26]. In brief, the main guiding principle was to minimise the amount of 229 nm light required and to separate the cooling based on the broad and narrow transitions at 229 nm and 326 nm, respectively, principally due to the aggressive and problematic nature of 229 nm light discussed above (see Section II)
Cadmium atoms will be emitted from an oven at 100 \({}^{\circ}\)C to form an effusive atomic beam which is loaded into a 2D MOT on the broad \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition at 229 nm. Unlike a 3D MOT, this system does not have steady-state trapping of the Cd atoms and therefore reduces the interaction time of the atoms with the 229 nm light. The magnetic field for a 2D MOT can furthermore be generated with a simple arrangement of permanent magnets [25], removing the requirement of high electrical currents otherwise necessary for generating of the required large gradients (\(>\)200 G/cm). The loading rate of this 2D MOT can be optionally enhanced by using a transverse-field Zeeman slowing beam, without the need for further magnets.
A low-intensity push beam will both plug the 2D MOT in the non-trapping dimension and direct the atoms from the 2D MOT vertically downwards towards a chamber, around 35 cm beneath the 2D MOT, where they will be loaded directly into a 3D MOT based on the narrow \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) 326 nm transition. Separating these two MOTs spatially, rather than temporally, is beneficial for improving vacuum quality and potentially allows for a continuous flux of cold atoms [22; 26]. In the specific case of Cd, there are additional practical benefits to separating the MOT regions; for example, this design lessens the problem of photoionisation by reducing the interaction time with the 229 nm light (see Section V) and protects the weak 3D MOT from the strong 229-nm photons.
For an atomic beam with a longitudinal velocity below the capture velocity of \(\sim\)5 m/s, the acceleration due to gravity begins to play a non-trivial role. For example, for an estimated transit distance of \(\sim\)30 cm, atoms with an initial longitudinal velocity in the horizontal direction of 4 m/s will fall 3 cm off axis. This would therefore require the 3D MOT to be carefully placed off the beam-axis and for the vacuum chamber to incorporate potentially non-trivial geometries. This is a known complication when trying to load directly on the intercombination transition of Yb systems [49]. We circumvent this problem by instead separating the two MOT chambers along the vertical axis, exploiting the acceleration due to gravity to help the atoms fall towards the 3D-MOT region [26]. This has the additional benefit of reducing the required power of the push beam, helping to protect the intercombination-transition MOT from the more powerful dipole-transition light.
The loading of the 3D MOT will also be enhanced by including two additional stages of cooling on the 326 nm transition: firstly in the transverse direction in optical molasses; and then using a pair of angled and crossed beams to slow in the longitudinal (vertical) direction. The combination of the relatively fast transverse velocity due to the high Doppler temperature of the \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition with the slow longitudinal velocity results in a divergent atomic beam (\(\sim\) 100 mrad). The transverse cooling is therefore required for collimation of the slow atomic beam coming from the 2D MOT, without which only a small fraction of the atoms would be capturable in the 3D MOT. The additional longitudinal slowing is less critical, but allows for a greater fraction of atoms to be captured by reducing the vertical component of the velocity gained during free fall. These beams are so-called crossed (or angled) slowing beams, a geometry which avoids interference with the 3D MOT itself and as has been demonstrated effectively in e.g. Dy, Er and Yb systems [50; 51; 52].
## IV Numerical Simulation of Atomic Trajectories
We numerically simulate the atomic trajectories of our system using pseudorandom sampling and the Monte Carlo method, implementing this process on Python. This technique has been successfully applied previously to simulate a broad variety of MOTs and MOT-based
Figure 2: Cartoon of the basic idea of the cold Cd source, using the \({}^{1}\)S\({}_{0}\)–\({}^{1}\)P\({}_{1}\) transition at 229 nm and \({}^{1}\)S\({}_{0}\)–\({}^{3}\)P\({}_{1}\) transition at 326 nm in spatially separated regions. The approximate required power per beam and frequency detuning of each stage is shown.
atomic beam sources, including standard 3D and 2D implementations [53; 54; 55], to more unconventional configurations such as pyramidal [56] and Rydberg-dressed systems [57].
The atomic trajectories are determined for a pseudo-randomly drawn initial position and velocity and by stepping the time sequentially and using the calculated acceleration due to radiation pressure to update the atom's position and velocity for the next time step. Although it is possible to perform a quantum full simulation of MOT dynamics [58], we instead chose temporal step sizes \(\tau\) such that \(\tau>1/\Gamma\) so that the atom-light interaction can be treated in a semi-classical manner. As different regimes of the simulation are dominated by different transitions with highly different linewidths, we alter this time step accordingly and also in a trade-off between accuracy and computational time, but \(\tau\) is typically \(\sim\)50 \(\mu\)s. The total end time for the simulation is made longer than generally required (\(\sim\)500 ms), with the simulation of each atom instead stopped when it fulfils certain criteria, such as leaving a certain spatial range or becoming trapped in the MOT.
The starting point of our simulation is an effusive atomic beam. Collimated sources of Cd have previously been demonstrated, continuously with a capillary-based oven system with a divergence \(\sim\)40 mrad [27], and in a pulsed manner using laser ablation [24]. Here we model an oven with a simple single 32-mm long, 1-mm diameter capillary, for which the Knudsen number \(K_{N}\gg\)1 at 100 \({}^{\circ}\)C, taking a Van der Waals radius of 158 pm for Cd [59], meaning intra-atomic collisions can be ignored. Although the longitudinal and transverse velocity distributions from a capillary-based oven are known [60; 61], it is not possible to sample from them independently, due to the permissible range of transverse velocities for successfully exiting the capillaries depending upon the longitudinal velocity. We instead use the Monte-Carlo method to generate three velocity components using the Maxwell-Boltzmann distribution and geometrically determine whether these atoms will exit our oven design. This simulation is performed until the desired number of atoms have successfully exited the capillary, which is typically \(10^{4}\). The generated transverse velocity distributions of this simulation match the theoretical distribution well [61; 62], as shown in Fig. 3.
All the laser beams are modeled as perfect Gaussian beams, with a sharp truncation introduced by the diameter of the vacuum viewport they will be shape through. We determine the local light-induced acceleration by these beams at each position and velocity by considering a model that includes the vector of the local magnetic field, not just the field magnitude [63]. In this model, the polarisation of the light is decomposed into its different \(\sigma^{-}\), \(\pi\) and \(\sigma^{+}\) components, following the quantisation axis provided by the local magnetic field direction. This allows for a more accurate determination of the scattering force at arbitrary 3D fields and positions within our simulation, for example when the atom is not along the beam axes. Finally, when simulating the \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) transition, we typically assume that the laser beam is frequency modulated. This technique is often used to enhance the trapping potential of MOTs on intercombination transitions [45]. We model this by assuming the total power of the laser beam to be evenly distributed between the \(j\) frequency modes.
Following this formalism [63] and adjusting for the possibility for multiple frequency modes, we can write the acceleration due to the \(i^{\text{th}}\) laser beam, propagating in the direction of a unit vector \(\hat{\mathbf{k}}_{i}\) with intensity \(I\) and operating on a transition with natural linewidth \(\Gamma\) and vacuum wavelength \(\lambda\):
\[a_{i}=\frac{n\pi\Gamma}{m\lambda}s_{j}\hat{\mathbf{k}}_{i}.\] \[\sum_{n=-1,0,1}^{j}\frac{\eta_{n}}{1+s_{\text{tot}}+4\left( \Delta_{\Gamma}-k_{\Gamma}\right)\hat{\mathbf{k}_{i}}\cdot\mathbf{v}-\Delta g _{F}\mu_{\Gamma}n\left|\mathbf{B}\right|}, \tag{1}\]
Figure 3: Velocity and position distributions (n=10\({}^{4}\)) simulated at the output of our oven at T=100 \({}^{\circ}\)C. (a) Positional distribution in the transverse beam plane (\(x\)-\(z\), see Fig. 4); (b) longitudinal velocity distribution; (c) transverse velocity distribution, compared to the theoretical distribution. (d) Positional distribution in the transverse beam plane at an axial distance of 75 mm from the oven – the approximate position of the 2D MOT where most of the atoms are within a \(\sim\)2 mm radius.
where \(s_{j}=I/jI_{\rm sat}\) is the saturation parameter of a single frequency mode and \(s_{\rm tot}=\sum_{i}s_{j}\) is the combined saturation parameter from all beams [63, 64], \(\Delta_{\Gamma}\) is the detuning of the mode in units of linewidth, \(k_{\Gamma}=1/\left(\lambda\Gamma\right)\), \(\mu_{\Gamma}=\mu_{B}/\left(2\pi\Gamma\right)\) with \(\mu_{B}\) the Bohr magneton, and \(m\) is the atomic mass, \(\mathbf{v}\) the atomic velocity and \(\hat{\mathbf{B}}\) is unit vector of the magnetic field at the position in question. We concentrate on the bosonic isotopes of Cd for which \(\Delta g_{F}=g_{F}^{\prime}m_{F}^{\prime}-g_{F}m_{F}\) is 1 and 1/6 for the 229 nm and 326 nm MOT transitions, respectively. The \(\sigma^{-}\), \(\pi\) and \(\sigma^{+}\) components are accounted for by the summation over \(n\) and with \(n\) = -1, 0, 1, respectively. The parameter \(\eta_{n}\) which is given by \(\eta_{0}=\left(1-\left(\hat{\mathbf{k}_{i}}\cdot\hat{\mathbf{B}}\right)^{2} \right)/2\) and \(\eta_{\pm 1}=\left(1\mp\alpha\hat{\mathbf{k}_{i}}\cdot\hat{\mathbf{B}}\right) ^{2}/4\), where \(\alpha=\pm 1\) is the handedness of the circularly polarised light relative to the propagation direction [63]. We also extend this formalism to account for linear polarisations, as well as circular. In this case we instead use \(\eta_{0}=\left(\hat{\mathbf{E}_{i}}\cdot\hat{\mathbf{B}}\right)^{2}\) and \(\eta_{\pm 1}=\left(1-\left(\hat{\mathbf{E}_{i}}\cdot\hat{\mathbf{B}}\right)^{2 }\right)/2\), where \(\hat{\mathbf{E}_{i}}\) is the unit linear polarisation vector of the beam.
The details of the various magnetic field calculations are given in the relevant sections below, but in all cases we first calculate a field on a spatial grid across the full experimental region. These calculations are then linearly interpolated and saved for computational efficiency, allowing for the determination of the total magnetic field from all sources at any spatial points within the simulation region. Combined with Eq. 1, this means that the we can simulate the light force at all points. Earth's magnetic field is assumed to be consistent and cancelled and therefore not considered.
In addition to determining the acceleration due to each beam, we also use Eq. 1 to estimate the total scattering rate \(R\). This can then be used to model the heating effect of spontaneous emission via the addition of a random momentum kick \(\hbar\left|\mathbf{k}\right|\sqrt{R\tau}\hat{\mathbf{x}}\), where \(\hat{\mathbf{x}}\) is a unit vector chosen pseudorandomly from an isotropic distribution [55, 56]. In this way, the temperature of the atoms is limited to the Doppler temperature [65] instead of continuing to decrease towards zero, which is important for correctly understanding the behaviour of the \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition stages (Sec. V) where the 2.2 mK Doppler temperature leads to non-negligible residual velocities.
## V The 2D-Mot, Zeeman slower and push beam at 229 nm
The first-stage of cooling and trapping of our design is to load atoms from the effusive beam into a 2D MOT. We have designed a compact chamber with cut-out regions that allow for permanent magnets to be placed at a minimum distance of 22 mm from the MOT centre, but to remain external to the vacuum chamber for experimental ease (Fig. 4). With two stacks of three permanent bar magnets each (neodymium, 25\(\times\)10\(\times\)3 mm\({}^{3}\), M\(\sim\)9\(\times\)10\({}^{5}\) A/m), we can generate magnetic field gradients of \(\sim\)250 G/cm which are approximately uniform across the MOT region. A set of three magnets is the maximum possible with our chamber design and it produces a higher capture velocity than a set of one or two. Although analytic solutions for the field produced by bar magnets exist [66], we find minimal deviations when mod
Figure 4: The Cd atom source. (a) To-scale drawing of the 2D-MOT chamber design and the field generated by the permanent magnets. (b) and (c) Calculated magnetic field components (solid lines) along the MOT beam axes. For the relevant component along the beam axis, the calculated field gradient (dashed line, units G/cm with the same scale) and the measured magnetic field (black diamonds) are also shown. Contour plots of the simulated capture velocity of the 2D MOT as a function of (d) beam detuning and beam power (\(w=2\) mm) and as a function of (e) beam power and radius (\(\Delta\)=\(-1.5\Gamma\)) are also shown. The atoms slowed in the 2D MOT are accelerated vertically downwards by a weak push beam (not shown).
elling the magnets as point-source dipoles. Fig. 4 shows both the calculated and measured magnetic field profiles, as determined with a Hall probe, which are in good agreement.
We can use this field to estimate the capture velocity, to the nearest m/s, of the 2D MOT using the simulation method outlined in the previous section. In these simulations we use atoms without transverse velocity and remove the random heating arising from scattering, effectively therefore only considering cooling and trapping in 1D. We perform this simulations for a range of beam powers, beam radii, and frequency detunings, showing that capture velocities approaching 100 m/s are achievable for this configuration (Fig. 4). We find that for a reasonable 229 nm power of just 10 mW per beam and with a beam radius of 2 mm and a detuning of \(-1.5~{}\Gamma\), this configuration can achieve capture velocities of \(\sim\)70 m/s. This beam radius is chosen to match the atomic beam size at the 2D MOT position (Fig. 3 (d)). The high capture velocity is a positive feature of the 229-nm transition, arising from the large accelerations achievable due to the low wavelength and high linewidth (see Table 1). Although increasing the power can improve the performance (Fig. 4), we limit the power to 10 mW to be comfortable for long-term production of current laser technology [27] and to protect the vacuum viewports. Likewise we use a detuning of \(-1.5~{}\Gamma\) to be within a region which is immune to frequency and intensity fluctuations, as the capture velocity drops dramatically with increasing detuning or declining intensity following the optimum value. The resulting \(\sim\)70 m/s capture velocity will allow for appreciable atom numbers to be loaded into the 2D MOT directly from an effusive beam or vapour, as studied later in Section VII.
Following trapping in the 2D MOT, we can consider the case of a low-intensity push beam orthogonal to the 2D-MOT plane, which serves the purpose to plug the atoms in one direction and to accelerate them in the other, generating a slow beam of atoms. This beam is low-intensity to reduce the 229-nm power requirements, to maintain a small velocity of the output atoms, and to not interfere with the 326-nm MOT which is placed directly vertically below (see Section VI). By incorporating the push beam, we can simulate the interaction time with the 229 nm light in the 2D MOT and push beam, finding it to be just a few ms (Fig. 5 (a)). For comparison, a steady-state 3D MOT on this transition with similar beam parameters is loaded for at least 200 ms [5]. The expected losses due to photoionisation for our system are therefore around only 2%, given a calculated value of \(\Gamma_{\text{ion}}\)=4 Hz, and such losses can be considered negligible.
To investigate the output atomic beam we simulate atoms from our atomic oven which are then slowed and
Figure 5: The low-intensity push for generating a slow beam of atoms. (a) Sample trajectories of atoms interacting with the 2D MOT and push beam, most interactions happen within a 5 ms time frame. (b) Mean longitudinal velocity of the atoms exiting the 2D MOT (50 mm distance) as a function of push beam power and detuning (\(w\)=3 mm); (c) the mean transverse displacement of the atoms from the beam axis for the same variables. Only around 150 \(\mu\)W of power is required to produce atoms at the target velocity of 4 m/s.
Figure 6: Simulated evolution of the position (upper row) and velocity (lower row) distributions of our slow atomic beam as it propagates in the vertical direction \(z\). Columns from left to right show the distributions at a distance of 50 mm below the 2D MOT; 100 mm below the 2D MOT and after transverse cooling at 326 nm; and 300 mm below the 2D MOT and after the crossed slowing beams. A positive value of \(v_{z}\) is in the direction of gravity, the effect of which is included.
trapped in the 2D-MOT and exit along the axis of the push beam for a range of experimental powers and frequency detunings, with the beam radius fixed at 3 mm to be larger than the size of the 2D MOT. In Fig. 5 we consider the output velocity and positional spread of such atoms at a vertical distance of \(z=50\) mm from the 2D-MOT centre, with a target axial velocity of around 4 m/s. At this distance, the force from the push beam has become small and the velocity distribution is largely fixed (Fig. 5 (a)). The longitudinal velocity of the output atomic beam is controlled by the intensity of the push beam and its detuning, as shown in Fig. 5 (b). The advantage of increasing the frequency detuning is that the sensitivity to power fluctuations is reduced, for example, decreasing the sensitivity by a factor 2 when increasing the detuning from \(-2\Gamma\) to \(-3\Gamma\). Even at \(\Delta\)=\(-3\Gamma\), however, only \(\sim\)150 \(\mu\)W is required for \(v_{z}\)=4 m/s. For the specific case of a push beam power of 170 \(\mu\)W and \(\Delta\)=\(-3\Gamma\), the simulated distribution is shown in Fig. 6, giving an axial velocity centred around 4 m/s with a transverse spread of \(\pm\)0.5 m/s, compatible with the Doppler temperature. With such a velocity distribution, the mean transverse displacement from the vertical axis also remains modest over short distances (\(<\)10 mm at 50 mm displacement), though subsequent transverse cooling on the narrow 326-nm intercombination transition is mandatory, as discussed later (Section VI).
We finally consider the possibility of enhancing the atom number in the 2D MOT by use of a Zeeman slowing beam. Generating these fields over the short distances requested (\(\sim\)60 mm) is challenging, even with permanent magnets. For example, while permanent magnets in Halbach arrays have been used and studied in detail for Zeeman slowers with e.g. Rb [66; 67] and Yb [49], the generated fields had gradients of 3 G/cm [66], 12 G/cm [67] and 20 G/cm [49], an order of magnitude lower than what is requested in this case (\(\sim\)100 G/cm). Such gradients are difficult to design, especially without affecting the magnetic field in the 2D MOT region.
However, the negative gradient slope of the 2D MOT field makes a reasonable approximation of a transverse field Zeeman slower, as shown in Fig. 7 (a). This field requires linear polarisation orthogonal to the magnet field direction [68], which is therefore equally decomposed into \(\sigma^{+}\) and \(\sigma^{-}\) components. Only half the input power is therefore available to drive the \(\sigma^{+}\) needed for our decreasing field configuration, effectively doubling the power requirements compared to a longitudinal field Zeeman slower [69], which runs counter to the design idea of minimizing the required 229-nm power (see Section III).
Nevertheless, the performance of a Zeeman slower using this field is shown in Fig. 7 as a function of the slowing beam power. The beam waist is 2 mm (focused at the oven output) and the detuning \(\Delta=-6.5\)\(\Gamma\) to match the ideal field as closely as possible. The range of oven output velocities captured by the 2D MOT is shown, as atoms that are too slow can be pushed backwards by the Zeeman slower beam, especially at higher powers (Fig. 7 (c)). This range can be approximately converted into a normalized atom number by integrating the longitudinal velocity distribution of the oven output (see Fig. 3) within the capture velocity range. As can be seen in Fig. 7 (d), at \(\sim\)30 mW of power the fraction of atoms it is in principle possible to load into the 2D MOT is increased to \(\sim\)30%, an order or magnitude improvement from the case without the Zeeman slower. Due to this large required beam intensity, the Zeeman slower is in general ignored in the subsequent analysis and discussion.
## VI Direct loading of a 3D-MOT at 326 nm
Whilst the broad dipole-allowed \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition allows for efficient slowing of the fast atoms from the longitudinal beam, the Doppler temperature of 2.2 mK is
Figure 7: (a) Blue line shows the ideal field of the Zeeman slower for slowing atoms from 135 m/s to 75 m/s over 58 mm with \(\Delta=\) -6.5 \(\Gamma\). The orange line shows the transverse field from the 2D MOT magnets and the black diamonds the data measured with a Hall probe. (b) The longitudinal velocity of atoms coming from the oven to the 2D MOT when the Zeeman slower beam power is set to 20 mW. Blue traces are those captured in the 2D MOT. (c) The shaded region shows the oven output velocities captured in the 2D MOT when adding a Zeeman slowing beam. (d) The corresponding fraction of the oven output at 100 \({}^{\circ}\)C. This value peaks for 30 mW of beam power at around 30% of atoms, up from 3% for the case without any slower.
impractical for either direct performance of atom interferometry or for further cooling in an optical dipole trap (see Section VIII). Further cooling with the \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) transition is therefore required, as discussed in Section II, and we propose to directly capture the slow atomic beam exiting the 2D MOT into a MOT based on this transition.
To understand the feasibility of this approach, we first numerically determine the capture velocity of a Cd MOT at 326 nm, for a broad range of experimentally realistic parameters. In simulating this process, we consider copper-wire coils wound around a standard DN100CF flange (see Section VII for vacuum system details), and calculate the generated magnetic field analytically along the axial direction and numerically otherwise, and perform a linear interpolation between these points. We assume usable laser powers of up to 100 mW per beam in a retro-reflected configuration, based on our recently developed system [28].
We determine the the capture velocity for a broad range of parameters for atoms travelling along the vertical \(z\) direction and with the MOT beams propagating through the coils along the \(x\) axis and the other beams at 45\({}^{\circ}\) to the \(y\) and \(z\) axes in the \(x=0\) plane. (Fig. 8 (a)). Figures 8 (b) and (c) show that the capture velocity can be \(>\)5 m/s for a large range of feasible parameters in terms of magnetic field gradients and frequency modulation given the available power. In the case of a magnetic field gradient \(\nabla B\)=30 G/cm, as used previously for this MOT [5] and 100 frequency modes evenly separated by \(\Gamma/2\pi\) (6.6 MHz amplitude), we find that beams with radii \(\geq\)5 mm are required to capture atoms at \(v_{z}\)=5 m/s (Fig. 8 (d)).
While the first-stage of cooling produces a beam of atoms sufficiently slow to be captured by this MOT (see Section V), a problem arises in the that the transfer distance from the 2D to 3D MOT will be \(\sim\)35 cm, based on standard vacuum components and other considerations such as differential pumping. Due to the slow longitudinal velocity and high Doppler temperature of the 229 nm transition, the atoms will diverge significantly over this distance (see Fig. 6). We therefore simulate transverse cooling 85 mm below the 2D MOT in 2D optical molasses with the \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\) transition, which effectively collimates the slow atomic beam. The slow atomic beam after the molasses is shown in Fig. 6 for the case of 10 mW per 5 mm radius beam with 100 frequency modes. The transverse velocity spread has been reduced to \(<\pm\)0.05 m/s, an order of magnitude reduction. The lack of cooling in the longitudinal direction does cause some heating, though this is not seen to be significant.
Furthermore, as shown in Fig. 9 (a), this cooling is effective for a broad range of beam powers and radii, requiring a beam radius \(>\)4 mm and powers \(>\)2 mW to approach optimal transverse cooling. The transverse veloc
Figure 8: (a) Schematic of the simulation of the 3D MOT at 326 nm. Atoms downwards along the vertical \(z\) axis and interact with beams propagating along the \(x\) axis and at 45\({}^{\circ}\) to the \(y\) and \(z\) axes (\(x=0\) plane). The magnetic field coils produce an axial field along the \(x\) axis. Frequency modulation of the \(j\) modes is simulated assuming the first mode at \(\Delta\)=\(-2\Gamma\) with subsequent modes further detuned by \(\Gamma\) and all modes having the same saturation parameter \(s\). The capture velocity is determined as a function of (b) saturation parameter per frequency mode and magnetic field gradient (\(w\)=5 mm), (c) the saturation parameter per frequency mode and the number of modes (\(\nabla B\)=30 G/cm, \(w\)=5 mm), and (d) the beam power and beam waist (\(\nabla B\)=30 G/cm, modes = 100). For a broad range of feasible experimental parameters, a capture velocity of around 5 m/s is achievable.
Figure 9: Transverse and longitudinal cooling at 326 nm of the slow atomic beam. (a) Mean transverse velocity of the slow atomic beam following the 326-nm optical molasses for a range of beam radii and powers (100 frequency modes). (b) Sample transverse velocity distribution after the molasses stage (\(w\)=5 mm, \(P\)=10 mW) showing that the output approaches the Doppler limit. (c) Longitudinal deceleration on the atoms due to the crossed slowing beams angled at 16\({}^{\circ}\), targeting slowing down to 4 m/s. (d) Fraction of the atoms having a longitudinal velocity less than 4 m/s following the crossed slowing.
ity distribution approaches the Doppler limit (Fig. 9 (b)) and substantial transverse cooling can be achieved for even low powers. Moreover, we find that this cooling can occur relatively rapidly over just 2 ms, with reasonable power levels (10 mW per beam, radius 5 mm) and frequency modulations (100 modes). This cooling is also stronger than the contribution of the 229 nm scattered from the 2D MOT. To estimate the effect of this on the atoms, we consider a worst-case scenario of all the input light being scattered resonantly and isotropically from the MOT centre. At the molasses position, the force from this scattered light is two orders of magnitude lower than the force from a single beam of the molasses and it is therefore taken to have a negligible effect.
The capture efficiency of the MOT can be further enhanced by using vertical slowing on this output of the 2D molasses. We consider a pair of crossed slowing beams aligned with a full angle of 16\({}^{\circ}\) and again on the 326 nm transition. Assuming 100 frequency modes, the acceptance velocity range of the force is \(\sim\)2 m/s (Fig. 9 (c)), meaning the detuning \(\Delta\) must be carefully selected in order to slow the desired atoms, also accounting for the non-negligible Zeeman shifts arising from the MOT coils (see Eq. 1). In our case this corresponds to a detuning of \(\Delta\)=\(-\)650 \(\Gamma\) (43 MHz) to slow the atoms to a minimum velocity of 4 m/s. With just a few mW of power, the crossed slowing of atoms increases the fraction of atoms with a longitudinal velocity \(v_{z}\)\(<\)4 m/s from 30% to 95% (Fig. 9 (d)). As shown in Fig. 6, this slowing process produces only minimal heating in the transverse direction and the slow atomic beam from the 2D MOT has therefore been both collimated for efficient transport and further cooled to below the 3D MOT capture velocity.
## VII Full vacuum apparatus & MOT loading rates
We utilise the results of the preceding sections to design a vacuum system capable of generating a cold beam of Cd which can be loaded into a MOT on the 326-nm \({}^{1}\)S\({}_{0}\)\({}^{,3}\)P\({}_{1}\). A to-scale diagram of the design is shown in Fig. 10. The 2D and 3D MOT regions are each pumped with an ion pump and non-evaporable getter and are sep
Figure 10: (a) To-scale drawing of the vacuum chamber design overlaid with a sample of simulated atomic trajectories. Some features, such as vacuum pumps, have been removed for clarity. The colours of the trajectories represent different outcomes: grey lines are always lost, green lines are captured with just the 2D and 3D MOTs active, cyan lines are captured when the transverse cooling is also activated, and red lines when the crossed slowing beams are active. See text for details. (b) Atoms captured for different cooling configurations based on velocity output from the oven, with the same colour scheme as (a). (c) Atoms captured with (blue diamonds) and without the (red squares) the Zeeman slower stage being active.
arated by a gate valve to allow for the replacement of viewports in the case of UV-induced damaged, without having to open the whole system. This geometry also provides the finalised distances for the different cooling stages, namely the molasses cooling 85 mm below the 2D MOT and the crossed slowing 100 mm above the 3D MOT, which is itself 350 mm below the 2D MOT. With this finalised design, we are able to simulate the atomic trajectories throughout the whole system at a range of oven temperatures and estimate the atomic loading rate into the 3D MOT, a more useful experimental parameter than e.g. the capture velocity used above.
Information on all the simulated beam powers, radii, frequency detuning etc. is given in Table 2. We first attempt to quantify the effect of the different stages of our design, by running the simulation at \(T\)=100 \({}^{\circ}\)C (partial pressure of 2.5\(\times\)10\({}^{-7}\) mbar, flow rate of 1.1\(\times\)10\({}^{10}\) atoms/s) with and without the Zeeman slower and the crossed (vertical) and transverse cooling at 326 nm. We find that the introduction of the transverse slower increases the capture efficiency by more than an order of magnitude and with the introduction of vertical slowing it increases by another factor \(\sim\)3 (Table 3), for an approximate total factor of 40 in the efficiency. As Fig. 10(b) shows, with these two cooling stages active, nearly all of the atoms below the capture velocity of the 2D MOT are captured by the 3D MOT, showing that the transfer between the MOTs is highly efficient. Adding the Zeeman slower leads to a substantial increase in the capture efficiency, by capturing a higher initial longitudinal velocity class in the 2D MOT, as shown in Fig. 10(c).
As shown in Table 3, we estimate expected loading rates into our 3D MOT and find values \(\sim\)10\({}^{7}\) atoms/s for an oven temperature of 100 \({}^{\circ}\)C, without the Zeeman slower. When adding the Zeeman slower beam, this increases by approximately a factor 5 to 10\({}^{8}\) atoms/s, though at the cost of a substantial increase in the required 229 nm power. The loading rate is determined by calculating the expected flow rate of atoms at the simulated oven temperature and capillary design and then multiplying this by simulated capture efficiency. We further scale the loading rate based on the fractional natural abundance of the \({}^{114}\)Cd isotope of 0.29 [70]. Although we have designed the system to work with a minimal amount of 229 nm light, we note that the system is scalable should problems such as stable power production and vacuum viewport damage be solved (see Section II). In addition to allowing for a Zeeman slower, this would allow for an increase in the 2D MOT beam powers, resulting in an approximate threefold increase in loading rate (Fig. 11).
The parameters presented in Table 2 can also be varied to look for the optimum loading rate, especially the effect of the 3D MOT beams and the Zeeman slower. Figure 11 shows that the loading rate is robust for a broad range of 3D MOT beam powers and beam radii (100 frequency modes), provided the power is \(>\)10 mW and the beam radius \(>\)3 mm. The Zeeman slower beam shows a maximum loading rate which differs slightly from what would naively be expected from looking only at the velocity class addressed by the slowing beam (cf. Fig. 7 (d) and Fig. 11 (c)). This is due to the small force imbalance the Zeeman slower introduces to the 2D MOT, which can deflect the slow atomic beam off axis, especially when the Zeeman slower beam radius exceeds the 2D MOT beam radius. Substantial increases in loading rates are available (factor 5), even for powers down to around 10 mW with a focused beam (\(w<\)2 mm).
It should be noted that these loading rates represent the upper bound for the loading rate as our single-atom simulation does not consider losses such as collisions with background gases or other cold Cd atoms, as well as photoionisation losses. Nevertheless, they suggest that significant numbers of atoms can be quickly loaded into an intercombination transition MOT without the need for significant powers or interaction times with the problematic 229 nm \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition. The loading rates presented can also be enhanced by increasing the oven temperature above the modest 100 \({}^{\circ}\)C used here, and by an approximate factor 3 by using enriched cadmium sources, as have been employed previously elsewhere [71; 5].
Figure 11: Simulated loading rate of \({}^{114}\)Cd into the 3D MOT. (a) Loading rate as a function of 2D MOT beam power. (b) Loading rate as a function of 3D MOT beam power and radius (100 frequency modes). (c) Loading rate as a function of Zeeman slower beam power and radius. All simulations with 10\({}^{4}\) atoms and non-variable parameters as shown in Table 2, except the Zeeman slower which is only used in (c).
## VIII Trapping, Transfer & Launching at 1064 nm
Although the micro-Kelvin temperatures achieved in the 326-nm MOT [5] are sufficient for many applications, producing quantum degenerate sources requires further cooling towards the nK level. This is typically achieved by performing evaporative cooling in an optical dipole trap [72; 73], neither of which techniques have been demonstrated with Cd yet. In this section we consider the feasibility and prospects of this approach and also the discuss the transfer and launching of Cd atoms using related techniques, with particular reference to a dual-species interferometer with Sr [9].
We consider bosonic Cd atoms in the ground state and consider the two-level system made with the \({}^{1}\)P\({}_{1}\) level. In this approximation the optical dipole potential \(U\left(\mathbf{r}\right)\) can be calculated according to [74],
\[U\left(\mathbf{r}\right)=\frac{3\pi c^{2}}{2\omega_{0}^{3}}\left(\frac{\Gamma} {\omega_{0}-\omega}+\frac{\Gamma}{\omega_{0}+\omega}\right)I\left(\mathbf{r} \right), \tag{2}\]
where \(\omega\) and \(I\left(\mathbf{r}\right)\) are, respectively, the angular frequency and intensity profile of the trap light, and \(\Gamma\) and \(\omega_{0}\) are the natural linewidth and angular frequency of the two-level system, respectively, in this case the \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition.
As is clear from Eq. 2, Cd is not especially suited to trapping in this manner due to the reduced trap depth coming from the necessarily large values of \(\omega_{0}-\omega\) and especially \(\omega_{0}^{3}\). Nevertheless, for a optical dipole trap formed from two focused beams at 1064 nm crossing at an angle of 60\({}^{\circ}\) in the horizontal plane, trap depths in excess of 30 \(\mu\)K can be achieved with reasonable powers and waists (Fig. 12 (a) and (b)). Commercial lasers at this wavelength can readily produce powers \(>\)40 W and with M\({}^{2}<\)1.1. This means that efficient loading into a dipole trap from the 3D MOT (\(T\sim\mu\)K) will be available. Due to the lack of known cold collisional properties of Cd, however, it remains to be seen which isotopes or isotope mixture will be suitable for evaporative cooling. In any case, the optical dipole trap can also serve as the initial stage in transferring the prepared atoms \(\sim\)40 cm into the science chamber of a dual-species interferometer [9]. This transfer can also be performed using a 1064 nm laser, with a single shifting focus beam using an optically compensated zoom lens [75], for which similar waists and powers are needed.
The free evolution time \(T\) of the atom interferometer can be enhanced by launching the atoms in a fountain configuration. A dual-species launch requires that an accelerating lattice be used with sufficient trap depth for both atom species, if the atoms are to be simultaneously launched with the along the same spatial trajectory and with the same velocity, a key requirement for minimising systematic errors, although the difference in mass of the two species will result in different trajectories following the application of the interferometry beams. A standing wave based on a high-power 1064-nm laser is well-suited to this task and we calculate the expected launch efficiencies for both Cd and Sr (Fig. 12 (c)). Our model considers losses from both spontaneous single-photon scattering, which is low due to the large detunings and launch times, and the considerably larger losses due to Landau-Zener tunnelling [76]. To estimate the losses due to Landau-Zener tunnelling, we consider the launch as a sequence of Bloch oscillations and first estimate the trap depth using Eq. 2, considering a retro-reflected standing-wave configuration [77]. This computed trap depths can be used to determine the band gap energies by numerically solving the Schrodinger equation [78]. For a launch velocity \(v_{l}\), the fraction of surviving atoms \(f\) is then given by [79],
\[f=\left(1-\exp\left[-\frac{\pi\Omega^{2}}{2\alpha}\right]\right)^{N}, \tag{3}\]
where \(\hbar\Omega\) is the band gap energy, \(\alpha\) is the chirp rate of the lattice frequency, and \(N=\frac{mv_{l}}{2\hbar k_{L}}\) is the number of avoided crossings that the atoms pass through and \(k_{L}\) is the lattice wavenumber. Eq. 3 shows that large trap depths and slow chirp rates minimise tunnelling losses.
Figures 12 (d) and (e) shows the expected losses for both Cd and Sr as a function of the laser beam power for a final launch height of 1 m and with a lattice acceleration distance of 10 cm. For the used waist of 0.5 mm, the
\begin{table}
\begin{tabular}{l c c c c c} Beam (number) & \(P\) (mW) \(w\) (mm) & \(\Delta\) (\(\Gamma\)) & Modes & s \\ \hline \multicolumn{5}{c}{229 nm\({}^{-1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\)} \\ \hline
2D MOT (\(\times\)4) & 10 & 2 & -1.5 & 1 & 0.16 \\ Push beam (\(\times\)1) & 0.17 & 3 & -3 & 1 & \(10^{-3}\) \\ Zeeman Slower (\(\times\)1) & 25 & 2 & -6.5 & 1 & 0.4 \\ \hline \multicolumn{5}{c}{326 nm – \({}^{1}\)S\({}_{0}\)-\({}^{3}\)P\({}_{1}\)} \\ \hline
3D MOT (\(\times\)6) & 50 & 5 & -2 & 100 & 5.1 \\ Transverse Cooling (\(\times\)4) & 10 & 5 & -1 & 100 & 1.0 \\ Crossed Slowing (\(\times\)2) & 2.5 & 3 & -650 & 100 & 0.71 \\ \end{tabular}
\end{table}
Table 2: Properties of the laser beams in the final simulation of the system, including optical power (per beam) \(P\), beam waist \(w\), detuning \(\Delta\), number of frequency modes and the saturation parameter per mode \(s\).
\begin{table}
\begin{tabular}{l c c c} & Efficiency & Loading Rate (atoms/s) \\ \hline
2D MOT \& 3D MOT & 0.02 \% & 6 \(\pm\) 4 \(\times\) 10\({}^{5}\) \\ + Transverse Cooling & 0.34 \% & 1.0 \(\pm\) 0.2 \(\times\) 10\({}^{7}\) \\ + Crossed Slowing & 0.85 \% & 2.6 \(\pm\) 0.3 \(\times\) 10\({}^{7}\) \\ + Zeeman Slower & 4.13 \% & 12.6 \(\pm\) 0.6 \(\times\) 10\({}^{7}\) \\ \end{tabular}
\end{table}
Table 3: Capture efficiency and loading rates of the 3D MOT at 326 nm for natural \({}^{114}\)Cd and an oven temperature of 100 \({}^{\circ}\)C. These simulations use 10\({}^{4}\) atoms and the parameters given in Table 2. Efficiency and loading rates are shown for different cooling configurations – see text for more details. Errors are from the counting statistics of the simulation.
Rayleigh length is 0.7 m and so larger than the launch region. Here the losses from Cd are clearly considerable unless high powers are used due to the reduced depth from the \(1/\omega^{3}\) dependence of Eq. 2. This is a more serious issue than for the crossed optical dipole trap, though the required powers for a launch of 1 m with reasonable levels of atom losses (e.g. \(<\) 50%) are still experimentally feasible (\(\sim\)15 W).
The above considerations have assumed that the launch velocity is continuous, whereas in reality it is quantised according to the \(2\hbar k_{L}\) momentum imparted by the Bloch oscillations of the lattice undergoing an acceleration \(a_{L}\)[76, 78]. There will therefore be a small difference in the launch velocity for different Cd and Sr isotopes arising from their mass difference. Furthermore, as the Bloch oscillation period also depends upon the mass (\(\tau_{B}=2\hbar k_{L}/ma_{L}\)), different isotopes will not in general undergo an equal number of oscillations during the launch and isotopes will be launched in superpositions of momentum states. However, careful selection of the launch characteristics can help suppress these effects, by selecting oscillations close to the ratio of the masses. For example, when considering the two most abundant isotopes, \({}^{114}\)Cd and \({}^{88}\)Sr, a launch with exactly \(N_{\rm Cd}=631\) oscillations, \(N_{\rm Sr}\approx 487\). In this case the atoms will be launched to 98 cm and the difference in launch velocities will be \(\Delta v_{L}=0.2\) mm/s and the difference in apogees just 0.08 mm.
## IX Conclusion & Outlook
We have presented the design and thorough simulation of a state-of-the-art apparatus for producing ultracold Cd samples. The design is cognisant of the unique challenges of this atomic species, especially the broadband dipole-allowed \({}^{1}\)S\({}_{0}\)-\({}^{1}\)P\({}_{1}\) transition at 229 nm. Specifically, we have simulated that it is possible to efficiently load Cd atoms directly into an intercombination-transition MOT starting from an atomic oven, by first using the 229 nm to generate a slow atomic beam, overcoming problems associated with photoionsiation. Such a segmented architecture may be useful for other alkaline-earth-like elements, especially Zn whose relevant transitions are similar to Cd (214 nm, \(\Gamma\)=2\(\pi\times\)71 MHz and 308 nm, \(\Gamma\)=2\(\pi\times\)4 kHz) and whose laser cooling and trapping is in its infancy [7].
The design is to be used as a basis for atom interferometry [9], where the intercombination transitions of Cd make it a good candidate for both Bragg interferometry [80] and single-photon clock-transition atom interferometry [14, 47]. Longer term, the device seems compatible with continuous atom laser systems [81] if differential pumping is introduced on the vertical axis, which is currently left free for interferometry beams. Additionally and conversely, it may be noted that the device can also be operated in a pulsed configuration, with the 229 nm only turned on intermittently for bursts lasting the 5 ms or so required to cool and trap in the 2D MOT, whilst the
Figure 12: Prospects for dipole trapping and lattice launching at 1064 nm. Calculated depth of the optical dipole trap in the (a) horizontal and (b) vertical directions as a function of single-beam power and waist. Depths significantly larger than the 3D MOT Doppler temperature are available. (c) Proposed launch scheme in which atoms are accelerated in a moving lattice over a distance of 10 cm and released with velocity \(v_{L}\). (d) Estimated launch efficiencies for different launch heights for various lattice beam powers (beam waist \(w\)= 0.5 mm). (e) Estimated launch efficiencies (height = 1 m) as a function of single lattice beam power and waist.
3D MOT remains on for the whole duration. This may be beneficial for protecting the vacuum windows and the BBO crystal, for which the damage mechanisms are related to sustained exposure to continuous-wave 229 nm. A similar idea has very recently been shown to be highly effective in preventing degradation in an evacuated optical cavity at 244 nm [82].
## X Acknowledgments
We thank Aidan Arnold and Stefan Truppe for useful discussions, Nicola Grani and Shamaila Manzoor for initial work on the simulations, and Leonardo Salvi for help with the magnetic field calculations. This work has been supported by the European Research Council, Grant No.772126 (TICTOCGRAV). J.N.T acknowledges the support of the Horizon Europe Grant ID 101080164 (UVQuanT).
|
2305.08856 | Some results of fixed point of non-expansive mappings on asymmetric
spaces | Some fixed point results of classical theory, such as Banach's Fixed Point
Theorem, have been previously extended by other authors to asymmetric spaces in
recent years. The aim of this paper is to extend to asymmetric spaces some
others fixed point results for contractions, shrinkage maps and non-expansive
maps. In fact, a version of Edelstein type theorem (Theorem 21), Schauder type
theorem (Theorem 22), and Kirk type theorem (Theorem 26) are stated and proved
in this new context. In order to do that, classical definitions and results
were adapted to this new context. Also, the normal structure in the asymmetric
case was considered. | L. Benítez-Babilonia, R. Felipe, L. Rubio | 2023-03-21T19:02:11Z | http://arxiv.org/abs/2305.08856v1 | # Some results of fixed point of non-expansive mappings on asymmetric spaces
###### Abstract
Some fixed point results of classical theory, such as Banach's Fixed Point Theorem, have been previously extended by other authors to asymmetric spaces in recent years. The aim of this paper is to extend to asymmetric spaces some others fixed point results for contractions, shrinkage maps and non-expansive maps. In fact, a version of Edelstein type theorem (Theorem 21), Schauder type theorem (Theorem 22), and Kirk type theorem (Theorem 26) are stated and proved in this new context. In order to do that, classical definitions and results were adapted to this new context. Also, the normal structure in the asymmetric case was considered.
keywords: Asymmetric spaces, asymmetric norms, non-expansive maps, fixed points, asymmetric normal structure. Msc: 47H10, 46B25, 47H09 +
Footnote †: journal: Elsevier
## 1 Introduction
Asymmetric spaces were first introduced by Wilson in [14] as quasi-symmetric spaces. An asymmetric space is a generalization of a metric space where the distance does not satisfy the axiom of symmetry. In this case there are two topologies defined on the same space, the forward topology \(\tau^{f}\) and the backward topology \(\tau^{b}\), as can be found in [10], then, for notions such as convergence, completeness and compactness two versions are considered, namely forwards and backwards. In [6], these concepts were studied in the asymmetric context, and complemented in [4], where are given the basic results on asymmetric normed spaces. In some cases, these spaces represent a specific topic of interest, for example, within the geometric theory of groups, very interesting results have been found related to exterior spaces, as can be seen in [2].
The study of fixed points for different types of applications, in different types of spaces, with certain geometric and topological properties in classical theory, have been extended to asymmetric spaces. For instance, in [11], is introduced the notion of a forward and backward contraction to prove the Banach Contraction Principle in asymmetric spaces, and in [5], a generalization of this theorem is given, which is known as Caristi-Kirk fixed point theorem; also, in [13], are presented some fixed-point results for applications of the \(\chi F\) type contractions and applications to fractal theory.
This article is organized as follows. Since our aim is to give a precise and self-contained proofs of fixed point theorems in asymmetric spaces, in Sections 2 and 3, a brief exposition of some topological concepts that have been extended to asymmetric spaces and asymmetric normed spaces is included, which are well known and have been widely studied in the references [1], [4] and [7].
In Section 4, the definition of normal structure is extended to the context of asymmetric normed spaces, taking into consideration the approach appeared in [8] about the classic theory of normal structure due to M. Brodski and D. Milman. The normal structure is a geometric property of convex subsets of a usual Banach space. From this notion some interesting results were obtained in the fixed-point theory of non-expansive maps on Banach spaces. However, as far as we know, in the asymmetric case this type of structure has not received any attention. We consider that our work is a first step in order to fill this gap.
Finally, in Section 5 the main results are stated and proved. The Banach Contraction Principle, in asymmetric spaces [11] is taken as a starting point, to give another versions of it, which are Theorem 20 and Corollary 19, respectively. Asymmetric versions of classical theorems of fixed point theory are also presented, such as: Edelstein type Theorem (Theorem 21), Schauder type Theorem (Theorem 22), Theorem 23 and Theorem 26.
## 2 Asymmetric spaces
In this section, we focus on the basic topological theory derived from the definition of asymmetric distance. See [12] for a more complete study of this subject.
**Definition 1** ([4]): _Let \(X\) be a non-empty set. A function is called an **asymmetric distance** on \(X\) if satisfies the following conditions:_
1. _if_ \(x,y\in X\)_, then_ \(p(x,y)\geq 0\)_,_
2. \(p(x,y)=p(y,x)=0\) _if and only if_ \(x=y\)_,_
3. _if_ \(x,y,z\in X\)_, then_ \(p(x,z)\leq p(x,y)+p(y,z)\)_._
_In such a case, the pair \((X,p)\) is called an **asymmetric space**._
Note that, in these spaces, the symmetric condition is not required. That is, \(p(x,y)\) and \(p(y,x)\) are not necessarily equal. Then, two topologies \(\tau^{f}\) and \(\tau^{b}\) can be defined in \(X\), which are generated by the bases \(\mathscr{B}_{1}:=\{B^{f}(x,r):x\in X\quad\text{and}\quad r>0\}\) and \(\mathscr{B}_{2}:=\{B^{b}(x,r):x\in X\quad\text{and}\quad r>0\}\), respectively (see [10]). Here, we have used the following notation:
\[B^{f}(x,r) =\{u\in X:p(x,u)<r\}\quad\text{for the \bf open $f-\textbf{ball}$,}\] \[B^{b}(x,r) =\{v\in X:p(v,x)<r\}\quad\text{for the \bf open $b-\textbf{ball}$.}\]
Thus, a set \(A\subseteq X\) is \(f-\textbf{open $(b-\textbf{open})$ in $X$, if, for each $x\in A$, there is $r>0$ such that $B^{f}(x,r)\subset A$ ($B^{b}(x,r)\subset A$, respectively). See [4] for more details.
If \((X,p_{1})\) and \((Y,p_{2})\) are two asymmetric spaces and \(g:X\longrightarrow Y\) is a function, then four different notions of continuity can be defined, according to the considered topologies. However, such continuities are characterized by
the sequential continuity, as follows: Let \((X,p)\) be an asymmetric space. The sequence \(\{x_{n}\}\subset X\) is \(f-\)**convergent** (respectively \(b-\)**convergent**) to \(x_{0}\in X\), denoted by \(x_{n}\stackrel{{ f}}{{\longrightarrow}}x_{0}\) (respectively \(x_{n}\stackrel{{ b}}{{\longrightarrow}}x_{0}\)), if \(p(x_{0},x_{n})\longrightarrow 0\) (respectively \(p(x_{n},x_{0})\longrightarrow 0\)).
**Example 1**.: _Consider the function \(d:\mathbf{R}\times\mathbf{R}\rightarrow[0,\infty)\) defined by means of_
\[d(x,y)=\begin{cases}y-x,&\text{if}\quad y>x;\\ 0,&\text{if}\quad y\leq x.\end{cases} \tag{1}\]
_Then, \(d\) is an asymmetric distance and \((\mathbf{R},d)\) is an asymmetric space. In this example, if \(x<y\), then \(d(x,y)>0\) and \(d(y,x)=0\). More examples can be found in [6], [11] and [13]._
There are many notion of Cauchy sequence that are related but not equivalent in the asymmetric case, as can be seen in [4]. We present the most convenient to our work.
**Definition 2** ([6]).: _Let \((X,p)\) be an asymmetric space. A sequence \(\{x_{n}\}\) in \(X\) is \(f-\)**Cauchy** (resp. \(b-\)**Cauchy**), if for all \(\epsilon>0\) there exists \(N\in\mathbb{N}\) such that for \(m\geq n\geq N\), \(p(x_{n},x_{m})<\epsilon\) (resp. \(p(x_{m},x_{n})<\epsilon\)) holds._
The order of the subscripts \(n\) and \(m\) must be considered in asymmetric spaces. Kelly, Collins and Zimmer present some examples of \(f-\)convergent sequences which are not \(f-\)Cauchy. See [10], Example 5.8 and [6], Example 3.6. However, other properties are preserved in asymmetric spaces, as it is showed in the following proposition.
**Proposition 3** ([6]).: _Let \((X,p)\) be an asymmetric space and \(\{x_{n}\}\) be a \(b-\)Cauchy sequence on \(X\). If \(\{x_{n}\}\) has an \(f-\)convergent subsequence, then \(\{x_{n}\}\)\(f-\)convergent._
**Proof.** Let \(\{x_{n_{k}}\}\subset\{x_{n}\}\) be a subsequence such that \(x_{n_{k}}\stackrel{{ f}}{{\rightarrow}}x_{0}\). Then, for all \(\epsilon>0\), there exists \(N_{1}\in\mathbb{N}\) such that
\[p(x_{0},x_{n_{k}})<\frac{\epsilon}{2}\quad\text{for all}\quad k\geq N_{1}. \tag{2}\]
On the other hand, for all \(\epsilon>0\), there exists \(N_{2}\in\mathbb{N}\) such that
\[p(x_{n_{k}},x_{n})<\frac{\epsilon}{2}\quad\text{for all}\quad n_{k}\geq n\geq N _{2}. \tag{3}\]
Let \(N_{0}=\max\,\{N_{1},N_{2}\}\), \(k\geq N_{0}\) and \(n_{k}\geq k\). Thus, by (2) and (3),
\[p(x_{0},x_{n})\leq p(x_{0},x_{n_{k}})+p(x_{n_{k}},x_{n})<\epsilon.\]
It is worth pointing out that compactness and sequential compactness do not always coincide in asymmetric spaces.
**Definition 4** ([6]).: _Let \((X,p)\) be an asymmetric space. A subset \(K\subset X\) is \(f-\)**compact** (resp. \(b-\)**compact**) if every open cover of \(K\) in \(\tau^{f}\) (resp. \(\tau^{b}\)) has a finite subcover. We will say that \(K\) is_ **relatively**\(f-\)**compact** (resp. **relatively**\(b-\)**compact**) if \(cl^{f}(K)\) is \(f-\)**compact** (resp. \(cl^{b}(K)\) is \(b-\)**compact**), where \(cl^{f}\) denotes the closure in \(\tau^{f}\) (resp. \(cl^{b}\) denotes the closure in \(\tau^{b}\)). Also, \(K\subset X\) is_ **sequentially**\(f-\)**compact** (resp. **sequentially \(b-\)**compact**) if every sequence has an \(f-\)convergent subsequence (resp. \(b-\)**convergent subsequence) with limit at \(K\)._
Note that different types of Cauchy sequences provide diverse types of completeness in asymmetric spaces, each with its advantages and disadvantages. Thus, the next definition is related to Definition 2.
**Definition 5** ([6]): _An asymmetric space \((X,p)\) is \(f-\)_**complete** _(resp. \(b-\)_**complete**_), if every \(f-\)_**Cauchy sequence is \(f-\)_**convergent _(resp. \(b-\)_Cauchy sequence is \(b-\)convergent) on \(X\). We say that a subset \(K\subset X\) is_ **totally \(f-\)bounded** _(resp._ **totally \(b-\)bounded**_), if for every \(\epsilon>0\), \(K\) can be covered by a finite number of \(f-\)open balls (resp. \(b-\)open balls) of radius \(\epsilon\)._
On the other hand, if \((Y,d)\) is a metric space, then clearly \((Y,\tau^{f})=(Y,\tau^{b})\). Thus, considering an asymmetric space \((X,p)\), we have two types of continuities for a function \(g:X\longrightarrow Y\): \(f-\)continuity and \(b-\)continuity. Then, in the particular case when \((Y,d_{Y})=(\mathbb{R},|\cdot|)\), we have the following results :
**Proposition 6**: _Let \((X,p)\) be an \(f-\)compact asymmetric space and consider \(\mathbb{R}\) with the usual metric. If \(f:X\rightarrow\mathbb{R}\) is \(f-\)continuous on \(X\), then \(f(X)\) is compact on \(\mathbb{R}\)._
Since the proof of the preceding proposition is, except for obvious modifications, identical to that in the symmetric framework, it will be omitted here.
The next proposition is an adaptation to asymmetric spaces of the Weierstrass Theorem. In this case, the result is addressed using a function \(f\) that maps a compact asymmetric space to a metric space.
**Proposition 7**: _Let \((X,p)\) be an \(f-\)compact asymmetric space and let us consider \(\mathbb{R}\) provided with the usual metric. Suppose that \(f:X\rightarrow\mathbb{R}\) is \(f-\)continuous on \(X\) and_
\[M=\sup_{x\in X}f(x),\quad m=\inf_{x\in X}f(x). \tag{4}\]
_Then, there exist \(x_{1},x_{2}\in X\) such that \(f(x_{1})=M\) and \(f(x_{2})=m\)._
**Proof.** It follows immediately from Proposition 6.
## 3 Asymmetric Functional Analysis
In the present section, we introduce the asymmetric normed spaces, which we denote by \((X,\|\cdot|)\). Most of the results and the theory presented here are based on the works [1], [4] and [7].
**Definition 8** ([4]): _Let \(X\) be a real vector space. We say that \(\|\cdot|:X\rightarrow\mathbb{R}\) is_ **an asymmetric norm** _on \(X\) if it satisfies the following properties:_
1. _For each_ \(x\in X,\|x\|\geq 0\)_._
2. _For each_ \(x\in X\)_,_ \(\|x\|=\|-x\|=0\) _if and only if,_ \(x=0\)_._
3. _For each_ \(\lambda\geq 0\) _and each_ \(x\in X\)_,_ \(\|\lambda x\|=\lambda\|x\|\)_._
4. _For each_ \(x,y\in X\)_,_ \(\|x+y\|\leq\|x|+\|y|\)
_In this case, the pair \((X,\|\cdot\|)\) is called_ **an asymmetric normed space**_._
**Example 2**.: _The function \(\|\cdot|:\mathbb{R}^{2}\to\mathbb{R}\) defined by \(\|(x,y)|=\max\left\{0,y-x,y+x\right\}\) is an asymmetric norm in \(\mathbb{R}^{2}\). We refer the reader to [4] for more details._
**Example 3** ([4]).: _The function \(\|\cdot|_{u}:\mathbb{R}\to\mathbb{R}^{+}\) defined by_
\[\|x|_{u}=x\lor 0=\max\left\{x,0\right\} \tag{5}\]
_is an asymmetric norm._
An asymmetric norm induces two asymmetric distances, namely \(d(x,y)=\|y-x|\) and \(\hat{d}(x,y)=\|x-y|\).
**Definition 9** ([4]).: _The space \((X,\|\cdot|)\) is said to be \(f-\)**Banach** if it is \(f-\)complete in the asymmetric distance given by \(d(x,y)=\|y-x|\). Similarly, the space \((X,\|\cdot|)\) is \(b-\)**Banach** if it is \(b-\)complete in the asymmetric distance given by \(\hat{d}(x,y)=\|x-y|\)._
Inspired by the definition of forward and backward contractions given in [11], we give the following definition.
**Definition 10**.: _Let \((X,d)\) be an asymmetric space. A mapping \(T:X\to X\) is \(f-\)**Lipschitz** if there exists a non-negative real number \(k\) such that_
\[d(Tx,Ty)\leq kd(x,y),\quad\text{for all}\quad x,y\in X. \tag{6}\]
_The smallest value of \(k\) in (6) will be called the_ **Lipschitz \(f-\)constant** _of \(T\) and is denoted by \(k_{f}\)._
_Similarly, a mapping \(T:X\to X\) is \(b-\)**Lipschitz** if there exists a non-negative real number \(l\) such that_
\[d(Tx,Ty)\leq ld(y,x),\quad\text{for all}\quad x,y\in X. \tag{7}\]
_The smallest value of \(l\) in (7) is called the_ **Lipschitz \(b-\)constant** _of \(T\) and denoted by \(l_{b}\)._
According to the above, we give the following definitions.
* The mapping \(T\) is \(f-\)**non-expansive** if \(0\leq k_{f}\leq 1\); and it is a \(b-\)**non-expansive** if \(0\leq l_{b}\leq 1\).
* The mapping \(T\) is an \(f-\)**contraction** if \(0\leq k_{f}<1\); and is a \(b-\)**contraction** if \(0\leq l_{b}<1\).
Furthermore, the mapping \(T:X\to X\) is called \(f-\)**shrinkage** if
\[d(Tx,Ty)<d(x,y);\quad\text{for all}\quad x,y\in X,\quad\text{con}\quad x\neq y. \tag{8}\]
Similarly, \(T:X\to X\) is called \(b-\)**shrinkage** if
\[d(Tx,Ty)<d(y,x);\quad\text{for all}\quad x,y\in X,\quad\text{con}\quad x\neq y. \tag{9}\]
The details of the following example can be found in ([11]).
**Example 4**.: _Let \(d_{1}:\mathbb{R}\times\mathbb{R}\to[0,\infty)\) be a function defined of the following form_
\[d_{1}(x,y)=\left\{\begin{array}{ll}y-x,&\text{if}\quad y\geq x;\\ \frac{1}{4}(x-y),&\text{if}\quad x>y.\end{array}\right.\]
_Then \((\mathbb{R},d_{1})\) is an asymmetric space and the mapping \(T:\mathbb{R}\to\mathbb{R}\), given by \(Tx=\frac{1}{2}x\), is an \(f-\)contraction but not a \(b-\)contraction, where \(\mathbb{R}\) is endowed with \(d_{1}\) in both cases._
### Weak topology in asymmetric normed spaces
In this new context we can introduce the weak topology. The weak topology and the topology of the asymmetric norm coincide in a wide class of finite-dimensional asymmetric normed spaces [1]. However, this is not necessarily true for every asymmetric space. In fact, it can be seen that the weak topology on an infinite-dimensional asymmetric normed space is strictly coarser than the topology of the asymmetric norm.
If \((X,\|\cdot|)\) is an asymmetric normed space, \(X^{*}\) denotes the set
\[X^{*}=\left\{\varphi:(X,\|\cdot|)\rightarrow(\mathbb{R},\|\cdot \|_{u})\,:\,\varphi\text{ is linear and continuous}\right\}. \tag{10}\]
Then \(X^{*}\) is the topological asymmetric dual of the asymmetric normed space \((X,\|\cdot|)\).
**Definition 11** ([1]): _Let \((X,\|\cdot|)\) be an asymmetric normed space. The_ **weak forward topology** _on \(X\), induced by the asymmetric norm \(\|\cdot|\), is the topology generated by the base_
\[\mathscr{B}_{*}:=\{V_{\varphi_{1},\varphi_{2},\ldots,\varphi_{n}} (x,\epsilon):x\in X,\quad\epsilon>0\quad\text{and}\quad\varphi_{1},\varphi_{2},\ldots,\varphi_{n}\in X^{*}\},\]
_where \(V_{\varphi_{1},\varphi_{2},\ldots,\varphi_{n}}(x,\epsilon)=x+V_{\varphi_{1}, \varphi_{2},\ldots,\varphi_{n}}(0,\epsilon)\), and_
\[V_{\varphi_{1},\varphi_{2},\ldots,\varphi_{n}}(0,\epsilon)= \left\{z\in X\,:\,\varphi_{1}(z)<\epsilon,\varphi_{2}(z)<\epsilon,\ldots, \varphi_{n}(z)<\epsilon\right\}, \tag{11}\]
_whenever \(\varphi_{1},\varphi_{2},\ldots,\varphi_{n}\in X^{*}\), \(\epsilon>0\) and \(n\in\mathbb{N}\)._
_The_ **weak forward topology** _induced by \(\|\cdot\|\) will be denoted by \(\tau_{w}^{f}\) and the_ **weak backward topology** _induced by \(\|\cdot\|\) will be denoted by \(\tau_{w}^{b}\)._
**Definition 12** ([1]): _Let \((X,\|\cdot|)\) be an asymmetric normed space. The sequence \(\{x_{n}\}\subset X\) is_ **weakly \(f-\)convergent** _to \(x_{0}\in X\), if the sequence \(\{\varphi(x_{n})\}\) is \(f-\)convergent to \(\varphi(x_{0})\) for all \(\varphi\in X^{*}\), that is, we have_
\[\|\varphi(x_{n})-\varphi(x_{0})|_{u}\to 0,\,\forall\varphi\in X^{*}. \tag{12}\]
_The weak \(f-\)convergence will be denoted by \(x_{n}\stackrel{{ f}}{{\rightharpoonup}}x_{0}\)._
Let \((X,\|\cdot|)\) be an asymmetric normed space. The sequence \(\{x_{n}\}\subset X\) is **strongly \(f-\)convergent** if it is \(f-\)convergent in the asymmetric norm. Below, we will call it simply \(f-\)convergence and write \(x_{n}\stackrel{{ f}}{{\rightarrow}}x_{0}\) when no confusion can arise. In an analogous way, the **strong \(b-\)convergence** is defined.
Next, we present an asymmetric version of Mazur's theorem whose proof is analogous to the classical version which can be consulted in [15].
**Theorem 13** (Mazur-type theorem): _Let \(X\) be an asymmetric normed space and \(\{x_{n}\}\subset X\) be a weakly \(f-\)convergent sequence to \(x_{0}\in X\). Then, for all \(\epsilon>0\), there exists a convex combination \(y_{m}=\sum_{j=1}^{m}\alpha_{j}x_{n_{j}}\) such that \(\|x_{0}-y_{m}\|\leq\epsilon\)._
**Proof.** Let us consider \(M_{1}=\operatorname{conv}(\{0,x_{1},x_{2},x_{3},\ldots\})\). Then, we have to prove that, for every \(\epsilon>0\), there is \(y\in M_{1}\), such that \(\|x_{0}-y\|\leq\epsilon\). In contrary case, there exists \(\epsilon_{0}>0\) with \(\|x_{0}-y\|>\epsilon_{0}\), for all \(y\in M_{1}\). Let us define the following set
\[M=\left\{v\in X\,:\,\|v-u|\leq\frac{\epsilon_{0}}{2};\quad\text {for some}\quad u\in M_{1}\right\}. \tag{13}\]
Note that \(M\neq\emptyset\), \(M\) is a convex set, \(M_{1}\subset M\) and \(x_{0}\notin M\). Moreover, for any \(x\in X\) there exists \(\alpha>0\) such that \(\alpha^{-1}x\in M\). In particular, there exists \(v_{0}\in M\) with \(v_{0}\neq 0\) and \(\beta_{0}\in(0,1)\) such that \(x_{0}=\beta_{0}^{-1}v_{0}\). In this case, we can define \(p=p_{M}:X\to\mathbf{R}\) the Minkowski functional of \(M\), given by
\[p(z)=\inf\{t>0\,:\,z\in tM\},\quad\text{for each}\quad z\in X.\]
Thus, \(p(v_{0})=1\) and \(p(x_{0})\geq\beta_{0}^{-1}>1\).
Now, let us consider the functional \(f_{0}:\mathbf{R}v_{0}\to\mathbf{R}\) defined by \(f_{0}(tv_{0})=t\). Then, Hahn-Banach type theorem (Theorem 2.2.2, [4]) implies the existence of a functional \(\varphi_{0}:X\to\mathbf{R}\) which is an extension of \(f_{0}\), such that
\[\varphi_{0}(x)\leq p(x),\quad\text{for every}\quad x\in X. \tag{14}\]
This implies
\[\sup_{x\in M_{1}}\varphi_{0}(x)\leq\sup_{x\in M}\varphi_{0}(x)\leq\sup_{x\in M }p(x)=1<\beta_{0}^{-1}=\varphi_{0}\left(\beta_{0}^{-1}v_{0}\right)=\varphi_{0} \left(x_{0}\right). \tag{15}\]
On the other hand, \(x_{n}\stackrel{{ f}}{{\to}}x_{0}\), that is \(\phi(x_{n})\stackrel{{ f}}{{\longrightarrow}}\phi(x_{0})\) for all \(\phi\in X^{*}\). In particular, \(\varphi_{0}(x_{n})\stackrel{{ f}}{{\longrightarrow}}\varphi_{0}( x_{0})\). But, Inequality (15) with \(\{x_{n}\}\subset M_{1}\) imply
\[\sup_{n\in\mathbb{N}}\varphi_{0}(x_{n})<\varphi_{0}\left(x_{0}\right), \tag{16}\]
which is impossible. Therefore, the result is true.
In an asymmetric space \((X,d)\), a set \(C\subset X\) is **weakly \(f-\)closed** (resp. **weakly \(b-\)closed**) if it is closed with respect to the topology \(\tau_{w}^{f}\) (resp. \(\tau_{w}^{b}\)). Also, it is **weakly \(f-\)compact** (resp. **weakly \(b-\)compact**) if it is compact with respect to the topology \(\tau_{w}^{f}\) (resp. \(\tau_{w}^{b}\)).
## 4 Normal structure in asymmetric normed spaces
The concept of normal structure was introduced by Brodskii and Milman. This notion has been useful in the study of fixed points of non-expansive self-mappings on \(K\), where \(K\) is a weakly compact set. In this section, we introduce this notion in the context of asymmetric spaces.
For any subset \(K\) of the set \(X\), we define the diameter of \(K\) as
\[\mathrm{Diam}(K)=\sup\{\|v-u|:u,v\in K\}. \tag{17}\]
A set \(K\subset X\) is **bounded** if \(\mathrm{Diam}(K)<\infty\). Also, we say that a subset \(K\subseteq X\) is \(f\)**-bounded** (resp. \(b\)**-bounded**) if it is contained in an \(f\)-ball (resp. in a \(b\)-ball). Then, any bounded non-empty subset of \(X\) is \(f\)-bounded and \(b\)-bounded. Indeed, let \(x\in K\), then taking into account that \(\mathrm{Diam}(K)<\infty\), we have that for all \(u\in K\)
\[\|u-x|<\mathrm{Diam}(K)+1,\]
that is \(K\subset B^{f}(x,r_{0})\) where \(r_{0}=\mathrm{Diam}(K)+1\). Similarly we show that \(K\subset B^{b}(x,r_{0})\). Vice versa, if \(K\) is contained both in an \(f\)-ball and a \(b\)-ball, then this is bounded.
Now, if \(u\in X\), we define two radiuses of \(K\) with respect to \(u\) as
\[r_{u}^{f}(K)=\sup\{\|v-u|:v\in K\};\quad\text{forward radius of $K$,} \tag{18}\] \[r_{u}^{b}(K)=\sup\{\|u-v|:v\in K\};\quad\text{backward radius of $K$.} \tag{19}\]
Thus, a point \(u\in K\) is called **forward diametral** if
\[r_{u}^{f}(K)=\operatorname{Diam}(K), \tag{20}\]
otherwise, if \(r_{u}^{f}(K)<\operatorname{Diam}(K)\), then it is called **forward non-diametral**.
In general, a set \(D\subset X\) is called **forward diametral** if all its points are forward diametral. In classical theory, the following definition refers to a geometric property of Banach spaces, which is called normal structure. Now, we will present a similar definition in the context of asymmetric normed spaces.
**Definition 14**.: _Let \(X\) be an \(f-\)Banach space and \(K\) be a convex subset of \(X\). We say that \(K\) has **forward normal structure**, if every convex bounded subset \(H\) of \(K\) with \(\operatorname{Diam}(H)>0\) contains a forward non-diametral point of \(H\), that is, there exists \(u\in H\) such that_
\[r_{u}^{f}(H):=\sup\{\|v-u|:v\in H\}<\operatorname{Diam}(H). \tag{21}\]
In other words, the subsets of \(X\) with forward normal structure do not contain convex bounded subsets consisting only of forward diametral points, except for those of cardinality one.
Consider \(D\) a subset of \(K\) and a function \(T:K\to K\). The set \(D\) is \(T-\)**invariant** if \(T(D)\subseteq D\).
**Definition 15**.: _Let \(K\) be a non-empty, \(f-\)closed, convex subset of \(X\) and \(T:K\to K\) be a map. The set \(K\) is **minimal \(T-\)invariant** if \(T(K)\subseteq K\) and \(K\) does not have a proper non-empty, \(f-\)closed, convex subset which is \(T-\)invariant._
The proof of the following two results are almost analogous to the symmetric case, under appropriated changes. In [8, page 33], the proofs of the classical versions can be found.
**Proposition 16**.: _Let \(X\) be an \(f-\)Banach space and \(K\) be a non-empty, convex, weakly \(f-\)compact subset of \(X\). Then, for any map \(T:K\to K\) there exists a minimal \(T-\)invariant set \(D\subseteq K\)._
**Proof.** Let us consider the following family of sets
\[\mathcal{P}=\left\{D\subseteq K:D\neq\emptyset,\text{ is convex, weakly }f-\text{compact and }T-\text{invariant}\right\}.\]
Note that \((\mathcal{P},\subseteq)\) is a partial ordered set. Moreover, each totally ordered subfamily of \(\mathcal{P}\) is lower bounded, in fact, if \(\left\{D_{\alpha}:\alpha\in\Delta\right\}\subseteq\mathcal{P}\) is totally ordered, then \(\bigcap\limits_{\alpha\in\Delta}D_{\alpha}\) is a lower bound. Thus, Zorn's Lemma implies the existence of a minimal set \(D_{1}\in\mathcal{P}\). Therefore, \(D_{1}\) is a minimal \(T-\)invariant set.
**Lemma 17**.: _If \(K\) is a minimal \(T-\)invariant set, then \(K=\overline{\operatorname{conv}}(T(K))\)._
**Proof.** Let us set \(D=\overline{\text{conv}}(T(K))\). Note that \(D\) is a non-empty, convex, \(f-\)closed set. Hence, the convexity and the \(f-\)closedness of \(K\) imply \(D=\overline{\text{conv}}(T(K))\subseteq\overline{\text{conv}}(K)=K\). That is, \(D\subseteq K\). Thus,
\[T(D)\subset T(K)\subseteq\overline{\text{conv}}(T(K))=D.\]
This implies that \(D\) is \(T-\)invariant. Finally, since \(K\) is minimal, we obtain that \(K=D=\overline{\text{conv}}(T(K))\).
## 5 Some fixed point results of non-expansive mappings
We will show that \(f-\)bounded, \(f-\)closed, and convex sets \(K\subset X\) in asymmetric normed spaces have the property that each non-expansive self-mapping \(T:K\to K\) has a fixed point. This is obtained by assuming additional conditions on \(K\) or \(X\).
There are many classic fixed point results which have been extended to asymmetric spaces. A good example is the Banach contraction principle presented by [11].
**Theorem 18**.: _Let \((X,d)\) be a complete forward space and \(T:X\to X\) be a \(f-\)contraction. Assume that \(f-\)convergence implies \(b-\)convergence. Then, \(T\) has a unique fixed point._
In fact, many of these results are built for different types of contractions, as in the case of the Caristi-Kirk theorem, which is a generalization of the Banach contraction principle presented by Cobzas in [5].
The following result is a consequence of Theorem 18.
**Corollry 19**.: _Let \(X\) be a complete forward asymmetric space and let \(T:X\to X\) be a mapping such that \(T^{k}\) is a \(f-\)contraction for some \(k\in\mathbb{Z}^{+}\). Suppose that \(f-\)convergence implies \(b-\)convergence. Then, \(T\) has a unique fixed point._
**Proof.** By Theorem 18 there exists a unique \(x_{0}\in X\) such that \(T^{k}x_{0}=x_{0}\), which implies \(Tx_{0}=T^{k}\left(Tx_{0}\right)\). Then, \(Tx_{0}\) is also a fixed point of \(T^{k}\), and so \(x_{0}\) is a fixed point of \(T\). In order to prove uniqueness, let us assume that \(y\in X\) is also a fixed point of \(T\), that is \(Ty=y\). Then, we obtain
\[y=Ty=T\left(Ty\right)=T^{2}y=T^{2}\left(Ty\right)=T^{3}y=\cdots=T^{k}y.\]
It follows that \(y\) is a fixed point of \(T^{k}\), therefore \(y=x_{0}\).
The next theorem has been already stated in [11]. Here, we present it with an alternative proof.
**Theorem 20** (Banach type Theorem).: _Let \((X,d)\) be a forward sequentially compact space and let \(T:X\to X\) be a \(b-\)contraction. Assume that \(f-\)convergence implies \(b-\)convergence. Then, \(T\) has a unique fixed point._
**Proof.** Let us take \(x_{0}\in X\) and define the following sequence
\[x_{1}=Tx_{0},\,x_{2}=Tx_{1}=T^{2}x_{0},\,x_{3}=T^{3}x_{0},\,\ldots,\,x_{n}=T^ {n}x_{0},\ldots.\]
Then, there exists \(0<k<1\) such that
\[d(x_{m+1},x_{m}) =d(Tx_{m},Tx_{m-1})\] \[\leq kd(x_{m-1},x_{m})=kd(Tx_{m-2},Tx_{m-1})\] \[\leq k^{2}d\left(x_{m-1},x_{m-2}\right)=k^{2}d\left(Tx_{m-2},Tx_{m -3}\right)\] \[\leq\cdots\leq\lambda k^{m},\]
with \(\lambda=\max\left\{d\left(x_{1},x_{0}\right),d\left(x_{0},x_{1}\right)\right\}\). We will show that the sequence is \(b-\)Cauchy. Consider \(m\leq n\), then
\[d(x_{n},x_{m}) \leq d(x_{n},x_{n-1})+d(x_{n-1},x_{n-2})+\cdots+d(x_{m+1},x_{m})\] \[\leq k^{n-1}\lambda+k^{n-2}\lambda+\ldots+k^{m}\lambda\] \[=\left(k^{m}+k^{m+1}+\ldots+k^{n-2}+k^{n-1}\right)\lambda=\left( \frac{k^{m}-k^{n}}{1-k}\right)\lambda\] \[\leq\lambda\left(\frac{k^{m}}{1-k}\right).\]
Since \(0<k<1\), then \(\frac{k^{m}}{1-k}\to 0\), when \(m\to\infty\). Thus
\[d(x_{n},x_{m})\leq\lambda\left(\frac{k^{m}}{1-k}\right)\to 0.\]
Therefore, \(\{x_{n}\}\) is \(b-\)Cauchy. By our assumption, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) which is \(f-\)convergent, say \(x_{n_{k}}\overset{f}{\to}x\). Then, it follows from the Proposition 3 that \(d(x,x_{n})\to 0\). Let us see that \(x\) is a fixed point of \(T\).
\[d(x,Tx)\leq d(x,x_{n})+d(Tx_{n-1},Tx)\leq d(x,x_{n})+kd(x,x_{n-1}).\]
So, by letting \(n\to\infty\), we have \(d(x,Tx)=0\). By a similar argument, we obtain \(d(Tx,x)=0\), which implies that \(Tx=x\).
In order to prove uniqueness, suppose there exists \(y\in X\) a fixed point of \(T\) such that \(y\neq x\). Then \(d(x,y)\neq 0\) or \(d(y,x)\neq 0\). Suppose \(d(x,y)\neq 0\). Thus,
\[d(y,x)=d(Ty,Tx)<kd(x,y)=kd(Tx,Ty)<k^{2}d(y,x).\]
It follows that necessarily \(d(y,x)\neq 0\), which implies that \(1<k^{2}<1\), leading to a contradiction.
Now, we will present the asymmetric version of Edelstein's Theorem. In the symmetric case, this result is obtained by changing the hypothesis that \(T\) is a contraction for \(T\) a shrinkage mapping in Theorem 18; this requires changing the completeness hypothesis to compactness.
**Theorem 21** (Edelstein type Theorem).: _Let \((X,d)\) be an \(f-\)compact asymmetric space and let \(T:X\to X\) be an \(f-\)shrinkage mapping. If \(f-\)convergence implies \(b-\)convergence, then \(T\) has a unique fixed point._
**Proof.** Let \(g\,:X\to\mathbb{R}\) be given by \(g(z)=d(z,Tz)\). We will prove that \(g\) is \(f-\)continuous in \(X\). Let \(\{x_{n}\}\) be such that \(x_{n}\overset{f}{\to}x\). Then, by inequality (8) we have that \(Tx_{n}\overset{f}{\to}Tx\). Also \(x_{n}\overset{b}{\to}x\) and \(Tx_{n}\overset{b}{\to}Tx\). Let \(\epsilon>0\), there exists \(N\in\mathbb{N}\) such that if \(n\geq N\),
\[d(x,x_{n})<\epsilon/2,\quad d(Tx,Tx_{n})<\epsilon/2,\quad d(x_{n},x)<\epsilon/ 2\quad\text{and}\quad d(Tx_{n},Tx)<\epsilon/2.\]
Hence, by triangular inequality applied forward and backward,
\[|g(x)-g(x_{n})|=|d\left(x,Tx\right)-d\left(x_{n},Tx_{n}\right)|<\epsilon,\]
therefore, \(g\) is \(f-\)continuous.
Now Proposition 7 shows that \(g\) takes its minimum value \(m\) at \(x_{0}\in X\). We show that \(x_{0}\) is a fixed point of \(T\). Suppose \(x_{0}\neq Tx_{0}\). Then, inequality (8) implies
\[m=g(x_{0})\leq g(Tx_{0})=d(Tx_{0},T^{2}x_{0})<d(x_{0},Tx_{0})=m,\]
which is impossible. It remains to see that \(x_{0}\) is the unique fixed point, which follows immediately.
In the following result, we will see that if \(T:K\to K\) is an \(f-\)non-expansive map defined on \(K\subset X\), with \(X\) an asymmetric space, then, under additional conditions on \(K\) or \(X\), we can guarantee the existence of fixed points of the mapping \(T\).
**Theorem 22** (Schauder type Theorem).: _Let \(X\) be an asymmetric normed space and let \(K\subset X\) be an non-empty \(f-\)compact and convex subset. If \(T:K\to K\) is \(f-\)non-expansive, and the \(f-\)convergence implies \(b-\)convergence, then \(T\) has a fixed point._
**Proof.** Let \(x_{0}\in K\). Let us define the following sequence of functions
\[S_{n}x=\left(1-\frac{1}{n}\right)Tx+\frac{1}{n}x_{0},\quad\text{for each} \quad n\in\mathbb{N},\quad x\in K.\]
Then \(S_{n}(K)\subseteq K\), for every \(n\in\mathbb{N}\). Let \(C_{n}=\left(1-\frac{1}{n}\right)\), note that \(0<C_{n}<1\), for all \(n\in\mathbb{N}\), then
\[\left\|S_{n}y-S_{n}x\right\lvert=C_{n}\left\|Ty-Tx\right\lvert\leq C_{n} \left\|y-x\right\lvert.\]
This implies that \(S_{n}:K\to K\) is an \(f-\)contraction, for every \(n\in\mathbb{N}\). Since \(K\) is \(f-\)compact, Proposition 4.8 in [6] proves that \(K\) is \(f-\)complete and by Theorem 18, \(S_{n}\) has a fixed point \(x_{n}\in K\). On the other hand, taking into account that \(K\) is \(f-\)compact and \(T(K)\subset K\), there exists a subsequence \(\{Tx_{n_{j}}\}\) and \(u\in K\) such that \(Tx_{n_{j}}\stackrel{{ f}}{{\rightarrow}}u\), when \(j\rightarrow\infty\). Even more,
\[\left\|x_{n_{j}}-u\right\lvert\leq\left(1-\frac{1}{n_{j}}\right)\left\|Tx_{n_ {j}}-u\right\lvert+\frac{1}{n_{j}}\left(\left\|x_{0}\right\lvert+\left\|-u \right\lvert\right).\]
So \(x_{n_{j}}\stackrel{{ f}}{{\rightarrow}}u\), when \(j\rightarrow\infty\). Therefore \(Tx_{n_{j}}\stackrel{{ f}}{{\rightarrow}}Tu\), when \(j\rightarrow\infty\). Thus, \(u\) is a fixed point of \(T\).
We have the following consequence.
**Theorem 23**.: _Let \(X\) be an \(f-\)Banach space, and let \(K\) be an \(f-\)closed, \(f-\)bounded, convex and non-empty subset of \(X\). If \(T:K\to K\) is \(f-\)non-expansive, \((T-I)(K)\) is an \(f-\)closed subset of \(X\) and the \(f-\)convergence implies \(b-\)convergence, then \(T\) has a fixed point at \(K\)._
**Proof.** Let \(x_{0}\in K\) be given. Then, there exists \(r>0\) such that \(K\subseteq B^{f}(x_{0},r)\). Thus,
\[\left\|Tx-Tx_{0}\right\lvert\leq\left\|x-x_{0}\right\lvert<r,\quad\text{for all}\quad x\in K. \tag{22}\]
Consider the sequence \(\{t_{n}\}\) given by \(t_{n}=\frac{n}{n+1}\). Note that \(0<t_{n}<1\) and \(t_{n}\to 1\). For all \(n\in\mathbb{N}\), let us define the mapping \(T_{n}:K\to K\) by means of \(T_{n}x=t_{n}Tx\). We will prove that \(T_{n}\) is an \(f-\)contraction. Indeed, for all \(x,y\in K\)
\[d(T_{n}x,T_{n}y)=t_{n}\|Ty-Tx|\leq t_{n}\|y-x|=t_{n}d(x,y).\]
Then, from Theorem 18, for every \(n\in\mathbb{N}\) there exists a unique \(x_{n}\in K\) such that \(T_{n}x_{n}=x_{n}\).
On the other hand, let \(\{y_{n}\}\) be the sequence in \((T-I)(K)\) given by \(y_{n}=(T-I)(x_{n})\). We shall verify that \(y_{n}\stackrel{{ f}}{{\to}}0\). In fact,
\[d(0,y_{n}) =d(0,(Tx_{n}-x_{n}))=\|Tx_{n}-T_{n}x_{n}|\] \[=\|Tx_{n}-t_{n}Tx_{n}|=(1-t_{n})\,\|Tx_{n}|\] \[\leq(1-t_{n})\,(\|Tx_{n}-Tx_{0}|+\|Tx_{0}|)\] \[\leq 2\,(1-t_{n})\max\,\{r,\|Tx_{0}|\}\,.\]
Since \(t_{n}\to 1\), then \(y_{n}\stackrel{{ f}}{{\to}}0\). Therefore, \(0\in(T-I)(K)\), which implies that there exists \(x_{0}\in K\) such that \((I-T)(x_{0})=0\). Hence, \(0=(T-I)(x_{0})=Tx_{0}-x_{0}\). Thus, \(T\) has a fixed point at \(K\), as desired.
The following result is a consequence of Theorem 18 and it will be a fundamental tool in the proof of the Theorem 26.
**Lemma 24**.: _Let \(X\) be an \(f-\)Banach space, and let \(K\subseteq X\) be a non-empty, convex, \(f-\)closed and \(f-\)bounded subset. Consider \(T:K\to K\) an \(f-\)non-expansive mapping. Suppose that \(f-\)convergence implies \(b-\)convergence, then there exists a sequence \(\{x_{n}\}\subset K\) such that_
\[\lim_{n\to\infty}\|x_{n}-Tx_{n}|=0. \tag{23}\]
_This sequence will be called a fixed-point \(f-\)approximating sequence for \(T\) in \(K\)._
**Proof.** Let \(a_{0}\in K\). Define the mapping \(S_{n}:K\to K\), given by
\[S_{n}x=\frac{a_{0}}{n}+\left(1-\frac{1}{n}\right)Tx,\quad\text{if}\quad n\geq 2.\]
It is easy to see that \(S_{n}\) is an \(f-\)contraction for every \(n\geq 2\). Then, by Theorem 18, we can state that \(S_{n}\) has a unique fixed point \(x_{n}\) for each \(n\geq 2\).
Now, because \(S_{n}x_{n}=x_{n}\), we can write
\[x_{n}=\frac{a_{0}}{n}+\left(1-\frac{1}{n}\right)Tx_{n},\quad\text{if}\quad n \geq 2.\]
We know that there exists \(b\in K\) and \(M>0\) such that \(d(b,x)=\|x-b|<M\), for all \(x\in K\), because \(K\) is \(f-\)bounded. Also,
\[\|Tx_{n}-x_{n}|=\frac{1}{n}\,\|Tx_{n}-a_{0}|\,.\]
Since \(K\) is \(T-\)invariant, we can choose \(a_{0}\) in such a way that \(a_{0}=Tb\). Then, taking into account that \(T\) is \(f-\)non-expansive, we obtain
\[\|Tx_{n}-x_{n}|=\frac{1}{n}\,\|Tx_{n}-a_{0}|=\frac{1}{n}\,\|Tx_{n}-Tb|\leq \frac{1}{n}\,\|x_{n}-b|\leq\left(\frac{1}{n}\right)M.\]
Thus, \(\|Tx_{n}-x_{n}|\to 0\) when \(n\to\infty\). Now, considering the sequence \(\{z_{n}=Tx_{n}-x_{n}\}\subseteq X\), and the fact that \(f-\)convergence implies \(b-\)convergence, we obtain (23).
Next, we present an asymmetric version of the Goebel-Karlovitz Lemma, which together with Lemma 24, constitute the central part of the proof of Theorem 26.
**Lemma 25** (Goebel-Karlovitz type Lemma).: _Let \(X\) be an \(f-\)Banach space, \(K\subset X\) non-empty, convex, \(f-\)closed, weakly \(f-\)compact and \(f-\)bounded. Consider \(T:K\to K\) an \(f-\)non-expansive map. Suppose \(f-\)convergence implies \(b-\)convergence. If \(K\) is minimal \(T-\)invariant, then_
\[\lim_{n\to\infty}\|x_{n}-x|=\operatorname{Diam}(K),\quad\text{for all}\quad x \in K,\]
_where \(\{x_{n}\}\) is a fixed-point \(f-\)approximating sequence for \(T\) in \(K\)._
**Proof.** Since \(K\) is \(f-\)bounded, there exists \(y_{0}\in K\), \(M>0\) such that \(\|x-y_{0}|<M\), for all \(x\in K\). By Lemma 24, there exists \(\{x_{n}\}\subseteq K\) such that
\[\lim_{n\to\infty}\|x_{n}-Tx_{n}|=0. \tag{24}\]
Consider \(s_{0}=\limsup_{n\to\infty}\|x_{n}-y_{0}|\geq 0\) and
\[D=\left\{x\in K\,:\,\limsup_{n\to\infty}\|x_{n}-x|\leq s_{0}\right\}. \tag{25}\]
Note that \(D\neq\emptyset\), since \(y_{0}\in D\). It is easy to see that \(D\) is convex, \(f-\)closed, and \(T-\)invariant. Then, by the minimality of \(K\), we have \(D=K\). The same argument is valid if \(y_{0}\) is changed by another \(y\in K\), thus \(D_{y}=K\) whenever \(y\in K\).
For \(y\in K\), consider the set
\[P(y)=\{\|x_{n}-y|\,|\,n\in\mathbb{N}\}\subset\mathbb{R}, \tag{26}\]
and denote by \(P(y)^{\prime}\) the set of limit points of \(P(y)\). Then, considering \(s^{\prime}\in P(y)^{\prime}\), there exists \(\{\beta_{m}(y)\}\subset P(y)\) such that \(\beta_{m}(y)\to s^{\prime}\), that is,
\[\beta_{m}(y)=\|x_{n_{m}}-y|\to s^{\prime}.\]
Let us prove that \(\beta_{m}(z)\to s^{\prime}\) for all \(z\in K\). Assume the opposite, that is, there exists \(z_{0}\in K\) such that \(\beta_{m}(z_{0})\nrightarrow s^{\prime}\). Then, there exists \(\{m_{j}\}\) such that \(\beta_{m_{j}}(z_{0})\to t\), with \(t\neq s^{\prime}\). Now, introduce the set
\[E=\left\{w\in K\,:\,\limsup_{j\to\infty}\|x_{m_{j}}-w|\leq\min\left\{t,s^{ \prime}\right\}\right\}, \tag{27}\]
note that \(z_{0}\in E\), so that \(E\neq\emptyset\). With similar arguments we can prove that the set \(E\) is also \(f-\)closed, convex and \(T-\)invariant. But, the minimality of \(K\), implies that \(E=K\). This means that \(y,z_{0}\in E\).
On the other hand, if \(t\neq s^{\prime}\), then \(s^{\prime}<t\) or \(t<s^{\prime}\). Without loss of generality, suppose that \(t<s^{\prime}\). Since \(y\in E\), we have that
\[s^{\prime}=\limsup_{j\to\infty}\|x_{m_{j}}-y|\leq\min\left\{t,s^{\prime} \right\}=t<s^{\prime},\]
which is a contradiction. Therefore, \(t=s^{\prime}\) and \(\lim\limits_{m\to\infty}\|x_{n_{m}}-z|=s^{\prime}\), for all \(z\in K\).
Next, we show that \(s^{\prime}=d\) where \(d=\operatorname{Diam}(K)\). Introduce the set
\[F=\left\{u\in K\,:\,\|u-x|\leq s^{\prime};\quad\text{for each}\quad x\in K \right\}, \tag{28}\]
since \(K\) is weakly \(f-\)compact, then we can extract a subsequence \(\{x_{n_{j}}\}\), which is weakly \(f-\)convergent, say \(x_{n_{j}}\xrightarrow[]{f}w_{0}\), with \(w_{0}\in K\). Then, Theorem 13 guarantees that \(F\neq\emptyset\).
It is not difficult to verify that \(F\) is convex and \(f-\)closed. To show that \(F\) is \(T-\)invariant, consider \(v_{1}\in F\). By Lemma 17, we have that \(K=\overline{\textit{conv}}(T(K))\). Let \(v\in K\), and \(\epsilon>0\), we choose \(v_{0}=\sum\limits_{i=1}^{m}\lambda_{i}Ty_{i}\), with \(y_{i}\in K\), \(\lambda_{i}>0\), \(\sum\limits_{i=1}^{m}\lambda_{i}=1\) and \(\|v_{0}-v|<\epsilon\).
\[\|Tv_{1}-v| =\|Tv_{1}-v+v_{0}-v_{0}|\] \[\leq\|Tv_{1}-v_{0}|+\|v_{0}-v|\] \[=\left\|Tv_{1}-\sum\limits_{i=1}^{m}\lambda_{i}Ty_{i}\right|+\|v _{0}-v|\] \[\leq s^{\prime}+\|v_{0}-v|<s^{\prime}+\epsilon,\]
hence \(Tv_{1}\in F\), which shows that \(F\) is \(T-\)invariant.
We have shown that \(F\) is a non-empty, \(f-\)closed, convex, and \(T-\)invariant subset of \(K\). From the minimality of \(K\), there follows that \(F=K\). Now, suppose that \(s^{\prime}<d=\operatorname{Diam}(K)\). Define \(\delta=\dfrac{s^{\prime}+d}{2}\), then \(s^{\prime}<\delta<d\). Because \(F=K\), we have
\[\|v-x|\leq s^{\prime}<\delta,\quad\text{for all}\quad v,x\in K.\]
So,
\[d=\operatorname{Diam}(K)=\sup\limits_{v,x\in K}\{\|v-x|\}\leq\delta<d,\]
which is a contradiction. Therefore, \(s^{\prime}=d=\operatorname{Diam}(K)\).
Finally, taking into account that the set \(P(y)\subseteq\mathbb{R}\) defined in (26) is bounded for all \(y\in K\), it means that \(P(y)^{\prime}=\{d\}\) for all \(y\in K\). It implies that
\[\lim\limits_{n\to\infty}\|x_{n}-y|=s^{\prime}=\operatorname{Diam}(K),\quad \text{for every}\quad y\in K.\]
**Theorem 26** (Kirk type theorem).: _Let \(X\) be an \(f-\)Banach space, and let \(K\subset X\) be non-empty, convex, \(f-\)closed, \(f-\)bounded and weakly \(f-\)compact. Consider \(T:K\to K\) an \(f-\)non-expansive mapping. Suppose that \(K\) has an \(f-\)normal structure and that \(f-\)convergence implies \(b-\)convergence in \(X\), then \(T\) has a fixed point in \(K\)._
**Proof.** Since \(K\) is weakly \(f-\)compact, by Proposition 16 there follows that \(K\) has a subset \(D\) which is minimal \(T-\)invariant. By Lemma 24, there exists an \(f-\)approximating sequence \(\{x_{n}\}\) that is \(f-\)convergent to the fixed point of \(T\) in \(D\), such that
\[\lim_{n\to\infty}\|x_{n}-Tx_{n}|=0.\]
Then, by Lemma 25, \(\lim_{n\to\infty}\|x_{n}-x|=\operatorname{Diam}(D)\), for all \(x\in D\).
We will see that \(D\) is a single element set. Suppose the opposite, that is, \(D\) consists of more than one point. Because \(D\) is an \(f-\)closed, convex, non-empty subset of \(K\), and since \(K\) has an \(f-\)normal structure, then there exists a forward non-diametral point \(x_{0}\in D\), which means that \(\sup_{n\in\mathbb{N}}\|x_{n}-x_{0}|<\operatorname{Diam}(D)\). Thus,
\[\lim_{n\to\infty}\|x_{n}-x_{0}|\leq\sup_{n\in\mathbb{N}}\|x_{n}-x_{0}|< \operatorname{Diam}(D),\]
but it contradicts Lemma 25. Therefore, \(D\) is a single-element set. Now, since \(D\) is \(T-\)invariant, that is, \(T(D)\subseteq D\), then this point is necessarily a fixed point of \(T\). It is important to mention that, in classical theory, there are other fixed point results that use very interesting ideas and techniques that could be extended to the context of asymmetric spaces. For example, the measure of non-compactness and Darbo's fixed point theorem are not treated in the present work, whose classic versions can be consulted in [3]. Also, there is the notion of exterior spaces, which are examples of asymmetric spaces, see [2], on which the possibility of establishing fixed-point results could be considered.
## Acknowledgments
The first author acknowledges support from project FAB-06-19 and the second author acknowledges support from Conacyt project 45886.
|
2308.07541 | Reinforcement Learning (RL) Augmented Cold Start Frequency Reduction in
Serverless Computing | Function-as-a-Service is a cloud computing paradigm offering an event-driven
execution model to applications. It features serverless attributes by
eliminating resource management responsibilities from developers and offers
transparent and on-demand scalability of applications. Typical serverless
applications have stringent response time and scalability requirements and
therefore rely on deployed services to provide quick and fault-tolerant
feedback to clients. However, the FaaS paradigm suffers from cold starts as
there is a non-negligible delay associated with on-demand function
initialization. This work focuses on reducing the frequency of cold starts on
the platform by using Reinforcement Learning. Our approach uses Q-learning and
considers metrics such as function CPU utilization, existing function
instances, and response failure rate to proactively initialize functions in
advance based on the expected demand. The proposed solution was implemented on
Kubeless and was evaluated using a normalised real-world function demand trace
with matrix multiplication as the workload. The results demonstrate a
favourable performance of the RL-based agent when compared to Kubeless' default
policy and function keep-alive policy by improving throughput by up to 8.81%
and reducing computation load and resource wastage by up to 55% and 37%,
respectively, which is a direct outcome of reduced cold starts. | Siddharth Agarwal, Maria A. Rodriguez, Rajkumar Buyya | 2023-08-15T03:01:41Z | http://arxiv.org/abs/2308.07541v1 | # Reinforcement Learning (RL) Augmented Cold Start Frequency Reduction in Serverless Computing
###### Abstract
Function-as-a-Service (FaaS) is a cloud computing paradigm offering an event-driven execution model to applications. It features'serverless' attributes by eliminating resource management responsibilities from developers and offers transparent and on-demand scalability of applications. Typical serverless applications have stringent response time and scalability requirements and therefore rely on deployed services to provide quick and fault-tolerant feedback to clients. However, the FaaS paradigm suffers from _cold starts_ as there is a non-negligible delay associated with on-demand function initialization. This work focuses on reducing the frequency of cold starts on the platform by using Reinforcement Learning. Our approach uses Q-learning and considers metrics such as function CPU utilization, existing function instances, and response failure rate to proactively initialize functions in advance based on the expected demand. The proposed solution was implemented on Kubeless and was evaluated using a normalised real-world function demand trace with matrix multiplication as the workload. The results demonstrate a favourable performance of the RL-based agent when compared to Kubeless' default policy and function keep-alive policy by improving throughput by up to 8.81% and reducing computation load and resource wastage by up to 55% and 37%, respectively, which is a direct outcome of reduced cold starts.
cold start, Cloud computing, Function-as-a-Service, Kubeless, Q-learning, reinforcement learning, serverless computing
## I Introduction
In cloud computing, a serverless deployment model removes the burden of managing and provisioning resources from the developers, allowing them to focus solely on the application development process. The term _serverless_, interchangeably used with Function-as-a-Service (FaaS), does not imply an absence of servers, but instead, accentuates delegating the responsibility of complex resource management tasks to cloud service providers (CSP) [1, 2]. The FaaS paradigm puts forward an event-driven, serverless computing model with fine-grained pay-per-use pricing where resources are billed based on their actual service time. Functions (i.e., a fragment of code containing business logic) are designed to scale on demand; they are stateless, short-lived, and run on lightweight containers or virtual machines (VMs) in response to a triggering event. Such an abstraction increases agility in application development, offering lower administrative and ownership costs.
The FaaS model has attracted a wide range of applications such as IoT services, REST APIs, stream processing and prediction services, which have strict availability and quality of service requirements in terms of response time. Conceptually, the FaaS model is designed to spin a new function instance for each demand request and shut down the instance after service [2]. However, in practice, commercial FaaS offerings like AWS Lambda, Azure Functions, and Google Cloud Function may choose to re-use a function instance or keep the instance running for a limited time to serve subsequent requests [3]. Some open source serverless frameworks like Kubeless [4] and Knative, have similar implementations to re-use an instance of a function to serve subsequent requests.
An increase in workload demand leads to an instantiation process involving the creation of new function containers and the initialisation of the function's environment within those containers, after which incoming requests are served. Such a process usually requires downloading the client code, setting up code dependencies and the runtime environment, setting up container networking, and finally executing the code to handle the incoming request. Hence, instantiating a function's container introduces a non-negligible time latency, known as _cold start_, and gives rise to a challenge for serverless platforms [5, 6, 7, 8]. Some application-specific factors such as programming language, runtime environment and code deployment size as well as function requirements like CPU and memory, affect the cold start of a function [8, 9, 10, 11].
To automate the process of creating new function instances and reusing existing ones, serverless frameworks usually rely on resource-based (CPU or memory) horizontal scaling, known as horizontal pod auto-scaling (HPA) in Kubernetes-based frameworks like Kubeless, to respond to incoming requests. Resource-based scaling policies implement a reactive approach and instantiate new functions only when resource usage rises above a pre-defined threshold, thus leading to cold start latencies and an increase in the number of unsuccessful requests.
Threshold-based scaling decisions fail to consider factors like varying application load and platform throughput and hence, pose an opportunity to explore dynamic techniques that analyse these factors to address cold starts. This work
presents a model-free Q-learning agent to exploit resource utilization, available function instances, and platform response failure rate to reduce the number of cold starts. We define a reward function for the RL agent to dynamically establish the required number of function instances for a given workload demand based on expected average CPU utilisation and response failure rate. The RL-based agent interacts with the serverless environment by performing scaling actions and learns through trial and error during multiple iterations. The agent receives delayed feedback, either positive or negative, based upon the observed state, and consequently learns the appropriate number of function instances to fit the workload demand. This strategy uses no prior knowledge about the environment, demand pattern or workload, and dynamically adjusts to the changes for preparing required functions in advance to reduce cold starts.
The proposed work scales the number of function instances by proactively estimating the number of functions that are needed to serve incoming workload to reduce the frequent cold starts. It utilizes a practical workload of matrix multiplication involved in image processing task, serving a sampled real-world function request pattern [12], and formally presents the cold start as an optimisation problem. Also, we structure the Q-learning components around the function metrics such as average CPU utilisation and response failure rate and evaluate our approach against the default resource-based policy and commercially accepted function keep-alive technique.
In summary, the key contributions of our work are:
1. We analyze function resource metrics such as CPU utilization, available instances, and the proportion of unsuccessful responses to propose a Q-learning model that dynamically analyses the application request pattern and improves function throughput by reducing frequent cold starts on the platform.
2. We present a brief overview of explored solutions to address function cold starts and highlight the differences between contrasting approaches to the proposed agent.
3. We perform our experiments on a real-world system setup and evaluate the proposed RL-based agent against the default resource-based policy and a baseline keep-alive technique.
The rest of the paper is organised as follows. Section II highlights related research studies. In Section III we present the system model and formulate the problem statement. Section IV outlines the proposed agent's workflow and describes the implementation hypothesis and assumptions. In Section V we evaluate our technique with the baseline approach and highlight training results and discuss about performance in Section VI. Section VII concludes the paper and highlights future research directions.
## II Related Work
### _Serverless Computing or Function-as-a-Service_
Serverless computing offers a cloud service model wherein the server management or resource management responsibility lies with the CSP. In [2], the authors discuss the potential of this new, less complex computing model introduced by Amazon in 2014. They explain a function-based, serverless commercial offering of AWS Lambda, i.e., the Function-as-a-Service platform. They highlight three primary differences between traditional cloud computing and serverless computing as follows: 1) decoupled computation and storage, 2) code execution without resource management, and 3) paying in proportion to the resources used. The research posts that the serverless or FaaS model promotes business growth, making the use of the cloud easier.
Baldini _et al._[18] introduce the emerging paradigm of FaaS as an application development architecture that allows the execution of a piece of code in the cloud without control over underlying resources. They identify containers and the emergence of microservices architecture as the promoter of FaaS model in serverless. They use FaaS and serverless interchangeably and defines it as a'stripped down' programming model that executes stateless functions as its deployment unit.
Since the inception of serverless computing, there have been many commercial and open-source offerings such as AWS Lambda, Microsoft Azure Functions, Google Cloud Functions, Fission, and OpenWhisk. These platforms represent FaaS as an emerging technology but Hellerstein _et al._[19] put together gaps that furnish serverless as a bad fit for cloud innovations. The authors criticize the current developments of cloud computing and state that the potential of cloud resources is yet to be harnessed. On the contrary, the work in [8] argues that serverless offerings are economical and affordable as they remove the responsibility of resource management and complexity of deployments from consumers. It presents the opportunities offered by multiple FaaS offerings and gives an overview of other existing challenges and indicates potential approaches for future work.
A Microsoft work [20] estimates that there will be near 500 million new applications in the subsequent 5 years, and it would be difficult for the current development models to support such large expansions. FaaS is designed to increase development agility, reduce the cost of ownership, and decrease overheads related to servers and other cloud resources. The term'serverless' has been in the industry since the introduction of Backend-as-a-Service (BaaS). Despite the serverless benefits, FaaS experiences two major challenges, which are categorized as (i) system-level and (ii) programming and DevOps challenges [2, 18, 20]. The former identifies the cost of services, security, resource limits, and cold start while scaling, and the latter focuses on tools and IDEs, deployment, statelessness, and code granularity in the serverless model.
### _Function Cold Start and Mitigation_
Researchers in [3] describes function cold start as the time taken to execute a function. This process involves assigning a container to a function, accessing the code package and copying the function image, loading the image into memory, unpacking it, and executing the function handler. It broadly classifies the approaches to deal with function cold start in,
1) environment optimization, and 2) pinging. The former approach acts either by reducing container preparation time or decreasing the delay in loading function libraries, while the latter technique continuously monitors the functions and periodically pings them to keep the instances warm or running.
An adaptive container warm-up technique to reduce the cold start latency and a container pool strategy to reduce resource wastage is introduced in [5]. The proposed solution leverages a Long-Short Term Memory (LSTM) network to predict function invocation times and non-first functions in a chain to keep a warm queue of function containers ready. Although both the discussed techniques work in synchronization, the first function in the chain suffers from a cold start.
[6] explains platform-dependent overheads like pod provisioning and application implementation-dependent overheads. It presents a pool-based pre-warmed container technique, marked with selector 'app-label' to deal with the function cold start problem. To tackle the incoming demand, a container pool is checked first for existing pre-warmed containers, or the platform requests new containers as per the demand.
Another study [13] exploits the data similarity to reduce the function cold start. It criticizes the current container deployment technique of pulling new container images from the storage bucket and introduces a live container migration over a peer-to-peer network. Similarly, [7] aims to reduce the number of cold start occurrences by utilizing the function composition knowledge. It presents an application-side solution based on lightweight middleware. This middleware aims to enable the developers to control the frequency of cold start by treating the FaaS platform as a black box.
Based on the investigation [9], network creation and initialization are the prime contributors to the cold start latency. The study expresses that cold starts are caused due to work and wait times involved in various set-up processes like initializing networking elements. The study explains the stages of the container lifecycle and states that the clean-up stage demands cycles from the underlying containerization daemon, hindering other processes. Therefore a paused container pool manager is proposed to pre-create a network for function containers and attach the new function containers to configured IP and network when required.
Some studies [8, 10, 19] have identified significant factors that affect the cold start of a function. These include runtime environment, CPU and memory requirements, code dependency setting, workload concurrency, and container networking requirements. Most works [21, 22, 23, 24, 25] focus on commercial FaaS platforms like AWS Lambda, Azure Functions, Google Cloud Functions and fall short to evaluate open source serverless platforms like OpenLambda, Fission, Kubeless, etc. Very few studies [11, 26, 27] have successfully performed analysis on an open-source serverless platform and provided possible solution by targeting the container level fine-grained control of the platform.
Recent research [14, 16, 17, 28] introduces the paradigm of RL to the FaaS platforms in different ways. [14] focuses on request-based provisioning of VMs or containers on the Knative platform. The authors demonstrate a correlation between latency and throughput with function concurrency levels and thus propose a Q-Learning model to determine the optimal concurrency level of a function for a single workload. [16] proposes a two-layer adaptive approach, 1) an RL algorithm to predict the best idle-container window, and 2) an LSTM network to predict future invocation
\begin{table}
\begin{tabular}{|p{42.7pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Work** & **Name** & **Platform** & **Solution Focus** & **Strategy** & **Application Type** \\ \hline
[3] & - & AWS Lambda & Cold start latency & Optimising environments \& function pinging environments \& \& I/O intensive \\ \hline
[5] & AWU \& ACPS & Kubernetes & Cold start latency \& resource wastage & Invocation prediction (LSTM) \& Container pool & Function chain model \\ \hline
[6] & - & Knative & Cold start frequency & Container pool \& pod migration & Single function model \\ \hline
[7] & Naïve, Extended \& Global Approach & AWS Lambda, Apache OpenWhisk & Cold start frequency & Orchestration middleware & Function chain model \\ \hline
[9] & Pause Container Pool Manager & Apache OpenWhisk & Cold start latency & Container Pool & Function chain model \\ \hline
[11] & WLEC & OpenLambda & Cold start latency & Container Pool & Single function model \\ \hline
[13] & - & AWS & Cold start latency & Container migration \& content similarity & Single function model \\ \hline
[14] & - & Knative & Cold start frequency & AI-based container concurrency & Emulated CPU \& I/O intensive \\ \hline
[15] & Prebaking & OpenFaas & Cold start latency & CRIU process snapshot & Single function model \\ \hline
[16] & - & OpenWhisk & Cold start frequency & RL-based idle window \& LSTM based container pre-warming & Single function model \\ \hline
[17] & - & OpenFaas & Function Scaling & RL \& SLA-based configuration & Single function model \\ \hline Our work & - & Kubeless & Cold start frequency & AI-based function \& throughput metrics & Single function model \\ \hline \end{tabular}
\end{table} TABLE I: Related work summary
times to keep the pre-warmed containers ready. The study demonstrates the advantages of the proposed solution on the OpenWhisk platform using a simple HTTP-based workload and a synthetic demand pattern. Another research [17] focuses on resource-based scaling configuration (CPU utilisation) of OpenFaaS and adjusts the HPA settings using an RL-based agent. They assume a serverless-edge application scenario and a synthetic demand pattern for the experimentation and present their preliminary findings based on latency as SLA.
[28] introduced the idea of Q-learning to ascertain the appropriate amount of resources to reduce frequent cold starts. The authors share the preliminary training results with an attempt to show the applicability of reinforcement learning to the serverless environment. They utilise the platform exposed resource metrics to experiment with a synthetic workload trace, i.e. Fibonacci series calculation, to simulate a compute-intensive application and predict the required resources.
Our proposed work introduces a Q-Learning strategy to reduce frequent cold starts in the FaaS environment. Contrasting existing solutions, we apply the model-free Q-Learning to determine the appropriate number of function instances for the workload demand. Furthermore, the existing solutions either use continuous pinging, pool-based approaches, container migration and network building or exploit platform-specific implementations like function concurrency while failing to experiment with CPU-intensive real-world application workloads. Similar to [28], our work utilizes available resource-based metrics and function response failure rate to accomplish the learning, but improves over the discussed approach. Contrasting to their model, we formulate the problem of cold starts as an optimisation approach to proactively spawn the appropriate functions and minimizing frequent cold starts. As part of their Q-learning model, the study uses fixed-value constants in their reward modelling and we address this issue by carefully analysing the problem and curate it as a threshold-based reward system. Additionally, we include a real-world serverless application, i.e. matrix multiplication, used as part of image processing pipeline, to train and evaluate our agent and utilise the industry provided function invocation trace [12]. Further we describe our design decisions and utilize constants based upon the trial-error analyses. The successful learning of the agent results in the preparation of appropriate functions in a timeframe to reduce the frequency of cold starts and improve the platform's throughput. A summary of related works is presented in Table I.
## III System Model and Problem Formulation
FaaS is an event-driven cloud service model that allows stateless function deployment. It delivers high scalability and scale-to-zero feature being economical to infrequent demand patterns. New functions \(n_{i}\), where \(1\leq n_{i}\leq N\) and \(N\) is the maximum scale, are instantiated on-demand to serve the incoming load (scale up) and removed (scale down) below a configured, resource-based threshold metric value for every \(i\) iteration window. The preparation time of function containers i.e., cold start \(C_{t}\), adds to the execution time of a request. These frequent cold starts result in increased computation pressure on existing resources neglecting expected average CPU utilisation (\(\phi_{o}\)), and expected request failure rate (\(\tau_{o}\)). Therefore, an intelligent, learning-based solution is proposed to address the cold starts.
In this study, we consider Kubeless, an open-source Kubernetes-native serverless platform. It leverages the underlying Kubernetes resources to implement a serverless environment. It wraps function code inside a docker container with pre-defined resource requirements i.e. \(RR_{f}=(cpu_{f},mem_{f},tout_{f})\) and schedules them on worker nodes. Similar to other commercial FaaS providers, Kubeless has an idle-container window of 5 minutes to re-use functions and scales down to a minimum of one function if the collected metrics (default 15 seconds window) are below the set threshold. We take into account the general illustration of FaaS platform and consider a stochastic incoming request pattern \(D=\{d_{1},d_{2},\ldots,d_{i}\}\) with \(d_{i}\) requests in \(i^{th}\) iteration window. We analyze the request pattern for a timeframe \(T\) divided in \(i\) iteration windows of duration \(t_{i}\). The system model of the examined scenario is depicted in Fig. 1. The workflow of the potential cold start is explained in Fig. 2.
### _Problem Formulation_
We formulate the function cold start as an optimization problem aimed at minimizing the number of cold starts (Eq. 1) and aid the agent in learning a policy while maintaining average CPU utilisation and reducing the request failure rate.
\[\operatorname*{minimize}_{\phi,\tau,d_{i}} (n_{i})\] (1) s.t. \[\tau_{d_{i}}<\tau_{o}\] \[\phi_{d_{i}}<\phi_{o}\]
The cold start occurs when there are no available function containers on the platform to deal with incoming requests. FaaS services scale horizontally as per resource-based thresholds to be agile, usually considering the function's average CPU utilisation. Therefore, the goal of optimization is to
Fig. 1: System Model
assess the incoming request pattern \(d_{i}\) for an application task in \(i^{th}\) iteration window and configure a policy to prepare functions beforehand, considering actual and expected average CPU utilisation (\(\phi_{d_{i}}\&\phi_{o}\)) and request failure rate (\(\tau_{d_{i}}\&\tau_{o}\)). Since the preparation time, \(C_{t}\) remains similar for individual function containers, we focus on optimizing the frequency of cold start \(n_{i}\) for an individual iteration window.
With easy to implement and economical service model, enterprises are accommodating critical tasks like user verification, media processing, and parallel scientific computations into the serverless paradigm. To assess the necessity of a dynamic solution, we consider matrix multiplication as workload, which is a critical task in image processing.
#### Iii-B1 Reinforcement Learning model
In a model-free Q-Learning process, the agent learns by exploring the environment and exploiting the acquired information. The core components of the environment are state, action, reward and agent. The environment state represents the current visibility of the agent and is defined as a Markov Decision Process (MDP) [29, 30] where future environment state is independent of past states, given the present state information. Actions are the possible set of operations that the agent can perform in a particular state. Additionally, rewards are the guiding signals that lead the agent towards the desired goal by performing actions and transitioning between environment states. The agent maintains a Q-value table to assess the quality of action through obtained reward for the respective state and utilize it for future learning. Therefore, we propose a modelling scheme for the RL environment that is leveraged by a Q-Learning agent to learn a policy for function preparation.
We model the environment's state \(s_{i}=(\hat{n}_{i},\phi_{d_{i}},\tau_{d_{i}})\) where \(\phi_{d_{i}}\) is the average CPU utilisation of the available \(\hat{n}_{i}\) functions, \(\tau_{d_{i}}\) represents the request failure rate, and \(i\) is the iteration window during a timeframe \(T\). The agent adjusts the number of function instances in the upcoming iteration using suitable actions. These actions compensate for the expected cold starts from the incoming demand and help to appropriately provision required functions. Therefore, we define the action \(a_{i}\) as the number of function instances, \(n_{i}\), to add or remove from previously available functions \(\hat{n}_{i-1}\) and represent it as \(a_{i}=n_{i}\) such that \(1\leq(\hat{n}_{i-1}+a_{i})\leq N\). This heuristic helps the agent to control the degree of exploration by maintaining the number of functions within the threshold \(N\).
The motive of the RL-based agent is to learn an optimal policy, and we structure the rewards over resource-based metrics \(\phi_{d_{i}}\), function failure rate \(\tau_{d_{i}}\), and expected threshold values (\(\phi_{o}\) and \(\tau_{o}\)). It evaluates the quality of action \(a_{i}\) in state \(s_{i}\) by keeping a value-based table, i.e., Q-table, that captures this information for every \((s_{i},a_{i})\) pair. After executing the action, the agent waits for the duration of the iteration window and receives a delayed reward \(r_{i}\), expressed based on the difference between the expected and actual utilisation and failure rate values, as shown in Eq. 2.
\[r_{i}=\frac{(\phi_{o}-\phi_{d_{i}})+(\tau_{o}-\tau_{d_{i}})}{\hat{n}_{i}} \tag{2}\]
and the Q-table is represented as a matrix (Eq. 3) of dimension \(S\times A\).
\[Q_{(S_{n}\times A_{m})}=\begin{bmatrix}s_{1},a_{1}&\dots&s_{1},a_{m}\\ \vdots&\ddots&\vdots\\ s_{n},a_{1}&\dots&s_{n},a_{m}\end{bmatrix} \tag{3}\]
## IV Q-Learning for Cold Start Reduction
The proposed technique has two phases: an agent training phase and a testing phase. _Algorithm 1_ demonstrates the agent training workflow. The environment setup process precedes the agent training, where the agent interacts with the environment and obtains information. After initial setup, the agent is trained for multiple epochs or timeframes where it assesses the function demand \(d_{i}\) over individual iteration windows \(i\) and ascertains appropriate function instances. During an iteration window \(i\), the agent observes the environment state \(s_{i}\), selects an action \(a_{i}\) according to \(\epsilon\)-greedy policy. This greedy policy helps the agent to control its exploration and selects a random action with \(\epsilon\) probability, otherwise exploiting the obtained information. This exploration rate is a dynamic value and decays with ongoing learning to prioritise the acquired information.
After performing the selected action, the agent waits for duration \(t_{i}\) of an iteration window to obtain the delayed reward \(r_{i}\), calculated using the relevant resource-based metrics \(\phi_{d_{i}}\) and function failure rate \(\tau_{d_{i}}\). This reward helps the agent in action quality assessment, and it combines the acquired knowledge over previous iterations using the Bellman Equation (Eq. 4). It is the core component in learning as it aids Q-value or Q-table updates and improves the agent's value-based decision-making capability. The equation uses two hyper-parameters learning rate, \(\alpha\) and discount factor, \(\gamma\). The learning rate signifies the speed of learning and accumulating new information, and the discount factor balances the importance of immediate and future rewards.
Fig. 2: Function Warm Start & Cold Start workflow
\[Q(s_{i},a_{i})=(1-\alpha)Q(s_{i},a_{i})+\alpha(r_{i}+\gamma\max_{a}Q(s_{i+1},a_{i})) \tag{4}\]
The agent then evaluates and adjusts the Q-value in Q-Table based upon the delayed reward for the corresponding (\(s_{i},a_{i}\)) pair. The agent continues to analyse the demand over multiple iteration windows, selecting and performing actions, evaluating delayed rewards, assessing the quality of action and accumulating the information in Q-table, and repeating this process over multiple epochs for learning. Once the agent is trained for sufficient epochs and the exploration rate has decayed significantly, we exploit the knowledge of the agent in the testing phase.
In the testing phase, the agent is evaluated using a similar demand pattern and the Q-table values guide the agent in taking informed actions. The agent evaluates the current environment state and selects the best possible action i.e. action with the highest Q-value for the corresponding state, and prepares the required number of functions. We hypothesised the relationship between throughput and the number of available functions and evaluate the performance by considering throughput, resource utilisation and the number of available functions. We posit the agent learns to prepare an appropriate number of functions beforehand, improving the throughput and keeping the actual resource utilisation below the expected threshold.
## V Performance Evaluation
In this section, we provide the experimental setup and parameters, and perform an analysis of our agent compared to other complementary solutions.
### _System Setup_
We set up our experimental test-bed as discussed in Section III, using NeCTAR (Australian National Research Cloud Infrastructure) services on the Melbourne Research Cloud. We configure Kubernetes (v1.18.6) and Kubeless (v1.0.6) on a service cluster of 4 nodes, each with Ubuntu (18.04 LTS) OS image, 4 vCPUs, 16 GB RAM, and 30 GB of disk storage to perform the relevant experiments. Typical serverless applications expect high scalability for their changing demands and can be compute-intensive, demanding a considerable amount of resources such as CPU, memory, or time to execute. These factors add to frequent cold starts on the platform by keeping the available functions or resources busy while requesting new functions for the subsequent workload demand. We use Python-based matrix multiplication (1024 pixels x 1024 pixels) to mimic the image processing task as our latency-critical application to deploy serverless functions.
The experimental setup mimics real-time application demand experienced in commercial FaaS platforms [31, 12]. We consider a single function demand trace from the provided data [12] and downsize it according to our resource setup. We deploy the Apache JMeter load testing tool to generate the HTTP-based requests and randomize its request ramp-up period to guarantee the changing demand pattern for our workload. Also, we collect the relevant resource-based metrics and throughput information via Kubernetes APIs. Table II summarises the parameters used for system set-up.
### _RL Environment Setup_
To initialize the proposed RL-based environment, we first analyze and set up the function requirements according to deployed resource limits. After preliminary analysis, we configure the function requirements as 1 vCPU, 128 MB memory, and 60 seconds function timeout, where timeout represents the maximum execution period for a function until failure. To experiment we assume a timeframe of 10 minutes to analyse the demand pattern of 100 requests during 5 iteration windows of 2 minutes. Based on the resource analysis and underlying Kubernetes assets we assume the function limit \(N=7\). These constraints allow us to put a considerable load or pressure on the competing approaches and effectively evaluate them against each other.
As discussed in section III, the RL-environment components depend upon resource metrics (average CPU utilisation), request failure rate, number of available functions and expected threshold values, summarized in Table III. Since the proposed agent maintains a Q-table, these considerations help to minimise the risk of state explosion related to Q-Learning. The actions signify the addition or removal of functions based upon the function limit and the reward is modelled around the expected threshold values. We configure the Bellman
\begin{table}
\begin{tabular}{|c|c|} \hline
**Parameter Name** & _Value_ \\ \hline Kubernetes version & v1.18.6 \\ \hline Kubeless version & v1.0.6 \\ \hline Nodes & 4 \\ \hline OS & Ubuntu 18.04 LTS \\ \hline vCPU & 4 \\ \hline RAM & 16 GB \\ \hline Workload & Matrix Multiplication (\(m\times m\)) \\ \hline m & 1024 \\ \hline \end{tabular}
\end{table} TABLE II: System Setup Parameter values
Equation hyper-parameters: learning rate and discount factor as 0.9 and 0.99, respectively. The agent is structured to explore the environment and exploit the acquired knowledge. We use \(\epsilon\)-greedy action selection policy to randomly select an action with initial \(\epsilon=1\) probability and exploit this information with a decay rate of 0.0025.
### _Q-Learning Agent Evaluation_
We train the RL-based agent for a timeframe of 10 minutes over 500 epochs to analyze an application demand and learn the ideal number of functions to reduce frequent cold starts. The agent is structured according to the RL-based environment design explained in section III and around the implementation constraints. The quality of the RL-based agent is evaluated during a 2-hour period to reduce the effect of any bias and performance bottlenecks.
We assess the effectiveness of our approach against the default scaling policy and commercially used function keep-alive policy on the serverless platform (Fig. 4). Kubeless leverages the default resource-based scaling (HPA) implemented as a control loop that checks for the specific target metrics to adjust the function replicas. HPA has a default query period of 15 seconds to check and control the deployment based on the target metrics like average CPU utilization. Therefore, the HPA controller fetches the specific metrics from the underlying API and calculates the average metric values for the available function instances. The controller adjusts the desired number of instances based on threshold violation but is unaware of the demand and only scales after a 15-second metric collection window. The expected threshold for function average CPU utilisation is set to be 75% with maximum scaling up to 7 instances. Therefore, whenever the average CPU utilisation of the function violates the threshold, new function instances are provisioned in real-time, representing a potential cold start in the system.
Also, HPA has a 5-minute down-scaling window and during that period resources are bound to the platform irrespective of incoming demand which represents potential resource wastage. Therefore, it is worthwhile to analyse the performance of the RL-based agent against the function queuing or keep-alive approach that keeps enough resources bound to itself for an idle-container window.
Fig. 3 illustrates the learning curve of the agent over multiple epochs and we observe that the agent continuously attempts to meet the expected thresholds. This highlights the agent's capability to obtain positive rewards and move towards the desired configuration. We compare the RL-based agent with HPA and successfully demonstrate the agent's ability to adequately determine the required functions (Fig. 4c) and reduce the throughput failure rate by up to 8.81% in Fig. 4a. For example, in Fig. 4c during iteration windows 1 and 2, the HPA scales functions based on CPU utilisation threshold, unaware of the actual requirement for upcoming
Fig. 3: RL-based agent training curve for SLA.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Parameter** & _Value_ \\ \hline \(cpu_{f},mem_{f},tout_{f}\) & 1, 128M, 60 seconds \\ \hline \(N\) & 7 \\ \hline \(T\) & 10 minutes \\ \hline \(i\) & 5 \\ \hline \(t_{i}\) & 2 minutes \\ \hline \(\phi_{o}\) & 75\% \\ \hline \(\tau_{o}\) & 20\% \\ \hline \(\alpha\) & 0.9 \\ \hline \(\gamma\) & 0.99 \\ \hline \(\epsilon\) & 1 \\ \hline \(decougRate\) & 0.0025 \\ \hline \end{tabular}
\end{table} TABLE III: RL-Environment parameter values
iteration and results in resource wastage. Similarly, Fig. 3(c) illustrates the resource wastage by HPA during iterations 3 and 4. The proposed agent also targets to maintain expected CPU utilisation thresholds in Fig. 3(b) by reducing CPU utilisation up to 55%.
Similar results are observed against function queuing or keep-alive policy, where we evaluate two queues with \(N=4\) and \(N=7\). The RL-based agent scales and prepares the function according to demand needs while the queue results in resource wastage of up to 37%, as shown in Fig. 3(d). Although the queuing policy manages to reduce the request failure rate to zero, it is due to extra resources available, as depicted in Fig. 3(e), but can not be precisely captured by HPA metrics and shows over CPU utilisation of up to 50% in Fig. 3(f). The proposed agent analysed the demand pattern by consuming sufficient function resources, preparing the ideal number of functions and trying to keep the desired CPU utilisation under control. Hence, the learning and testing analysis support our hypothesis that reducing cold starts are directly linked to throughput improvement.
## VI Discussions
Function cold start is an inherent shortcoming of the serverless execution model. Thus, we have proposed an RL-based technique to investigate the demand pattern of the application and attempt to reduce the frequency of function cold starts. The proposed agent performs better than the baseline approaches under a controlled experimental environment. But there are certain points to recollect associated with the real-time appropriateness of the proposed solution.
We leverage the RL environment modelling, specifically Q-Learning constraints [29, 32], and in general, these algorithms are expensive in terms of data and time. The agent interacts with the modelled environment to acquire relevant information over multiple epochs that signify a higher degree of exploration. Hence, as evidenced in the proposed work, for an RL-based agent to outperform a baseline technique, a training period of 500 epochs is exploited for satisfactorily analyzing the workload demand for a timeframe (10 minutes). Therefore, RL-based approaches are considerably expensive in practical applications with stringent optimization requirements.
A classical Q-Learning approach is applied to discrete environment variables [29]. To constrain the serverless environment within the requirements of the Q-Learning algorithm, we consider the discrete variables to model cold starts. The size of the Q-table is large and is a function of state space and action space. But with the expansion of state space or the action space, the size of the Q-table grows exponentially [29, 32]. Therefore, Q-Learning experiences state explosion, making it infeasible to perform updates on Q-values and degraded space and time complexity.
The proposed agent analyzes individual application demand, so the learning can't be generalized for other demand patterns and requires respective training to be commissioned. Furthermore, the agent is trained for 500 iterations and
Fig. 4: RL-based agent comparison with complementary solutions.
evaluated, but the chance of exploring every state is bleak with limited iterations of training. Therefore, the agent expects to be guided by certain approximations to avoid acting randomly. The agent utilizes resource-based metrics that affect the cold starts, so the availability of relevant tools and techniques to collect instantaneous metrics is essential. Also, the respective platform implementation of a serverless environment, such as metrics collection frequency, function concurrency policy, and request queuing, can extend support to the analyses.
The difference between the approaches can be attributed to the following characteristics of the proposed RL-based agent -
1. The process of elimination of invalid states during the RL environment setup and lazy loading of Python, helps the agent to productively use the acquired information about the environment.
2. Although the RL-based agent outperforms HPA and function queue policy, there is a lack of function container concurrency policy. The CPU-intensive function workload is configured with an execution time of 60 seconds and thus affected by the concurrency control of the instance.
3. The composition of state space and reward function incorporates the effect of failures during the training, and therefore, the agent tries to compensate for the failures in consequent steps of learning by exploiting the acquired knowledge.
On the account of the performance evaluation results, we can adequately conclude that the proposed agent successfully outperforms competing policies for the given workload and experiment settings. We strengthen this claim by analysing the training and testing outcomes of the RL-based agent, focused on examining the workload pattern to reduce request failure which is a direct consequence of appropriate function instances representing reduced function cold starts.
## VII Conclusions and Future work
FaaS model executes the piece of code inside a container, known as a function and prepares new function containers on demand. New function containers undergo an initialisation process that puts together all the essential components before executing the function handler. This bootstrapping process consumes time in the order of a few seconds, known as function cold start and introduces a delay in the response of the function container.
This work visits the problem of function cold start by addressing frequent cold starts and analysing the application demand through an RL technique. We leverage the services of Apache JMeter to produce varying incoming request patterns and a CPU-intensive function workload to complement the invocation pattern and observe relevant cold starts. The system is set up using the Kubeless framework and the RL environment is modelled for the agent to examine the necessary metrics to make guided decisions in provisioning an appropriate number of function instances.
We present an evidence of leveraging Q-Learning to address cold starts on FaaS platforms and verify it with improved platform throughput, reduced resource wastage while maintaining expected thresholds, during the iterations. We evaluate the performance of our proposed agent against the HPA policy and function queue policy. We successfully observe the RL-based agent outperforming comparing techniques after a training of 500 epochs that verifies our hypothesis of strong association between success rate and reduced number of cold starts on the platform. After the test analyses, the Q-Learning agent successfully improves throughput by up to 8.81% and reduces resource wastage by up to 37% while preparing sufficient functions to reduce cold starts.
As part of future work, we plan to explore other important variables such as memory utilisation and function package size to assess the quality of learning to address frequent cold starts. Similar to Q-Learning, the application of other policy-based techniques such as SARSA, which is known to converge faster than Q-Learning, can also be experimented with in the domain of the cold start problem. As an adaptation of Q-Learning, the proposed solution includes discrete values over continuous values for state representation. In this context, to avoid the problem of state space explosion, function approximation techniques such as Deep Q-Networks, and Proximal Policy Optimization can also be leveraged to estimate the information about optimal actions.
|
2303.05024 | Phase transition for detecting a small community in a large network | How to detect a small community in a large network is an interesting problem,
including clique detection as a special case, where a naive degree-based
$\chi^2$-test was shown to be powerful in the presence of an Erd\H{o}s-Renyi
background. Using Sinkhorn's theorem, we show that the signal captured by the
$\chi^2$-test may be a modeling artifact, and it may disappear once we replace
the Erd\H{o}s-Renyi model by a broader network model. We show that the recent
SgnQ test is more appropriate for such a setting. The test is optimal in
detecting communities with sizes comparable to the whole network, but has never
been studied for our setting, which is substantially different and more
challenging. Using a degree-corrected block model (DCBM), we establish phase
transitions of this testing problem concerning the size of the small community
and the edge densities in small and large communities. When the size of the
small community is larger than $\sqrt{n}$, the SgnQ test is optimal for it
attains the computational lower bound (CLB), the information lower bound for
methods allowing polynomial computation time. When the size of the small
community is smaller than $\sqrt{n}$, we establish the parameter regime where
the SgnQ test has full power and make some conjectures of the CLB. We also
study the classical information lower bound (LB) and show that there is always
a gap between the CLB and LB in our range of interest. | Jiashun Jin, Zheng Tracy Ke, Paxton Turner, Anru R. Zhang | 2023-03-09T04:09:50Z | http://arxiv.org/abs/2303.05024v1 | # Phase transition for detecting a small community in a large network
###### Abstract
How to detect a small community in a large network is an interesting problem, including clique detection as a special case, where a naive degree-based \(\chi^{2}\)-test was shown to be powerful in the presence of an Erdos-Renyi background. Using Sinkhorn's theorem, we show that the signal captured by the \(\chi^{2}\)-test may be a modeling artifact, and it may disappear once we replace the Erdos-Renyi model by a broader network model. We show that the recent SgnQ test is more appropriate for such a setting. The test is optimal in detecting communities with sizes comparable to the whole network, but has never been studied for our setting, which is substantially different and more challenging. Using a degree-corrected block model (DCBM), we establish phase transitions of this testing problem concerning the size of the small community and the edge densities in small and large communities. When the size of the small community is larger than \(\sqrt{n}\), the SgnQ test is optimal for it attains the computational lower bound (CLB), the information lower bound for methods allowing polynomial computation time. When the size of the small community is smaller than \(\sqrt{n}\), we establish the parameter regime where the SgnQ test has full power and make some conjectures of the CLB. We also study the classical information lower bound (LB) and show that there is always a gap between the CLB and LB in our range of interest.
## 1 Introduction
Consider an undirected network with \(n\) nodes and \(K\) communities. We assume \(n\) is large and the network is connected for convenience. We are interested in testing whether \(K=1\) or \(K>1\) and the sizes of some of the communities are much smaller than \(n\) (communities are scientifically meaningful but mathematically hard to define; intuitively, they are clusters of nodes that have more edges "within" than "across" (Jin, 2015; Zhao et al., 2012)). The problem is a special case of network global testing, a topic that has received a lot of attention (e.g., Jin et al. (2018, 2021)). However, existing works focused on the so-called _balanced case_, where the sizes of communities are at the same order. Our case is _severely unbalanced_, where the sizes of some communities are much smaller than \(n\) (e.g., \(n^{c}\)).
The problem also includes clique detection (a problem of primary interest in graph learning (Alon et al., 1998; Ron and Feige, 2010)) as a special case. Along this line, Arias-Castro and Verzelen (2014); Verzelen and Arias-Castro (2015) have made remarkable progress. In detail, they considered the problem of testing whether a graph is generated from a one-parameter Erdos-Renyi model or a two-parameter model: for any nodes \(1\leq i,j\leq n\), the probability that they have an edge equals \(b\) if \(i,j\) both are in a small planted subset and equals \(a\) otherwise. A remarkable conclusion of these papers is: a naive degree-based \(\chi^{2}\)-test is optimal, provided that the clique size is in a certain range. Therefore, at first glance, it seems that the problem has been elegantly solved, at least to some extent.
Unfortunately, recent progress in network testing tells a very different story: the signal captured by the \(\chi^{2}\)-test may be a modeling artifact. It may disappear once we replace the models in Arias-Castro
& Verzelen (2014); Verzelen & Arias-Castro (2015) by a properly broader model. When this happens, the \(\chi^{2}\)-test will be asymptotically powerless in the whole range of parameter space.
We explain the idea with the popular _Degree-Corrected Block Model (DCBM)_(Karrer & Newman, 2011), though it is valid in broader settings. Let \(A\in\mathbb{R}^{n,n}\) be the network adjacency matrix, where \(A(i,j)\in\{0,1\}\) indicates whether there is an edge between nodes \(i\) and \(j\), \(1\leq i,j\leq n\). By convention, we do not allow for self-edges, so the diagonals of \(A\) are always 0. Suppose there are \(K\) communities, \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\). For each node \(i\), \(1\leq i\leq n\), we use a parameter \(\theta_{i}\) to model the degree heterogeneity and \(\pi_{i}\) to model the membership: when \(i\in\mathcal{C}_{k}\), \(\pi_{i}(\ell)=1\) if \(\ell=k\) and \(\pi_{i}(\ell)=0\) otherwise. For a \(K\times K\) symmetric and irreducible non-negative matrix \(P\) that models the community structure, DCBM assumes that the upper triangle of \(A\) contains independent Bernoulli random variables satisfying1
Footnote 1: In this work we use \(M^{\prime}\) to denote the transpose of a matrix or vector \(M\).
\[\mathbb{P}(A(i,j)=1)=\theta_{i}\theta_{j}\pi_{i}^{\prime}P\pi_{j},\qquad 1 \leq i,j\leq n. \tag{1.1}\]
In practice, we interpret \(P(k,\ell)\) as the baseline connecting probability between communities \(k\) and \(\ell\). Write \(\theta=(\theta_{1},\theta_{2},\ldots,\theta_{n})^{\prime}\), \(\Pi=[\pi_{1},\pi_{2},\ldots,\pi_{n}]^{\prime}\), and \(\Theta=\mathrm{diag}(\theta)\equiv\mathrm{diag}(\theta_{1},\theta_{2},\ldots, \theta_{n})\). Introduce \(n\times n\) matrices \(\Omega\) and \(W\) by \(\tilde{\Omega}=\Theta\Pi P\Pi^{\prime}\Theta\) and \(W=A-\mathbb{E}[A]\). We can re-write (1.1) as
\[A=\Omega-\mathrm{diag}(\Omega)+W. \tag{1.2}\]
We call \(\Omega\) the _Bernoulli probability matrix_ and \(W\) the noise matrix. When \(\theta_{i}\) in the same community are equal, DCBM reduces to the Stochastic Block Model (SBM) (Holland et al., 1983). When \(K=1\), the SBM reduces to the Erdos-Renyi model, where \(\Omega(i,j)\) take the same value for all \(1\leq i,j\leq n\).
We first describe why the signal captured by the \(\chi^{2}\)-test in Arias-Castro & Verzelen (2014); Verzelen & Arias-Castro (2015) is a modeling artifact. Using Sinkhorn's matrix scaling theorem (Sinkhorn, 1974), it is possible to build a null DCBM with \(K=1\) that has no community structure and an alternative DCBM with \(K\geq 2\) and clear community structure such that the two models have the _same_ expected degrees. Thus, we do not expect that degree-based test such as \(\chi^{2}\) can tell them apart. We make this Sinkhorn argument precise in Section 2.1 and show the failure of \(\chi^{2}\) in Theorem 2.3.
In the Erdos-Renyi setting in Arias-Castro & Verzelen (2014), the null has one parameter and the alternative has two parameters. In such a setting, we cannot have degree-matching. In these cases, a naive degree-based \(\chi^{2}\)-test may have good power, but it is due to the very specific models they choose. For clique detection in more realistic settings, we prefer to use a broader model such as the DCBM, where by the degree-matching argument above, the \(\chi^{2}\)-test is asymptotically powerless.
This motivates us to look for a different test. One candidate is the scan statistic Bogerd et al. (2021). However, a scan statistic is only computationally feasible when each time we scan a very small subset of nodes. For example, if each time we only scan a finite number of nodes, then the computational cost is polynomial; we call the test the _Economic Scan Test (EST)_. Another candidate may come from the Signed-Polygon test family (Jin et al., 2021b), including the Signed-Quadrilateral (SgnQ) as a special case. Let \(\hat{\eta}=(\mathbf{1}_{n}A\mathbf{1}_{n})^{-1/2}A\mathbf{1}_{n}\) and \(\tilde{A}=A-\hat{\eta}\hat{\eta}\). Define \(Q_{n}=\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\widehat{A}_{i_{1}i_{2}}\widehat{A} _{i_{2}i_{3}}\widehat{A}_{i_{3}i_{4}}\widehat{A}_{i_{4}i_{1}}\) where the shorthand \((dist)\) indicates we sum over distinct indices. The SgnQ test statistic
\[\psi_{n}=\big{[}Q_{n}-2(\|\hat{\eta}\|^{2}-1)^{2}\big{]}/\sqrt{8(\|\hat{\eta} \|^{2}-1)^{4}}. \tag{1.3}\]
SgnQ is computationally attractive because it can be evaluated in time \(O(n^{2}\bar{d})\), where \(\bar{d}\) is the average degree of the network (Jin et al., 2021b).
Moreover, it was shown in Jin et al. (2021b) that (a) when \(K=1\) (the null case), \(\psi_{n}\to N(0,1)\), and (b) when \(K>1\) and all communities are at the same order (i.e., a balanced alternative case), the SgnQ test achieves the classical information lower bound (LB) for global testing and so is optimal. Unfortunately, our case is much more delicate: the signal of interest is contained in a community with a size that is much smaller than \(n\) (e.g., \(n^{\varepsilon}\)), so the signal can be easily overshadowed by the noise term of \(Q_{n}\). Even in the simple alternative case where we only have two communities (with sizes \(N\) and \((n-N)\)), it is unclear (a) how the lower bounds vary as \(N/n\to 0\), and especially whether there is a gap between the computation lower bound (CLB) and classical information lower bound (LB), and (b) to what extent the SgnQ test attains the CLB and so is optimal.
### Results and contributions
We consider the problem of detecting a small community in the DCBM. In this work, we specifically focus on the case \(K=2\) as this problem already displays a rich set of phase transitions, and we believe it captures the essential behavior for constant \(K>1\). Let \(N\ll n\) denote the size of this small community under the alternative. Our first contribution analyzes the power of SynQ for this problem, extending results of Jin et al. (2021) that focus on the balanced case. Let \(\lambda_{1}=\lambda_{1}(\Omega)\). In Section 2.2, we define a population counterpart \(\tilde{\Omega}\) of \(\hat{A}\) and let \(\widetilde{\lambda}=\lambda_{1}(\tilde{\Omega})\). We show that SynQ has full power if \(\widetilde{\lambda}_{1}/\sqrt{\lambda_{2}}\to\infty\), which reduces to \(N(a-c)/\sqrt{nc}\to\infty\) in the SBM case.
For optimality, we obtain a computational lower bound (CLB), relying on the low-degree polynomial conjecture, which is a standard approach in studying CLB (e.g., Kunisky et al. (2019)). Consider a case where \(K=2\) and we have a small community with size \(N\). Suppose the edge probability within the community and outside the community are \(a\) and \(c\), where \(a>c\). The quantity \((a-c)/\sqrt{c}\) acts as the _Node-wise Signal-to-Noise Ratio (SNR)_ for the detection problem.2 When \(N\gg\sqrt{n}\), we find that the CLB is completely determined by \(N\) and node-wise SNR; moreover, SynQ matches with the CLB and is optimal. When \(N\ll\sqrt{n}\), the situation is more subtle: if the node-wise SNR \((a-c)/\sqrt{c}\to 0\) (weak signal case), we show the problem is computationally hard and the LB depends on \(N\) and the node-wise SNR. If \((a-c)/\sqrt{c}\gg n^{1/2}\) (strong signal case), then SynQ solves the detection problem. In the range \(1\ll(a-c)/\sqrt{c}\ll n^{1/2}\) (moderate signal case), the CLB depends on not only \(N\) and the node-wise SNR but also the background edge density \(c\). In this regime, we make conjectures of the CLB, from the study of the aforementioned economic scan test (EST). Our results are summarized in Figure 1 and explained in full detail in Section 2.7.
Footnote 2: Note that the node-wise SNR captures the ratio of the mean difference and standard deviation of Bernoulli(\(a\)) versus Bernoulli(\(c\)), which motivates our terminology.
We also obtain the classical information lower bound (LB), and discover that as \(N/n\to 0\), there is big gap between CLB and LB. Notably the LB is achieved by an (inefficient) signed scan test. In the balanced case in Jin et al. (2021), the SynQ test is optimal among all tests (even those that are allowed unbounded computation time), and such a gap does not exist.
We also show that that the naive degree-based \(\chi^{2}\)-test is asymptotically powerless due to the aforementioned degree-matching phenomenon.
Our statistical lower bound, computational lower bound, and the powerlessness of \(\chi^{2}\) based on degree-matching are also valid for all \(K>2\) since any model with \(K\geq 2\) contains \(K=2\) as a special case. We also expect that our lower bounds are tight for these broader models and that our lower bound constructions for \(K=2\) represent the least favorable cases when community sizes are severely unbalanced.
Compared to Verzelen & Arias-Castro (2015); Arias-Castro & Verzelen (2014), we consider network global testing in a more realistic setting, and show that optimal tests there (i.e., a naive degree-based \(\chi^{2}\) test) may be asymptotically powerless here. Compared with Bogerd et al. (2021), our setting is very different (they considered a setting where both the null and alternative are DCBM with \(K=1\)). Compared to the study in the balanced case (e.g., Jin et al. (2018, 2021); Gao & Lafferty (2017)), our study is more challenging for two reasons. First, in the balanced case, there is no gap between the UB (the upper bound provided by the SynQ test) and LB, so there is no need to derive the CLB, which is usually technical demanding. Second, the size of the smaller community can get as small as \(n^{\varepsilon}\), where \(\varepsilon>0\) is any constant. Due this imbalance in community sizes, the techniques of Jin et al. (2021) do not directly apply. As a result, our proof involves the careful study of the \(256\) terms that compose SynQ, which requires using bounds tailored specifically for the severely unbalanced case.
Our study of the CLB is connected to that of Hajek et al. (2015) in the Erdos-Renyi setting of Arias-Castro & Verzelen (2014). Hajek et al. (2015) proved via computational reducibility that the naive \(\chi^{2}\)-test is the optimal polynomial-time test (conditionally on the planted clique hypothesis). We also note work of Chen & Xu (2016) that studied a \(K\)-cluster generalization of the Erdos-Renyi model of Arias-Castro & Verzelen (2014); Verzelen & Arias-Castro (2015) and provided conjectures of the CLB. Compared to our setting, these models are very different because the expected degree profiles of the null and alternative differ significantly. In this work we consider the DCBM model, where due to the subtle phenomenon of degree matching between the null and alternative hypotheses, both CLB and LB are different from those obtained by Hajek et al. (2015).
**Notations:** We use \(\mathbf{1}_{n}\) to denote a \(n\)-dimensional vector of ones. For a vector \(\theta=(\theta_{1},\ldots,\theta_{n})\), \(\mathrm{diag}(\theta)\) is the diagonal matrix where the \(i\)-th diagonal entry is \(\theta_{i}\). For a matrix \(\Omega\in\mathbb{R}^{n\times n}\), \(\mathrm{diag}(\Omega)\) is the diagonal matrix where the \(i\)-th diagonal entry is \(\Omega(i,i)\). For a vector \(\theta\in\mathbb{R}^{n}\), \(\theta_{max}=\max\{\theta_{1},\ldots,\theta_{n}\}\) and \(\theta_{min}=\min\{\theta_{1},\ldots,\theta_{n}\}\). For two positive sequences \(\{a_{n}\}\) and \(\{b_{n}\}\), we write \(a_{n}\asymp b_{n}\) if \(c_{1}\leq a_{n}/b_{n}\leq c_{2}\) for constants \(c_{2}>c_{1}>0\). We say \(a_{n}\sim b_{n}\) if \((a_{n}/b_{n})=1+o(1)\).
## 2 Main results
In Section 2.1, following our discussion on Sinkhorn's theorem in Section 1, we introduce calibrations (including conditions on identifiability and balance) that are appropriate for severely unbalanced DCBM and illustrate with some examples. In Sections 2.2-2.3, we analyze the power of the SgnQ test and compare it with the \(\chi^{2}\)-test. In Sections 2.4-2.5, we discuss the information lower bounds (both the LB and CLB) and show that SgnQ test is optimal among polynomial time tests, when \(N\gg\sqrt{n}\). In Section 2.6, we study the EST and make some conjectures of the CLB when \(N\ll\sqrt{n}\). In Section 2.7, we summarize our results and present the phase transitions.
### DCBM for severely unbalanced networks: identifiability, balance metrics, and global testing
In the DCBM (1.1)-(1.2), \(\Omega=\Theta\Pi P\Pi^{\prime}\Theta\). It is known that the matrices \((\Theta,\Pi,P)\) are not identifiable. One issue is that \((\Pi,P)\) are only unique up to a permutation: for a \(K\times K\) permutation matrix \(Q\), \(\Pi P\Pi=(\Pi Q)(Q^{\prime}PQ)(\Pi Q)^{\prime}\). This issue is easily fixable in applications so is usually neglected. A bigger issue is that, \((\Theta,P)\) are not uniquely defined. For example, fixing a positive diagonal matrix \(D\in\mathbb{R}^{K\times K}\), let \(P^{*}=DPD\) and \(\Theta^{*}=\mathrm{diag}(\theta_{1}^{*},\theta_{2}^{*},\ldots,\theta_{n}^{*})\) where \(\theta_{i}^{*}=\theta_{i}/\sqrt{D(k,k)}\) if \(i\in\mathcal{C}_{k}\), \(1\leq k\leq K\). It is seen that \(\Theta\Pi P\Pi^{\prime}\Theta=\Theta^{*}\Pi P^{*}\Pi^{\prime}\Theta^{*}\), so \((\Theta,P)\) are not uniquely defined.
To motivate our identifiability condition, we formalize the degree-matching argument discussed in the introduction. Fix \((\theta,P)\) and let \(h=(h_{1},\ldots,h_{K})^{\prime}\) and \(h_{k}>0\) is the fraction of nodes in community \(k\), \(1\leq k\leq K\). By the main result of Sinkhorn (1974), there is a unique positive diagonal matrix \(D=\mathrm{diag}(d_{1},\ldots,d_{K})\) such that \(DPDh=\mathbf{1}_{K}\). Consider a pair of two DCBM, a null with \(K=1\) and an alternative with \(K>1\), with parameters \(\Omega=\Theta\mathbf{1}_{n}\mathbf{1}_{n}^{\prime}\Theta\equiv\theta\theta^{\prime}\) and \(\Omega^{*}(i,j)=\theta_{i}^{*}\theta_{j}^{*}\pi_{i}^{\prime}P\pi_{j}\) with \(\theta_{i}^{*}=d_{k}\theta_{i}\) if \(i\in\mathcal{C}_{k}\), \(1\leq k\leq K\), respectively. Direct calculation shows that node \(i\) has the same expected degree under the null and alternative.
There are many ways to resolve the issue. For example, in the balanced case (e.g., Jin et al. (2021b, 2022)), we can resolve it by requiring that \(P\) has unit diagonals. However, for our case, this is inappropriate. Recall that, in practice, \(P(k,\ell)\) represents as the baseline connecting probability between community \(k\) and \(\ell\). If we forcefully rescale \(P\) to have a unit diagonal here, both \((P,\Theta)\) lose their practical meanings.
Motivated by the degree-matching argument, we propose an identifiability condition that is more appropriate for the severely unbalanced DCBM. By our discussion in Section 1, for any DCBM with a Bernoulli probability matrix \(\Omega\), we can always use Sinkhorn's theorem to define \((\Theta,P)\) (while \(\Pi\) is unchanged) such that for the new \((\Theta,P)\), \(\Theta=\Theta\Pi P\Pi^{\prime}\Theta\) and \(Ph\propto\mathbf{1}_{K}\), where \(h=(h_{1},\ldots,h_{K})^{\prime}\) and \(h_{k}>0\) is the fraction of nodes in community \(k\), \(1\leq k\leq K\). This motivates the following identifiability condition (which is more appropriate for our case):
\[\|\theta\|_{1}=n,\qquad Ph\propto\mathbf{1}_{K},\quad\text{where $h_{k}$ is fraction of nodes in $\mathcal{C}_{k}$, $1\leq k\leq K$}. \tag{2.1}\]
**Lemma 2.1**.: _For any \(\Omega\) that satisfies the DCBM (1.2) and has positive diagonal elements, we can always find \((\Theta,\Pi,P)\) such that \(\Omega=\Theta\Pi P\Pi^{\prime}\Theta\) and (2.1) holds. Also, any \((\Theta,P)\) that satisfy \(\Omega=\Theta\Pi P\Pi^{\prime}\Theta\) and (2.1) are unique._
Moreover, for network balance, the following two vectors in \(\mathbb{R}^{K}\) are natural metrics:
\[d=(\|\theta\|_{1})^{-1}\Pi^{\prime}\Theta\mathbf{1}_{n},\qquad g=(\|\theta\|)^{-2 }\Pi^{\prime}\Theta^{2}\Pi\mathbf{1}_{K}, \tag{2.2}\]
In the balanced case (e.g., Jin et al. (2021b, 2022)), we usually assume the entries of \(d\) and \(g\) are at the same order. For our setting, this is not the case.
Next we introduce the null and alternative hypotheses that we consider. Under each hypothesis, we impose the identifiability condition (2.1).
General null model for the DCBM.When \(K=1\) and \(h=1\), \(P\) is scalar (say, \(P=\alpha\)), and \(\Omega=\alpha\theta\theta^{\prime}\) satisfies \(\|\theta\|_{1}=n\) by (2.1). The expected total degree is \(\alpha(\|\theta\|_{1}^{2}-\|\theta\|^{2})\sim\alpha\|\theta\|_{1}^{2}=n^{2}\alpha\) under mild conditions, so we view \(\alpha\) as the parameter for network sparsity. In this model, \(d=g=1\).
Alternative model for the DCBM.We assume \(K=2\) and that the sizes of the two communities, \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\), are \((n-N)\) and \(N\), respectively. For some positive numbers \(a,b,c\), we have
\[P=\left[\begin{array}{cc}a&b\\ b&c\end{array}\right],\qquad\text{and}\qquad\Omega(i,j)=\left\{\begin{array}{ ll}\theta_{i}\theta_{j}\cdot a,&\text{if }i,j\in\mathcal{C}_{1},\\ \theta_{i}\theta_{j}\cdot c,&\text{if }i,j\in\mathcal{C}_{0},\\ \theta_{i}\theta_{j}\cdot b,&\text{otherwise}.\end{array}\right. \tag{2.3}\]
In the classical clique detection problem (e.g., Bogerd et al. (2021)), \(a\) and \(c\) are the baseline probability where two nodes have an edge when both of them are _in_ the clique and _outside_ the clique, respectively. By (2.1), \(a\epsilon+b(1-\epsilon)=b\epsilon+c(1-\epsilon)\) if we write \(\epsilon=N/n\). Therefore,
\[b=(c(n-N)-aN)/(n-2N). \tag{2.4}\]
Note that this is the _direct result_ of Sinkhorn's theorem and the parameter calibration we choose, not a condition we choose for technical convenience. Write \(d=(d_{0},d_{1})^{\prime}\) and \(g=(g_{0},g_{1})^{\prime}\). It is seen that \(d_{0}=1-d_{1}\), \(g_{0}=1-g_{0}\), \(d_{1}=\|\theta\|_{1}^{-1}\sum_{i\in\mathcal{C}_{1}}\theta_{i}\), and \(g_{1}=\|\theta\|^{-2}\sum_{i\in\mathcal{C}_{1}}\theta_{i}^{2}\). If all \(\theta_{i}\) are at the same order, then \(d_{1}\asymp g_{1}\asymp(N/n)\) and \(d_{0}\sim g_{0}\sim 1\). We also observe that \(b=c+O(ae)\) which makes the problem seem very close to Arias-Castro and Verzelen (2014); Bogerd et al. (2021), although in fact the problems are quite different.
Extension.An extension of our alternative is that, for the \(K\) communities, the sizes of \(m\) of them are at the order of \(N\), for an \(N\ll n\) and an integer \(m\), \(1\leq m<K\), and the sizes of remaining \((K-m)\) are at the order of \(n\). In this case, \(m\) entries of \(d\) are \(O(N/n)\) and other entries are \(O(1)\); same for \(g\).
### The SgnQ test: limiting null, p-value, and power
In the null case, \(K=1\) and we assume \(\Omega=\alpha\theta\theta^{\prime}\), where \(\|\theta\|_{1}=n\). As \(n\to\infty\), both \((\alpha,\theta)\) may vary with \(n\). Write \(\theta_{\max}=\|\theta\|_{\infty}\). We assume
\[n\alpha\to\infty,\qquad\text{and}\qquad\alpha\theta_{\max}^{2}\log(n^{2} \alpha)\to 0. \tag{2.5}\]
The following theorem is adapted from Jin et al. (2021b) and the proof is omitted.
**Theorem 2.1** (Limiting null of the SgnQ statistic).: _Suppose the null hypothesis is true and the regularity conditions (2.1) and (2.5) hold. As \(n\to\infty\), \(\psi_{n}\to N(0,1)\) in law._
We have two comments. First, since the DCBM has many parameters (even in the null case), it is not an easy task to find a test statistic with a limiting null that is completely parameter free. For example, if we use the largest eigenvalue of \(A\) as the test statistic, it is unclear how to normalize it so to have such a limiting null. Second, since the limiting null is completely explicit, we can approximate the (one-sided) \(p\)-value of \(\psi_{n}\) by \(\mathbb{P}(N(0,1)\geq\psi_{n})\). The p-values are useful in practice, as we show in our numerical experiments.. For example, using a recent data set on the statisticians' publication (Ji et al., 2022), for each author, we can construct an ego network and apply the SgnQ test. We can then use the \(p\)-value to measure the co-authorship diversity of the author. Also, in many hierarchical community detection algorithms (which are presumably recursive, aiming to estimate the tree structure of communities), we can use the p-values to determine whether we should further divide a sub-community in each stage of the algorithm (e.g. Ji et al. (2022)).
The power of the SgnQ test hinges on the matrix \(\widetilde{\Omega}=\Omega-(\mathbf{1}_{n}^{\prime}\Omega\mathbf{1}_{n})^{-1} \Omega\mathbf{1}_{n}\mathbf{1}_{n}^{\prime}\Omega\). By basic algebra,
\[\widetilde{\Omega}=\Theta\Pi\widetilde{P}\Pi^{\prime}\Theta,\qquad\text{where} \quad\widetilde{P}=P-(d^{\prime}Pd)^{-1}Pdd^{\prime}P. \tag{2.6}\]
Let \(\tilde{\lambda}_{1}\) be the largest (in magnitude) eigenvalue of \(\widetilde{\Omega}\). Lemma 2.2 is proved in the supplement.
**Lemma 2.2**.: _The rank and trace of the matrix \(\widetilde{\Omega}\) are \((K-1)\) and \(\|\theta\|^{2}{\rm diag}(\tilde{P})^{\prime}g\), respectively. When \(K=2\), \(\tilde{\lambda}_{1}={\rm trace}(\widetilde{\Omega})=\|\theta\|^{2}(ac-b^{2})(d_ {0}^{2}g_{1}+d_{1}^{2}g_{0})/(ad_{1}^{2}+2bd_{0}d_{1}+cd_{0}^{2})\)._
As a result of this lemma, we observe that in the SBM case, \(d=h\) and thus \(\widetilde{\lambda}_{1}=\lambda_{2}\asymp N(a-c)\). To see intuitively that the power of the SgnQ test hinges on \(\tilde{\lambda}_{1}^{4}/\lambda_{1}^{2}\), if we heuristically replace the terms of SgnQ by population counterparts, we obtain
\[Q_{n}=\sum_{i_{1},i_{2},i_{3},i_{4}(distinct)}\hat{A}_{i_{1}i_{2}}\hat{A}_{i _{2}i_{3}}\hat{A}_{i_{3}i_{4}}\hat{A}_{i_{4}i_{4}}\approx{\rm trace}([\Omega- \eta\eta^{\prime}]^{4})={\rm trace}(\widetilde{\Omega}^{4})=\tilde{\lambda}_ {1}^{4}.\]
We now formally discuss the power of the SgnQ test. We focus on the alternative hypothesis in Section 2.1. Let \(d=(d_{1},d_{0})^{\prime}\) and \(g=(g_{1},g_{0})^{\prime}\) be as in (2.2), and let \(\theta_{\max,0}=\max_{i\in\mathcal{C}_{0}}\theta_{i}\) and \(\theta_{\max,1}=\max_{i\in\mathcal{C}_{1}}\theta_{i}\). Suppose
\[d_{1}\asymp g_{1}\asymp N/n,\qquad a\theta_{\max,1}^{2}=O(1),\qquad cn\to \infty,\qquad c\theta_{\max,0}^{2}\log(n^{2}c)\to 0. \tag{2.7}\]
These conditions are mild. For example, when \(\theta_{i}\)'s are at the same order, the first inequality in (2.7) automatically holds, and the other inequalities in (2.7) hold if \(a\leq C\) for an absolute constant \(C>0\), \(cn\to\infty\), and \(c\log(n)\to 0\).
Fixing \(0<\kappa<1\), let \(z_{\kappa}>0\) be the value such that \(\mathbb{P}(N(0,1)\geq z_{\kappa})=\kappa\). The level-\(\kappa\) SgnQ test rejects the null if and only if \(\psi_{n}\geq z_{\kappa}\), where \(\psi_{n}\) is as in (1.3). Theorem 2.2 and Corollary 2.1 are proved in the supplement. Recall that our alternative hypothesis is defined in Section 2.1. By _power_ we mean the probability that the alternative hypothesis is rejected, minimized over all possible alternative DCBMs satisfying our regularity conditions.
**Theorem 2.2** (Power of the SgnQ test).: _Suppose that (2.7) holds, and let \(\kappa\in(0,1)\). Under the alternative hypothesis, if \(|\tilde{\lambda}_{1}|/\sqrt{\lambda_{1}}\to\infty\), the power of the level-\(\kappa\) SgnQ test tends to \(1\)._
**Corollary 2.1**.: _Suppose the same conditions of Theorem 2.2 hold, and additionally \(\theta_{\max}\leq C\theta_{\min}\) so all \(\theta_{i}\) are at the same order. In this case, \(\lambda_{1}\asymp cn\) and \(|\tilde{\lambda}_{1}|\asymp N(a-c)\), and the power of the level-\(\kappa\) SgnQ test tends to \(1\) if \(N(a-c)/\sqrt{cn}\to\infty\)._
In Theorem 2.2 and Corollary 2.1, if \(\kappa=\kappa_{n}\) and \(\kappa_{n}\to 0\) slowly enough, then the results continues to hold, and the sum of Type I and Type II errors of the SgnQ test at level-\(\kappa_{n}\to 0\).
The power of the SgnQ test was only studied in the balanced case (Jin et al., 2021b), but our setting is a severely unbalanced case, where the community sizes are at different orders as well as the entries of \(d\) and \(g\). In the balanced case, the signal-to-noise ratio of SgnQ is governed by \(|\lambda_{2}|/\sqrt{\lambda_{1}}\), but in our setting, the signal-to-noise ratio is governed by \(|\tilde{\lambda}_{1}|/\sqrt{\lambda_{1}}\). The proof is also subtly different. Since the entries of \(P\) are at different orders, many terms deemed negligible in the power analysis of the balanced case may become non-negligible in the unbalanced case and require careful analysis.
### Comparison with the naive degree-based \(\chi^{2}\)-test
Consider a setting where \(\Omega=\alpha\Theta\mathbf{1}_{n}\mathbf{1}_{n}^{\prime}\mathbf{\Theta}\equiv \alpha\theta\theta^{\prime}\) under the null and \(\Omega=\Theta\Pi P\Pi^{\prime}\mathbf{\Theta}\) under the alternative, and (2.1) holds. When \(\theta\) is unknown, it is unclear how to apply the \(\chi^{2}\)-test: the null case has \(n\) unknown parameters \(\theta_{1},\ldots,\theta_{n}\), and we need to use the degrees to estimate \(\theta_{i}\) first. As a result, the resultant \(\chi^{2}\)-statistic may be trivially \(0\). Therefore, we consider a simpler SBM case where \(\theta=\mathbf{1}_{n}\). In this case, \(\Omega=\alpha\mathbf{1}_{n}\mathbf{1}_{n}\), and \(\Omega=\Pi P\Pi^{\prime}\) and the null case only has one unknown parameter \(\alpha\). Let \(y_{i}\) be the degree of node \(i\), and let \(\hat{\alpha}=[n(n-1)]^{-1}\mathbf{1}_{n}^{\prime}A\mathbf{1}_{n}\). The \(\chi^{2}\)-statistic is
\[X_{n}=\sum_{i=1}^{n}(y_{i}-n\hat{\alpha})^{2}/[(n-1)\hat{\alpha}(1-\hat{\alpha })]. \tag{2.8}\]
It is seen that as \(n\alpha\to\infty\) and \(\alpha\to 0\), \((X_{n}-n)/\sqrt{2n}\to N(0,1)\) in law. For a fixed level \(\kappa\in(0,1)\), consider the \(\chi^{2}\)-test that rejects the null if and only if \((X_{n}-n)/\sqrt{2n}>z_{\kappa}\). Let \(\alpha_{0}=n^{-2}(\mathbf{1}_{n}^{\prime}\Omega\mathbf{1}_{n})\). The power of the \(\chi^{2}\)-test hinges on the quantity \((n\alpha_{0})^{-1}\|(\Omega\mathbf{1}_{n}-n\alpha_{0})\|^{2}=(n\alpha_{0})^{-1} \|\Pi Ph-(h^{\prime}Ph)^{-1}\mathbf{1}_{n}\|^{2}=0\), if \(Ph\propto\mathbf{1}_{K}\). The next theorem is proved in the supplement.
**Theorem 2.3**.: _Suppose \(\theta=\mathbf{1}_{n}\) and (2.7) holds. If \(|\tilde{\lambda}_{1}|/\sqrt{\lambda_{1}}\to\infty\) under the alternative hypothesis, the power of the level-\(\kappa\) SgnQ test goes to \(1\), while the power of the level-\(\kappa\)\(\chi^{2}\)-test goes to \(\kappa\)._
### The statistical lower bound and the optimality of the scan test
For lower bounds, it is standard to consider a random-membership DCBM (Jin et al., 2021), where \(\|\theta\|_{1}=n\), \(P\) is as in (2.3)-(2.4) and for a number \(N\ll n\), \(\Pi=[\pi_{1},\pi_{2},\ldots,\pi_{n}]^{\prime}\) satisfies
\[\pi_{i}=(X_{i},1-X_{i}),\qquad\text{where $X_{i}$ are iid Bernoulli}(\varepsilon)\text{ with }\varepsilon=N/n. \tag{2.9}\]
**Theorem 2.4** (Statistical lower bound).: _Consider the null and alternative hypotheses of Section 2.1, and assume that (2.9) is satisfied, \(\theta_{\max}\leq C\theta_{\min}\) and \(Nc/\log n\to\infty\). If \(\sqrt{N}(a-c)/\sqrt{c}\to 0\), then for any test, the sum of the type-I and type-II errors tends to \(1\)._
To show the tightness of this lower bound, we introduce the signed scan test, by adapting the idea in Arias-Castro and Verzelen (2014) from the SBM case to the DCBM case. Unlike the SgnQ test and the \(\chi^{2}\)-test, signed scan test is not a polynomial time test, but it provides sharper upper bounds. Let \(\hat{\eta}\) be the same as in (1.3). For any subset \(S\subset\{1,2,\ldots,n\}\), let \(\mathbf{1}_{S}\in\mathbb{R}^{n}\) be the vector whose \(i\)th coordinate is \(1\{i\in S\}\). Define the signed scan statistic
\[\phi_{sc}=\max_{S\subset\{1,2,\ldots,n\}:|S|=N}\mathbf{1}_{S}^{\prime}\big{(} A-\hat{\eta}\hat{\eta}^{\prime}\big{)}\mathbf{1}_{S}. \tag{2.10}\]
**Theorem 2.5** (Tightness of the statistical lower bound).: _Consider the signed scan test (2.10) that rejects the null hypothesis if \(\phi_{sc}>t_{n}\). Under the assumptions of Theorem 2.4, if \(\sqrt{N}(a-c)/\sqrt{c\log(n)}\to\infty\), then there exists a sequence \(t_{n}\) such that the sum of type I and type II errors of the signed scan test tends to \(0\)._
By Theorems 2.4-2.5 and Corollary 2.1, the two hypotheses are asymptotically indistinguishable if \(\sqrt{N}(a-c)/\sqrt{c}\to 0\), and are asymptotically distinguishable by the SgnQ test if \(N(a-c)/\sqrt{cn}\to\infty\). Therefore, the lower bound is sharp, up to log-factors, and the signed scan test is nearly optimal. Unfortunately, the signed scan test is not polynomial-time computable. Does there exist a polynomial-time computable test that is optimal? We address this in the next section.
### The computational lower bound
Consider the same hypothesis pair as in Section 2.4, where \(K=2\), \(P\) is as in (2.3)-(2.4), and \(\Pi\) is as in (2.9). For simplicity, we only consider SBM, i.e., \(\theta_{i}\equiv 1\). The low-degree polynomials argument emerges recently as a major tool to predicting the average-case computational barriers in a wide range of high-dimensional problems (Hopkins and Steurer, 2017; Hopkins et al., 2017). Many powerful methods, such as spectral algorithms and approximate message passing, can be formulated as functions of the input data, where the functions are polynomials with degree at most logarithm of the problem dimension. In comparison to many other schemes of developing computational lower barriers, the low-degree polynomial method yields the same threshold for various average-case hardness problems, such as community detection in the SBM (Hopkins and Steurer, 2017) and (hyper)-planted clique detection (Hopkins, 2018; Luo and Zhang, 2022). The foundation of the low-degree polynomial argument is the following _low-degree polynomial conjecture_ (Hopkins et al., 2017) :
**Conjecture 2.1** (Adapted from Kunisky et al. (2019)).: _Let \(\mathbb{P}_{n}\) and \(\mathbb{Q}_{n}\) denote a sequence of probability measures with sample space \(\mathbb{R}^{n^{x}}\) where \(k=O(1)\). Suppose that every polynomial \(f\) of degree \(O(\log n)\) with \(\mathbb{E}_{\mathbb{Q}_{n}}f^{2}=1\) is bounded under \(\mathbb{P}_{n}\) with high probability as \(n\to\infty\) and that some further regularity conditions hold. Then there is no polynomial-time test distinguishing \(\mathbb{P}_{n}\)from \(\mathbb{Q}_{n}\) with type I and type II error tending to \(0\) as \(n\to\infty\)._
We refer to Hopkins (2018) for a precise statement of this conjecture's required regularity conditions. The low-degree polynomial computational lower bound for our testing problem is as follows.
**Theorem 2.6** (Computational lower bound).: _Consider the null and alternative hypotheses in Section 2.1, and assume \(\theta_{i}\equiv 1\) and (2.9) holds. As \(n\to\infty\), assume \(c<a\), \(c<1-\delta\) for constant \(\delta>0\), \(N<n/3\), \(D=O(\log n)\), and \(\limsup_{n\to\infty}\left\{\left(\log_{n}\frac{N}{\sqrt{n}}+\log_{n}\frac{a-c} {\sqrt{c}}\right)\vee\left(\sqrt{D/2-1}\log_{n}\frac{a-c}{\sqrt{c}}\right) \right\}<0\). For any series of degree-\(D\) polynomials \(\phi_{n}:A\to\mathbb{R}\), whenever \(\mathbb{E}_{H_{0}}\phi_{n}(A)=0,\text{Var}_{H_{0}}(\phi_{n}(A))=1\), we must have \(\mathbb{E}_{H_{1}}\phi_{n}(A)=o(1)\). This implies if Conjecture 2.1 is true, there is no consistent polynomial-time test for this problem._
By Theorem 2.6, if both \((a-c)/\sqrt{c}\lesssim 1\) and \(N(a-c)/\sqrt{cn}\to 0\), the testing problem is computationally infeasible. The region where the testing problem is statistically possible but the SgnQ test loses power corresponds to \(N(a-c)/\sqrt{cn}\to 0\). If \(N\gtrsim\sqrt{n}\), Theorem 2.6 already implies that this is the computationally infeasible region; in other words, SgnQ achieves the CLB and is optimal. If \(N=o(\sqrt{n})\), SgnQ solves the detection problem only when \((a-c)/\sqrt{c}\gg n^{1/2}\), i.e. when the node-wise SNR is strong. We discuss the case of moderate node-wise SNR in the next subsection.
### The power of EST, and discussions of the tightness of CLB
When \(N=o(\sqrt{n})\) and \((a-c)/\sqrt{c}\rightarrow\infty\) both hold, the upper bound by SgnQ does not match with the CLB. It is unclear whether the CLB is tight. To investigate the CLB in this regime, we consider other possible polynomial-time tests. The economic scan test (EST) is one candidate. Given fixed positive integers \(v\) and \(e\), the EST statistic is defined to be \(\phi^{(v)}_{EST}\equiv\sup_{|S|\leq v}\sum_{i,j\in S}A_{ij}\), and the EST is defined to reject if and only if \(\phi^{(v)}_{EST}\geq e\). EST can be computed in time \(O(n^{v})\), which is polynomial time. For simplicity, we consider the SBM, i.e. where \(\theta=\mathbf{1}_{n}\), and a specific setting of parameters for the null and alternative hypotheses.
**Theorem 2.7** (Power of EST).: _Suppose \(\beta\in[1/2,1)\) and \(0<\omega<\delta<1\) are fixed constants. Under the alternative, suppose \(\theta=\mathbf{1}_{n}\), (2.9) holds, \(N=n^{1-\beta}\), \(a=n^{-\omega}\), and \(c=n^{-\delta}\). Under the null, suppose \(\theta=\mathbf{1}_{n}\) and \(\alpha=a(N/n)+b(1-N/n)\). If \(\omega/(1-\beta)<\delta\), the sum of type I and type II errors of the EST with \(v\) and \(e\) satisfying \(\omega/(1-\beta)<v/e<\delta\) tends to \(0\)._
Theorem 2.7 follows from standard results in probabilistic combinatorics (Alon and Spencer, 2016). It is conjectured in Bhaskara et al. (2010) that EST attains the CLB in the Erdos-Renyi setting considered by Arias-Castro and Verzelen (2014); Verzelen and Arias-Castro (2015). This suggests that the CLB in Theorem 2.6 is likely not tight when \(N=o(\sqrt{n})\) and \((a-c)/\sqrt{c}\rightarrow\infty\). However, this is not because our inequalities in proving the CLB are loose. A possible reason is that the prediction from the low-degree polynomial conjecture does not provide a tight bound. It remains an open question whether other computational infeasibility frameworks provide a tight CLB in our problem.
### The phase transition
We describe more precisely our results in terms of the phase transitions shown in Figure 1. Consider the null and alternative hypotheses from Section 2.1. For illustration purposes, we fix constants \(\beta\in(0,1)\) and \(\gamma\in\mathbb{R}\) and assume that \(N=n^{1-\beta}\) and \((a-c)/\sqrt{c}=n^{-\gamma}\). In the two-dimensional space of \((\gamma,\beta)\), the region of \(\beta>1/2\) and \(\beta<1/2\) corresponds to that the size of the small community is \(\gg\sqrt{n}\) and \(o(\sqrt{n})\), respectively, and the regions of \(\gamma>0\), \(-1/2<\gamma<0\) and \(\gamma<-1/2\) correspond to 'weak node-wise signal','moderate node-wise signal,' and the'strong node-wise signal', respectively. See Figure 1. By our results in Section 2.4, the testing problem is statistically impossible if \(\beta+2\gamma>1\) (orange region). By our results in Section 2.2, SgnQ has a full power if \(\beta+\gamma<1/2\) (blue region). Our results in Section 2.5 state that the testing problem is computationally infeasible if both \(\gamma>0\) and \(\beta+\gamma>1/2\) (green and orange regions). Combining these results, when \(\beta<1/2\), we have a complete understanding of the LB and CLB.
Figure 2: Left: Null distribution of SgnQ (\(n=500\)). Middle and right: Power comparison of SgnQ and \(\chi^{2}\) (\(n=100\), \(N=10\), 50 repetitions). We consider a 2-community SBM with \(P_{11}=a\), \(P_{22}=0.1\), \(P_{12}=0.1\) (middle plot) and \(P_{12}=\frac{an-(a+0.1)N}{n}\) (right plot, the case of degree matching).
## 3 Numerical results
**Simulations**. First in Figure 2 (left panel) we demonstrate the asymptotic normality of SgnQ under a null of the form \(\Omega=\theta\theta^{\prime}\), where \(\theta_{i}\) are i.i.d. generated from \(\mathrm{Pareto}(4,0.375)\). Though the degree heterogeneity is severe, SgnQ properly standardized is approximately standard normal under the null. Next in Figure 2 we compare the power of SgnQ in an asymmetric and symmetric SBM model. As our theory predicts, both tests are powerful when degrees are not calibrated in each model, but only SgnQ is powerful in the symmetric case. We also compare the power of SgnQ with the scan test to show evidence of a statistical-computational gap. We relegate these experiments to the supplement.
**Real data**: Next we demonstrate the effectiveness of SgnQ in detecting small communities in coauthorship networks studied in Ji et al. (2022). In Example 1, we consider the personalized network of Raymond Carroll, whose nodes consist of his coauthors for papers in a set of 36 statistics journals from the time period 1975 - 2015. An edge is placed between two coauthors if they wrote a paper in this set of journals during the specified time period. The SgnQ p-value for Carroll's personalized network \(G_{\mathsf{Carroll}}\) is \(0.02\), which suggests the presence of more than one community. In Ji et al. (2022), the authors identify a small cluster of coauthors from a collaboration with the National Cancer Institute. We applied the SCORE community detection module with \(K=2\) (e.g. Ke & Jin (2022)) and obtained a larger community \(G^{0}_{\mathsf{Carroll}}\) of size \(218\) and a smaller community \(G^{1}_{\mathsf{Carroll}}\) of size \(17\). Precisely, we removed Carroll from his network, applied SCORE on the remaining giant component, and defined \(G^{0}_{\mathsf{Carroll}}\) to be the complement of the smaller community. The SgnQ p-values in the table below suggest that both \(G^{0}_{\mathsf{Carroll}}\) and \(G^{1}_{\mathsf{Carroll}}\) are tightly clustered. Refer to the supplement for a visualization of Carroll's network and its smaller community labeled by author names. In Example 2, we consider three different coauthorship networks \(G_{\mathsf{old}}\), \(G_{\mathsf{recent}}\), and \(G_{\mathsf{new}}\) corresponding to time periods (i) 1975-1997, (ii) 1995-2007, and (iii) 2005-2015 for the journals AoS, Bka, JASA, and JRSSB. Nodes are given by authors, and an edge is placed between two authors if they coauthored at least one paper in one of these journals during the corresponding time period. For each network, we perform a similar procedure as in the first example. First we compute the SgnQ p-value, which turns out to be \(\approx 0\) (up to 16 digits of precision) for all networks. For each \(i\in\{\mathsf{old},\mathsf{recent},\mathsf{new}\}\), we apply SCORE with \(K=2\) to \(G_{i}\) and compute the SgnQ p-value on both resulting communities, let us call them \(G^{0}_{i}\) and \(G^{1}_{i}\). We refer to the table below for the results. For \(G_{\mathsf{old}}\) and \(G_{\mathsf{recent}}\), SCORE with \(K=2\) extracts a small community. The SgnQ p-value further supports the hypothesis that this small community is well-connected. In the last network, SCORE splits \(G_{\mathsf{new}}\) into two similarly sized pieces whose p-values suggests they can be split into smaller subcommunities.
\begin{tabular}{|c c c c c c c|} \hline Example & Network & Size & SgnQ p-value & Communities & Sizes & SgnQ p-values \\ \hline \(1\) & \(G_{\mathsf{Carroll}}\) & 235 & 0.02 & \((G^{0}_{\mathsf{Carroll}},G^{1}_{\mathsf{dom}})\) & (218, 17) & (0.134, 0.682) \\ \hline \(2\) & \(G_{\mathsf{old}}\) & 2647 & 0 & \((G^{0}_{\mathsf{old}},G^{1}_{\mathsf{old}})\) & (2586, 61) & (0, 0.700) \\ & \(G_{\mathsf{recent}}\) & 2554 & 0 & \((G^{0}_{\mathsf{recent}},G^{1}_{\mathsf{recent}})\) & (2540,14) & (0, 0.759) \\ & \(G_{\mathsf{new}}\) & 2920 & 0 & \((G^{0}_{\mathsf{new}},G^{1}_{\mathsf{new}})\) & (1685,1235) & (0, 0) \\ \hline \end{tabular}
**Discussions**: Global testing is a fundamental problem and often the starting point of a long line of research. For example, in the literature of Gaussian models, certain methods started as a global testing tool, but later grew into tools for variable selection, classification, and clustering and motivated many researches (e.g., Donoho & Jin (2004, 2015)). The SgnQ test may also motivate tools for many other problems, such as estimating the locations of the clique and clustering more generally. For example, in Jin et al. (2022), the SgnQ test motivated a tool for estimating the number of communities (see also Ma et al. (2021)). SgnQ is also extendable to clique detection in a tensor (Yuan et al., 2021; Jin et al., 2021) and for network change point detection. The LB and CLB we obtain in this paper are also useful for studying other problems, such as clique estimation. If you cannot tell whether there is a clique in the network, then it is impossible to estimate the clique. Therefore, the LB and CLB are also valid for the clique estimation problem (Alon et al., 1998; Ron & Feige, 2010).
The limiting distribution of SgnQ is \(N(0,1)\). This is not easy to achieve if we use other testing ideas, such as the leading eigenvalues of the adjacency matrix: the limiting distribution depends on many unknown parameters and it is hard to normalize (Liu et al., 2019). The p-value of the SgnQ test is easy to approximate and also useful in applications. For example, we can use it to measure the research diversity of a given author. Consider the ego sub-network of an author in a large co-authorship or citation network. A smaller p-value suggests that the ego network has more than 1 community and has more diverse interests. The p-values can also be useful as a stopping criterion in hierarchical community detection modules.
**Acknowledgments.** We thank the anonymous referees for their helpful comments. We thank Louis Cammarata for assistance with the simulations in Section A.3. J. Jin was partially supported by NSF grant DMS-2015469. Z.T. Ke was supported in part by NSF CAREER Grant DMS-1943902. A.R. Zhang acknowledges the grant NSF CAREER-2203741.
## Appendix
### Additional experiments
1. [label=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,ref=A,
We consider a SBM null and alternative model (as in Example 2 with \(\theta\equiv 1\)) with
\[P_{0}=\begin{pmatrix}\alpha&\alpha\\ \alpha&\alpha\end{pmatrix},\qquad P_{1}=\begin{pmatrix}a&b\\ b&c\end{pmatrix}\]
where \(aN+b(n-N)=\alpha\). For this simple testing problem, we compare the power of SgnQ and the scan test. In our experiments, we set \(\alpha=0.2\) and allow the parameter \(a\) to vary from \(a=\alpha\) to \(a=a_{\max}\equiv an/N\). Once \(a\) and \(\alpha\) are fixed, the parameters \(b\) and \(c\) are determined by
\[c =\frac{aN^{2}+\alpha n^{2}-2\alpha nN}{(n-N)^{2}},\] \[b =\frac{nc-(a+c)N}{n-2N}.\]
In particular, \(a_{\max}\) is the largest value of \(a\) such that \(b\geq 0\).
Since the scan test \(\phi_{sc}\) we defined is extremely computationally expensive, we study the power of an 'oracle' scan test \(\tilde{\phi}_{sc}\) which knows the location of the true planted subset \(\mathcal{C}_{1}\). The power of the oracle scan test is computed as follows. Let \(\kappa\) denote the desired level.
1. Using \(M_{cal}\) repetitions under the null, we calculate the (non-oracle) scan statistic \(\phi_{sc}^{(1)},\ldots,\phi_{sc}^{(M_{cal})}\) for each repetition. We set the threshold \(\hat{\tau}\) to be the empirical \(1-\kappa\) quantile of \(\phi_{sc}^{(1)},\ldots,\phi_{sc}^{(M_{cal})}\).
2. Given a sample from the alternative model, we compute the power using \(M_{pow}\) repetitions, where we reject if \[\tilde{\phi}_{sc}\equiv\mathbf{1}_{\mathcal{C}_{1}}(A-\hat{\eta}\hat{\eta}^{ \prime})\mathbf{1}_{\mathcal{C}_{1}}>\hat{\tau}.\]
In our experiments, we set \(M_{cal}=75\) and \(M_{pow}=200\).
Note that since \(\tilde{\phi}_{sc}\leq\phi_{sc}\), the procedure above gives an underestimate of the power of the scan test (provide the threshold is correctly calibrated), which is helpful since this can be used to show evidence of a statistical-computational gap.
In our plots we also indicate the statistical (information-theoretic) and computational thresholds in addition to the power. Inspired by the sharp characterization of the statistical threshold in (Arias-Castro & Verzelen, 2014, Equation (10)) for planted dense subgraph, in all plots we draw a black vertical dashed line at the first value of \(a\) such that
\[(1/2)\sqrt{N}(a-c)/\sqrt{c(1-c)}>1.\]
We draw a blue vertical dashed line at the first value of \(a\) such that
\[N(a-c)/\sqrt{nc}>1.\]
Figure 3: **Left:** Carroll’s personalized network, figure taken from Ji et al. (2022). **Right:** A small community of \(17\) authors extracted by SCORE and whose SgnQ p-value is \(0.6818\).
### \(\chi_{2}\) vs. SgnQ
We also show additional experiments demonstrating the effect of degree-matching on the power of the \(\chi^{2}\) test. We compute the power with respect to the following alternative models (as in Example 2 with \(\theta\equiv 1\)) with
\[P^{(1)}=\begin{pmatrix}a&b\\ b&c\end{pmatrix},\qquad P^{(2)}=\begin{pmatrix}a&c\\ c&c\end{pmatrix}\]
where \(b=\frac{cn-(a+c)N}{n-2N}\), \(c\) is fixed, and \(a\) ranges from \(c\) to \(a^{\prime}_{\max}=c(n-N)/N\) for the experiments with \(P^{(1)}\). Similar to before, \(a^{\prime}_{\max}\) is the largest value of \(a\) such that \(b\geq 0\). See Figure 5 for further details.
## Appendix B Proof of Lemma 2.1 (Identifiability)
To prove identifiability, we make use of the following result from (Jin et al., 2021c, Lemma 3.1), which is in line with Sinkhorn's work Sinkhorn (1974) on matrix scaling.
Figure 4: The power of SgnQ (blue curve) and oracle scan (black curve) for \(n=30,N\in\{4,6,7\}\) (left) and \(n=40,N\in\{4,6,7\}\) (right). The black dashed line indicates the theoretical statistical threshold, and the blue dashed line indicates the theoretical computational threshold.
**Lemma B.1** (Jin et al. (2021c)).: _Given a matrix \(A\in\mathbb{R}^{K,K}\) with strictly positive diagonal entries and non-negative off-diagonal entries, and a strictly positive vector \(h\in\mathbb{R}^{K}\), there exists a unique diagonal matrix \(D=\operatorname{diag}(d_{1},d_{2},\ldots,d_{K})\) such that \(DADh=1_{K}\) and \(d_{k}>0\), \(1\leq k\leq K\)._
We apply Lemma B.1 with \(h=(h_{1},\ldots,h_{K})^{\prime}\) and \(A=P\) to construct a diagonal matrix \(D=\operatorname{diag}(d_{1},\ldots,d_{K})\) satisfying \(DADh=1_{K}\). Note that \(P\) has positive diagonal entries since \(\Omega\) does.
Define \(P^{*}=DPD\) and \(D^{*}=\operatorname{diag}(d_{1}^{*},\ldots,d_{n}^{*})\in\mathbb{R}^{n}\) where
\[d_{i}^{*}\equiv d_{k}\qquad\text{if }i\in\mathcal{C}_{k}\]
Observe that
\[\Pi D^{-1}=(D^{*})^{-1}\Pi.\]
Define \(\Theta^{*}=\Theta(D^{*})^{-1}\), and let \(\theta^{*}=\operatorname{diag}(\Theta^{*})\). Next, let \(\overline{\Theta}=\frac{n}{\|\theta^{*}\|_{1}}\cdot\Theta^{*}\), let \(\overline{\theta}=\operatorname{diag}(\overline{\Theta})\), and let \(\overline{P}=\frac{\|\theta^{*}\|_{1}^{2}}{n^{2}}\cdot P^{*}\). Note that \(\|\overline{\theta}\|_{1}=n\) and \(\overline{P}h\propto\mathbf{1}_{K}\).
Using the previous definitions and observations, we have
\[\Omega=\Theta\Pi D^{-1}DPDD^{-1}\Pi^{\prime}\Theta=\Theta^{*}\Pi P^{*}\Pi^{ \prime}\Theta^{*}=\overline{\Theta}\Pi\overline{P}\Pi^{\prime}\overline{\Theta}\]
which justifies existence.
To justify uniqueness, suppose that
\[\Omega=\Theta^{(1)}\Pi P^{(1)}\Pi^{\prime}\Theta^{(1)}=\Theta^{(2)}\Pi P^{(2)} \Pi^{\prime}\Theta^{(2)},\]
where \(\theta^{(i)}=\operatorname{diag}(\Theta^{(i)})\) satisfy \(\|\theta^{(i)}\|_{1}=n\) for \(i=1,2\) and
\[P^{(1)}h\propto\mathbf{1}_{K},\qquad P^{(2)}h\propto\mathbf{1}_{K}.\]
Observe that
\[\Pi P^{(1)}\Pi^{\prime}\mathbf{1}_{n}=\alpha^{(1)}n\cdot\mathbf{1}_{n},\qquad \Pi P^{(2)}\Pi^{\prime}\mathbf{1}_{n}=\alpha^{(2)}n\cdot\mathbf{1}_{n}.\]
Figure 5: Power comparison of SgnQ and \(\chi^{2}\) (\(n=500\), \(N=22\), 50 repetitions). We consider a 2-community SBM with \(P_{11}=a\), \(P_{22}=c\), \(P_{12}=c\) (left) and \(P_{12}=\frac{an-(a+c)N}{n}\) (right plot, the case of degree matching) where \(c=0.05\) (top row) and \(c=0.20\) (bottom row).
for positive constants \(\alpha^{(i)},i\in\{1,2\}\). Since \(\Omega\) has nonnegative entries and positive diagonal elements, by Lemma B.1, there exists a unique diagonal matrix \(D\) such that
\[D\Omega D\mathbf{1}_{n}=\mathbf{1}_{n}.\]
We see that taking \(D=\frac{1}{\sqrt{\alpha^{(i)}n}}(\Theta^{(i)})^{-1}\) satisfies this equation for \(i=1,2\), and therefore by uniqueness,
\[\frac{1}{\sqrt{\alpha^{(1)}n}}(\Theta^{(1)})^{-1}=\frac{1}{\sqrt{\alpha^{(2)} n}}(\Theta^{(2)})^{-1}.\]
Since \(\|\theta^{(1)}\|_{1}=\|\theta^{(2)}\|_{1}=n\), further we have \(\alpha^{(1)}=\alpha^{(2)}\), and hence
\[\Theta^{(1)}=\Theta^{(2)}.\]
It follows that
\[\Pi P^{(1)}\Pi^{\prime}=\Pi P^{(2)}\Pi^{\prime},\]
which, since we assume \(h_{i}>0\) for \(i=1,\ldots,K\), further implies that \(P^{(1)}=P^{(2)}\).
## Appendix C Proof of Theorem 2.1 (Limiting null of the SgnQ statistic)
Consider a null DCBM with \(\Omega=\theta^{*}(\theta^{*})^{\prime}\). Note that this is a different choice of parameterization than the one we study in the main paper. In (Jin et al., 2021c, Theorem 2.1) it is shown that the asymptotic distribution of \(\psi_{n}\), the standardized version of SgnQ, is standard normal provided that
\[\|\theta^{*}\|\to\infty,\quad\theta^{*}_{max}\to 0,\quad\text{and}\quad(\| \theta^{*}\|^{2}/\|\theta^{*}\|_{1})\sqrt{\log(\|\theta\|_{1}^{*})}\to 0.\] (C.1)
We verify that, in a DCBM with \(\Omega=\alpha\theta\theta^{\prime}\) and \(\|\theta\|_{1}=n\), these conditions are implied by the assumptions in (2.5), restated below:
\[n\alpha\to\infty,\qquad\text{and}\qquad\alpha\theta^{2}_{\max}\log(n^{2}\alpha)\to 0\] (C.2)
In the parameterization of Jin et al. (2021c), we have \(\theta^{*}=\sqrt{\alpha}\theta\). First, \(\|\theta^{*}\|^{2}\to\infty\) because by (C.2),
\[\|\theta^{*}\|^{2}\geq\frac{1}{n}\cdot\|\theta^{*}\|_{1}^{2}=\alpha n\to\infty.\]
Next, \(\theta^{*}_{\max}\to 0\) because by (C.2),
\[\theta_{\max}=\sqrt{\alpha}\theta_{\max}\to 0.\]
To show the last part of (C.1), note that
\[(\|\theta^{*}\|^{2}/\|\theta^{*}\|_{1})\sqrt{\log(\|\theta\|_{1}^{*})}\leq \sqrt{\alpha}\theta_{\max}\sqrt{\log(\sqrt{\alpha}n)}=\frac{1}{\sqrt{2}}\sqrt{ \alpha}\theta_{\max}\sqrt{\log(\alpha n^{2})}\to 0\]
by (C.2). Thus (C.1) holds, and \(\psi_{n}\) is asymptotically standard normal under the null.
## Appendix D Proof of Lemma 2.2 (Properties of \(\tilde{\Omega}\))
**Lemma**.: _The rank and trace of the matrix \(\widetilde{\Omega}\) are \((K-1)\) and \(\|\theta\|^{2}{\rm diag}(\tilde{P})^{\prime}g\), respectively. When \(K=2\), \(\tilde{\lambda}_{1}={\rm trace}(\widetilde{\Omega})=\|\theta\|^{2}(ac-b^{2})( d_{0}^{2}g_{1}+d_{1}^{2}g_{0})/(ad_{1}^{2}+2bd_{0}d_{1}+cd_{0}^{2})\)._
**Proof of Lemma 2.2**. By basic algebra,
\[\widetilde{\Omega}=\Theta\Pi\widetilde{P}\Pi^{\prime}\Theta,\qquad\text{where } \widetilde{P}=(P-(d^{\prime}Pd)^{-1}Pdd^{\prime}P).\]
It is seen \(\widetilde{P}d=Pd-(d^{\prime}Pd)^{-1}Pdd^{\prime}Pd=0\), so \({\rm rank}(\widetilde{P})\leq K-1\). At the same time, since for any matrix \(A\) and \(B\) of the same size, \({\rm rank}(A+B)\leq{\rm rank}(A)+{\rm rank}(B)\), it follows \(\widetilde{P}\geq(K-1)\), as \({\rm rank}(P)=K\) and \({\rm rank}(Pdd^{\prime}P)\leq 1\). This proves that \({\rm rank}(\widetilde{P})=K-1\).
At the same time, since for any matrices \(A\) and \(B\), \(\mathrm{trace}(AB)=\mathrm{trace}(BA)\),
\[\mathrm{trace}(\widetilde{\Omega})=\mathrm{trace}(\widetilde{P}\Pi^{\prime} \Theta^{2}\Pi)=\|\theta\|^{2}\mathrm{trace}(\widetilde{P}G)=\|\theta\|^{2} \mathrm{diag}(\tilde{P})^{\prime}g.\]
This proves the second item of the lemma.
Last, when \(K=2\), \(\widetilde{\Omega}\) is rank \(1\), and its eigenvalue is the same as its trace. First
\[(\tilde{P})_{11} =a-\frac{(ad_{1}+bd_{0})^{2}}{ad_{1}^{2}+2bd_{0}d_{1}+cd_{0}^{2} }=(ac-b^{2})\frac{d_{0}^{2}}{ad_{1}^{2}+2bd_{0}d_{1}+cd_{0}^{2}}\] \[(\tilde{P})_{22} =c-\frac{(bd_{1}+cd_{0})^{2}}{ad_{1}^{2}+2bd_{0}d_{1}+cd_{0}^{2} }=(ac-b^{2})\frac{d_{1}^{2}}{ad_{1}^{2}+2bd_{0}d_{1}+cd_{0}^{2}}.\]
Thus
\[\tilde{\lambda}_{1}=\|\theta\|^{2}\mathrm{diag}(\tilde{P})^{\prime}g=\|\theta \|^{2}(ac-b^{2})\cdot\frac{d_{0}^{2}g_{1}+d_{1}^{2}g_{0}}{ad_{1}^{2}+2bd_{0}d_ {1}+cd_{0}^{2}}\]
This proves the last item and completes the proof of the lemma.
## Appendix E Proof of Theorem 2.2 (Power of the SgnQ test) and Corollary 2.1
### Setup and results
_Notation:_ Given sequences of real numbers \(A=A_{n}\) and \(B=B_{n}\), we write \(A\lesssim B\) to signify that \(A=O(B)\), \(A\asymp B\) to signify that \(A\lesssim B\) and \(B\lesssim A\), and \(A\sim B\) to signify that \(A/B=1+o(1)\).
Throughout this section, we consider a DCBM with parameters \((\Theta,P)\) where \(P\in\mathbb{R}^{2\times 2}\) has unit diagonals, and we analyze the behavior of SgnQ under the alternative. At the end of this subsection we explain how Theorem 2.2 and Corollary 2.1 follow from the results described next. Our results hinge on
\[\tilde{\lambda}\equiv\tilde{\lambda}_{1}=\mathrm{tr}(\tilde{\Omega}).\]
Given a subset \(U\subset[n]\), let \(\theta_{U}\in\mathbb{R}^{|U|}\) denote the restriction of \(\theta\) to the coordinates of \(U\). For notational convenience, we let \(S=\{i:\pi_{i}(1)=1\}\), which was previously written as \(\mathcal{C}_{1}\) in the main paper.
In a DCBM where \(P\) has unit diagonals, our main results hold under the following conditions.
\[\Omega_{ij} \lesssim\theta_{i}\theta_{j}\] (E.1) \[\|\theta\|_{\infty} =O(1),\text{ and }\] (E.2) \[\|\theta\|_{2}^{2} \to\infty.\] (E.3) \[(\|\theta\|_{2}^{2}/\|\theta\|_{1})\sqrt{\log(\|\theta\|_{1})} \to 0.\] (E.4)
First we justify that these assumptions are satisfied by an equivalent DCBM with the same \(\Omega\) represented with the parameterization (2.1) and satisfying (2.7). Thus all results proved in this section transfer immediately to the main paper.
**Lemma E.1**.: _Consider a DCBM with parameters \((\Theta^{*},P^{*})\) satisfying (2.1) and satisfying (2.7). Define \(\Theta=\mathrm{diag}(\theta)\) where_
\[\theta_{i}=\begin{cases}\sqrt{a}\theta_{i}^{*}&\text{ if }i\in S\\ \sqrt{c}\theta_{i}^{*}&\text{ if }i\in S^{c},\end{cases}\]
_and_
\[P=\begin{pmatrix}1&\frac{b}{\sqrt{ac}}\\ \frac{b}{\sqrt{ac}}&1\end{pmatrix}.\]
_Then_
\[\Omega=\Theta\Pi P\Pi\Theta=\Theta^{*}\Pi P^{*}\Pi^{\prime}\Theta^{*}\]
_and (E.1)-(E.4) are satisfied._
Proof.: The statement regarding \(\Omega\) follows by basic algebra. (E.1) follows if we can show that
\[\frac{b}{\sqrt{ac}}\lesssim 1.\] (E.5)
Since
\[b=\frac{cn-(a+c)N}{n-2N}=c\cdot\frac{n-N}{n-2N}-a\cdot\frac{N}{n-2N},\]
we have \(a\geq c\gtrsim b\), so (E.5) follows.
Next, (E.2) follows directly from \(a\theta^{2}_{\max,1}\lesssim 1\) since \(c\theta^{2}_{\max,0}=o(1)\) by (2.7).
For (E.3),
\[\|\theta\|_{2}^{2}\geq\frac{1}{n}\cdot\|\theta\|_{1}^{2}\geq cn\to\infty\]
by (2.7).
For the last part, note that
\[b=c\cdot\frac{n-N}{n-2N}-a\cdot\frac{N}{n-2N}\geq 0\Rightarrow a\varepsilon \lesssim c.\]
Thus,
\[\frac{\|\theta\|_{2}^{2}}{\|\theta\|_{1}} =\frac{a\|\theta_{S}^{*}\|_{2}^{2}+c\|\theta_{S^{*}}^{*}\|_{2}^{ 2}}{\sqrt{a}\|\theta_{S}^{*}\|_{1}+\sqrt{c}\|\theta_{S^{*}}^{*}\|_{1}}\lesssim \frac{a(N/n)\|\theta_{S^{*}}^{*}\|_{2}^{2}+c\|\theta_{S^{*}}^{*}\|_{2}^{2}}{ \sqrt{c}\|\theta_{S^{*}}^{*}\|_{1}}\] \[\lesssim\frac{c\|\theta_{S^{*}}^{*}\|_{2}^{2}}{\sqrt{c}\|\theta_{ S^{*}}^{*}\|_{1}}\lesssim\sqrt{c}\theta_{\max,0}=o\big{(}\frac{1}{\sqrt{\log cn^{2}}} \big{)}=o\big{(}\frac{1}{\sqrt{\log(\|\theta\|_{1})}}\big{)},\]
which implies (E.4). Above we use that \(a\geq c\) and \(g_{1}\asymp d_{1}\asymp N/n\), by assumption. Precisely, in the first line, we used
\[a\|\theta_{S}^{*}\|_{2}^{2}\asymp a\cdot(1-N/n)^{-1}\frac{N}{n}\|\theta_{S^{ *}}^{*}\|_{2}^{2}\lesssim c\|\theta_{S^{*}}^{*}\|_{2}^{2},\]
and in the second line we used
\[\|\theta\|_{1}\geq\sqrt{c}\|\theta_{S^{*}}^{*}\|_{1}\asymp\sqrt{c}(1-N/n)^{- 1}\|\theta^{*}\|_{1}\asymp\sqrt{c}n.\]
With Lemma E.1 in hand, we restrict in the remainder of this section to the setting where \(P\) has unit diagonals and (E.1)-(E.4) are satisfied.
Define \(v_{0}=\mathbf{1}^{\prime}\Omega\mathbf{1}\), and let \(\eta^{*}=(1/\sqrt{v_{0}})\Omega\mathbf{1}\). Recall \(\tilde{\Omega}=\Omega-\eta^{*}\eta^{*\mathsf{T}}\), and \(\tilde{\lambda}=\operatorname{tr}(\tilde{\Omega})\). Our main result concerning the alternative is the following.
**Theorem E.1** (Limiting behavior of SgnQ test statistic).: _Suppose that the previous assumptions hold and that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\). Then under the null hypothesis, as \(n\to\infty\), \(\mathbb{E}[Q]\sim 2\|\theta\|_{2}^{4}\), \(\operatorname{Var}(Q)\sim\sqrt{8}\|\theta\|_{2}^{2}\), and \((Q-\mathbb{E}Q)/\sqrt{\operatorname{Var}(Q)}\to N(0,1)\) in law. Under the alternative hypothesis, as \(n\to\infty\), \(\mathbb{E}Q\sim\tilde{\lambda}^{4}\) and \(\operatorname{Var}(Q)\lesssim|\tilde{\lambda}|^{6}+|\tilde{\lambda}|^{2} \lambda_{1}^{3}=o(\tilde{\lambda}^{8})\)._
Following Jin et al. (2021c), we introduce some notation:
\[\widetilde{\Omega}=\Omega-(\eta^{*})(\eta^{*})^{\prime},\qquad \text{where}\quad\eta^{*}=\frac{1}{\sqrt{v_{0}}}\Omega\mathbf{1}_{n},\ \ v_{0}=\mathbf{1}^{\prime}_{n}\Omega\mathbf{1}_{n};\] \[\delta_{ij}=\eta_{i}(\eta_{j}-\tilde{\eta}_{j})+\eta_{j}(\eta_{i }-\tilde{\eta}_{i}),\qquad\text{where}\quad\eta=\frac{1}{\sqrt{v}}(\mathbb{E}A) \mathbf{1}_{n},\ \ \tilde{\eta}=\frac{1}{\sqrt{v}}A\mathbf{1}_{n},\ \ v=\mathbf{1}^{\prime}_{n}(\mathbb{E}A)\mathbf{1}_{n};\] \[r_{ij}=(\eta_{i}^{*}\eta_{j}^{*}-\eta_{i}\eta_{j})-(\eta_{i}- \tilde{\eta}_{i})(\eta_{j}-\tilde{\eta}_{j})+(1-\frac{v}{V})\tilde{\eta}_{i} \tilde{\eta}_{j},\qquad\text{where}\ \ V=\mathbf{1}^{\prime}_{n}A\mathbf{1}_{n}.\]
The _ideal_ and _proxy_ SgnQ statistics, respectively, are defined as follows:
\[\widetilde{Q}_{n}=\sum_{i,j,k,\ell(dist)}(\widetilde{\Omega}_{ij}+W_{ij})( \widetilde{\Omega}_{jk}+W_{jk})(\widetilde{\Omega}_{k\ell}+W_{k\ell})( \widetilde{\Omega}_{\ell i}+W_{\ell i})\] (E.6)
\[Q_{n}^{*}=\sum_{i,j,k,\ell(dist)}(\widetilde{\Omega}_{ij}+W_{ij}+ \delta_{ij})(\widetilde{\Omega}_{jk}+W_{jk}+\delta_{jk})(\widetilde{\Omega}_{k \ell}+W_{k\ell}+\delta_{k\ell})(\widetilde{\Omega}_{\ell i}+W_{\ell i}+\delta_{ \ell i}).\] (E.7)
Moreover, we can express the original or _real_ SgnQ as
\[Q_{n}=\sum_{i,j,k,\ell(dist)}\bigg{[}(\widetilde{\Omega}_{ij}+W_ {ij}+\delta_{ij}+r_{ij})(\widetilde{\Omega}_{jk}+W_{jk}+\delta_{jk}+r_{jk})\\ (\widetilde{\Omega}_{k\ell}+W_{k\ell}+\delta_{k\ell}+r_{k\ell})( \widetilde{\Omega}_{\ell i}+W_{\ell i}+\delta_{\ell i}+r_{\ell i})\bigg{]}.\]
The next theorems handle the behavior of these statistics. Together the results imply Theorem E.1. Again, the analysis of the null carries over directly from Jin et al. (2021c), so we only need to study the alternative. The claims regarding the alternative follow from Lemmas E.7-E.12 below.
**Theorem E.2** (Ideal SgnQ test statistic).: _Suppose that the previous assumptions hold and that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\). Then under the null hypothesis, as \(n\to\infty\), \(\mathbb{E}[\tilde{Q}]=0\) and \(\mathrm{Var}(\tilde{Q})=8\|\theta\|_{2}^{8}\cdot[1+o(1)]\). Furthermore, under the alternative hypothesis, as \(n\to\infty\), \(\mathbb{E}[\tilde{Q}]\sim\tilde{\lambda}^{4}\) and \(\mathrm{Var}(\tilde{Q})\lesssim\lambda_{1}^{4}+|\tilde{\lambda}|^{6}=o(\tilde {\lambda}^{8})\)._
**Theorem E.3** (Proxy SgnQ test statistic).: _Suppose that the previous assumptions hold and that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\). Then under the null hypothesis, as \(n\to\infty\), \(|\mathbb{E}[\tilde{Q}-Q^{*}]|=o(\|\theta\|_{2}^{4})\) and \(\mathrm{Var}(\tilde{Q}-Q^{*})=o(\|\theta\|_{2}^{8})\). Furthermore, under the alternative hypothesis, as \(n\to\infty\), \(|\mathbb{E}[\tilde{Q}-Q^{*}]|\lesssim|\tilde{\lambda}|^{2}\lambda_{1}=o(\tilde {\lambda}^{4})\) and \(\mathrm{Var}(\tilde{Q}-Q^{*})\lesssim|\tilde{\lambda}|^{2}\lambda_{1}^{3}+| \tilde{\lambda}|^{6}=o(\tilde{\lambda}^{8})\)._
**Theorem E.4** (Real SgnQ test statistic).: _Suppose that the previous assumptions hold and that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\). Then under the null hypothesis, as \(n\to\infty\), \(|\mathbb{E}[Q-\tilde{Q}]|=o(\|\theta\|_{2}^{4})\) and \(\mathrm{Var}(Q-\tilde{Q})=o(\|\theta\|_{2}^{8})\). Furthermore, under the alternative hypothesis, as \(n\to\infty\), \(|\mathbb{E}[Q-Q^{*}]|\lesssim|\tilde{\lambda}|^{2}\lambda_{1}=o(\tilde{\lambda }^{4})\) and \(\mathrm{Var}(Q-Q^{*})\lesssim|\tilde{\lambda}|^{2}\lambda_{1}^{3}=o(\tilde{ \lambda}^{8})\)._
The previous work Jin et al. (2021c) establishes that under the assumptions above, if \(\|\theta_{S}\|_{1}/\|\theta\|_{1}\asymp 1\), then SgnQ distinguishes the null and alternative provided that \(|\lambda_{2}|/\sqrt{\lambda_{1}}\to\infty\). To compare with the results above, note that \(\lambda_{2}>\tilde{\lambda}\) if \(\|\theta_{S}\|_{1}/\|\theta\|_{1}\asymp 1\) (c.f. Lemma E.5 of Jin et al. (2021c)). Thus when \(K=2\), our main result extends the upper bound of Jin et al. (2021c) to the case when \(\|\theta_{S}\|_{1}/\|\theta\|_{1}=o(1)\). We note that \(|\tilde{\lambda}|\gtrsim|\lambda_{2}|\) in general (see Lemma E.3 and Corollary E.1).
The theorems above apply to the symmetric SBM. Recall that in this model,
\[\Omega_{ij}=\begin{cases}a&\text{if }i,j\in S\\ c&\text{if }i,j\notin S\\ \tilde{b}=\frac{nc(a+c)N}{n-2N}&\text{otherwise.}\end{cases}\]
where \(N=|S|\) and \(a,b,c\in(0,1)\). To obtain this model from our DCBM, set
\[P=\begin{pmatrix}1&\tilde{b}/\sqrt{ac}\\ \tilde{b}/\sqrt{ac}&1\end{pmatrix},\] (E.8)
and
\[\theta=\sqrt{a}\mathbf{1}_{S}+\sqrt{c}\mathbf{1}_{S^{c}}.\] (E.9)
The assumption (E.1) implies that \(\tilde{b}\lesssim\sqrt{ac}\), which is automatically satisfied since we assume \(a\geq c\).
In SBM, it holds that \(\lambda_{2}=\tilde{\lambda}\) (see Lemma E.3). Furthermore, explicit calculations in Section E.5 reveal that
\[\lambda_{1}\sim nc,\text{ and }\] (E.10) \[\lambda_{2}=\tilde{\lambda}\sim N(a-c).\]
In addition, with \(P,a,\tilde{b},c\) as above, if we have
\[\theta_{i}=\begin{cases}\rho_{i}\sqrt{a}&\text{if }i\in S\\ \rho_{i}\sqrt{c}&\text{if }i\notin S\end{cases}\]
for \(\rho>0\) with \(\rho_{\min}\gtrsim\rho_{\max}\) in the DCBM setting, a very similar calculation, which we omit, reveals that
\[\lambda_{1} \asymp nc,\text{ and}\] (E.11) \[\tilde{\lambda} \asymp N(a-c).\]
With the previous results of this subsection in hand (which are proved in the remaining subsections) we justify Theorem 2.2 and Corollary 2.1.
Proof of Theorem 2.2.: The SgnQ test has level \(\kappa\) by Theorem 2.1, so it remains to study the type II error. Using Theorem E.1 and Lemma E.1, the fact that the type II error tends to \(0\) directly follows from Chebyshev's inequality and the fact that \(\|\tilde{\eta}\|_{2}^{2}-1\approx\|\theta\|_{2}^{2}\) with high probability. In particular, note that since \(|\tilde{\lambda}|\gg\sqrt{\lambda_{1}}\), the expectation of SgnQ under the alternative is much larger than its standard deviation, under the null or alternative. We omit the details as they are very similar to the proof of Theorem 2.6 in (Jin et al., 2021c, Supplement,pgs. 5-6).
Proof of Corollary 2.1.: This result follows immediately from (E.11) and Theorem 2.2.
### Preliminary bounds
Define \(v_{0}=\mathbf{1}^{\mathsf{T}}\Omega\mathbf{1}\), and let \(\eta^{*}=1/\sqrt{v_{0}}\cdot\Omega\mathbf{1}\). For the analysis of SgnQ, it is important is to understand \(\tilde{\Omega}=\Omega-\eta^{*}\eta^{*\mathsf{T}}\). The next lemma establishes that \(\tilde{\Omega}\) is rank one and has a simple expression when \(K=2\).
**Lemma E.2**.: _Let \(f=(\|\theta_{S^{c}}\|_{1},-\|\theta_{S}\|_{1})^{\mathsf{T}}\) It holds that_
\[\tilde{\Omega}=\frac{(1-b^{2})}{v_{0}}\cdot\Theta\Pi ff^{\mathsf{T}}\Pi^{ \mathsf{T}}\Theta.\]
Proof.: Let \(\rho_{0}=\|\theta_{S}\|_{1}\) and \(\rho_{1}=\|\theta_{S^{c}}\|_{1}\). Note that
\[(\Omega\mathbf{1})_{i}=\theta_{i}\sum_{j}\theta_{j}\pi_{i}^{\mathsf{T}}P\pi_{ j}=\begin{cases}\theta_{i}(\rho_{0}+b\rho_{1})&\text{ if }i\in S\\ \theta_{i}(b\rho_{0}+\rho_{1})&\text{ if }i\notin S.\end{cases}\]
Hence
\[v_{0}=\mathbf{1}^{\mathsf{T}}\Omega\mathbf{1}=\rho_{0}^{2}+2b\rho_{0}\rho_{1} +\rho_{1}^{2}.\]
If \(i,j\in S\), then
\[\tilde{\Omega}_{ij}=\theta_{i}\theta_{j}\big{(}1-\frac{(\rho_{0}+b\rho_{1})^ {2}}{v_{0}}\big{)}=\theta_{i}\theta_{j}\cdot\frac{(1-b^{2})\rho_{1}^{2}}{v_{0 }}\]
Similarly if \(i\in S\) and \(j\notin S\),
\[\tilde{\Omega}_{ij}=\theta_{i}\theta_{j}\big{(}b-\frac{(\rho_{0}+b\rho_{1})(b \rho_{0}+\rho_{1})}{v_{0}}\big{)}=-\theta_{i}\theta_{j}\cdot\frac{(1-b^{2}) \rho_{0}\rho_{1}}{v_{0}}\]
and
\[\tilde{\Omega}_{ij}=\theta_{i}\theta_{j}\big{(}1-\frac{(b\rho_{0}+\rho_{1})^ {2}}{v_{0}}\big{)}=\theta_{i}\theta_{j}\cdot\frac{(1-b^{2})\rho_{0}^{2}}{v_{0}}\]
if \(i,j\in S^{c}\). The claim follows.
Let
\[w=\Theta\Pi f=\theta_{S}\|\theta_{S^{c}}\|_{1}-\theta_{S^{c}}\|\theta_{S}\|_ {1}=\rho_{1}\theta_{S}-\rho_{0}\theta_{S^{c}}\]
Using the previous lemma, we have the rank one eigendecomposition
\[\tilde{\Omega}=\tilde{\lambda}\tilde{\xi}\tilde{\xi}^{\mathsf{T}},\] (E.12)
where we define
\[\tilde{\xi} =\frac{\rho_{1}\theta_{S}-\rho_{0}\theta_{S^{c}}}{\|\rho_{1}\theta_{S }-\rho_{0}\theta_{S^{c}}\|_{2}}=\frac{\rho_{1}\theta_{S}-\rho_{0}\theta_{S^{c}} }{\sqrt{\rho_{1}^{2}\|\theta_{S}\|_{2}^{2}+\rho_{0}^{2}\|\theta_{S^{c}}\|_{2}^{2 }}},\text{ and}\] (E.13) \[\tilde{\lambda} =\frac{(1-b^{2})}{v_{0}}\cdot\big{(}\rho_{1}^{2}\|\theta_{S}\|_{2 }^{2}+\rho_{0}^{2}\|\theta_{S^{c}}\|_{2}^{2}\big{)}.\] (E.14)
Lemma E.5 of Jin et al. (2021c) implies that if \(\|\theta_{S}\|_{1}/\|\theta\|_{1}\times 1\), then \(\lambda_{2}\asymp\tilde{\lambda}_{1}\). If \(\|\theta_{S}\|_{1}/\|\theta\|_{1}=o(1)\), then this guarantee may not hold. Below, in the case \(K=2\), we express \(\tilde{\lambda}\) in terms of the eigenvalues and eigenvectors of \(\Omega\). This allows us to compare \(\lambda_{2}\) with \(\tilde{\lambda}\) more generally, as in Corollary E.1.
**Lemma E.3**.: _Let \(\Omega\) have eigenvalues \(\lambda_{1},\lambda_{2}\) and eigenvectors \(\xi_{1},\xi_{2}\). Let \(\tilde{\lambda}\) denote the eigenvalue of \(\tilde{\Omega}\). Then_
\[\tilde{\lambda}=\frac{\lambda_{1}\lambda_{2}\big{(}\langle\xi_{1}, \mathbf{1}\rangle^{2}+\langle\xi_{2},\mathbf{1}\rangle^{2}\big{)}}{\lambda_{1 }\langle\xi_{1},\mathbf{1}\rangle^{2}+\lambda_{2}\langle\xi_{2},\mathbf{1} \rangle^{2}}.\] (E.15)
Proof.: By explicit computation,
\[\tilde{\Omega} =\Omega-\eta^{*}\eta^{*\mathsf{T}}\] \[=\lambda_{1}\big{(}1-\frac{\lambda_{1}\langle\xi_{1},\mathbf{1} \rangle^{2}}{v_{0}}\big{)}\xi_{1}\xi_{1}^{\mathsf{T}}+\lambda_{2}\big{(}1- \frac{\lambda_{2}\langle\xi_{2},\mathbf{1}\rangle^{2}}{v_{0}}\big{)}\xi_{2} \xi_{2}^{\mathsf{T}}-\frac{\lambda_{1}\lambda_{2}\langle\xi_{1},\mathbf{1} \rangle\langle\xi_{2},\mathbf{1}\rangle}{v_{0}}\big{(}\xi_{1}\xi_{2}^{\mathsf{ T}}\xi_{2}+\xi_{1}^{\mathsf{T}}\big{)}\] \[=\frac{\lambda_{1}\lambda_{2}}{v_{0}}\big{(}\langle\xi_{2}, \mathbf{1}\rangle\xi_{1}+\langle\xi_{1},\mathbf{1}\rangle\xi_{2}\big{)}\cdot \big{(}\langle\xi_{2},\mathbf{1}\rangle\xi_{1}+\langle\xi_{1},\mathbf{1} \rangle\xi_{2}\big{)}^{\mathsf{T}}.\]
From (E.13) and (E.14), it follows that
\[\tilde{\xi}=\frac{\langle\xi_{2},\mathbf{1}\rangle\xi_{1}+\langle \xi_{1},\mathbf{1}\rangle\xi_{2}}{\sqrt{\langle\xi_{1},\mathbf{1}\rangle^{2} +\langle\xi_{2},\mathbf{1}\rangle^{2}}}\] \[\tilde{\lambda}=\frac{\lambda_{1}\lambda_{2}}{v_{0}}\big{(}\langle \xi_{1},\mathbf{1}\rangle^{2}+\langle\xi_{2},\mathbf{1}\rangle^{2}\big{)}.\]
**Corollary E.1**.: _It holds that_
\[|\lambda_{2}|\lesssim|\tilde{\lambda}|\lesssim\lambda_{1}.\] (E.16)
_If \(\lambda_{2}\geq 0\), then_
\[\lambda_{2}\leq\tilde{\lambda}\leq\lambda_{1}\] (E.17)
Proof.: Suppose that \(\lambda_{2}\geq 0\). Then
\[\lambda_{2}\big{(}\langle\xi_{1},\mathbf{1}\rangle^{2}+\langle \xi_{2},\mathbf{1}\rangle^{2}\big{)}\leq\lambda_{1}\langle\xi_{1},\mathbf{1} \rangle^{2}+\lambda_{2}\langle\xi_{2},\mathbf{1}\rangle^{2}=v_{0}\leq\lambda_ {1}\big{(}\langle\xi_{1},\mathbf{1}\rangle^{2}+\langle\xi_{2},\mathbf{1} \rangle^{2}\big{)},\]
implies (E.17).
Suppose that \(\lambda_{2}<0\). Note that
\[\lambda_{1}\big{(}\langle\xi_{1},\mathbf{1}\rangle^{2}+\langle \xi_{2},\mathbf{1}\rangle^{2}\big{)}\geq\lambda_{1}\langle\xi_{1},\mathbf{1} \rangle^{2}+\lambda_{2}\langle\xi_{2},\mathbf{1}\rangle^{2}=v_{0}\geq 0,\]
which combined with (E.15) implies that \(|\tilde{\lambda}|\geq|\lambda_{2}|\).
Next,
\[\lambda_{2}\leq\tilde{\xi}^{\mathsf{T}}\Omega\tilde{\xi}=\tilde{ \lambda}+\langle\tilde{\xi},\eta^{*}\rangle^{2},\]
which implies that
\[|\tilde{\lambda}|\leq|\lambda_{2}|+\langle\tilde{\xi},\eta^{*} \rangle^{2}\leq\lambda_{1}+\|\eta^{*}\|_{2}^{2}\lesssim\lambda_{1}+\|\theta\|_ {1}^{2}\lesssim\lambda_{1},\]
where the last inequality follows from Lemma E.5.
The next results are frequently used in our analyis of SgnQ.
**Lemma E.4**.: _Let \(v=\mathbf{1}^{\mathsf{T}}(\Omega-\operatorname{diag}(\Omega))\mathbf{1}\) and \(v_{0}=\mathbf{1}^{\mathsf{T}}\Omega\mathbf{1}\). Then_
\[v_{0}\sim v\sim\|\theta\|_{1}^{2}.\] (E.18)
Proof.: By (E.4), \(\|\theta\|_{2}^{2}=o(\|\theta\|_{1})\). By (E.3), \(\|\theta\|_{1}\to\infty\). Hence
\[v=\mathbf{1}^{\mathsf{T}}(\Omega-\operatorname{diag}(\Omega)) \mathbf{1}=\|\theta\|_{1}^{2}-\|\theta\|_{2}^{2}\sim\|\theta\|_{1}^{2}\sim v_{ 0}=\mathbf{1}^{\mathsf{T}}\Omega\mathbf{1}.\]
The next result is a direct corollary of Lemmas E.2 and E.4.
**Corollary E.2**.: _Define \(\beta\in\mathbb{R}^{n}\) by_
\[\beta=\sqrt{\frac{|1-b^{2}|}{v_{0}}}\cdot\left(\|\theta_{S^{c}}\|_{1}\mathbf{1 }_{S}+\|\theta_{S}\|_{1}\mathbf{1}_{S^{c}}\right)\] (E.19)
_Then_
\[|\tilde{\Omega}_{ij}|\lesssim\beta_{i}\theta_{i}\beta_{j}\theta_{j}.\] (E.20)
**Lemma E.5**.: _Let \(\lambda_{1}\) denote the largest eigenvalue of \(\Omega\). Then_
\[\lambda_{1}\gtrsim\|\theta\|_{2}^{2}.\] (E.21)
Proof.: Using the universal inequality \(a^{2}+b^{2}\geq\frac{1}{2}(a+b)^{2}\), we have
\[\lambda_{1} \geq\frac{\theta^{\mathsf{T}}\Omega\theta}{\|\theta\|_{2}^{2}} \geq\frac{1}{\|\theta\|_{2}^{2}}\cdot\sum_{i,j}\theta_{i}\theta_{j}\Omega_{ij }\geq\frac{1}{\|\theta\|_{2}^{2}}\cdot\big{(}\sum_{i,j\in S}\theta_{i}^{2} \theta_{j}^{2}+\sum_{i,j\notin S}\theta_{i}^{2}\theta_{j}^{2}\big{)}\] \[\geq\frac{\|\theta_{S}\|_{2}^{4}+\|\theta_{S^{c}}\|_{2}^{4}}{\| \theta\|_{2}^{2}}\gtrsim\|\theta\|_{2}^{2}.\]
**Lemma E.6**.: _Define \(\eta=\frac{1}{\sqrt{v}}(\Omega-\operatorname{diag}(\Omega))\mathbf{1}\). Then_
\[\eta_{i}\lesssim\eta_{i}^{*}\lesssim\theta_{i}\] (E.22)
Proof.: The left-hand side is immediate, so we prove that \(\eta_{i}^{*}\lesssim\theta_{i}\). We have
\[(\Omega\mathbf{1})_{i}=\begin{cases}\theta_{i}(\|\theta_{S}\|_{1}+b\|\theta_ {S^{c}}\|_{1})&\text{if }i\in S\\ \theta_{i}(b\|\theta_{S}\|_{1}+\|\theta_{S^{c}}\|_{1})&\text{if }i\notin S \end{cases}\]
Since \(\Omega_{ii}=\theta_{i}^{2}\),
\[\sqrt{v_{0}}\cdot\eta_{i}=\begin{cases}\theta_{i}(\|\theta_{S}\|_{1}+b\| \theta_{S^{c}}\|_{1})-\theta_{i}^{2}&\text{if }i\in S\\ \theta_{i}(b\|\theta_{S}\|_{1}+\|\theta_{S^{c}}\|_{1})-\theta_{i}^{2}&\text{ if }i\notin S.\end{cases}\]
Since \(b=O(1)\), \(\theta_{i}=O(1)\), and \(v_{0}\gtrsim\|\theta\|_{1}^{2}\) (c.f. Lemma E.4),
\[\eta_{i}^{*}\lesssim\frac{\theta_{i}\|\theta\|_{1}}{\sqrt{\|\theta\|_{1}^{2}} }=\theta_{i},\]
as desired.
We use the bounds (E.18) - (E.22) throughout. We also use repeatedly that
\[\|\theta\|_{p}^{p}\lesssim\|\theta\|_{q}^{q},\text{ if }p\geq q,\] (E.23)
which holds by (E.2), and
\[\|\beta\circ\theta\|_{2}^{2} =|\tilde{\lambda}|\] \[|\beta_{i}| \lesssim 1\] \[\|\beta\circ\theta^{\circ 2}\|_{1} \leq\|\beta\circ\theta\|_{2}\|\theta\|_{2}\lesssim\|\theta\|_{2}^ {2},\] (E.24)
where the second line holds by Cauchy-Schwarz.
### Mean and variance of SgnQ
The previous work Jin et al. (2021c) decomposes \(\tilde{Q}\) and \(\tilde{Q}-Q^{*}\) into a finite number of terms. For each term an exact expression for its mean and variance is derived in Jin et al. (2021c) that depends on \(\theta\), \(\eta\), \(v\), and \(\tilde{\Omega}\). These expression are then bounded using the inequalities (E.2), (E.3), (E.18), (E.21)-(E.23), as well as an inequality of the form
\[|\tilde{\Omega}_{ij}|\lesssim\alpha\theta_{i}\theta_{j}.\]
In our case, an inequality of this form is still valid, but it does not attain sharp results because it does not properly capture the signal \(|\tilde{\lambda}|\) from the smaller community. Instead, we use the inequality (E.20), followed by the bounds in (E.24) to handle terms involving \(\tilde{\Omega}\).
Therefore, for terms of \(\tilde{Q}\) and \(\tilde{Q}-Q^{*}\) that do not depend on \(\tilde{\Omega}\), the bounds in Jin et al. (2021c) carry over immediately. In particular, their analysis of the null hypothesis carries over directly. Hence we can focus solely on the alternative hypothesis.
Furthermore, any terms with zero mean in Jin et al. (2021c) also have zero mean in our setting : for every term that is mean zero, it is simply the sum of mean zero subterms, and each mean zero subterm is a product of independent, centered random variables (eg, \(X_{1}\) below).
#### e.3.1 Ideal SgnQ
The previous work Jin et al. (2021c) shows that \(\tilde{Q}=X_{1}+4X_{2}+4X_{3}+2X_{4}+4X_{5}+X_{6}\), where \(X_{1},\ldots,X_{6}\) are defined in their Section G.1. For convenience, we state explicitly the definitions of these terms.
\[X_{1} =\sum_{i,j,k,\ell(dist)}W_{ij}W_{jk}W_{k\ell}W_{\ell i}, X_{2} =\sum_{i,j,k,\ell(dist)}\widetilde{\Omega}_{ij}W_{jk}W_{k\ell}W_{ \ell i},\] \[X_{3} =\sum_{i,j,k,\ell(dist)}\widetilde{\Omega}_{ij}\widetilde{\Omega }_{jk}W_{k\ell}W_{\ell i}, X_{4} =\sum_{i,j,k,\ell(dist)}\widetilde{\Omega}_{ij}W_{jk}\widetilde{ \Omega}_{kk}W_{\ell i},\] \[X_{5} =\sum_{i,j,k,\ell(dist)}\widetilde{\Omega}_{ij}\widetilde{\Omega }_{jk}\widetilde{\Omega}_{k\ell}W_{\ell i}, X_{6} =\sum_{i,j,k,\ell(dist)}\widetilde{\Omega}_{ij}\widetilde{\Omega }_{jk}\widetilde{\Omega}_{k\ell}\widetilde{\Omega}_{\ell i}.\]
Since \(X_{1}\) does not depend on \(\tilde{\Omega}\), the bounds for \(X_{1}\) below are directly quoted from Lemma G.3 of Jin et al. (2021c). Also note that \(X_{6}\) is a non-stochastic term.
**Lemma E.7**.: _Under the alternative hypothesis, we have_
\[\mathbb{E}[X_{k}] =0\text{ for }1\leq k\leq 5,\] \[\operatorname{Var}(X_{1}) \lesssim\|\theta\|_{2}^{8}\lesssim\lambda_{1}^{4}\] \[\operatorname{Var}(X_{2}) \lesssim\|\beta\circ\theta\|_{2}^{4}\,\|\theta\|_{2}^{4}\lesssim| \tilde{\lambda}|^{2}\lambda_{1}^{2}\] \[\operatorname{Var}(X_{3}) \lesssim\|\beta\circ\theta\|_{8}^{8}\,\|\theta\|_{2}^{2}\lesssim| \tilde{\lambda}|^{4}\lambda_{1}\] \[\operatorname{Var}(X_{4}) \lesssim\|\beta\circ\theta\|_{2}^{8}\leq|\tilde{\lambda}|^{4}\] \[\operatorname{Var}(X_{5}) \lesssim\|\beta\circ\theta\|_{2}^{12}\lesssim|\tilde{\lambda}|^{ 6},\text{ and }\] \[\mathbb{E}[X_{6}] =X_{6}\sim|\tilde{\lambda}^{4}|\]
Since we assume \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\) under the alternative hypothesis, it holds that
\[\operatorname{Var}(\tilde{Q})\lesssim\lambda_{1}^{4}+|\tilde{\lambda}|^{6}.\]
Theorem E.2 follows directly from this bound and that \(\mathbb{E}X_{6}=\mathbb{E}\tilde{Q}\sim\tilde{\lambda}^{4}\).
#### e.3.2 Proxy SgnQ
The previous work Jin et al. (2021c) shows that
\[\bar{Q}-Q^{*}=U_{a}+U_{b}+U_{c},\]
where
\[U_{a} =4Y_{1}+8Y_{2}+4Y_{3}+8Y_{4}+4Y_{5}+4Y_{6}\] \[U_{b} =4Z_{1}+2Z_{2}+8Z_{3}+4Z_{4}+4Z_{5}+2Z_{6}\] \[U_{c} =4T_{1}+4T_{2}+F.\]
These terms are defined in Section G.2 of Jin et al. (2021c), and for convenience, we define them explicitly below. The previous equations are obtained by expanding carefully \(\tilde{Q}\) and \(Q^{*}\) as defined in (E.6) and (E.7). Thus, the terms on the right-hand-side above are referred as _post-expansion_ terms, and we can analyze each one individually. Now we proceed to their definitions.
First \(Y_{1},\ldots,Y_{6}\) are defined as follows.
\[Y_{1} =\sum_{i,j,k,\ell(dist)}\delta_{ij}W_{jk}W_{k\ell}W_{\ell i}, Y_{2} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\widetilde{\Omega}_{jk}W_{k \ell}W_{\ell i},\] \[Y_{3} =\sum_{i,j,k,\ell(dist)}\delta_{ij}W_{jk}\widetilde{\Omega}_{k \ell}W_{\ell i}, Y_{4} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\widetilde{\Omega}_{jk} \widetilde{\Omega}_{k\ell}W_{\ell i},\] \[Y_{5} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\widetilde{\Omega}_{jk}W_{k \ell}\widetilde{\Omega}_{\ell i}, Y_{6} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\widetilde{\Omega}_{jk} \widetilde{\Omega}_{k\ell}\widetilde{\Omega}_{\ell i}.\]
Next, \(Z_{1},\ldots,Z_{6}\) are defined as follows.
\[Z_{1} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\delta_{jk}W_{k\ell}W_{\ell i}, Z_{2} =\sum_{i,j,k,\ell(dist)}\delta_{ij}W_{jk}\delta_{k\ell}W_{\ell i},\] \[Z_{3} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\delta_{jk}\widetilde{\Omega}_ {k\ell}W_{\ell i}, Z_{4} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\widetilde{\Omega}_{jk}\delta_ {k\ell}W_{\ell i},\] \[Z_{5} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\delta_{jk}\widetilde{\Omega}_ {k\ell}\widetilde{\Omega}_{\ell i}, Z_{6} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\widetilde{\Omega}_{jk}\delta_ {k\ell}\widetilde{\Omega}_{\ell i}.\]
Last, we have the definitions of \(T_{1},T_{2}\), and \(F\).
\[T_{1} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\delta_{jk}\delta_{k\ell}W_{ \ell i}, T_{2} =\sum_{i,j,k,\ell(dist)}\delta_{ij}\delta_{jk}\delta_{k\ell} \widetilde{\Omega}_{\ell i},\] \[F =\sum_{i,j,k,\ell(dist)}\delta_{ij}\delta_{jk}\delta_{k\ell} \delta_{\ell i}.\]
The following post-expansion terms below appear in Lemma G.5 of Jin et al. (2021c). The term \(Y_{1}\) does not depend on \(\tilde{\Omega}\), so we may directly quote the result.
**Lemma E.8**.: _Under the alternative hypothesis, it holds that_
\[|\mathbb{E}Y_{1}| =0, \mathrm{Var}(Y_{1}) \lesssim\|\theta\|_{2}^{2}\|\,\|\theta\|_{3}^{6}\lesssim\lambda _{1}^{4}\] \[|\mathbb{E}Y_{2}| =0, \mathrm{Var}(Y_{2}) \lesssim\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{6}^{6}\lesssim| \tilde{\lambda}|\lambda_{1}^{3}\] \[|\mathbb{E}Y_{3}| =0, \mathrm{Var}(Y_{3}) \lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\lesssim| \tilde{\lambda}|^{2}\lambda_{1}^{2}\] \[|\mathbb{E}Y_{4}| \lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{2}\lesssim| \tilde{\lambda}|^{2}\lambda_{1}, \mathrm{Var}(Y_{4}) \lesssim\frac{\|\beta\circ\theta\|_{2}^{6}\|\theta\|_{2}^{6}}{\| \theta\|_{1}}\lesssim|\tilde{\lambda}|^{3}\lambda_{1}^{2}\] \[|\mathbb{E}Y_{5}| =0, \mathrm{Var}(Y_{5}) \lesssim\frac{\|\beta\circ\theta\|_{2}^{6}\|\theta\|_{2}^{4}}{\| \theta\|_{1}}\lesssim|\tilde{\lambda}|^{3}\lambda_{1}\] \[|\mathbb{E}Y_{6}| =0, \mathrm{Var}(Y_{6}) \lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}}{\| \theta\|_{1}}\lesssim|\tilde{\lambda}|^{6}.\]
As a result,
\[|\mathbb{E}U_{a}|\lesssim|\tilde{\lambda}|^{2}\lambda_{1}=o(\tilde{ \lambda}^{4}).\] (E.25)
Also using Corollary E.1 and that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\), we have
\[\mathrm{Var}(U_{a})\lesssim\lambda_{1}^{4}+|\tilde{\lambda}|^{3} \lambda_{1}^{2}+|\tilde{\lambda}|^{6}.\] (E.26)
The terms below appear in Lemma G.7 of Jin et al. (2021c). The bounds on \(Z_{1}\) and \(Z_{2}\) are quoted directly from Jin et al. (2021c).
**Lemma E.9**.: _Under the alternative hypothesis, it holds that_
\[|\mathbb{E}Z_{1}| \lesssim\|\theta\|_{2}^{4}\lesssim\lambda_{1}^{2}, \mathrm{Var}(Z_{1})\lesssim\|\theta\|_{2}^{2}\|\theta\|_{3}^{6} \lesssim\lambda_{1}^{4}\] \[|\mathbb{E}Z_{2}| \lesssim\|\theta\|_{2}^{4}\lesssim\lambda_{1}^{2}, \mathrm{Var}(Z_{2})\lesssim\frac{\|\theta\|_{2}^{6}\|\theta\|_{3}^ {3}}{\|\theta\|_{1}}\lesssim\lambda_{1}^{3}\] \[|\mathbb{E}Z_{3}| =0, \mathrm{Var}(Z_{3})\lesssim\|\beta\circ\theta\|_{2}^{2}\|\theta \|_{2}^{6}\lesssim|\tilde{\lambda}|^{2}\lambda_{1}^{3}\] \[|\mathbb{E}Z_{4}| \lesssim\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}\lesssim| \tilde{\lambda}|\lambda_{1}, \mathrm{Var}(Z_{4})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\| \theta\|_{2}^{6}}{\|\theta\|_{1}}\lesssim|\tilde{\lambda}|^{2}\lambda_{1}^{2}\] \[|\mathbb{E}Z_{5}| \lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{2}\lesssim| \tilde{\lambda}|^{2}\lambda_{1}, \mathrm{Var}(Z_{5})\lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\| \theta\|_{2}^{6}}{\|\theta\|_{1}^{2}}\lesssim|\tilde{\lambda}|^{4}\lambda_{1}\] \[|\mathbb{E}Z_{6}| \lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}}{\| \theta\|_{1}^{2}}\lesssim|\tilde{\lambda}|^{2}, \mathrm{Var}(Z_{6})\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\| \theta\|_{2}^{4}}{\|\theta\|_{1}^{2}}\lesssim|\tilde{\lambda}|^{4}.\]
Using Corollary E.1 and the fact that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\) under the alternative hypothesis, we have
\[|\mathbb{E}U_{b}|\lesssim|\tilde{\lambda}|^{2}\lambda_{1},\] (E.27)
and
\[\mathrm{Var}(U_{b})\lesssim|\tilde{\lambda}|^{2}\lambda_{1}^{3}.\] (E.28)
The terms below appear in Lemma G.9 of Jin et al. (2021c). The bounds on \(T_{1}\) and \(F\) are quoted directly from Jin et al. (2021c) since they do not depend on \(\tilde{O}mega\).
**Lemma E.10**.: _Under the alternative hypothesis, it holds that_
\[|\mathbb{E}T_{1}| \leq\frac{\|\theta\|_{2}^{6}}{\|\theta\|_{1}^{2}}\lesssim\lambda_ {1}, \mathrm{Var}(T_{1})\lesssim\frac{\|\theta\|_{2}^{2}\|\theta\|_{3}^{ 3}}{\|\theta\|_{1}}\lesssim\lambda_{1}^{3}\] \[|\mathbb{E}T_{2}| \leq\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{4}}{\| \theta\|_{1}^{2}}\lesssim|\tilde{\lambda}|, \mathrm{Var}(T_{2})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\| \theta\|_{2}^{8}}{\|\theta\|_{1}^{2}}\lesssim|\tilde{\lambda}|^{2}\lambda_{1}^ {2}\] \[|\mathbb{E}F| \lesssim\|\theta\|_{2}^{4}\lesssim\lambda_{1}^{2}, \mathrm{Var}(F)\lesssim\frac{\|\theta\|_{2}^{10}}{\|\theta\|_{1}^{2}} \lesssim\lambda_{1}^{3}\]
Using Corollary E.1 and the fact that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\) under the alternative hypothesis, we have
\[|\mathbb{E}U_{e}|\lesssim\lambda_{1}^{2},\] (E.29)
and
\[\mathrm{Var}(U_{c})\lesssim|\tilde{\lambda}|^{2}\lambda_{1}^{2}.\] (E.30)
Using Corollary E.1 and that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\), the inequalities (E.25)-(E.30) imply Theorem E.3.
#### e.3.3 Real SgnQ
Our first lemma regarding real SgnQ plays the part of Lemma G.11 from Jin et al. (2021c).
**Lemma E.11**.: _Under the previous assumptions, as \(n\to\infty\),_
* _Under the null hypothesis,_ \(|\mathbb{E}[Q^{*}-\tilde{Q}^{*}]|=o(\|\theta\|_{2}^{4})\) _and_ \(\mathrm{Var}(Q^{*}-\tilde{Q}^{*})=o(\|\theta\|_{2}^{8})\)_._
* _Under the alternative hypothesis, if_ \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\)_, then_ \(|\mathbb{E}[Q^{*}-\tilde{Q}^{*}]|\lesssim|\tilde{\lambda}|^{2}\lambda_{1}\) _and_ \(\mathrm{Var}(Q^{*}-\tilde{Q}^{*})\lesssim|\tilde{\lambda}|^{2}\lambda_{1}^{3}\)_._
The following lemma plays the part of Lemma G.12 from Jin et al. (2021c).
**Lemma E.12**.: _Under the previous assumptions, as \(n\to\infty\),_
* _Under the null hypothesis,_ \(|\mathbb{E}[Q-\tilde{Q}^{*}]|=o(\|\theta\|_{2}^{4})\) _and_ \(\mathrm{Var}(Q-\tilde{Q}^{*})=o(\|\theta\|_{2}^{8})\)_._
* _Under the alternative hypothesis, if_ \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\)_, then_ \(|\mathbb{E}[Q-\tilde{Q}^{*}]|\lesssim\lambda_{1}^{2}+|\tilde{\lambda}|^{3}\) _and_ \(\mathrm{Var}(Q-\tilde{Q}^{*})\lesssim\lambda_{1}^{4}\)_._
### Proofs of Lemmas E.7-E.12
#### e.4.1 Proof strategy
First we describe our method of proof for Lemmas E.7-E.10. We borrow the following strategy from Jin et al. (2021c). Let \(T\) denote a term appearing in one of the Lemmas E.7-E.10, which takes the general form
\[T=\sum_{i_{1},\ldots,i_{m}\in\mathcal{R}}c_{i_{1},\ldots,i_{m}}G_{i_{1}, \ldots,i_{m}}\]
where
* \(m=O(1)\),
* \(\mathcal{R}\) is a subset of \([n]^{m}\),
* \(c_{i_{1},\ldots,i_{m}}=\prod_{(s,s^{\prime})\in A}\Gamma_{i_{s},i_{s^{\prime}}} ^{(s,s^{\prime})}\) is a nonstochastic coefficient where \(A\subset[m]\times[m]\) and \(\Gamma^{(s,s^{\prime})}\in\{\tilde{\Omega},\eta^{*}\mathbf{1}^{\mathsf{T}}, \eta\mathbf{1}^{\mathsf{T}},\mathbf{1}\mathbf{1}^{\mathsf{T}}\}\), and
* \(G_{i_{1},\ldots,i_{m}}=\prod_{(s,s^{\prime})\in B}W_{i_{s},i_{s^{\prime}}}\) where \(B\subset[m]\times[m]\).
Since we are studying signed quadrilateral, one can simply take \(m=4\) above, though we wish to state the lemma in a general way.
Define a _canonical upper bound_\(\overline{\Gamma_{i_{s},i_{s^{\prime}}}^{(s,s^{\prime})}}\) (up to constant factor) on \(\Gamma_{i_{s},i_{s^{\prime}}}^{(s,s^{\prime})}\) as follows:
\[\overline{\Gamma_{i_{s},i_{s^{\prime}}}^{(s,s^{\prime})}}=\begin{cases} \beta_{i_{s}}\theta_{i_{s}}\beta_{i_{s^{\prime}}}\theta_{i_{s^{\prime}}}&\text { if }\Gamma^{(s,s^{\prime})}=\tilde{\Omega},\\ \theta_{i_{s}}&\text{ if }\Gamma^{(s,s^{\prime})}\in\{\eta^{*}\mathbf{1}^{ \mathsf{T}},\eta\mathbf{1}^{\mathsf{T}}\}\\ 1&\text{ otherwise.}\end{cases}\] (E.31)
Define
\[\overline{c_{i_{1},\ldots,i_{m}}}=\prod_{(s,s^{\prime})\in A}\overline{ \Gamma_{i_{s},i_{s^{\prime}}}^{(s,s^{\prime})}}.\] (E.32)
By Corollary E.1 and Lemma E.6,
\[|c_{i_{1},\ldots i_{m}}|\lesssim\overline{c_{i_{1},\ldots,i_{m}}}.\]
In Jin et al. (2021c), each term \(T\) is decomposed into a sum of \(L=O(1)\) terms:
\[T=\sum_{\ell=1}^{L}T^{(L)}=\sum_{\ell=1}^{L}\sum_{i_{1},\ldots,i_{m}\in \mathcal{R}^{(\ell)}}c_{i_{1},\ldots,i_{m}}G_{i_{1},\ldots,i_{m}}.\] (E.33)
In our analysis below and that of Jin et al. (2021c), an upper bound \(\overline{\mathbb{E}T}\) on \(|\mathbb{E}T|\) is obtained by
\[|\mathbb{E}T|\leq\sum_{\ell=1}^{L}|\mathbb{E}T^{(\ell)}|\leq\sum_{\ell=1}^{L} \ \sum_{i_{1},\ldots,i_{m}\in\mathcal{R}^{(\ell)}}|c_{i_{1},\ldots,i_{m}}|\cdot| \mathbb{E}G_{i_{1},\ldots,i_{m}}|\]
\[\leq\sum_{\ell=1}^{L}\sum_{i_{1},\ldots,i_{m}\in\mathcal{R}^{(\ell)}} \overline{c_{i_{1},\ldots,i_{m}}}\cdot|\mathbb{E}G_{i_{1},\ldots,i_{m}}|\] \[=:\overline{\mathbb{E}T}.\] (E.34)
Also an upper bound \(\overline{\mathrm{Var}T}\) on \(\mathrm{Var}T\) is obtained by
\[\mathrm{Var}T \leq L\sum_{\ell=1}^{L}\mathrm{Var}(T^{(\ell)})\] \[\leq L\sum_{\ell=1}^{L}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m }\in\mathcal{R}^{(\ell)}\\ i_{1}^{\prime},\ldots,i_{m}^{\prime}\in\mathcal{R}^{(\ell)}\end{subarray}}|c_{ i_{1},\ldots,i_{m}}c_{i_{1}^{\prime},\ldots,i_{m}^{\prime}}|\cdot\left|\mathrm{Cov} \big{(}G_{i_{1},\ldots,i_{m}},G_{i_{1}^{\prime},\ldots,i_{m}^{\prime}}\big{)}\right|\] \[\leq L\sum_{\ell=1}^{L}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m }\in\mathcal{R}^{(\ell)}\\ i_{1}^{\prime},\ldots,i_{m}^{\prime}\in\mathcal{R}^{(\ell)}\end{subarray}} \overline{c_{i_{1},\ldots,i_{m}}}\cdot\overline{c_{i_{1}^{\prime},\ldots,i_{m }^{\prime}}}\cdot\left|\mathrm{Cov}\big{(}G_{i_{1},\ldots,i_{m}},G_{i_{1}^{ \prime},\ldots,i_{m}^{\prime}}\big{)}\right|\] \[=:\overline{\mathrm{Var}T}.\] (E.35)
In Lemmas E.7-E.10, all stated upper bounds are obtained in this manner and are therefore upper bounds on \(\overline{\mathbb{E}T}\) and \(\overline{\mathrm{Var}T}\).
Note that the definition of \(\overline{\mathbb{E}T}\) and \(\overline{\mathrm{Var}T}\) depends on the specific decomposition (E.33) of \(T\) given in Jin et al. (2021c). Refer to the proofs below for details including the explicit decomposition. Again we remark that the difference between our setting and Jin et al. (2021c) is that the canonical upper bound on \(|\Omega_{ij}|\) used in Jin et al. (2021c) is of the form \(\alpha\theta_{i}\theta_{j}\) rather than the inequality \(\beta_{i}\theta_{i}\beta_{j}\theta_{j}\) which is required for our purposes.
The formalism above immediately yields the following useful fact that allows us to transfer bounds between terms that have similar structures.
**Lemma E.13**.: _Suppose that_
\[T=\sum_{i_{1},\ldots,i_{m}\in\mathcal{R}}c_{i_{1},\ldots,i_{m}} G_{i_{1},\ldots,i_{m}},\] \[T^{*}=\sum_{i_{1},\ldots,i_{m}\in\mathcal{R}}c_{i_{1},\ldots,i_{ m}}^{*}G_{i_{1},\ldots,i_{m}},\]
_where_
\[|c_{i_{1},\ldots,i_{m}}|\lesssim\overline{c_{i_{1},\ldots,i_{m}}^{*}}\]
_Then_
\[|\mathbb{E}T|\lesssim\overline{\mathbb{E}[T^{*}]}\]
_and_
\[\mathrm{Var}\,T\lesssim\overline{\mathrm{Var}\,T^{*}}.\]
In the second part of our analysis, we show that Lemmas E.11 and E.12 follow from Lemmas E.7-E.10 and repeated applications of Lemma E.13.
#### e.4.2 Proof of Lemma e.7
The bounds for \(X_{1}\) follow immediately from Jin et al. (2021c).
In (Jin et al., 2021c, Supplement, pg.37) it is shown that \(\mathbb{E}X_{2}=0\), and
\[\mathrm{Var}(X_{2})=2\sum_{i,j,k,\ell(dist.)}\tilde{\Omega}_{ij}^{2}\cdot \mathrm{Var}(W_{jk}W_{k\ell}W_{\ell i}).\]
Thus by (E.1) and (E.2),
\[\mathrm{Var}(X_{2}) \lesssim\sum_{i,j,k,\ell(dist.)}\tilde{\Omega}_{ij}^{2}\cdot \mathrm{Var}(W_{jk}W_{\ell k}W_{\ell i})\lesssim\sum_{i,j,k,\ell}\beta_{i}^{2} \theta_{i}^{2}\beta_{j}^{2}\theta_{j}^{2}\cdot\Omega_{jk}\Omega_{k\ell}\Omega_ {\ell i}\] \[\lesssim\sum_{i,j,k,\ell}\beta_{i}^{2}\theta_{i}^{2}\beta_{j}^{2 }\theta_{j}^{2}\cdot\theta_{j}\theta_{k}^{2}\theta_{\ell}^{2}\theta_{i}=\| \beta\circ\theta\|_{2}^{4}\,\|\theta\|_{2}^{4}\]
In (Jin et al., 2021c, Supplement, pg. 38) it is shown that \(\mathbb{E}X_{3}=0\) and
\[\mathrm{Var}(X_{3})\lesssim\sum_{i,k,\ell(dist.)}\big{(}\sum_{j\notin\{i,k, \ell\}}\bar{\Omega}_{ij}\bar{\Omega}_{jk}\big{)}^{2}\cdot\mathrm{Var}(W_{k\ell }W_{\ell i}).\]
By (E.20) and (E.24),
\[\big{(}\sum_{j\notin\{i,k,\ell\}}\bar{\Omega}_{ij}\bar{\Omega}_{jk}\big{)}^{2 }\leq\beta_{i}^{2}\theta_{i}^{2}\,\beta_{k}^{2}\theta_{k}^{2}\,\|\beta\circ \theta\|_{2}^{4}\]
Thus by (E.1) and (E.2),
\[\mathrm{Var}(X_{3})\lesssim\sum_{i,k,\ell}\beta_{i}^{2}\theta_{i}^{2}\,\beta_ {k}^{2}\theta_{k}^{2}\,\|\beta\circ\theta\|_{2}^{4}\cdot\Omega_{k\ell}\Omega_ {\ell i}\lesssim\sum_{i,k,\ell}\beta_{i}^{2}\theta_{i}^{3}\,\beta_{k}^{2} \theta_{k}^{3}\,\|\beta\circ\theta\|_{2}^{4}\cdot\theta_{\ell}^{2}\,\,\, \lesssim\|\beta\circ\theta\|_{2}^{8}\,\|\theta\|_{2}^{2}.\]
In (Jin et al., 2021c, Supplement, pg. 38) it is shown that \(\mathbb{E}X_{4}=0\) and
\[\mathrm{Var}(X_{4})\lesssim\sum_{i,j,k,\ell(dist.)}\bar{\Omega}_{ij}^{2}\bar {\Omega}_{k\ell}^{2}\cdot\mathrm{Var}(W_{jk}W_{\ell i}).\]
By (E.1) and (E.20),
\[\mathrm{Var}(X_{4})\lesssim\sum_{i,j,k,\ell}\beta_{i}^{2}\theta_{i}^{2}\beta_ {j}^{2}\theta_{j}^{2}\theta_{k}^{2}\beta_{\ell}^{2}\theta_{\ell}^{2}\cdot \theta_{j}\theta_{k}\theta_{\ell}\theta_{i}\lesssim\|\beta\circ\theta\|_{2}^ {8}.\]
In (Jin et al., 2021c, Supplement, pg. 39) it is shown that \(\mathbb{E}X_{5}=0\) and
\[\mathrm{Var}(X_{5})=2\sum_{i<\ell}\big{(}\sum_{\begin{subarray}{c}j,k\notin\{ i,\ell\}\\ j\neq k\end{subarray}}\bar{\Omega}_{ij}\bar{\Omega}_{jk}\bar{\Omega}_{k\ell} \big{)}^{2}\cdot\mathrm{Var}(W_{\ell i}).\]
We have
\[\big{|}\sum_{\begin{subarray}{c}j,k\notin\{i,\ell\}\\ j\neq k\end{subarray}}\bar{\Omega}_{ij}\bar{\Omega}_{jk}\bar{\Omega}_{k\ell} \big{|}\lesssim\beta_{i}\theta_{i}\|\beta\circ\theta\|_{2}^{4}\beta_{\ell} \theta_{\ell}.\]
Thus by (E.1) and (E.2),
\[\mathrm{Var}(X_{5})\lesssim\sum_{i,\ell}\big{(}\beta_{i}\theta_{i}\|\beta \circ\theta\|_{2}^{4}\beta_{\ell}\theta_{\ell}\big{)}^{2}\cdot\theta_{\ell} \theta_{i}\lesssim\|\beta\circ\theta\|_{2}^{12}.\]
Note that \(X_{6}\) is a nonstochastic term. Mimicking (Jin et al., 2021c, Supplement, pg. 39), we have by (E.24),
\[|X_{6}-\tilde{\lambda}^{4}|\lesssim\sum_{i,j,k,\ell(not\,dist.)}\beta_{i}^{2} \theta_{i}^{2}\beta_{j}^{2}\theta_{j}^{2}\theta_{k}^{2}\theta_{\ell}^{2} \lesssim\sum_{i,j,k}\beta_{i}^{2}\theta_{i}^{2}\beta_{j}^{2}\theta_{j}^{4} \theta_{k}^{4}\lesssim\|\beta\circ\theta\|_{2}^{6}\lesssim|\tilde{\lambda}|^{3}.\]
This completes the proof.
#### e.4.3 Proof of Lemma e.8
The bounds on \(Y_{1}\) carry over directly from (Jin et al., 2021c, Lemma G.5).
In (Jin et al., 2021c, Supplement, pg. 43) it is shown that \(\mathbb{E}Y_{2}=0\). To study \(\mathrm{Var}(Y_{2})\), we write \(Y=Y_{2a}+Y_{2b}+Y_{2c}\) where as in (Jin et al., 2021c, Supplement, pg. 43), we define
\[Y_{2} = -\frac{1}{\sqrt{v}}\sum_{i,j,k,\ell(dist)}\eta_{i}\widetilde{ \Omega}_{jk}W_{js}W_{k\ell}W_{\ell i}\] (E.36) \[-\frac{1}{\sqrt{v}}\sum_{i,k,\ell(dist)}^{\neq j}\Big{(}\sum_{j \notin\{i,k,\ell\}}\eta_{j}\widetilde{\Omega}_{jk}\Big{)}W_{i\ell}^{2}W_{k\ell}\] \[-\frac{1}{\sqrt{v}}\sum_{i,k,\ell(dist)}\Big{(}\sum_{j\notin\{i,k, \ell\}}\eta_{j}\widetilde{\Omega}_{jk}\Big{)}W_{is}W_{k\ell}W_{\ell i}\] \[\equiv Y_{2a}+Y_{2b}+Y_{2c}.\]
There it is shown that
\[\mathrm{Var}(Y_{2a})\lesssim\frac{1}{v}\sum_{ijk\ell s}\big{|}\eta_{i} \tilde{\Omega}_{jk}+\eta_{i}\tilde{\Omega}_{sk}+\eta_{k}\tilde{\Omega}_{ji}+ \eta_{k}\tilde{\Omega}_{si}\big{|}^{2}\cdot\mathrm{Var}(W_{js}W_{k\ell}W_{ \ell i}).\]
We have by (E.22)
\[\big{|}\eta_{i}\tilde{\Omega}_{jk}+\eta_{i}\tilde{\Omega}_{sk}+\eta_{k} \tilde{\Omega}_{ji}+\eta_{k}\tilde{\Omega}_{si}\big{|}\lesssim\theta_{i}\beta _{j}\theta_{j}\beta_{k}\theta_{k}+\theta_{i}\beta_{s}\theta_{s}\beta_{k}\theta _{k}+\theta_{k}\beta_{j}\theta_{j}\beta_{i}\theta_{i}+\theta_{k}\beta_{s} \theta_{s}\beta_{i}\theta_{i}.\]
Hence by (E.1), (E.2), and (E.18),
\[\mathrm{Var}(Y_{2a})\lesssim\frac{1}{v}\sum_{ijk\ell s}\big{(} \theta_{i}\beta_{j}\theta_{j}\theta_{k}\theta_{k}+\theta_{i}\beta_{s}\theta_{ s}\beta_{k}\theta_{k}+\theta_{k}\beta_{j}\theta_{j}\beta_{i}\theta_{i}+ \theta_{k}\beta_{s}\theta_{s}\beta_{i}\theta_{i}\big{)}^{2}\cdot\theta_{j} \theta_{s}\theta_{k}\theta_{\ell}^{2}\theta_{i}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\
Combining the results for \(Y_{2a},Y_{2b},Y_{2c}\) gives the claim for \(\mathrm{Var}(Y_{2})\).
In (Jin et al., 2021c, Supplement, pg.45) it is shown that \(\mathbb{E}Y_{3}=0\) and the decomposition
\[Y_{3} =-\frac{2}{\sqrt{v}}\sum_{i,j,k,\ell(dist)}\eta_{i}\widetilde{ \Omega}_{k\ell}W_{jk}^{2}W_{\ell i}-\frac{2}{\sqrt{v}}\sum_{\begin{subarray}{ c}i,j,k,\ell(dist)\\ s\notin\{j,k\}\end{subarray}}\eta_{i}\widetilde{\Omega}_{k\ell}W_{js}W_{jk}W_ {\ell i}\] \[\equiv Y_{3a}+Y_{3b},\] (E.37)
is introduced. There it is shown that
\[\mathrm{Var}(Y_{3a})=\frac{4}{v}\sum_{\begin{subarray}{c}i,j,k,\ell( dist)\\ i^{\prime},j^{\prime},k^{\prime},\ell^{\prime}(dist)\end{subarray}}(\eta_{i} \widetilde{\Omega}_{k\ell}\eta_{i^{\prime}}\widetilde{\Omega}_{k^{\prime}k^{ \prime}})\cdot\mathbb{E}[W_{jk}^{2}W_{\ell i}W_{j^{\prime}k^{\prime}}^{2}W_{ \ell^{\prime}i^{\prime}}].\]
Using (E.1), (E.2) (E.24) and the casework in (Jin et al., 2021c, Supplement, pg.45),
\[\mathrm{Var}(Y_{3a}) \lesssim\frac{1}{\|\theta\|_{1}^{2}}\bigg{(}\sum_{ijkl\ell}[ \beta_{k}^{2}\beta_{\ell}^{2}+\beta_{i}\beta_{j}\beta_{k}\beta_{\ell}]\theta_ {i}^{2}\theta_{j}^{2}\theta_{k}^{2}\theta_{\ell}^{2}+\sum_{ijk\ell j^{\prime }k^{\prime}}\beta_{k}\beta_{\ell}^{2}\beta_{k^{\prime}}\theta_{i}^{3}\theta_{j }\theta_{k}^{2}\theta_{j^{\prime}}^{2}\bigg{)}\] \[\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{4}}{\| \theta\|_{1}^{2}}+\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\lesssim\| \beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\]
Similar to the study of \(Y_{2a}\) we have
\[\mathrm{Var}(Y_{3b}) \lesssim\frac{1}{v}\sum_{ij\neq k\ell}\big{(}\theta_{i}\beta_{k} \theta_{k}\beta_{\ell}\theta_{\ell}+\theta_{\ell}\beta_{k}\theta_{k}\beta_{i} \theta_{i}+\theta_{i}\beta_{s}\theta_{s}\beta_{\ell}\theta_{\ell}+\theta_{ \ell}\beta_{s}\theta_{s}\beta_{i}\theta_{i}\big{)}^{2}\cdot\mathrm{Var}(W_{sj} W_{jk}W_{\ell i})\] \[\lesssim\frac{1}{v}\sum_{ij\neq k\ell}\big{(}\theta_{i}\beta_{k} \theta_{k}\beta_{\ell}\theta_{\ell}+\theta_{\ell}\beta_{k}\theta_{k}\beta_{i} \theta_{i}+\theta_{i}\beta_{s}\theta_{s}\beta_{\ell}\theta_{\ell}+\theta_{ \ell}\beta_{s}\theta_{s}\beta_{i}\theta_{i}\big{)}^{2}\cdot\theta_{s}\theta_{ j}^{2}\theta_{k}\theta_{\ell}\theta_{i}\] \[\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}}{\| \theta\|_{1}}.\]
Combining the bounds on \(\mathrm{Var}(Y_{3a})\) and \(\mathrm{Var}(Y_{3b})\) yields the desired bound on \(\mathrm{Var}(Y_{3})\).
Following (Jin et al., 2021c, Supplement, pg.46) we obtain the decomposition
\[Y_{4} =-\frac{1}{\sqrt{v}}\sum_{\begin{subarray}{c}i,j,\ell(dist)\\ s\neq j\end{subarray}}\!\!\!\!\!\Big{(}\sum_{k\notin\{i,j,\ell\}}\eta_{i} \widetilde{\Omega}_{jk}\widetilde{\Omega}_{k\ell}\Big{)}W_{js}W_{\ell i}- \frac{1}{\sqrt{v}}\sum_{\begin{subarray}{c}i,\ell(dist)\\ s\neq i\end{subarray}}\!\!\!\!\Big{(}\sum_{j,k\notin\{i,\ell\}}\eta_{j} \widetilde{\Omega}_{jk}\widetilde{\Omega}_{k\ell}\Big{)}W_{is}W_{\ell i}\] \[\equiv Y_{4a}+Y_{4b}.\]
First we study \(Y_{4a}\), which is shown in Jin et al. (2021c) to have zero mean and satisfy the following:
\[\mathrm{Var}(Y_{4a})\lesssim\frac{1}{v}\sum_{\begin{subarray}{c}ij\neq( dist)\\ s\neq j\end{subarray}}\alpha_{ij\ell}^{2}\mathrm{Var}(W_{js}W_{\ell i})\]
where \(\alpha_{ij\ell}=\sum_{k\notin\{i,j,\ell\}}\eta_{i}\tilde{\Omega}_{jk}\tilde{ \Omega}_{k\ell}\). Simlar to previous arguments, we have
\[\mathrm{Var}(Y_{4a}) \lesssim\frac{1}{\|\theta\|_{1}^{2}}\sum_{ij\neq i\neq k}\theta_{ i}^{2}(\beta_{j}\theta_{j})^{2}(\beta_{\ell}\theta_{\ell})^{2}\|\beta\circ\theta\|_{2}^{4} \cdot\theta_{i}\theta_{j}\theta_{\ell}\theta_{s}\] \[\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{2}}{\| \theta\|_{1}}.\]
Next we study \(Y_{4b}\) using the decomposition
\[Y_{4b}=-\frac{1}{\sqrt{v}}\sum_{i,\ell(dist)}\beta_{i\ell}W_{\ell i}^{2}-\frac{ 1}{\sqrt{v}}\sum_{\begin{subarray}{c}i,\ell(dist)\\ s\notin\{i,\ell\}\end{subarray}}\beta_{i\ell}W_{is}W_{\ell i}\equiv\widetilde{Y }_{4b}+Y_{4b}^{*}.\]
from (Jin et al., 2021c, Supplement,pg.47). There it is shown that only \(\mathbb{E}\tilde{Y}_{4b}\) is nonzero and
\[|\mathbb{E}\tilde{Y}_{4b}|\lesssim\frac{1}{\|\theta\|_{1}}\sum_{i,\ell}|\alpha_ {i\ell}|\theta_{i}\theta_{\ell}.\]
where \(\alpha_{i,\ell}=\sum_{j,k\notin\{i,\ell\}}\eta_{j}\widetilde{\Omega}_{jk} \widetilde{\Omega}_{k\ell}\). In our case, we derive from (E.24),
\[|\alpha_{i\ell}|\lesssim\beta_{\ell}\theta_{\ell}\|\beta\circ\theta\|_{2}^{3} \|\theta\|_{2}.\]
Using similar arguments from before,
\[|\mathbb{E}\tilde{Y}_{4b}|\lesssim\frac{1}{\|\theta\|_{1}}\sum_{i\ell}\beta_{ i}\theta_{\ell}\|\beta\circ\theta\|_{2}^{3}\|\theta\|_{2}\cdot\theta_{i} \theta_{\ell}\lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{2}.\]
Now we study \(\mathrm{Var}(Y_{4b})\). Using the bound above on \(|\alpha_{i\ell}|\) and direct calculations,
\[\mathrm{Var}(\widetilde{Y}_{4b}) =\frac{2}{v}\sum_{i,\ell(dist)}\alpha_{i\ell}^{2}\cdot\mathrm{Var }(W_{i\ell}^{2})\lesssim\frac{1}{\|\theta\|_{1}^{2}}\sum_{i,\ell}\beta_{\ell }^{2}\theta_{\ell}^{2}\|\beta\circ\theta\|_{2}^{6}\|\theta\|_{2}^{2}\cdot \theta_{i}\theta_{\ell}\lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{ 2}^{2}}{\|\theta\|_{1}},\] \[\mathrm{Var}(Y_{4b}^{*}) \leq\frac{1}{v}\sum_{\begin{subarray}{c}i,\ell(dist)\\ s\notin\{i,\ell\}\end{subarray}}\alpha_{i\ell}^{2}\cdot\mathrm{Var}(W_{is}W_{ i\ell})\leq\frac{1}{\|\theta\|_{1}^{2}}\sum_{i,\ell,s}\beta_{\ell}^{2} \theta_{\ell}^{2}\|\beta\circ\theta\|_{2}^{6}\|\theta\|_{2}^{2}\cdot\theta_{i }^{2}\theta_{\ell}\theta_{s}\leq\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{ 2}^{4}}{\|\theta\|_{1}}.\]
Combining the results above yields the required bounds on \(\mathbb{E}Y_{4b}\) and \(\mathrm{Var}(Y_{4b})\).
In (Jin et al., 2021c, Supplement, pg.48) it is shown that \(\mathbb{E}Y_{5}=0\) and
\[\mathrm{Var}(Y_{5})\lesssim\frac{1}{v}\sum_{\begin{subarray}{c}j,k,\ell(dist) \\ s\neq j\end{subarray}}\alpha_{jk\ell}^{2}\cdot\mathrm{Var}(W_{js}W_{k\ell})\]
where
\[\alpha_{jk\ell}\equiv\sum_{i\notin\{j,k,\ell\}}\eta_{i}\widetilde{\Omega}_{ jk}\widetilde{\Omega}_{\ell i}.\]
We have using (E.20), (E.24) and the triangle inequality,
\[|\alpha_{jk\ell}|\lesssim\|\theta\|_{2}^{2}(\beta_{j}\theta_{j})(\beta_{k} \theta_{k})(\beta_{\ell}\theta_{\ell}).\]
Thus, by similar arguments to before,
\[\mathrm{Var}(Y_{5})\lesssim\frac{1}{\|\theta\|_{1}^{2}}\sum_{jk\ell}\big{(}\| \theta\|_{2}^{4}(\beta_{j}\theta_{j})^{2}(\beta_{k}\theta_{k})^{2}(\beta_{ \ell}\theta_{\ell})^{2}\big{)}\theta_{j}\theta_{s}\theta_{k}\theta_{\ell} \lesssim\frac{\|\theta\|_{2}^{4}\|\beta\circ\theta\|_{2}^{6}}{\|\theta\|_{1}}.\]
Next, in (Jin et al., 2021c, Supplement, pg.49) it is shown that \(\mathbb{E}Y_{6}=0\) and
\[\mathrm{Var}(Y_{6})=\frac{8}{v}\sum_{j,s(dist)}\Big{(}\sum_{i,k,\ell(dist) \notin\{j\}}\eta_{i}\widetilde{\Omega}_{jk}\widetilde{\Omega}_{k\ell} \widetilde{\Omega}_{\ell i}\Big{)}^{2}\cdot\mathrm{Var}(W_{js}).\]
We have using (E.20), (E.24) and the triangle inequality,
\[\big{|}\sum_{i,k,\ell(dist)\notin\{j\}}\eta_{i}\widetilde{\Omega}_{jk} \widetilde{\Omega}_{k\ell}\widetilde{\Omega}_{\ell i}\big{|}\lesssim\beta_{j} \theta_{j}\|\beta\circ\theta\|_{2}^{5}\|\theta\|_{2}.\]
Thus
\[\mathrm{Var}(Y_{6})\lesssim\frac{1}{\|\theta\|_{1}^{2}}\sum_{j,s}\big{(}\beta_{ j}^{2}\theta_{j}^{2}\|\beta\circ\theta\|_{2}^{10}\|\theta\|_{2}^{2}\big{)} \theta_{j}\theta_{s}\lesssim\frac{\|\beta\circ\theta\|_{2}^{12}\|\theta\|_{2}^ {2}}{\|\theta\|_{1}}.\]
This completes the proof.
#### e.4.4 Proof of Lemma e.9
The bounds on \(Z_{1}\) and \(Z_{2}\) carry over directly from (Jin et al., 2021c, Lemma G.7) since neither term depends on \(\tilde{\Omega}\).
We consider \(Z_{3}\). In (Jin et al., 2021c, Supplement, pg.61), the decomposition
\[Z_{3}=\sum_{\begin{subarray}{c}i,j,k,\ell\\ (dist)\end{subarray}}\eta_{i}(\eta_{j}-\tilde{\eta}_{j})\eta_{j}(\eta_{k}- \tilde{\eta}_{k})\widetilde{\Omega}_{k\ell}W_{\ell i}+\sum_{\begin{subarray}{ c}i,j,k,\ell\\ (dist)\end{subarray}}\eta_{i}(\eta_{j}-\tilde{\eta}_{j})^{2}\eta_{k}\widetilde{ \Omega}_{k\ell}W_{\ell i}\] \[+\sum_{\begin{subarray}{c}i,j,k,\ell\\ (dist)\end{subarray}}(\eta_{i}-\tilde{\eta}_{i})\eta_{j}^{2}(\eta_{k}-\tilde{ \eta}_{k})\widetilde{\Omega}_{k\ell}W_{\ell i}+\sum_{\begin{subarray}{c}i,j,k, \ell\\ (dist)\end{subarray}}(\eta_{i}-\tilde{\eta}_{i})\eta_{j}(\eta_{j}-\tilde{\eta}_ {j})\eta_{k}\widetilde{\Omega}_{k\ell}W_{\ell i}\] \[\qquad\equiv Z_{3a}+Z_{3b}+Z_{3c}+Z_{3d}.\] (E.38)
is introduced. We study each term separately.
In (Jin et al., 2021c, Supplement, pg.61) it is shown that \(\mathbb{E}Z_{3a}=0\) and the decomposition
\[Z_{3a} =\frac{1}{v}\sum_{i,j,k,\ell(dist)}\alpha_{ijk\ell}W_{jk}^{2}W_{ \ell i}+\frac{1}{v}\sum_{\begin{subarray}{c}i,j,k,\ell(dist)\\ s\neq j,\ell\neq k,(s,t)\neq(k,j)\end{subarray}}\alpha_{ijk\ell}W_{js}W_{kt}W_ {\ell i}\] \[\equiv\widetilde{Z}_{3a}+Z_{3a}^{*}.\]
is introduced, where \(\alpha_{ijk\ell}\equiv\eta_{i}\eta_{j}\tilde{\Omega}_{k\ell}\). Then
\[\mathrm{Var}(\tilde{Z}_{3a})\lesssim\sum_{\begin{subarray}{c}ijk\ell(dist)\\ i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}(dist)\end{subarray}}|\alpha_{ ijk\ell}||\alpha_{i^{\prime}k^{\prime}j^{\prime}\ell^{\prime}}|\cdot|\mathrm{Cov}(W_{jk}^{2}W_{ \ell i},W_{j^{\prime}k^{\prime}}^{2}W_{\ell^{\prime}i^{\prime}})|.\]
Using the casework in (Jin et al., 2021c, Supplement, pg.62), (E.1), (E.2), and (E.24), we obtain
\[\mathrm{Var}(\tilde{Z}_{3a}) \lesssim\frac{1}{v^{2}}\big{(}\sum_{ijk\ell}[\beta_{k}^{2}\beta_{ \ell}^{2}+\beta_{k}\beta_{\ell}\beta_{i}\beta_{j}]\theta_{i}^{3}\theta_{j}^{3} \theta_{k}^{3}\theta_{\ell}^{3}+\sum_{ijk\ell^{\prime}j^{\prime}k^{\prime}} \beta_{\ell}\beta_{\ell}^{2}\beta_{k^{\prime}}\theta_{i}^{3}\theta_{j}^{2} \theta_{k}^{2}\theta_{\ell}^{2}\theta_{k^{\prime}}^{2}\big{)}\] \[\lesssim\frac{1}{\|\theta\|_{1}^{4}}\big{(}\|\beta\circ\theta\|_ {2}^{4}\|\theta\|_{2}^{2}+\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}+\| \beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{8}\big{)}\lesssim\frac{\|\beta\circ \theta\|_{2}^{2}\|\theta\|_{2}^{8}}{\|\theta\|_{1}^{4}}.\]
Similarly,
\[\mathrm{Var}(Z_{3a}^{*}) \lesssim\frac{1}{v^{2}}\bigg{(}\sum_{ijk\ell st}\beta_{k}^{2} \beta_{\ell}^{2}\theta_{i}^{3}\theta_{j}^{3}\theta_{k}^{3}\theta_{\ell}\theta_{ \ell}+\sum_{ijk\ell st}[\beta_{k}^{2}\ell_{\ell}\beta_{j}+\beta_{k}\beta_{\ell }^{2}\beta_{j}]\theta_{i}^{2}\theta_{j}^{3}\theta_{k}^{3}\theta_{\ell}^{3} \theta_{\ell}^{2}\theta_{\ell}\bigg{)}\] \[\lesssim\frac{1}{\|\theta\|_{1}^{4}}\big{(}\|\beta\circ\theta\|_ {2}^{4}\|\theta\|_{2}^{4}\|\theta\|_{1}^{2}+\|\beta\circ\theta\|_{2}^{4}\| \theta\|_{2}^{6}\|\theta\|_{1}\big{)}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4} \|\theta\|_{2}^{4}}{\|\theta\|_{1}^{2}}.\]
It follows that
\[\mathrm{Var}(Z_{3a})\lesssim\|\beta\circ\theta\|_{2}^{4}.\]
Next, in (Jin et al., 2021c, Supplement, pg.63), it is shown that \(\mathbb{E}Z_{3b}]=0\) and the decomposition
\[Z_{3b}=\frac{1}{v}\sum_{\begin{subarray}{c}i,j,\ell(dist)\\ s\neq j\end{subarray}}\beta_{ij\ell}W_{js}^{2}W_{ji}+\frac{1}{v}\sum_{ \begin{subarray}{c}i,j,\ell(dist)\\ s,t(dist)\notin\{j\}\end{subarray}}\beta_{ij\ell}W_{js}W_{j\ell}W_{\ell i} \equiv\widetilde{Z}_{3b}+Z_{3b}^{*}.\]
is given. Using (Jin et al., 2021c, Supplement, pg.63) we have
\[\mathrm{Var}(\tilde{Z}_{3b})\lesssim\sum_{\begin{subarray}{c}i,j,\ell,s,t\\ i^{\prime},j^{\prime},\ell^{\prime},s^{\prime},t^{\prime}\end{subarray}}| \alpha_{ij\ell}||\alpha_{i^{\prime}j^{\prime}\ell^{\prime}}||\mathrm{Cov}(W_{js}^ {2}W_{\ell_{i}},W_{j^{\prime}s}^{2}W_{\ell^{\prime}i^{\prime}})|.\]
where
\[\alpha_{ij\ell}=\sum_{k\notin\{i,j,\ell\}}\eta_{i}\eta_{k}\widetilde{\Omega}_{k\ell}.\]
Using (E.24), (E.18), and similar arguments to before,
\[|\alpha_{ij\ell}|\lesssim\theta_{i}(\beta_{\ell}\theta_{\ell})\|\theta\|_{2}^{2}.\]
By the casework in (Jin et al., 2021c, Supplement, pg.63), (E.1), and (E.2),
\[\mathrm{Var}(\tilde{Z}_{3b}) \lesssim\frac{1}{v^{2}}\bigg{(}\sum_{ij\ell s}\beta_{\ell}^{2}\| \theta\|_{2}^{4}\theta_{i}^{3}\theta_{j}\theta_{\ell}^{3}\theta_{s}+\sum_{ij \ell s^{\prime}s^{\prime}}\beta_{\ell}^{2}\|\theta\|_{2}^{4}\theta_{i}^{3} \theta_{j}\theta_{\ell}^{3}\theta_{s}\theta_{j^{\prime}}\theta_{s^{\prime}}+ \sum_{ij\ell s}\beta_{\ell}\beta_{j}\|\theta\|_{2}^{4}\theta_{i}^{2}\theta_{j} ^{2}\theta_{\ell}^{2}\theta_{s}^{2}\bigg{)}\] \[\lesssim\frac{1}{\|\theta\|_{1}^{4}}\big{(}\|\beta\circ\theta\|_{ 2}^{2}\|\theta\|_{2}^{6}\|\theta\|_{1}+\|\beta\circ\theta\|_{2}^{2}\|\theta\| _{2}^{6}\|\theta\|_{1}^{3}+\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{8}\big{)} \lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{6}}{\|\theta\|_{1}}.\]
By a similar argument,
\[\mathrm{Var}(Z_{3b}^{*})\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_ {2}^{8}}{\|\theta\|_{1}}.\]
Hence by (E.2),
\[\mathrm{Var}(Z_{3b})\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^ {8}}{\|\theta\|_{1}}\lesssim\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{6}.\]
For \(Z_{3c}\), in (Jin et al., 2021c, Supplement, pg.64), it is shown that \(\mathbb{E}Z_{3c}=0\) and the decomposition
\[Z_{3c}=\frac{1}{v}\sum_{\begin{subarray}{c}i,k,\ell,(dist)\\ t\neq k\end{subarray}}\alpha_{ik\ell}W_{il}^{2}W_{kt}+\frac{1}{v}\sum_{ \begin{subarray}{c}i,k,\ell,(dist)\\ s\notin\{i,\ell\},t\neq k\end{subarray}}\alpha_{ik\ell}W_{is}W_{kt}W_{\ell i }\equiv\widetilde{Z}_{3c}+Z_{3c}^{*}.\]
is given. We have
\[|\alpha_{ik\ell}|=|\sum_{j\notin\{i,k,\ell\}}\eta_{j}^{2} \widetilde{\Omega}_{k\ell}|\lesssim(\beta_{k}\theta_{k})(\beta_{\ell}\theta_{ \ell})\|\theta\|_{2}^{2}.\]
By the casework in (Jin et al., 2021c, Supplement, pg.65)
\[\mathrm{Var}(\tilde{Z}_{3c}) \lesssim\sum_{\begin{subarray}{c}ik\ell(dist)\\ s\notin\{i,\ell\},t\neq k\end{subarray}}\sum_{\begin{subarray}{c}i^{\prime}k^ {\prime}\ell^{\prime}(dist)\\ t^{\prime},t^{\prime}\neq k^{\prime}\end{subarray}}|\alpha_{ik\ell}\alpha_{i^ {\prime}k^{\prime}\ell^{\prime}}||\mathbb{E}W_{it}^{2}W_{kt}W_{\bar{t}^{\prime }\ell^{\prime}}^{2}W_{k^{\prime}\ell^{\prime}}|\] \[\lesssim\frac{\|\theta\|_{2}^{2}}{\|\theta\|_{1}^{4}}\sum_{ik\ell \ell}\bigg{[}\beta_{k}^{2}\beta_{\ell}^{2}\theta_{i}\theta_{k}^{3}\theta_{ \ell}^{3}\theta_{\ell}+\beta_{k}^{2}\beta_{\ell}\beta_{\ell}\beta_{\ell}^{2} \theta_{k}^{2}\theta_{\ell}^{2}\theta_{\ell}+\beta_{k}\beta_{\ell}^{2}\beta_{ \ell}\theta_{\ell}^{1}\theta_{k}^{2}\theta_{\ell}^{3}\theta_{\ell}^{2}\] \[\quad+\beta_{k}\beta_{\ell}\beta_{\ell}\beta_{\ell}\theta_{i}^{2 }\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}+\beta_{k}^{2}\beta_{\ell} \beta_{\ell}\theta_{i}^{2}\theta_{\ell}^{3}\theta_{\ell}^{2}\theta_{\ell}^{1} +\beta_{k}\beta_{\ell}^{2}\theta_{\ell}\theta_{i}\theta_{k}^{2}\theta_{\ell}^{ 3}\theta_{\ell}^{2}+\beta_{k}^{2}\beta_{\ell}^{2}\theta_{\ell}\theta_{k}^{3} \theta_{\ell}^{3}\theta_{\ell}\bigg{]}\] \[\quad+\sum_{ik\ell t^{\prime}\ell^{\prime}}\bigg{[}\beta_{k}^{2} \beta_{\ell}\theta_{\ell}\theta_{i}\theta_{k}^{3}\theta_{\ell}^{2}\theta_{ \ell}\theta_{\ell}\theta_{\ell}^{2}+\beta_{k}\beta_{\ell}\beta_{\ell}\beta_{ \ell^{\prime}}\beta_{\ell}\theta_{i}\theta_{k}^{2}\theta_{\ell}^{2}\theta_{ \ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2} \theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell} ^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{ \ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2} \theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{ 2}\bigg{]}\]
We have by (E.2) and (E.24) that
\[\sum_{ik\ell t}\bigg{[}\beta_{k}^{2}\beta_{\ell}^{2}\theta_{\ell} \theta_{i}\theta_{k}^{3}\theta_{\ell}+ \beta_{k}^{2}\beta_{\ell}\beta_{i}\theta_{i}^{2}\theta_{\ell}^{3}\theta_{\ell}^{ 2}\theta_{\ell}+\beta_{k}\beta_{\ell}^{2}\beta_{\ell}\theta_{i}^{1}\theta_{k}^{2} \theta_{\ell}^{3}\theta_{\ell}^{2}+\beta_{k}\beta_{\ell}\beta_{i}\theta_{i} \theta_{i}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\] \[\qquad+\beta_{k}^{2}\beta_{\ell}\beta_{i}\theta_{i}^{2}\theta_{k}^{ 2}\theta_{\ell}^{1}+\beta_{k}\beta_{\ell}^{2}\beta_{\ell}\beta_{\ell}\theta_{i} \theta_{k}^{2}\theta_{\ell}^{3}\theta_{\ell}^{2}+\beta_{k}^{2}\beta_{\ell}^{2} \beta_{\ell}^{2}\theta_{i}\theta_{k}^{3}\theta_{\ell}^{3}\theta_{\ell}\bigg{]}\] \[\lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{1}^{2}+\|\beta\circ \theta\|_{2}^{2}\|\theta\|_{1}^{4}\|\theta\|_{1}+\|\beta\circ\theta\|_{2}^{4}\| \theta\|_{2}^{4}+\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{1}^{2}\]
and
\[\sum_{ik\ell t^{\prime}\ell^{\prime}}\bigg{[}\beta_{k}^{2}\beta_{\ell} \beta_{\ell}\theta_{i}\theta_{k}^{3}\theta_{\ell}^{2}\theta_{\ell}\theta_{\ell} \theta_{\ell^{\prime}}^{2}+\beta_{k}\beta_{\ell}\beta_{\ell^{\prime}}\beta_{\ell} \theta_{\ell}\theta_{k}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2}\theta_{\ell}^{2} \theta_{\ell}\theta_{\ell^{\prime}}\theta_{\ell^{\prime}}^{2}\bigg{]}\lesssim\|\beta \circ\theta\|_{2}^{2}\|\theta\|_{2}^{4}\|\theta\|_{1}^{3}+\|\beta\circ\theta\|_{2}^{4}\| \theta\|_{2}^{4}\|\theta\|_{1}^{2}\]
Thus
\[\mathrm{Var}(\tilde{Z}_{3c})\lesssim\frac{\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{4}} (\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{1}^{2}+\|\beta\circ\theta\|_{2}^{2}\| \theta\|_{2}^{4}\|\theta\|_{1}^{3}+\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{ 4}\|\theta\|_{1}^{2})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^ {8}}{\|\theta\|_{1}}\]
To study \(Z_{3c}^{*}\), in (Jin et al., 2021c, Supplement, pg.65) the decomposition
\[Z_{3c}^{*}=\frac{1}{v}\sum_{i,k,\ell(dist)}\alpha_{ik\ell}W_{ik}^{2}W_{\ell i }+\frac{1}{v}\sum_{\begin{subarray}{c}i,k,\ell(dist)\\ s\notin\{i,\ell\},k\neq k,(s,\ell)\neq(k,i)\end{subarray}}\alpha_{ik\ell}W_{is }W_{kt}W_{\ell i}\equiv Z_{3c,1}^{*}+Z_{3c,2}^{*}\]
is used, where recall \(\alpha_{ik\ell}=\sum_{j\notin\{i,k,\ell\}}\eta_{j}^{2}\tilde{\Omega}_{k\ell}\). Using a similar argument as before, we have
\[\mathrm{Var}(Z_{3c,1}^{*}) \lesssim\frac{\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{4}}\bigg{(} \sum_{ik\ell}\beta_{k}^{2}\beta_{\ell}^{2}\theta_{i}^{2}\theta_{k}^{3}\theta_ {\ell}^{3}+\sum_{ik\ell k^{\prime}}[\beta_{k}\beta_{k^{\prime}}\beta_{\ell}^ {2}+\beta_{k}\beta_{k^{\prime}}\beta_{i}\beta_{\ell}]\theta_{i}^{3}\theta_{ \ell}^{2}\theta_{\ell}^{3}\theta_{\ell^{\prime}}^{2}\bigg{)}\] \[\lesssim\frac{\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{4}}\big{(}\| \beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{2}+\|\theta\|_{2}^{4}\|\beta\circ \theta\|_{2}^{2}\|\theta\|_{3}^{3}+\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2} ^{4}\big{)}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{10}}{\| \theta\|_{1}^{4}}.\]
We omit the argument for \(Z_{3c,2}^{*}\) as it is similar and simply state the bound:
\[\mathrm{Var}(Z_{3c,2}^{*})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta \|_{2}^{6}}{\|\theta\|_{1}^{2}}.\]
Combining the results for \(\tilde{Z}_{3c}\) and \(Z_{3c}^{*}\), we have
\[\mathrm{Var}(Z_{3c})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^ {8}}{\|\theta\|_{1}}\lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{6}.\]
Next we study \(Z_{3d}\), which is defined as
\[Z_{3d}=\sum_{\begin{subarray}{c}i,j,k,\ell\\ (dist)\end{subarray}}(\eta_{k}\eta_{j}\tilde{\Omega}_{j\ell})(\eta_{i}-\tilde {\eta}_{i})(\eta_{k}-\tilde{\eta}_{k})W_{\ell i}=\sum_{\begin{subarray}{c}i,k, \ell(dist)\\ s\neq i,\ell\neq k\end{subarray}}\alpha_{ik\ell}W_{is}W_{kt}W_{\ell i}\]
where \(\alpha_{ik\ell}=\sum_{j\notin\{i,k,\ell\}}\eta_{k}\eta_{j}\tilde{\Omega}_{j\ell}\). We see that \(\mathbb{E}Z_{3d}=0\). To study the variance, we use a similar decomposition to that of \(Z_{3c}\). Write
\[Z_{3d}=\frac{1}{v}\sum_{\begin{subarray}{c}i,k,\ell(dist)\\ t\neq k\end{subarray}}\alpha_{ik\ell}W_{it}^{2}W_{kt}+\frac{1}{v}\sum_{ \begin{subarray}{c}i,k,\ell(dist)\\ s\notin\{i,\ell\},t\neq k\end{subarray}}\alpha_{ik\ell}W_{is}W_{kt}W_{\ell i }\equiv\widetilde{Z}_{3d}+Z_{3d}^{*}.\]
Mimicking the arguments for \(\widetilde{Z}_{3c}\) and \(Z_{3c}^{*}\) we obtain
\[\mathrm{Var}(\tilde{Z}_{3d})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\| \theta\|_{2}^{6}}{\|\theta\|_{1}},\]
and
\[\mathrm{Var}(Z_{3d}^{*})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{ 2}^{10}}{\|\theta\|_{1}^{4}}.\]
Hence
\[\mathrm{Var}(Z_{3d})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^ {6}}{\|\theta\|_{1}}.\]
Combining the results for \(Z_{3a},\ldots,Z_{3d}\), we have
\[\mathbb{E}Z_{3}=0,\quad\mathrm{Var}(Z_{3})\lesssim\|\beta\circ\theta\|_{2}^{4} \|\theta\|_{2}^{6}.\]
We proceed to study \(Z_{4}\). In (Jin et al., 2021c, Supplement,pg.67) the following decomposition is given:
\[Z_{4}=2\sum_{i,j,k,\ell(dist)}\eta_{i}(\eta_{j}-\tilde{\eta}_{j})\widetilde{ \Omega}_{jk}\eta_{k}(\eta_{\ell}-\tilde{\eta}_{\ell})W_{ti}\]
\[+\sum_{i,j,k,\ell(dist)}\eta_{i}(\eta_{j}-\tilde{\eta}_{j})\widetilde{ \Omega}_{jk}(\eta_{k}-\tilde{\eta}_{k})\eta_{\ell}W_{\ell i}\] \[+\sum_{i,j,k,\ell(dist)}(\eta_{i}-\tilde{\eta}_{i})\eta_{j} \widetilde{\Omega}_{jk}\eta_{k}(\eta_{\ell}-\tilde{\eta}_{\ell})W_{\ell i}\] \[\equiv Z_{4a}+Z_{4b}+Z_{4c}.\] (E.39)
There it is shown that \(\mathbb{E}Z_{4a}=0\). To study \(\mathrm{Var}(Z_{4a})\), we note that \(Z_{4a}\) and \(Z_{3c}\) have similar structure. In particular we have the decomposition
\[Z_{4a}=\frac{1}{v}\sum_{\begin{subarray}{c}i,k,\ell(dist)\\ t\neq k\end{subarray}}\alpha_{ik\ell}W_{i\ell}^{2}W_{kt}+\frac{1}{v}\sum_{ \begin{subarray}{c}i,k,\ell(dist)\\ s\notin\{i,\ell\},\neq k\end{subarray}}\alpha_{ik\ell}W_{is}W_{kt}W_{\ell i} \equiv\widetilde{Z}_{4a}+Z_{4a}^{*}.\]
where \(\alpha_{ik\ell}=\sum_{j\notin\{i,k,\ell\}}\eta_{j}\eta_{\ell}\tilde{\Omega}_{ k\ell}\). Mimicking the argument for \(\tilde{Z}_{3c}\) we have
\[\mathrm{Var}(\tilde{Z}_{4a}) \lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}}{\| \theta\|_{1}^{4}}\bigg{(}\sum_{ik\ell t}\big{[}\beta_{k}^{2}(\theta_{i}\theta _{k}^{2}\theta_{\ell}^{2}\theta_{t}+\theta_{i}^{2}\theta_{k}^{2}\theta_{\ell} ^{2}\theta_{t})+\beta_{k}\beta_{t}\theta_{i}\theta_{k}^{2}\theta_{\ell}^{2} \theta_{t}^{2}+\beta_{k}\beta_{t}\theta_{i}^{2}\theta_{\ell}^{2}\theta_{t}^{2} \theta_{t}^{2}\] \[\quad+\beta_{k}^{2}\theta_{i}^{2}\theta_{k}^{2}\theta_{\ell}^{2 }\theta_{t}+\beta_{k}\beta_{t}\theta_{i}\theta_{k}^{2}\theta_{\ell}^{2} \theta_{t}^{2}\big{]}+\sum_{ik\ell t^{\prime}\ell^{\prime}}\big{[}\beta_{k}^{2 }\theta_{i}\theta_{k}^{2}\theta_{\ell}^{2}\theta_{t}\theta_{i}\theta_{\ell^{ \prime}}\theta_{\ell^{\prime}}^{2}+\beta_{k}\beta_{t}\theta_{i}\theta_{k}^{2} \theta_{\ell}^{2}\theta_{t}^{2}\theta_{\ell^{\prime}}\theta_{\ell^{\prime}}^{ 2}\big{]}\bigg{)}\] \[\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}}{\| \theta\|_{1}^{4}}\big{(}\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{1}+\|\beta \circ\theta\|_{2}^{2}\|\theta\|_{1}^{4}\|\theta\|_{1}+\|\beta\circ\theta\|_{ 2}^{2}\|\theta\|_{2}^{4}\|\theta\|_{1}^{3}+\] \[\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{4}\|\theta\|_{1}^{2} \big{)}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{6}}{\|\theta\| _{1}}.\]
For \(\tilde{Z}_{4a}^{*}\) we adapt the decomposition used for \(\tilde{Z}_{4c}^{*}\):
\[Z_{4a}^{*}=\frac{1}{v}\sum_{i,k,\ell(dist)}\alpha_{ik\ell}W_{ik}^{2}W_{\ell i}+ \frac{1}{v}\sum_{\begin{subarray}{c}i,k,\ell(dist)\\ s\notin\{i,\ell\},\neq k,(s,t)\neq\ell,i\end{subarray}}\alpha_{ik\ell}W_{is}W_ {kt}W_{\ell i}=:Z_{4a,1}^{*}+Z_{4a,2}^{*}\]
Mimicking the argument for \(Z_{3c,1}^{*}\) and \(Z_{3c,2}^{*}\), we have
\[\mathrm{Var}(Z_{4a,1}^{*}) \lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}}{\| \theta\|_{1}^{4}}\big{(}\sum_{ik\ell}\beta_{k}^{2}\theta_{i}^{2}\theta_{k}^{2} \theta_{\ell}^{2}+\sum_{ik\ell k^{\prime}}\beta_{k}\beta_{k^{\prime}}\theta_{i} ^{2}\theta_{k}^{2}\theta_{\ell}^{2}\theta_{k^{\prime}}^{2}\big{)}\lesssim\frac{ \|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{8}}{\|\theta\|_{1}^{4}},\]
and
\[\mathrm{Var}(Z_{4a,2}^{*}) \lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}}{\| \theta\|_{1}^{4}}\sum_{ik\ell st}\big{[}\beta_{k}^{2}\theta_{i}^{2}\theta_{k}^{2 }\theta_{\ell}^{2}\theta_{\ell}\theta_{s}\theta_{t}+\beta_{k}\beta_{t}\theta_{i} ^{2}\theta_{k}^{2}\theta_{\ell}^{2}\theta_{s}\theta_{t}^{2}+\beta_{k}\beta_{s} \theta_{i}^{2}\theta_{k}^{2}\theta_{\ell}^{2}\theta_{s}^{2}\theta_{t}^{2}\big{]}\] \[\lesssim\frac{\|\theta\|_{2}^{4}\|\theta\|_{2}^{6}}{\|\theta\|_{1}^ {2}}.\]
It follows that
\[\mathrm{Var}(Z_{4a})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{6}}{ \|\theta\|_{1}}.\]
Next we study
\[Z_{4b} =\sum_{\begin{subarray}{c}i,j,k,\ell\\ (dist)\end{subarray}}\eta_{i}(\eta_{j}-\tilde{\eta}_{j})\widetilde{\Omega}_{jk}( \eta_{k}-\tilde{\eta}_{k})\eta_{\ell}W_{\ell i}=\sum_{\begin{subarray}{c}i,j,k, \ell\\ (dist)\end{subarray}}\alpha_{ijk\ell}(\eta_{j}-\tilde{\eta}_{j})(\eta_{k}-\tilde{ \eta}_{k})W_{\ell i}\] \[=\frac{1}{v}\sum_{\begin{subarray}{c}i,j,k,\ell(dist)\\ s\neq j,i\neq k\end{subarray}}\alpha_{ijk}W_{js}W_{kt}W_{\ell i}\]
where \(\alpha_{ijk\ell}=\eta_{i}\eta_{\ell}\widetilde{\Omega}_{jk}\). Mimicking the study of \(Z_{3a}\), we have the decomposition
\[Z_{4b} =\frac{1}{v}\sum_{i,j,k,\ell(dist)}\alpha_{ijk\ell}W_{jk}^{2}W_{ \ell i}+\frac{1}{v}\sum_{\begin{subarray}{c}i,j,k,\ell(dist)\\ s\neq j,t\neq k,(s,t)\neq(k,j)\end{subarray}}\alpha_{ijk\ell}W_{js}W_{kt}W_{ \ell i}\] \[\equiv\widetilde{Z}_{4b}+Z_{4b}^{*}.\]
Further we have, using (E.1), (E.2), (E.20), and (E.24), we have
\[\mathrm{Var}(\tilde{Z}_{4b}) \lesssim\frac{1}{\|\theta\|_{1}^{4}}\bigg{(}\sum_{ijk\ell}\big{[} \beta_{j}^{2}\beta_{k}^{2}+\beta_{j}\beta_{k}\beta_{i}\theta_{3}|\theta_{i}^{3 }\theta_{j}^{3}\theta_{k}^{3}\big{]}+\sum_{ijk\neq j^{\prime}k^{\prime}}\beta_ {j}\beta_{k}\beta_{j^{\prime}}\beta_{k^{\prime}}\theta_{i}^{3}\theta_{j}^{2} \theta_{k}^{2}\theta_{j^{\prime}}^{2}\theta_{k^{\prime}}^{2}\bigg{)}\] \[\lesssim\frac{1}{\|\theta\|_{1}^{4}}\big{(}\|\beta\circ\theta\|_ {2}^{4}\|\theta\|_{2}^{4}+\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{8}\big{)} \lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{8}}{\|\theta\|_{1}^ {4}}.\]
Similarly,
\[\mathrm{Var}(Z_{4b}^{*}) \lesssim\frac{1}{\|\theta\|_{1}^{4}}\bigg{(}\sum_{ijk\ell st} \big{[}\beta_{j}^{2}\beta_{k}^{2}\theta_{i}^{2}\theta_{j}^{2}\theta_{\ell}^{ 2}\theta_{\ell}\theta_{s}+\beta_{k}^{2}\beta_{\ell}\beta_{j}\theta_{i}^{2} \theta_{j}^{3}\theta_{k}^{3}\theta_{\ell}^{3}\theta_{s}^{2}\theta_{t}+\beta_{ j}\beta_{k}^{2}\beta_{\ell}\theta_{i}^{2}\theta_{i}^{3}\theta_{k}^{3}\theta_{ \ell}^{3}\theta_{s}^{2}\theta_{t}\bigg{)}\] \[\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}}{\| \theta\|_{1}^{2}}.\]
It follows that
\[\mathrm{Var}(Z_{4b})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\| \theta\|_{2}^{4}}{\|\theta\|_{1}^{2}}.\]
We study \(Z_{4c}\) using the decomposition
\[Z_{4c} =\frac{1}{v}\sum_{i,\ell(dist)}\beta_{i\ell}W_{i}^{3}+\frac{2}{v} \sum_{\begin{subarray}{c}i,\ell(dist)\\ s\neq\{i,\ell\}\end{subarray}}\beta_{i\ell}W_{is}W_{ti}^{2}+\frac{1}{v}\sum_{ \begin{subarray}{c}i,\ell(dist)\\ s\notin\{i,\ell\},\ell\notin\{i,i\}\end{subarray}}\beta_{i\ell}W_{is}W_{\ell i }W_{\ell i}\] \[\equiv\widetilde{Z}_{4c}+Z_{4c}^{*}+Z_{4c}^{\dagger}.\]
from (Jin et al., 2021c, Supplement, pg.68). Only
\[\tilde{Z}_{4c}=\frac{1}{v}\sum_{i,\ell(dist)}\alpha_{i\ell}W_{i}^{3}\]
has nonzero mean, where \(\alpha_{i\ell}=\sum_{j,k(dist)\notin\{i,\ell\}}\eta_{j}\eta_{k}\widetilde{ \Omega}_{jk}\). By (E.20)
\[|\alpha_{i\ell}|\lesssim\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}.\]
Hence
\[|\mathbb{E}\tilde{Z}_{4c}|\lesssim\frac{1}{\|\theta\|_{1}^{2}}\sum_{i\ell}\| \beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}\theta_{i}\theta_{\ell}\lesssim\| \beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}.\]
Except for when \((i,\ell)=(\ell,i)\), the summands of \(\tilde{Z}_{4c}\) are uncorrelated. Thus
\[\mathrm{Var}(\tilde{Z}_{4c})\lesssim\frac{1}{\|\theta\|_{1}^{4}}\sum_{i\ell}\| \beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\theta_{i}\theta_{\ell}\lesssim \frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{2}}.\]
Applying the casework from (Jin et al., 2021c, Supplement, pg.68),
\[\mathrm{Var}(Z_{4c}^{*}) \lesssim\sum_{\begin{subarray}{c}i,\ell(dist)\\ s\neq\{i,\ell\}\end{subarray}}\sum_{\begin{subarray}{c}i^{\prime},\ell^{ \prime}(dist)\\ s^{\prime}\neq\{i^{\prime},\ell^{\prime}\}\end{subarray}}|\alpha_{i\ell} \alpha_{i^{\prime}\ell^{\prime}}|\mathrm{Cov}(W_{is}W_{\ell i}^{2},W_{i^{ \prime}s^{\prime}}W_{\ell^{\prime}i^{\prime}}^{2})|\] \[\lesssim\frac{1}{\|\theta\|_{1}^{4}}\big{(}\sum_{i\ell s}\|\beta \circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\theta_{i}^{2}\theta_{\ell}\theta_{s}+ \sum_{i\ell s\ell^{\prime}}\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\theta \|_{2}^{4}\theta_{i}^{3}\theta_{\ell}\theta_{s}\theta_{\ell^{\prime}}\big{)}\]
\[\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{4}} \big{(}\|\theta\|_{2}^{2}\|\theta\|_{1}^{2}+\|\theta\|_{2}^{2}\|\theta\|_{1}^{3} \big{)}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{6}}{\|\theta\| _{1}}.\]
Next, in (Jin et al., 2021c, Supplement, pg.69) it is shown that
\[\mathrm{Var}(Z_{4c}^{\dagger})\lesssim\frac{1}{v^{2}}\sum_{ \begin{subarray}{c}i,\ell(dist)\\ s\notin\{i,\ell\}\notin\{\ell,i\}\end{subarray}}\alpha_{i\ell}^{2}\cdot \mathrm{Var}(W_{is}W_{\ell i})\]
Thus
\[\mathrm{Var}(Z_{4c}^{\dagger})\lesssim\sum_{i\ell s}\|\beta\circ \theta\|_{2}^{4}\|\theta\|_{2}^{4}\theta_{i}^{2}\theta_{\ell}^{2}\theta_{\ell} \theta_{\ell}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{8}}{\| \theta\|_{1}^{2}}.\]
Combining the results for \(\tilde{Z}_{4c},Z_{4c}^{*},Z_{4c}^{\dagger}\), we have
\[\|\mathbb{E}Z_{4c}\|\lesssim\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}, \quad\mathrm{Var}(Z_{4c})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\| _{2}^{8}}{\|\theta\|_{1}}.\]
Combining the results for \(Z_{4a},Z_{4b}\), and \(Z_{4c}\), we have
\[|\mathbb{E}Z_{4}|\lesssim\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}, \quad\mathrm{Var}(Z_{4})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_ {2}^{6}}{\|\theta\|_{1}}\]
To study \(Z_{5}\), we use the decomposition
\[Z_{5} =2\sum_{i,j,k,\ell(dist)}\eta_{i}(\eta_{j}-\tilde{\eta}_{j})\eta_ {j}(\eta_{k}-\tilde{\eta}_{k})\widetilde{\Omega}_{k\ell}\widetilde{\Omega}_{ \ell i}+\sum_{i,j,k,\ell(dist)}\eta_{i}(\eta_{j}-\tilde{\eta}_{j})^{2}\eta_{k} \widetilde{\Omega}_{k\ell}\widetilde{\Omega}_{\ell i}\] \[\qquad+\sum_{i,j,k,\ell(dist)}(\eta_{i}-\tilde{\eta}_{i})\eta_{j }^{2}(\eta_{k}-\tilde{\eta}_{k})\widetilde{\Omega}_{k\ell}\widetilde{\Omega}_ {\ell i}\] \[\equiv Z_{5a}+Z_{5b}+Z_{5c}.\] (E.40)
from (Jin et al., 2021c, Supplement, pg. 70). We further decompose \(Z_{5a}\) as in (Jin et al., 2021c, Supplement, pg.70):
\[Z_{5a}=\frac{2}{v}\sum_{j,k(dist)}\alpha_{jk}W_{jk}^{2}+\frac{2}{v}\sum_{ \begin{subarray}{c}j,k(dist)\\ s\neq j,t\neq k,\\ (s,t)\neq(k,j)\end{subarray}}\alpha_{jk}W_{js}W_{kt}\equiv\widetilde{Z}_{5a}+ Z_{5a}^{*}.\]
where \(\alpha_{jk}=\sum_{i,\ell(dist)\notin\{j,k\}}\eta_{i}\eta_{j}\widetilde{\Omega}_{k\ell}\widetilde{\Omega}_{\ell i}\). Note that by (E.20) and (E.24),
\[|\alpha_{jk}|\lesssim\sum_{i\ell}(\beta_{k}\theta_{k})(\beta_{ \ell}\theta_{\ell})^{2}(\beta_{i}\theta_{i})\lesssim\theta_{j}(\beta_{k} \theta_{k})\|\beta\circ\theta\|_{2}^{3}\|\theta\|_{2}.\]
Only \(\tilde{Z}_{5a}\) has nonzero mean. By (E.1) and (E.2),
\[|\mathbb{E}Z_{5a}|=|\mathbb{E}\tilde{Z}_{5a}|\lesssim\frac{1}{\| \theta\|_{1}^{2}}\sum_{jk}\theta_{j}(\beta_{k}\theta_{k})\|\beta\circ\theta\|_ {2}^{3}\|\theta\|_{2}\cdot\theta_{j}\theta_{k}\lesssim\frac{\|\beta\circ \theta\|_{2}^{4}\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{2}}.\]
Now we study the variance of \(Z_{5a}\). In (Jin et al., 2021c, Supplement, pg.70) it is shown that
\[\mathrm{Var}(\widetilde{Z}_{5a})\lesssim\frac{1}{v^{2}}\sum_{j,k (dist)}\alpha_{jk}^{2}\,\mathrm{Var}(W_{jk}^{2})\] \[\mathrm{Var}(Z_{5a}^{*})\lesssim\frac{1}{v^{2}}\sum_{ \begin{subarray}{c}j,k(dist)\\ s\neq j,t\neq k,\\ (s,t)\neq(k,j)\end{subarray}}\alpha_{jk}^{2}\,\mathrm{Var}(W_{js}W_{kt}).\]
Thus by (E.2) and (E.24),
\[\mathrm{Var}(\widetilde{Z}_{5a})\lesssim\frac{\|\beta\circ\theta\|_{2}^{6}\| \theta\|_{2}^{4}}{\|\theta\|_{1}^{4}}\big{(}\sum_{jk}\theta_{j}^{3}\beta_{k}^{ 2}\theta_{k}^{3}\big{)}\lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{2 }^{6}}{\|\theta\|_{1}^{4}}\]
\[\mathrm{Var}(Z_{5a}^{*})\lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{2}^{ 4}}{\|\theta\|_{1}^{4}}\big{(}\sum_{jk}\theta_{j}^{2}\beta_{k}^{2}\theta_{k}^{2 }\cdot\theta_{j}\theta_{s}\theta_{k}\theta_{t}\big{)}\lesssim\frac{\|\beta \circ\theta\|_{2}^{8}\|\theta\|_{2}^{6}}{\|\theta\|_{1}^{2}}.\]
We conclude that
\[\mathrm{Var}(Z_{5a})\lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{2}^ {6}}{\|\theta\|_{1}^{2}}.\]
Next we study \(Z_{5b}\) using the decomposition
\[Z_{5b}=\frac{1}{v}\sum_{j,s(dist)}\alpha_{j}W_{js}^{2}+\frac{1}{v}\sum_{ \begin{subarray}{c}j\\ s,t(dist)\notin\{j\}\end{subarray}}\alpha_{j}W_{js}W_{kt}\equiv\widetilde{Z}_ {5b}+Z_{5b}^{*}.\]
from (Jin et al., 2021c, Supplement, pg.71), where \(\alpha_{j}=\sum_{i,k,\ell(dist)\notin\{j\}}\eta_{i}\eta_{k}\widetilde{\Omega} _{k\ell}\widetilde{\Omega}_{ti}\). Note that by (E.2) and (E.20),
\[|\alpha_{j}|\lesssim\sum_{i\in\ell}\theta_{i}\theta_{k}(\beta_{k}\theta_{k}) (\beta_{\ell}\theta_{t})^{2}(\beta_{i}\theta_{i})\lesssim\|\beta\circ\theta \|_{2}^{4}\|\theta\|_{2}^{2}.\]
Only \(\widetilde{Z}_{5b}\) above has nonzero mean, and we have
\[|\mathbb{E}Z_{5b}|=|\mathbb{E}Z_{5b}|\lesssim\frac{\|\beta\circ\theta\|_{2}^ {4}\|\theta\|_{2}^{2}}{\|\theta\|_{1}^{2}}\sum_{j,s}\theta_{j}\theta_{s} \lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{2}.\]
Similarly for the variances,
\[\mathrm{Var}(\tilde{Z}_{5b})\lesssim\frac{\|\beta\circ\theta\|_{2}^ {8}\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{4}}\sum_{js}\theta_{j}\theta_{s} \lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{2}^{4}}{\|\theta\|_{1}^ {2}}\] \[\mathrm{Var}(Z_{5b}^{*})\lesssim\frac{\|\beta\circ\theta\|_{2}^{ 8}\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{4}}\sum_{jst}\theta_{j}^{2}\theta_{s} \theta_{t}\lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{2}^{6}}{\| \theta\|_{1}^{2}},\]
and it follows that
\[\mathrm{Var}(Z_{5b})\lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{2}^ {6}}{\|\theta\|_{1}^{2}}.\]
Next we study
\[Z_{5c} =\sum_{\begin{subarray}{c}i,j,k,\ell\\ (dist)\end{subarray}}(\eta_{j}-\tilde{\eta}_{j})\eta_{i}^{2}(\eta_{k}-\tilde{ \eta}_{k})\widetilde{\Omega}_{k\ell}\widetilde{\Omega}_{\ell j}=\sum_{ \begin{subarray}{c}i,j,k,\ell\\ (dist)\end{subarray}}(\eta_{i}^{2}\widetilde{\Omega}_{k\ell}\widetilde{\Omega} _{\ell j})(\eta_{j}-\tilde{\eta}_{j})(\eta_{k}-\tilde{\eta}_{k})\] \[=\frac{1}{v}\sum_{\begin{subarray}{c}i,j,k,\ell(dist)\\ \neq j,\neq k\end{subarray}}(\eta_{i}^{2}\widetilde{\Omega}_{k\ell}\widetilde{ \Omega}_{\ell j})W_{js}W_{kt}=\frac{1}{v}\sum_{\begin{subarray}{c}j,k(dist) \\ \neq j,\neq k\end{subarray}}\alpha_{jk}W_{js}W_{kt}\]
where \(\alpha_{jk}=\sum_{\begin{subarray}{c}i,\ell(dist)\\ i,\ell\notin\{j,k\}\end{subarray}}\eta_{i}^{2}\widetilde{\Omega}_{k\ell} \widetilde{\Omega}_{\ell j}\). Note that by (E.20) and (E.18),
\[|\alpha_{jk}|\lesssim\sum_{i\ell}\theta_{i}^{2}(\beta_{k}\theta_{k})(\beta_{ \ell}\theta_{t})^{2}(\beta_{j}\theta_{j})\lesssim(\beta_{j}\theta_{j})(\beta_ {k}\theta_{k})\|\theta\|_{2}^{2}\|\beta\circ\theta\|_{2}^{2}.\] (E.41)
We further decompose
\[Z_{5c}=\frac{1}{v}\sum_{\begin{subarray}{c}j,k\\ (dist)\end{subarray}}\alpha_{jk}W_{jk}^{2}+\frac{1}{v}\sum_{\begin{subarray}{c} j,k(dist)\\ s,t\notin\{j,k\}\end{subarray}}\alpha_{jk}W_{js}W_{kt}\equiv\tilde{Z}_{5c}+Z_{5c}^ {*}.\]
Only the first term has nonzero mean. It follows that
\[|\mathbb{E}Z_{5c}|=|\mathbb{E}\tilde{Z}_{5c}|\lesssim\frac{\|\theta\|_{2}^{2} \|\beta\circ\theta\|_{2}^{2}}{\|\theta\|_{1}^{2}}\sum_{j,k,s,t}(\beta_{j}\theta_ {j})(\beta_{k}\theta_{k})\cdot\theta_{j}\theta_{k}\lesssim\frac{\|\beta\circ \theta\|_{2}^{4}\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{2}}.\]
Note that \(Z_{5c}\) and \(Z_{5a}\) have the same form, but with a different setting of the coefficient \(\alpha_{jk}\). Mimicking the variance bounds for \(Z_{5a}\) we obtain the bound
\[\mathrm{Var}(Z_{5c})\lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{2}^{4 }}{\|\theta\|_{1}^{2}}.\]
Combining the previous bounds we obtain
\[|\mathbb{E}Z_{5}|\lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{2},\quad \mathrm{Var}(Z_{5})\lesssim\frac{\|\beta\circ\theta\|_{2}^{8}\|\theta\|_{2}^{6 }}{\|\theta\|_{1}^{2}}.\]
Next we study \(Z_{6}=Z_{6a}+Z_{6b}\) as defined in (Jin et al., 2021c, Supplement, pg.72), where
\[Z_{6a}=\sum_{\begin{subarray}{c}i,j,k,\ell\\ (dist)\end{subarray}}(\eta_{i}\eta_{\ell}\tilde{\Omega}_{j\ell}\tilde{\Omega }_{ki})(\eta_{j}-\tilde{\eta}_{j})(\eta_{k}-\tilde{\eta}_{k})=\frac{1}{v}\sum_ {\begin{subarray}{c}j,k(dist)\\ s\neq j,t\neq k\end{subarray}}\alpha_{jk}^{(6a)}W_{js}W_{kt}\]
\[Z_{6b}=2\sum_{\begin{subarray}{c}i,j,k,\ell\\ (dist)\end{subarray}}(\eta_{i}\eta_{\ell}\tilde{\Omega}_{jk}\tilde{\Omega}_{ \ell i})(\eta_{j}-\tilde{\eta}_{j})(\eta_{k}-\tilde{\eta}_{k})=\frac{1}{v} \sum_{\begin{subarray}{c}j,k(dist)\\ s\neq j,t\neq k\end{subarray}}\alpha_{jk}^{(6b)}W_{js}W_{kt}\]
and
\[\alpha_{jk}^{(6a)}=\sum_{\begin{subarray}{c}i,\ell(dist)\\ i,\ell\notin\{j,k\}\end{subarray}}\eta_{i}\eta_{\ell}\tilde{\Omega}_{jk}\tilde{ \Omega}_{\ell i}\] \[\alpha_{jk}^{(6b)}=\sum_{\begin{subarray}{c}i,\ell(dist)\\ i,\ell\notin\{j,k\}\end{subarray}}\eta_{i}\eta_{\ell}\tilde{\Omega}_{j\ell} \tilde{\Omega}_{ki}.\]
Thus \(Z_{6a}\) and \(Z_{6b}\) take the same form as \(Z_{5c}\), but with a different setting of \(\alpha_{jk}\). Note that by (E.24) and similar arguments from before,
\[\max(|\alpha_{jk}^{(6a)}|,|\alpha_{jk}^{(6b)}|)\lesssim(\beta_{j}\theta_{j})( \beta_{k}\theta_{k})\|\theta\|_{2}^{2}\|\beta\circ\theta\|_{2}^{2},\]
which is the same as the upper bound on \(|\alpha_{jk}|\) associated to \(Z_{5c}\) given in (E.41). It follows that
\[|\mathbb{E}Z_{6}|\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4 }}{\|\theta\|_{1}^{2}},\quad\mathrm{Var}(Z_{6})\lesssim\frac{\|\beta\circ \theta\|_{2}^{8}\|\theta\|_{2}^{4}}{\|\theta\|_{1}^{2}}.\]
We have proved all claims in Lemma E.9.
#### e.4.5 Proof of Lemma e.10
The terms \(T_{1}\) and \(F\) do not depend on \(\tilde{\Omega}\), and thus the claimed bounds transfer directly from (Jin et al., 2021c, Lemma G.9). Thus we focus on \(T_{2}\). We use the decomposition \(T_{2}=2(T_{2a}+T_{2b}+T_{2c}+T_{2d})\) from (Jin et al., 2021c, Supplement, pg.73) where
\[T_{2a}=\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\eta_{i_{2}}\eta_{i_{3 }}\eta_{i_{4}}\big{[}(\eta_{i_{1}}-\tilde{\eta}_{i_{1}})(\eta_{i_{2}}-\tilde{ \eta}_{i_{2}})(\eta_{i_{3}}-\tilde{\eta}_{i_{3}})\big{]}\cdot\tilde{\Omega}_{i _{4}i_{1}},\] \[T_{2b}=\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\eta_{i_{2}}\eta_{i_{3 }}^{2}\big{[}(\eta_{i_{1}}-\tilde{\eta}_{i_{1}})(\eta_{i_{2}}-\tilde{\eta}_{i_{ 2}})(\eta_{i_{4}}-\tilde{\eta}_{i_{4}})\big{]}\cdot\tilde{\Omega}_{i_{4}i_{1}},\] \[T_{2c}=\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\eta_{i_{1}}\eta_{i_{3 }}\eta_{i_{4}}\big{[}(\eta_{i_{2}}-\tilde{\eta}_{i_{2}})^{2}(\eta_{i_{3}}- \tilde{\eta}_{i_{3}})\big{]}\cdot\tilde{\Omega}_{i_{4}i_{1}},\] \[T_{2d}=\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\eta_{i_{1}}\eta_{i_{3 }}^{2}\big{[}(\eta_{i_{2}}-\tilde{\eta}_{i_{2}})^{2}(\eta_{i_{4}}-\tilde{\eta}_{ i_{4}})\big{]}\cdot\tilde{\Omega}_{i_{4}i_{1}}.\]
We study each term separately.
For \(T_{2a}\), in (Jin et al., 2021c, Supplement, pg.89), we have the decomposition \(T_{2a}=X_{a1}+X_{a2}+X_{a3}+X_{b}\) where
\[X_{a1} =-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{j_{3} \neq i_{3}}\eta_{i_{2}}\eta_{i_{3}}\eta_{i_{4}}W_{i_{1}i_{2}}^{2}W_{i_{3}j_{3} }\widetilde{\Omega}_{i_{1}i_{4}},\] \[X_{a2} =-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{j_{2} \neq i_{2}}\eta_{i_{2}}\eta_{i_{3}}\eta_{i_{4}}W_{i_{1}i_{3}}^{2}W_{i_{2}j_{2} }\widetilde{\Omega}_{i_{1}i_{4}},\] \[X_{a3} =-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{j_{1} \neq i_{1}}\eta_{i_{2}}\eta_{i_{3}}\eta_{i_{4}}W_{i_{2}i_{3}}^{2}W_{i_{1}j_{1} }\widetilde{\Omega}_{i_{1}i_{4}},\] \[X_{b} =-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{ \begin{subarray}{c}j_{1},j_{2},j_{3}\\ j_{k}\neq i_{k},\ell=1,2,3\end{subarray}}\eta_{i_{2}}\eta_{i_{3}}\eta_{i_{4}}W _{i_{1}j_{1}}W_{i_{2}j_{2}}W_{i_{3}j_{3}}\widetilde{\Omega}_{i_{1}i_{4}}.\]
There it is shown that \(\mathbb{E}T_{2a}=0\). Further it is argued that
\[\mathrm{Var}(X_{a1}) =\mathbb{E}X_{a1}^{2}\] \[=\frac{1}{v^{3}}\sum_{\begin{subarray}{c}i_{1},i_{2},i_{3},i_{4} (dist)\\ i_{1},i_{2}^{\prime},i_{3}^{\prime},i_{4}^{\prime}(dist)\end{subarray}}\sum_{ \begin{subarray}{c}j_{3},j_{3}^{\prime}\\ j_{3}\neq i_{3},j_{3}\neq i_{3}^{\prime}\end{subarray}}\eta_{i_{2}}\eta_{i_{3} }\eta_{i_{4}}\eta_{i_{2}^{\prime}}\eta_{i_{3}^{\prime}}\eta_{i_{4}^{\prime}} \mathbb{E}[W_{i_{1}i_{2}}^{2}W_{i_{3}j_{3}}W_{i_{1}^{\prime}i_{2}^{\prime}}^{2} W_{i_{3}^{\prime}j_{4}^{\prime}}][\widetilde{\Omega}_{i_{1}i_{4}}\widetilde{\Omega}_{i_{1}i_{4} ^{\prime}}\] \[\equiv V_{A}+V_{B}+V_{C},\] (E.42)
where the terms \(V_{A},V_{B},V_{C}\) correspond to the contributions from cases \(A,B,C\), respectively, described in (Jin et al., 2021c, Supplement, pg.89). Concretely, the nonzero terms of (E.42) fall into three cases:
Case A. \(\{i_{1},i_{2}\}=\{i_{3}^{\prime},j_{3}^{\prime}\}\) and \(\{i_{3},j_{3}\}=\{i_{1}^{\prime},i_{2}^{\prime}\}\)
Case B. \(\{i_{3},j_{3}\}=\{i_{3}^{\prime},j_{3}^{\prime}\}\) and \(\{i_{1},i_{2}\}=\{i_{1}^{\prime},i_{2}^{\prime}\}\)
Case C. \(\{i_{3},j_{3}\}=\{i_{3}^{\prime},j_{3}^{\prime}\}\) and \(\{i_{1},i_{2}\}\neq\{i_{1}^{\prime},i_{2}^{\prime}\}\).
Here \(V_{A},V_{B},\) and \(V_{C}\) are defined to be the contributions from each case.
Applying (E.2), (E.22), and (E.20),
\[|\eta_{i_{2}}\eta_{i_{3}}\eta_{i_{4}}\eta_{i_{2}^{\prime}}\eta_{i_ {3}^{\prime}}\eta_{i_{4}^{\prime}}^{*}\widetilde{\Omega}_{i_{1}i_{4}}\widetilde {\Omega}_{i_{1}i_{4}}\widetilde{\Omega}_{i_{1}i_{4}^{\prime}}| \lesssim\theta_{i_{2}}\theta_{i_{3}}\theta_{i_{4}}\theta_{i_{2} }\theta_{i_{3}^{\prime}}\theta_{i_{4}^{\prime}}(\beta_{i_{1}}\theta_{i_{1}})( \beta_{i_{4}}\theta_{i_{4}})(\beta_{i_{1}^{\prime}}\theta_{i_{1}^{\prime}})( \beta_{i_{4}^{\prime}}\theta_{i_{4}^{\prime}})\] \[\lesssim\theta_{i_{2}}\theta_{i_{3}}\theta_{i_{4}}\theta_{i_{2} ^{\prime}}\theta_{i_{3}^{\prime}}\theta_{i_{4}^{\prime}}(\beta_{i_{1}}\theta_{i _{1}})(\beta_{i_{4}}\theta_{i_{4}})\theta_{i_{1}^{\prime}}(\beta_{i_{4}^{ \prime}}\theta_{i_{2}^{\prime}}).\] (E.43)
Note that using the last inequality reduces the required casework while still yielding a good enough bound. Mimicking the casework in Case A of (Jin et al., 2021c, Supplement, pg.90) and applying (E.24), we have
\[V_{A} \lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{\begin{subarray}{c}i_{1}, i_{2},i_{3}\\ i_{4},i_{4}^{\prime},i_{3}\end{subarray}}\sum_{\begin{subarray}{c}b_{1},b_{2}\\ (b_{1}+b_{2}=1)\end{subarray}}\beta_{i_{1}}\beta_{i_{4}}\beta_{i_{4}}\theta_{i _{1}^{\prime}}^{2+b_{1}}\theta_{i_{2}}^{2+b_{2}}\theta_{i_{3}}^{3}\theta_{i_{ 2}}^{2}\theta_{i_{4}}^{2}\theta_{i_{4}^{\prime}}^{2}\] \[\lesssim\frac{1}{\|\theta\|_{1}^{6}}\big{(}\|\beta\circ\theta\|_{2} ^{3}\|\theta\|_{2}^{3}\|\theta\|_{2}^{2}\|\theta\|_{3}^{3}+\|\beta\circ\theta\|_{2} ^{3}\|\theta\|_{2}^{3}\|\theta\|_{2}^{2}\|\theta\|_{3}^{6}\big{)}\lesssim\frac{\| \beta\circ\theta\|_{2}^{3}\|\theta\|_{2}^{9}}{\|\theta\|_{1}^{6}}.\]
Similarly, applying (E.43) along with (E.22), (E.20), and (E.24) yields
\[V_{B} \lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{\begin{subarray}{c}i_{1}, i_{2},i_{3}\\ i_{4},i_{4}^{\prime},i_{3}\end{subarray}}\sum_{\begin{subarray}{c}c_{1},c_{2}\\ (c_{1}+c_{2}=1)\end{subarray}}\beta_{i_{1}}\beta_{i_{4}}\beta_{i_{4}}\theta_{i _{1}^{\prime}}^{3}\theta_{i_{2}}^{2+c_{1}}\theta_{i_{3}}^{1+c_{2}}\theta_{i_{ 2}}^{2}\theta_{i_{2}}^{2}\theta_{i_{4}^{\prime}}^{2}\lesssim\frac{\|\beta\circ \theta\|_{2}^{3}\|\theta\|_{2}^{2}}{\|\theta\|_{1}^{5}}.\]
and
\[V_{C} \lesssim\sum_{\begin{subarray}{c}i_{1},i_{2},i_{3},i_{4}\\ i_{1}^{\prime},i_{2}^{\prime},i_{4}^{\prime},i_{3}\end{subarray}}\sum_{ \begin{subarray}{c}c_{1},c_{2}\\ (c_{1}+c_{2}=1)\end{subarray}}\beta_{i_{1}}\beta_{i_{4}}\beta_{i_{1}}\beta_{i_{ 4}}\beta_{i_{1}^{\prime}}\beta_{i_{4}}^{2}\theta_{i_{1}}^{2}\theta_{i_{2}}^{2} \theta_{i_{3}}^{2}\theta_{i_{3}}^{2+c_{1}}\theta_{i_{4}}^{1+c_{2}}\theta_{i_{ 4}}^{2}\theta_{i_{1}^{\prime}}^{2}\theta_{i_{2}^{\prime}}^{2}\theta_{i_{4}^{ \prime}}^{2}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{10}}{\| \theta\|_{1}^{5}}.\
Thus
\[\mathrm{Var}(X_{a1})\lesssim\|\beta\circ\theta\|_{2}^{4}.\]
The arguments for \(X_{a2}\) and \(X_{a3}\) are similar, and the corresponding \(V_{A},V_{B},V_{C}\) satisfy the same inequalities above. We simply state the bounds:
\[\mathbb{E}X_{a_{2}}=\mathbb{E}X_{a3}=0,\quad\mathrm{Var}(X_{a2}) \lesssim\|\beta\circ\theta\|_{2}^{4},\quad\mathrm{Var}(X_{a3})\lesssim\|\beta \circ\theta\|_{2}^{4}.\]
Next we consider \(X_{b}\) as defined in (Jin et al., 2021c, Supplement, pg.89). We have \(\mathbb{E}X_{b}=0\) and focus on the variance. In (Jin et al., 2021c, Supplement, pg.91) it is shown that
\[\mathrm{Var}(X_{b}) =\mathbb{E}[X_{b}^{2}]\] \[=v^{-3}\sum_{\begin{subarray}{c}i_{1},i_{2},i_{3},i_{4}(dist)\\ i_{1}^{\prime},i_{2}^{\prime},i_{3}^{\prime},i_{4}^{\prime}(dist)\end{subarray}} \sum_{\begin{subarray}{c}j_{3},j_{3}^{\prime}\\ j_{3}\neq i_{3},j_{3}^{\prime}\neq i_{3}^{\prime}\end{subarray}}\eta_{i_{2}} \eta_{i_{3}}\eta_{i_{4}}\eta_{i_{2}}\eta_{i_{3}}\eta_{i_{4}}\mathbb{E}[W_{i_{1},j_{1}}W_{i_{2}j_{2}}W_{i_{3}i_{3}}W_{i_{1}^{\prime},j_{1}^{\prime}}W_{i_{2}^ {\prime},j_{2}^{\prime}}W_{i_{3}^{\prime},j_{3}^{\prime}}]\widetilde{\Omega}_ {i_{1}i_{4}}\widetilde{\Omega}_{i_{1}^{\prime}i_{4}^{\prime}},\]
Note that
\[\mathbb{E}[W_{i_{1}j_{1}}W_{i_{2}j_{2}}W_{i_{3}j_{3}}W_{i_{1}^{\prime}j_{1}^{ \prime}}W_{i_{2}^{\prime}j_{2}^{\prime}}W_{i_{3}^{\prime}j_{3}^{\prime}}]\neq 0\]
if and only if the two sets of random variables \(\{W_{i_{1}j_{1}},W_{i_{2}j_{2}},W_{i_{3}j_{3}}\}\) and \(\{W_{i_{1}^{\prime}j_{1}^{\prime}},W_{i_{2}^{\prime}j_{2}^{\prime}},W_{i_{3}^ {\prime}j_{3}^{\prime}}\}\) are identical. Applying (E.22) and (E.20),
\[|\eta_{i_{2}}\eta_{i_{3}}\eta_{i_{4}}\eta_{i_{2}}\eta_{i_{3}} \eta_{i_{4}}^{*}\widetilde{\Omega}_{i_{1}i_{4}}\widetilde{\Omega}_{i_{1}^{ \prime}i_{4}^{\prime}}| \lesssim\theta_{i_{2}}\theta_{i_{3}}\theta_{i_{4}}\theta_{i_{2}} \theta_{i_{3}^{\prime}}\theta_{i_{4}^{\prime}}(\beta_{i_{1}}\theta_{i_{4}}) \theta_{i_{4}}(\beta_{i_{1}^{\prime}}\theta_{i_{4}^{\prime}})\] \[\lesssim\beta_{i_{1}}\beta_{i_{4}}\beta_{i_{4}^{\prime}}\theta_{i _{4}}^{1+a_{1}}\theta_{j_{1}}^{a_{2}}\theta_{i_{2}}^{1+a_{3}}\theta_{j_{2}}^{a _{1}}\theta_{i_{3}}^{1+a_{3}}\theta_{j_{3}}^{a_{6}}\theta_{i_{4}}^{2}\theta_{ i_{4}^{\prime}}^{2}\]
if \(\mathbb{E}[W_{i_{1}j_{1}}W_{i_{2}j_{2}}W_{i_{3}j_{3}}W_{i_{1}^{\prime}j_{1}^{ \prime}}W_{i_{2}^{\prime}j_{2}^{\prime}}W_{i_{3}^{\prime}j_{3}^{\prime}}]\neq 0\), where \(a_{i}\in\{0,1\}\) and \(\sum_{i=1}^{6}a_{i}=3\). Thus by (E.1), (E.2), and (E.24),
\[\mathrm{Var}(X_{b}) \lesssim\max_{a}\frac{1}{\|\theta\|_{1}^{6}}\sum_{\begin{subarray}{ c}i_{1},i_{2},i_{3},i_{4}\\ i_{4}^{\prime},j_{1},j_{2},j_{3}\end{subarray}}\beta_{i_{1}}\beta_{i_{4}}\beta_ {i_{4}}\beta_{i_{4}^{\prime}}\theta_{i_{1}}^{2+a_{1}}\theta_{j_{1}}^{1+a_{2} }\theta_{i_{2}}^{2+a_{3}}\theta_{j_{2}}^{1+a_{4}}\theta_{i_{3}}^{2+a_{5}} \theta_{j_{3}}^{1+a_{6}}\theta_{i_{4}}^{2}\theta_{i_{4}^{\prime}}^{2}\] \[\lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{\begin{subarray}{c}i_{1}, i_{2},i_{3},i_{4}\\ i_{4}^{\prime},j_{1},j_{2},j_{3}\end{subarray}}\beta_{i_{1}}\beta_{i_{4}}\beta_ {i_{4}}\beta_{i_{4}}\theta_{i_{1}^{\prime}}^{2}\theta_{i_{1}}^{1}\theta_{i_{2} }^{2}\theta_{i_{2}}^{1}\theta_{i_{3}}^{2}\theta_{i_{3}}^{1}\theta_{i_{4}}^{2}\theta _{i_{4}^{\prime}}^{2}\] \[\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{2}\| \theta\|_{2}^{3}}{\|\theta\|_{1}^{6}}\lesssim\frac{\|\beta\circ\theta\|_{2}^{3} \|\theta\|_{2}^{2}}{\|\theta\|_{1}^{3}}\lesssim\|\beta\circ\theta\|_{2}^{3}\| \theta\|_{2}.\]
Combining the results for \(X_{a1},X_{a2},X_{a3}\) and \(X_{b}\), we conclude that
\[\mathbb{E}T_{2a}=0,\qquad\mathrm{Var}(T_{2a})\lesssim\|\beta\circ\theta\|_{2}^{ 4}\|\theta\|_{2}.\]
The argument for \(T_{2b}\) is similar to the one for \(T_{2a}\), so we simply state the results:
\[\mathbb{E}T_{2b}=0,\qquad\mathrm{Var}(T_{2b})\lesssim\|\beta\circ\theta\|_{2}^{ 2}\|\theta\|_{2}.\]
Next we study \(T_{2c}\), providing full details for completeness. Using the definition of \(T_{2c}\) in (Jin et al., 2021c, Supplement, pg.92), we have the following decomposition by careful casework.
\[Y_{a} =-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\eta_{i_{1} }\eta_{i_{3}}\eta_{i_{4}}W_{i_{2}i_{3}}^{3}\widetilde{\Omega}_{i_{1}i_{4}},\] \[Y_{b1} =-\frac{1}{v^{3/2}}\sum_{\begin{subarray}{c}i_{1},i_{2},i_{3},i_{4 }(dist)\\ j_{3}\neq i_{2},j_{3}\neq i_{3}\end{subarray}}\eta_{i_{1}}\eta_{i_{3}}\eta_{i_{4} }W_{i_{2}j_{2}}^{2}W_{i_{3}j_{3}}\widetilde{\Omega}_{i_{1}i_{4}},\] \[Y_{b2} =-\frac{1}{v^{3/2}}\sum_{\begin{subarray}{c}i_{1},i_{2},i_{3},i_{4 }(dist)\\ i_{2}\neq i_{3},i_{4}(dist)\end{subarray}}\sum_{\begin{subarray}{c}\ell\in\{i_{3},i_{2} \}\\ \ell\in\{i_{3},i_{2}\}\end{subarray}}\eta_{i_{1}}\eta_{i_{3}}\eta_{i_{4}}W_{i_{2 }i_{3}}^{2}W_{i_{2}i_{2}}\widetilde{\Omega}_{i_{1}i_{4}},\] \[Y_{b3} =-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{j_{2} \notin\{i_{3},i_{2}\}}\eta_{i_{1}}\eta_{i_{3}}\eta_{i_{4}}W_{i_{2}i_{3}}^{2}W_{ i_{2}j_{2}}\widetilde{\Omega}_{i_{1}i_{4}},\]
\[Y_{c}=-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{ \begin{subarray}{c}j_{2},\ell_{2},j_{3},\\ j_{2}\neq i_{2},\ell_{2},j_{3}\neq i_{3}\end{subarray}}\eta_{i_{1}}\eta_{i_{3} }\eta_{i_{4}}W_{i_{2}j_{2}}W_{i_{2}\ell_{2}}W_{i_{3}j_{3}}\widetilde{\Omega}_{ i_{1}i_{4}}.\]
Note that, by the change of variables \(\ell_{2}\to j_{2}\), it holds that \(Y_{\hat{b}2}=Y_{\hat{b}3}\).
The only term with nonzero mean is \(Y_{a}\). We have by (E.18), (E.20), (E.22), and (E.24) that
\[|\mathbb{E}Y_{a}| \lesssim\frac{1}{\|\theta\|_{1}^{5}}\sum_{i_{1},i_{2},i_{3},i_{4}} \theta_{i_{4}}\theta_{i_{3}}\theta_{i_{4}}(\beta_{i_{1}}\theta_{i_{1}})(\beta _{i_{4}}\theta_{i_{4}})\cdot|\mathbb{E}W_{i_{2}i_{3}}^{3}|\lesssim\frac{1}{\| \theta\|_{1}^{5}}\sum_{i_{1},i_{2},i_{3},i_{4}}\beta_{i_{1}}\beta_{i_{4}} \theta_{i_{1}}^{2}\theta_{i_{2}}^{2}\theta_{i_{3}}^{2}\] \[\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{6}}{\| \theta\|_{1}^{2}}.\]
For the variance, by independence of \(\{W_{ij}\}_{i>j}\), (E.2), (E.20), and (E.24), we have
\[\mathrm{Var}(Y_{a}) \lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{2},i_{3},i_{4}} \big{(}\sum_{i_{1},i_{4}}\theta_{i_{1}}\theta_{i_{3}}\theta_{i_{4}}(\beta_{i_{ 1}}\theta_{i_{1}})(\beta_{i_{4}}\theta_{i_{4}})\big{)}^{2}\theta_{i_{2}}\theta _{i_{3}}\lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{2},i_{3}}\|\beta\circ \theta\|_{2}^{4}\|\theta\|_{2}^{4}\theta_{i_{2}}\theta_{i_{3}}^{2}\] \[\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{6}}{\| \theta\|_{1}^{5}}.\]
For \(Y_{\hat{b}1},Y_{\hat{b}2},Y_{\hat{b}3}\) we make note of the identity
\[W_{ij}^{2}=(1-2\Omega_{ij})W_{ij}+\Omega_{ij}(1-\Omega_{ij})\equiv A_{ij}W_{ij }+B_{ij}.\] (E.44)
Write
\[Y_{\hat{b}1} =-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{ \begin{subarray}{c}(i_{2},j_{2})\neq(j_{3},i_{3})\\ j_{2}\neq i_{2},j_{3}\neq i_{3}\end{subarray}}\eta_{i_{1}}\eta_{i_{3}}\eta_{i_ {4}}A_{i_{2}j_{2}}W_{i_{2}j_{2}}W_{i_{3}j_{3}}\widetilde{\Omega}_{i_{1}i_{4}}\] \[\quad-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{ \begin{subarray}{c}(i_{2},j_{2})\neq(j_{3},i_{3})\\ j_{2}\neq i_{2},j_{3}\neq i_{3}\end{subarray}}\eta_{i_{1}}\eta_{i_{3}}\eta_{i_ {4}}B_{i_{2}j_{2}}W_{i_{3}j_{3}}\widetilde{\Omega}_{i_{1}i_{4}}\equiv Y_{\hat{ b}1,A}+Y_{\hat{b}1,B}.\]
By similar arguments from before, and noting that \(|A_{i_{2},j_{2}}|\lesssim 1\),
\[\mathrm{Var}(Y_{\hat{b}1,A}) \lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{\begin{subarray}{c}(i_{ 2},j_{2})\neq(j_{3},i_{3})\\ j_{2}\neq i_{2},j_{3}\neq i_{3}\end{subarray}}\bigg{(}\sum_{i_{1},i_{4}}\eta_{ i_{1}}\eta_{i_{3}}\eta_{i_{4}}(\beta_{i_{1}}\theta_{i_{1}})(\beta_{i_{4}} \theta_{i_{4}})\bigg{)}^{2}|\mathbb{E}W_{i_{2}j_{2}}W_{i_{3}j_{3}}|\] \[\lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{2},j_{2},i_{3},j_{3}} \bigg{(}\sum_{i_{1},i_{4}}\eta_{i_{1}}\eta_{i_{3}}\eta_{i_{4}}(\beta_{i_{1}} \theta_{i_{1}})(\beta_{i_{4}}\theta_{i_{4}})\bigg{)}^{2}\cdot\theta_{i_{2}} \theta_{j_{2}}\theta_{i_{3}}\theta_{j_{3}}\] \[\lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{2},j_{2},i_{3},j_{3} }\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\theta_{i_{2}}\theta_{j_{2}} \theta_{i_{3}}^{3}\theta_{j_{3}}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\| \theta\|_{2}^{6}}{\|\theta\|_{1}^{3}}.\]
Similarly, using \(|B_{ij}|\lesssim\Omega_{ij}\lesssim\theta_{i}\theta_{j}\),
\[\mathrm{Var}(Y_{\hat{b}1,B}) \lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{3},j_{3}(dist)} \bigg{(}\sum_{i_{1},i_{2},i_{4},j_{2}}\eta_{i_{1}}\eta_{i_{3}}\eta_{i_{4}}\theta_ {i_{2}}\theta_{j_{2}}(\beta_{i_{1}}\theta_{i_{1}})(\beta_{i_{4}}\theta_{i_{4}}) \bigg{)}^{2}\cdot|\mathbb{E}W_{i_{3},j_{3}}|\] \[\lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{3},j_{3}}\|\beta\circ \theta\|_{2}^{4}\|\theta\|_{2}^{4}\|\theta\|_{1}^{2}\theta\|_{1}^{2}\theta_{i_{ 3}}^{3}\theta_{j_{3}}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{6} }{\|\theta\|_{1}^{3}}.\]
It follows that
\[\mathrm{Var}(Y_{\hat{b}1})\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^ {6}}{\|\theta\|_{1}^{3}}\]
To control \(\mathrm{Var}(Y_{\hat{b}2})\), again we invoke the identity (E.44) to write
\[Y_{\hat{b}2}=-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{ \ell_{2}\notin\{i_{3},i_{2}\}}\eta_{i_{1}}\eta_{i_{3}}\eta_{i_{4}}A_{i_{2}i_{3} }W_{i_{2}i_{3}}W_{i_{2}\ell_{2}}\widetilde{\Omega}_{i_{1}i_{4}}\]
\[-\frac{1}{v^{3/2}}\sum_{i_{1},i_{2},i_{3},i_{4}(dist)}\sum_{\ell_{2}\notin\{i_{3}, i_{2}\}}\eta_{i_{1}}\eta_{i_{3}}\eta_{i_{4}}B_{i_{2}i_{3}}W_{i_{2}\ell_{2}} \widetilde{\Omega}_{i_{1}i_{4}}\equiv Y_{b2,A}+Y_{b2,B}.\]
Using similar arguments from before, we have
\[\mathrm{Var}(Y_{b2,A}) \lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{2},i_{3}\ell_{2}} \bigg{(}\sum_{i_{1}i_{4}}\theta_{i_{1}}\theta_{i_{3}}\theta_{i_{4}}(\beta_{i_ {1}}\theta_{i_{1}})(\beta_{i_{4}}\theta_{i_{4}})\bigg{)}^{2}\theta_{i_{2}}^{2 }\theta_{i_{3}}\theta_{\ell_{2}}\] \[\lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{2}i_{3}\ell_{2}}\| \beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\theta_{i_{2}}^{2}\theta_{i_{3}}^ {3}\theta_{\ell_{2}}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^ {8}}{\|\theta\|_{1}^{5}}.\]
Furthermore,
\[\mathrm{Var}(Y_{b2,B}) \lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{2},\ell_{2}}\bigg{(} \sum_{i_{1},i_{3},i_{4}}\theta_{i_{1}}\theta_{i_{3}}\theta_{i_{4}}(\beta_{i_ {1}}\theta_{i_{1}})(\beta_{i_{4}}\theta_{i_{4}})\theta_{i_{2}}\theta_{i_{3}} \bigg{)}^{2}\theta_{i_{2}}\theta_{\ell_{2}}\] \[\lesssim\frac{1}{\|\theta\|_{1}^{6}}\sum_{i_{2},\ell_{2}}\| \beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\theta\|_{2}^{4}\theta_{i_{2}}^{2 }\theta_{i_{2}}\lesssim\frac{\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{10}} {\|\theta\|_{1}^{5}}.\]
Since \(Y_{b2}=Y_{b3}\), we have
\[\mathrm{Var}(Y_{b2})=\mathrm{Var}(Y_{b3})\lesssim\frac{\|\beta\circ\theta\|_ {2}^{4}\|\theta\|_{2}^{10}}{\|\theta\|_{1}^{5}}.\]
Next we study the variance of \(Y_{2c}\). For notational brevity, let
\[\mathcal{R}_{i_{1},i_{2},i_{3}}=\bigg{\{}(j_{2},\ell_{2},j_{3}) \bigg{|}j_{2}\neq i_{2},\ell_{2}\neq i_{2},j_{3}\neq i_{3}j_{2}\neq\ell_{2},(i _{2},j_{2})\neq(j_{3},i_{3}),(i_{2},\ell_{2})\neq(j_{3},i_{3})\bigg{\}}.\]
We have
\[\mathrm{Var}(Y_{c})\] \[=\frac{1}{v^{3}}\sum_{\begin{subarray}{c}i_{1},i_{2},i_{3},i_{4} (dist)\\ i_{1}^{\prime},i_{2}^{\prime},i_{3}^{\prime},i_{4}^{\prime}(dist)\\ \end{subarray}}\sum_{\begin{subarray}{c}(j_{2},\ell_{2},j_{3})\in\mathcal{R}_{ i_{1},i_{2},i_{3}}\\ (j_{2}^{\prime},\ell_{2}^{\prime},j_{3}^{\prime})\in\mathcal{R}_{i_{1}^{\prime},i_{2}^{ \prime},i_{3}^{\prime}}^{\prime}\end{subarray}}\eta_{i_{1}}\eta_{i_{3}}\eta_{ i_{4}}\widetilde{\Omega}_{i_{1}i_{4}}\eta_{i_{1}^{\prime}}\eta_{i_{3}^{ \prime}}\eta_{i_{4}^{\prime}}\widetilde{\Omega}_{i_{1}^{\prime}i_{4}^{\prime}} \mathbb{E}\big{[}W_{i_{2}j_{2}}W_{i_{2}\ell_{2}}W_{i_{3}j_{3}}W_{i_{2}^{\prime} _{2}}W_{i_{2}^{\prime}\ell_{2}^{\prime}}W_{i_{2}^{\prime}\ell_{2}^{\prime}}W_{i _{2}^{\prime}\ell_{2}^{\prime}}W_{i_{2}^{\prime}\ell_{2}^{\prime}}\big{]}\]
Note that \(W_{i_{2}j_{2}}W_{i_{2}\ell_{2}}W_{i_{3}j_{3}}\) and \(W_{i_{2}^{\prime}j_{2}^{\prime}}W_{i_{2}^{\prime}\ell_{2}^{\prime}}W_{i_{3}^{ \prime}j_{3}^{\prime}}\) above are uncorrelated unless
\[\bigg{\{}\{i_{2},j_{2}\},\{i_{2},\ell_{2}\},\{i_{3},j_{3}\}\bigg{\}}=\bigg{\{} \{i_{2}^{\prime},j_{2}^{\prime}\},\{i_{2}^{\prime},\ell_{2}^{\prime}\},\{i_{3} ^{\prime},j_{3}^{\prime}\}\bigg{\}}.\]
In particular, \(i_{3}^{\prime}\in\{i_{2},j_{2},\ell_{2},i_{3},j_{3}\}\) when the above holds. Hence for some choice of \(a_{i}\in\{0,1\}\) with \(\sum_{i=1}^{5}a_{i}=1\),
\[\mathrm{Var}(Y_{c}) \lesssim\frac{1}{v^{3}}\sum_{\begin{subarray}{c}i_{1},i_{2},i_{3}, i_{4}\\ i_{1}^{\prime},i_{4}^{\prime},j_{2},\ell_{2},j_{3}\end{subarray}}\theta_{i_{2}}^{a_{1} }\theta_{i_{2}}^{a_{2}}\theta_{i_{3}}^{a_{3}}\theta_{i_{3}}^{a_{4}}\theta_{j_{3} }^{a_{5}}\cdot\theta_{i_{1}}\theta_{i_{3}}\theta_{i_{4}}(\beta_{i_{1}}\theta_{ i_{1}})(\beta_{i_{4}}\theta_{i_{4}})\theta_{i_{1}^{\prime}}\theta_{i_{4}}(\beta_{i_{1}^{ \prime}}\theta_{i_{4}})(\beta_{i_{4}}\theta_{i_{1}^{\prime}})(\beta_{i_{4}} \theta_{i_{4}^{\prime}})\cdot\theta_{i_{2}}^{2}\theta_{j_{2}}\theta_{\ell_{2}} \theta_{i_{3}}\theta_{j_{3}}\] \[\lesssim\frac{1}{v^{3}}\sum_{\begin{subarray}{c}i_{1},i_{2},i_{3}, i_{4}\\ i_{1}^{\prime},i_{4}^{\prime},j_{2},\ell_{2},j_{3}\end{subarray}}\beta_{i_{1}}\beta_{i_{1}^{ \prime}}\beta_{i_{4}}\beta_{i_{1}}\beta_{i_{4}}\beta_{i_{1}^{\prime}}\theta_{i_{2}} ^{2}\theta_{i_{2}}^{2+a_{1}}\theta_{i_{3}}^{2+a_{4}}\theta_{i_{4}}^{2}\theta_{i_ {1}^{\prime}}\theta_{i_{4}^{\prime}}^{2}\theta_{j_{2}}^{1+a_{2}}\theta_{i_{2}}^{1+ a_{3}}\theta_{j_{3}}^{1+a_{5}}\] \[\lesssim\frac{1}{v^{3}}\sum_{\begin{subarray}{c}i_{1},i_{2},i_{3}, i_{4}\\ i_{1}^{\prime},i_{4}^{\prime},j_{2},\ell_{2},j_{3}\end{subarray}}\beta_{i_{1}}\beta_{i_{1}^{ \prime}}\beta_{i_{1}^{\prime}}\beta_{i_{1}}\beta_{i_{2}}\beta_{i_{4}}\theta_{i_ {2}}^{2}\theta_{i_{3}}^{2}\theta_{i_{4}}^{2}\theta_{i_{1}^{\prime}}^{2}\theta_{j_ {2}}^{1}\theta_{i_{2}}^{1}\theta_{j_{3}}^{1}\lesssim\frac{\|\beta\circ\theta\|_{2}^ {4}\|\theta\|_{2}^{8}}{\|\theta\|_{1}^{3}},\]
where in the last line we apply (E.2) followed by (E.24). Combining our results above we have
\[|\mathbb{E}T_{2c}|\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{4}}{\| \theta\|_{1}^{2}},\qquad\mathrm{Var}(T_{2c})\lesssim\frac{\|\beta\circ\theta\|_{2}^ {4}\|\theta\|_{2}^{6}}{\|\theta\|_{1}^{2}}.\]
The argument for \(T_{2d}\) is omitted since it is similar to the one for \(T_{2c}\) (note that the two terms have similar structure). The results are stated below.
\[|\mathbb{E}T_{2d}|\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{4}}{ \|\theta\|_{1}^{2}},\qquad\mathrm{Var}(T_{2d})\lesssim\frac{\|\beta\circ\theta \|_{2}^{4}\|\theta\|_{2}^{8}}{\|\theta\|_{1}^{3}}.\]
Combining the results for \(T_{2a},\ldots,T_{2d}\) yields
\[|\mathbb{E}T_{2}|\lesssim\frac{\|\beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{4} }{\|\theta\|_{1}^{2}},\qquad\mathrm{Var}(T_{2})\lesssim\frac{\|\beta\circ \theta\|_{2}^{4}\|\theta\|_{2}^{8}}{\|\theta\|_{1}^{2}},\]
as desired.
#### e.4.6 Proof of Lemma e.11
As before, we only need to analyze the alternative hypothesis. In (Jin et al., 2021c, Supplement,pg.103) it is shown that \(\tilde{Q}^{*}-Q^{*}\) is a sum of \(O(1)\) terms of the form
\[Y=\Big{(}\frac{v}{V}\Big{)}^{N_{\tilde{r}}}\sum_{i,j,k,\ell( dist)}a_{ij}b_{jk}c_{k\ell}d_{\ell i},\] (E.46)
where \(a,b,c,d\in\{\widetilde{\Omega},W,\delta,-(\tilde{\eta}-\eta)(\tilde{\eta}- \eta)^{\mathsf{T}}\}\), and \(N_{\tilde{r}}\) denotes the number of \(a,b,c,d\) that are equal to \(-(\tilde{\eta}-\eta)(\tilde{\eta}-\eta)^{\mathsf{T}}\).
Similarly, let \(N_{W}\) denote the number of \(a,b,c,d\) that are equal to \(W\), and \(N_{\tilde{\Omega}}\) and \(N_{\delta}\) are similarly defined. Write
\[Y=\Big{(}\frac{v}{V}\Big{)}^{m}X,\qquad\text{where}\quad X=\sum _{i,j,k,\ell(dist)}a_{ij}b_{jk}c_{k\ell}d_{\ell i}.\] (E.47)
Note that for this proof, we do not need the explicit decomposition: we only will use the fact that \(\tilde{Q}^{*}-Q^{*}\) is a sum of \(O(1)\) terms. At times, we refer to these terms of the form \(Y\) composing \(\tilde{Q}^{*}-Q^{*}\) as _post-expansion sums_.
In Jin et al. (2021c) it is shown that \(4\geq N_{\tilde{r}}\geq 1\) for every post-expansion sum (note that the upper bound of \(4\) is trivial). It turns out that this is the _only_ constraint on the post-expansion sums; so we need to analyze every single possible combination of nonnegative integers \((N_{\tilde{\Omega}},N_{W},N_{\delta},N_{\tilde{r}})\) where their sum is \(4\) and \(N_{\tilde{r}}\geq 1\) and then arrange \(a,b,c,d\in\{\tilde{\Omega},W,\delta,-(\tilde{\eta}-\eta)(\tilde{\eta}-\eta)^ {\mathsf{T}}\}\) in all possible ways according to (E.46). This leads to a total of \(34\) possibilities, all of which are shown in Table 1 reproduced from Jin et al. (2021c).
In (Jin et al., 2021c, Supplement,pg.103) it is shown that
\[|\mathbb{E}[Y-X]| \leq o(\|\theta\|_{2}^{-2})\sqrt{\mathbb{E}[X^{2}]}+o(1),\text{ and}\] \[\mathrm{Var}(Y) \leq 2\mathrm{Var}(X)+o(\|\theta\|_{2}^{-4})\mathbb{E}[X^{2}]+o(1).\] (E.48)
The proof of (E.48) in Jin et al. (2021c) only requires the heterogeneity assumptions (E.2)-(E.4) and the following two conditions. First, we must have the tail inequality
\[\mathbb{P}(|V-v|>t)\leq\begin{cases}2\exp\bigl{(}-\frac{C_{1}}{ \|\theta\|_{1}^{2}}t^{2}\bigr{)},&\text{when }x_{n}\|\theta\|_{1}\leq t\leq\|\theta\|_{1}^{2},\\ 2\exp\bigl{(}-C_{2}t\bigr{)},&\text{when }t>\|\theta\|_{1}^{2}.\end{cases}\] (E.49)
Second, it must hold that \(|Y-X|\) is dominated by a polynomial in \(V\). See (Jin et al., 2021c, Lemma G.10 and G.11) for further details. Both conditions are satisfied in our setting, so indeed (E.48) applies.
Let \(N_{W}\) and \(N_{\delta}\) denote the number of \(a,b,c,d\) that are equal to \(W\) and \(\delta\), respectively. As in Jin et al. (2021c), we define
\[N_{W}^{*}=N_{W}+N_{\delta}+2N_{\tilde{r}}\] (E.50)
and divide our analysis into parts based on this parameter.
Analysis of terms with \(N_{W}^{*}\leq 4\)For convenience, we reproduce Table G.5 from Jin et al. (2021c) in Table 2. The left column of Table 2 lists all of the terms with \(N_{W}^{*}\leq 4\), where note that factors of \((\frac{v}{V})^{N_{F}}\) are removed. In the right column terms are listed that have similar structure to those on the left. Precisely, a term in the left column has the form
\[X=\sum_{i_{1},\ldots,i_{m}\in\mathcal{R}}c_{i_{1},\ldots,i_{m}}G_{i_{1}, \ldots,i_{m}},\]
and its adjacent term on the right column has the form
\[X^{*}=\sum_{i_{1},\ldots,i_{m}\in\mathcal{R}}c_{i_{1},\ldots,i_{m}}^{*}G_{i_{1 },\ldots,i_{m}},\]
\begin{table}
\begin{tabular}{l l l l l l} Notation & \(\#\) & \(N_{F}\) & \((N_{\delta},N_{\overline{\Omega}},N_{W})\) & Examples & \(N_{W}^{*}\) \\ \hline \(R_{1}\) & 4 & 1 & (0, 0, 3) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}W_{jk}W_{k\ell}W_{\ell i}\) & 5 \\ \(R_{2}\) & 8 & 1 & (0, 1, 2) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\Omega}_{jk}W_{k\ell}W_{\ell i}\) & 4 \\ \(R_{3}\) & 4 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}W_{jk}\bar{\Omega}_{k\ell}W_{\ell i}\) & 4 \\ \(R_{4}\) & 8 & 1 & (0, 2, 1) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\Omega}_{jk}\bar{\Omega}_{k\ell}W_{ \ell i}\) & 3 \\ \(R_{5}\) & 4 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\Omega}_{jk}\bar{\Omega}_{k\ell}\bar{ \Omega}_{\ell i}\) & 3 \\ \(R_{6}\) & 4 & 1 & (0, 3, 0) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\Omega}_{jk}\bar{\Omega}_{k\ell}\bar{ \Omega}_{\ell i}\) & 2 \\ \(R_{7}\) & 8 & 1 & (1, 0, 2) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}W_{k\ell}W_{\ell i}\) & 5 \\ \(R_{8}\) & 4 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}W_{ \ell i}\) & 5 \\ \(R_{9}\) & 8 & 1 & (1, 1, 1) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{\ell\ell}W _{\ell i}\) & 4 \\ \(R_{10}\) & 8 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\Omega}_{jk}W_{k\ell}\bar{ \Omega}_{\ell i}\) & 4 \\ \(R_{11}\) & 8 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\delta_{k\ell}\bar{ \Omega}_{\ell i}\) & 4 \\ \(R_{12}\) & 8 & 1 & (1, 2, 0) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}\bar{ \Omega}_{\ell i}\) & 3 \\ \(R_{13}\) & 4 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\Omega}_{jk}\delta_{k\ell}\bar{ \Omega}_{\ell i}\) & 3 \\ \(R_{14}\) & 8 & 1 & (2, 0, 1) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\delta_{k\ell}W_{ \ell i}\) & 5 \\ \(R_{15}\) & 4 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}W_{k\ell}\bar{ \Omega}_{\ell i}\) & 5 \\ \(R_{16}\) & 8 & 1 & (2, 1, 0) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\delta_{k\ell}\bar{ \Omega}_{\ell i}\) & 4 \\ \(R_{17}\) & 4 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\delta_{k\ell}\bar{ \Omega}_{\ell i}\) & 4 \\ \(R_{18}\) & 4 & 1 & (3, 0, 0) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\delta_{k\ell}\delta_{ \ell i}\) & 5 \\ \hline \(R_{19}\) & 4 & 2 & (0, 0, 2) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}W_{ \ell i}\) & 6 \\ \(R_{20}\) & 2 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}W_{ \ell i}\) & 6 \\ \(R_{21}\) & 4 & 2 & (0, 2, 0) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}\bar{ \Omega}_{\ell i}\) & 4 \\ \(R_{22}\) & 2 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\Omega}_{jk}\bar{\Omega}_{k\ell}\bar{ \Omega}_{\ell i}\) & 4 \\ \(R_{23}\) & 4 & 2 & (2, 0, 0) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\Omega}_{jk\ell}\bar{ \Omega}_{\ell i}\) & 6 \\ \(R_{24}\) & 2 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}\bar{ \Omega}_{\ell i}\) & 6 \\ \(R_{25}\) & 8 & 2 & (0, 1, 1) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}W_{ \ell i}\) & 5 \\ \(R_{26}\) & 4 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}W_{ \ell i}\) & 5 \\ \(R_{27}\) & 8 & 2 & (1, 1, 0) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}\bar{ \Omega}_{\ell i}\) & 5 \\ \(R_{28}\) & 4 & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}\bar{ \Omega}_{\ell i}\) & 5 \\ \(R_{29}\) & 8 & 2 & (1, 0, 1) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}W_{ \ell i}\) & 6 \\ \(R_{30}\) & 4 & & & & & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{\partial}_{jk}\bar{\Omega}_{k\ell}W_{ \ell i}\) & 6 \\ \hline \(R_{31}\) & 4 & 3 & (0, 0, 1) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{r}_{ij}\bar{r}_{jk}\bar{\Omega}_{k}\) & 7 \\ \(R_{32}\) & 4 & 3 & (0, 1, 0) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{r}_{ij}\bar{r}_{jk}\bar{\Omega}_{ \ell i}\) & 6 \\ \(R_{33}\) & 4 & 3 & (1, 0, 0) & \(\sum_{i,j,k,\ell(dist)}\bar{r}_{ij}\bar{r}_{ij}\bar{r}_{jk}\bar{\Omega}_{ \ell i}\) & 7 \\ \hline \(R_{34}\) & 1 & 4 & (0, 0, 0) & \(\sum_{i,
analogous to \(T\) and \(T^{*}\) from Lemma E.13. By inspection, we see that for each term in the left column, the canonical upper bounds \(\overline{c_{i_{1},\ldots,i_{m}}}\) and \(\overline{c^{*}_{i_{1},\ldots,i_{m}}}\) on the coefficients \(c_{i_{1},\ldots,i_{m}}\) and \(c^{*}_{i_{1},\ldots,i_{m}}\) satisfy
\[\overline{c_{i_{1},\ldots,i_{m}}}\lesssim\overline{c^{*}_{i_{1},\ldots,i_{m}}}.\]
Recall that these canonical upper bounds were defined in Section E.4.1. Thus the conclusion of Lemma E.13 applies, and we have for each term \(X\) in the left column of Table 2,
\[|\mathbb{E}X|\lesssim\overline{\mathbb{E}X^{*}},\qquad\mathrm{Var}(X)\lesssim \overline{\mathrm{Var}(X^{*})}.\]
As discussed in Section E.4.1, the upper bounds on the means and variances in Lemmas E.7-E.10 are in fact upper bounds on \(\overline{\mathbb{E}X^{*}}\) and \(\overline{\mathrm{Var}(X^{*})}\). By (E.48) and Lemmas E.7-E.10, for every post-expansion sum \(Y\) with \(N^{*}_{W}\leq 4\) we have
\[|\mathbb{E}Y| \leq|\mathbb{E}X|+o(\|\theta\|_{2}^{-2})\sqrt{\mathbb{E}[X^{2}] }=|\mathbb{E}X|+o(\|\theta\|_{2}^{-2})\sqrt{\mathbb{E}[X]^{2}+\mathrm{Var}(X)}\] \[\lesssim\tilde{\lambda}^{2}\lambda_{1}+o(\|\theta\|_{2}^{-2}) \cdot\sqrt{\tilde{\lambda}^{4}\lambda_{1}^{2}+\lambda_{1}^{4}+\tilde{ \lambda}^{6}+\tilde{\lambda}^{2}\lambda_{1}^{3}}\] \[\lesssim\tilde{\lambda}^{2}\lambda_{1}+\lambda_{1}^{2}+\tilde{ \lambda}^{3}+|\tilde{\lambda}|\lambda_{1}^{3/2}=o(\tilde{\lambda}^{4})\]
by the assumption that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\). Similarly,
\[\mathrm{Var}(Y) \lesssim\mathrm{Var}(X)+o(\|\theta\|_{2}^{-4})\mathbb{E}[X^{2}] =\mathrm{Var}(X)+o(\|\theta\|_{2}^{-4})(\mathbb{E}[X]^{2}+\mathrm{Var}(X))\] \[\lesssim\lambda_{1}^{4}+\tilde{\lambda}^{6}+\tilde{\lambda}^{2} \lambda_{1}^{3}+o(\|\theta\|_{2}^{-4})\cdot(\tilde{\lambda}^{4}\lambda_{1}^{ 2}+\lambda_{1}^{4}+\tilde{\lambda}^{6}+\tilde{\lambda}^{2}\lambda_{1}^{3}) \lesssim o(\tilde{\lambda}^{8}).\]
#### Analysis of terms with \(N^{*}_{W}>4\)
Recall that
\[\eta=\frac{1}{\sqrt{v}}(\mathbb{E}A)\mathbf{1}_{n},\ \ \tilde{\eta}=\frac{1}{\sqrt{v}}A \mathbf{1}_{n},\ \ v=\mathbf{1}^{\prime}_{n}(\mathbb{E}A)\mathbf{1}_{n}\]
\begin{table}
\begin{tabular}{l c|c|c} & Expression & & Expression \\ \hline \(R_{2}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})\tilde{\Omega}_{jk}W_ {\mathcal{K}}W_{i\ell}\) & \(Z_{1b}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})\eta_{j}(\tilde{\eta}_{j}-\eta_{j})\eta_{k}W_{ \mathcal{K}}W_{i\ell}\) \\ \(R_{3}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})W_{j\ell}\tilde{ \Omega}_{k\ell}W_{\ell i}\) & \(Z_{2a}\) & \(\eta_{\ell}(\tilde{\eta}_{j}-\eta_{j})W_{j\ell}\eta_{k}\eta_{(\tilde{\eta}_{i} -\eta_{i})W_{i\ell}}\) \\ \(R_{4}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})\tilde{\Omega}_{jk} \tilde{\Omega}_{k\ell}W_{\ell i}\) & \(Z_{3d}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})\eta_{j}\eta_{k}\tilde{\Omega}_{k\ell}W_{i\ell}\) \\ \(R_{5}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})\tilde{\Omega}_{jk}W_ {\mathcal{K}}\tilde{\Omega}_{i\ell}\) & \(Z_{4b}\) & \(\sum\tilde{\eta}_{i}(\eta_{j}-\eta_{j})W_{j\ell}W_{\ell\ell i}(\tilde{\eta}_{i} -\eta_{i})\) \\ \(R_{6}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})\tilde{\Omega}_{jk}W_ {\mathcal{K}}\tilde{\Omega}_{i\ell}\) & \(Z_{5a}\) & \(\sum_{\ell}(\tilde{\eta}_{i}-\eta_{j})\tilde{\Omega}_{jk}\tilde{\Omega}_{k\ell} \eta_{(\tilde{\eta}_{i}-\eta_{i})}\) \\ \(R_{9}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})^{2}\eta_{k}\tilde{ \Omega}_{k\ell}W_{i\ell i}\) & \(T_{1d}\) & \(\sum\eta_{\ell}(\tilde{\eta}_{j}-\eta_{j})^{2}\tilde{\eta}_{2}(\tilde{\eta}_{i} -\eta_{i})W_{i\ell}\) \\ & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})\eta_{j}(\tilde{\eta} _{k}-\eta_{k})\tilde{\Omega}_{k\ell}W_{i\ell}\) & \(T_{1a}\) & \(\sum\eta_{\ell}(\tilde{\eta}_{j}-\eta_{j})\eta_{j}(\tilde{\eta}_{k}-\eta_{k}) \eta_{k}(\tilde{\eta}_{i}-\eta_{i})W_{i\ell}\) \\ \(R_{10}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})^{2}(\tilde{\eta}_{j}-\eta_{j})\tilde{\Omega}_{jk }W_{k\ell}\eta_{\ell}\) & \(T_{1c}\) & \(\sum(\tilde{\eta}_{j}-\eta_{j})\eta_{k}W_{k\ell}\eta_{(\tilde{\eta}_{i}-\eta_{i}) ^{2}\eta_{j}}\) \\ & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})\tilde{\Omega}_{jk}W_ {k\ell}(\tilde{\eta}_{i}-\eta_{\ell})\eta_{j}\) & \(T_{1a}\) & \(\sum(\tilde{\eta}_{j}-\eta_{j})\eta_{k}W_{\ell\ell}(\tilde{\eta}_{i}-\eta_{i}) \eta_{j}\) \\ \(R_{11}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})W_{j\ell}\eta_{k} \eta_{i}(\tilde{\eta}_{i}-\eta_{k})\tilde{\Omega}_{i\ell}\) & \(T_{1a}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})\eta_{k}W_{kj}(\tilde{\eta}_{j}-\eta_{j})\eta_{ \ell}(\tilde{\eta}_{i}-\eta_{i})\eta_{i}\) \\ & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})W_{j\ell}(\tilde{ \eta}_{i}-\eta_{k})\eta_{j\ell}\tilde{\Omega}_{i\ell}\) & \(T_{2c}\) & \(\sum\eta_{i}(\tilde{\eta}_{j}-\eta_{j})^{2}\eta_{k}\tilde{\Omega}_{k\ell}\eta_{ (\tilde{\eta}_{i}-\eta_{i})}\) \\ \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})\eta_{j\ell}(\tilde{ \eta}_{i}-\eta_{k})\eta_{j\ell}\tilde{\Omega}_{i\ell}\) & \(T_{2a}\) & \(\sum\eta_{i}(\tilde{\eta}_{j}-\eta_{j})\eta_{j\ell}(\tilde{\eta}_{k}-\eta_{k}) \tilde{\Omega}_{i\ell}\) \\ \(R_{13}\) & \(\sum(\tilde{\eta}_{i}-\eta_{i})(\tilde{\eta}_{j}-\eta_{j})\eta_{k}(\tilde{ \eta}_{i}-\eta_{k})\eta_{k}(\tilde{\eta}_{i}-\eta_{k})\eta_{k}\tilde{ \Omega}_{i\ell}\) & \(T_{2b}\) & \(\sum\eta_{i}(\tilde
Define
\[G_{i}=\tilde{\eta}_{i}-\eta_{i}.\] (E.51)
Among the post-expansion sums in Table (1) satisfying \(N_{W}^{*}=5\), only \(R_{7},R_{8},\) and \(R_{25}\)-\(R_{28}\) depend on \(\tilde{\Omega}\). Each of these terms falls into one of the types
\[J_{5}^{\prime} =\sum_{i,j,k,\ell(dist)}\widetilde{\Omega}_{jk}(G_{i}G_{j}G_{k}G_ {\ell}W_{\ell i}),\] \[J_{6}^{\prime} =\sum_{i,j,k,\ell(dist)}\widetilde{\Omega}_{k\ell}(G_{i}G_{j}^{2 }G_{k}W_{\ell i})\] \[J_{9} =\sum_{i,j,k,\ell(dist)}\eta_{k}\widetilde{\Omega}_{\ell i}(G_{i} G_{j}^{2}G_{k}G_{\ell})\] \[J_{10} =\sum_{i,j,k,\ell(dist)}\eta_{\ell}\widetilde{\Omega}_{\ell i}(G_ {i}G_{j}^{2}G_{k}^{2}).\]
See (Jin et al., 2021c, Supplement, Section G.4.10.2) for more details.
To handle \(J_{5}^{\prime}\) and \(J_{6}^{\prime}\), we compare them to
\[J_{5} =\sum_{i,j,k,\ell(dist)}\eta_{j}\eta_{k}(G_{i}G_{j}G_{k}G_{\ell}W _{\ell i})\] \[J_{6} =\sum_{i,j,k,\ell(dist)}\eta_{k}\eta_{\ell}(G_{i}G_{j}^{2}G_{k}W _{\ell i}),\]
both of which are considered in (Jin et al., 2021c, Supplement, Section G.4.10.2). Note that neither \(J_{5}\) nor \(J_{5}\) depends on \(\tilde{\Omega}\). Setting \(T=J_{5}^{\prime}\) and \(T^{*}=J_{5}\) in Lemma E.13 and noting that \(|\tilde{\Omega}_{jk}|\lesssim\theta_{j}\theta_{k}\) by (E.24), we see that the hypotheses of Lemma E.13 are satisfied. In (Jin et al., 2021c, Supplement, Section G.4.10.2), it is shown that
\[\mathbb{E}[J_{5}^{2}]\leq\overline{\mathbb{E}[J_{5}]}^{2}+\overline{\mathrm{ Var}(J_{5})}=o(\|\theta\|_{2}^{8}).\]
Applying Lemma E.13, we conclude that
\[\mathbb{E}[J_{5}^{\prime 2}]=o(\|\theta\|_{2}^{8}).\]
Similarly, it is shown in (Jin et al., 2021c, Supplement, Section G.4.10.2) that
\[\mathbb{E}[J_{6}^{2}]\leq\overline{\mathbb{E}[J_{6}]}^{2}+\overline{\mathrm{ Var}(J_{6})}=o(\|\theta\|_{2}^{8}).\]
Setting \(T=J_{6}^{\prime}\) and \(T^{*}=J_{6}\), the hypotheses of Lemma E.13 are satisfied because \(|\tilde{\Omega}_{k\ell}|\lesssim\theta_{k}\theta_{\ell}\). We conclude that
\[\mathbb{E}[J_{6}^{\prime 2}]=o(\|\theta\|_{2}^{8}).\]
The terms \(J_{9}\) and \(J_{10}\) can be analyzed explicitly using the strategy described in Section E.4.1. We omit the full details and instead give a simplified proof in the case where \(\|\theta\|_{2}\gg\lceil\log(n)\rceil^{5/2}\). The event
\[E=\cap_{i=1}^{n}E_{i},\qquad\text{where}\quad E_{i}=\big{\{}\sqrt{v}|G_{i}| \leq C_{0}\sqrt{\theta_{i}\|\theta\|_{1}\log(n)}\big{\}}.\] (E.52)
is introduced in (Jin et al., 2021c, Supplement,pg.110). By applying Bernstein's inequality and the union bound, it is shown that \(E\) holds with probability at least \(1-n^{-C_{0}/2.01}\). Applying the crude bound \(|G_{i}|\leq n\) and triangle inequality, we see that \(|J_{9}|\lesssim n^{9}\) with high probability, and thus for \(C_{0}\) sufficiently large,
\[\mathbb{E}[|J_{9}|^{2}\cdot\mathbf{1}_{E^{c}}]=o(1).\]
Under the event \(E\), we have by (E.20),
\[|J_{9}|\leq\sum_{i,j,k,\ell}|\eta_{k}\widetilde{\Omega}_{\ell i}||G_{i}G_{j}^{ 2}G_{k}G_{\ell}|\]
\[\lesssim\frac{[\log(n)]^{5/2}}{\sqrt{\|\theta\|_{1}^{3}}}\Big{(}\sum_{ i}\theta_{i}^{3/2}\Big{)}\Big{(}\sum_{j}\theta_{j}\Big{)}\Big{(}\sum_{k}\theta_{k} \Big{)}\Big{(}\sum_{\ell}\theta_{\ell}\Big{)}\] \[\lesssim\frac{[\log(n)]^{5/2}}{\sqrt{\|\theta\|_{1}^{3}}}\Big{(} \|\theta\|\sqrt{\|\theta\|_{1}}\Big{)}^{2}\|\theta\|_{1}^{2}\] \[\lesssim[\log(n)]^{3}\|\theta\|^{2},\]
Above we apply (E.20) and (E.24) as well as Cauchy-Schwarz. It follows that
\[\mathbb{E}[K_{5}^{\prime 2}]=\mathrm{Var}(K_{5}^{\prime})+\mathbb{E}[K_{5}^{ \prime}]^{2}=o(\|\theta\|_{2}^{8}).\]
Finally, all terms with \(N_{W}^{*}\geq 7\) have no dependence on \(\tilde{\Omega}\), and thus the bounds carry over immediately (see (Jin et al., 2021c, Supplement, Section G.4.10.4) for details). This completes the proof of the lemma.
#### e.4.7 Proof of Lemma e.12
Define
\[\epsilon^{(1)}_{ij}=\eta_{i}^{*}\eta_{j}^{*}-\eta_{i}\eta_{j}, \quad\epsilon^{(2)}_{ij}=(1-\frac{v}{V})\eta_{i}\eta_{j},\quad\epsilon^{(3)}_ {ij}=-(1-\frac{v}{V})\delta_{ij}.\]
Note that \(\epsilon^{(1)}_{ij}\) is a nonstochastic term. As shown in (Jin et al., 2021c, Supplement, pg. 119), we have
\[|\epsilon^{(1)}_{ij}|\lesssim\frac{\|\theta\|_{\infty}}{\|\theta \|_{1}}\cdot\theta_{i}\theta_{j},\]
which implies that
\[|\epsilon^{(1)}_{ij}|\lesssim\frac{1}{\|\theta\|_{2}^{2}}\cdot \theta_{i}\theta_{j}\] (E.53)
by (E.2).
As discussed in (Jin et al., 2021c, Supplement, Section G.3), \(Q-Q^{*}\) is a finite sum of terms of the form
\[\sum_{i,j,k,\ell(dist)}a_{ij}b_{jk}c_{k\ell}d_{\ell i},\qquad \text{where}\quad a,b,c,d\in\{\widetilde{\Omega},W,\delta,\tilde{r},\epsilon^ {(1)},\epsilon^{(2)},\epsilon^{(3)}\}.\] (E.54)
Let \(Y\) denote an arbitrary term of the form above, and given \(X\in\{\widetilde{\Omega},W,\delta,\tilde{r},\epsilon^{(1)},\epsilon^{(2)}, \epsilon^{(3)}\}\), let \(N_{X}\) denote the total number of \(a,b,c,d\) that are equal to \(X\). It holds that
\[Y=\big{(}\frac{v}{V}\big{)}^{N_{\tilde{\tau}}}(-1)^{N_{\tilde{ \tau}}^{(3)}}\Big{(}1-\frac{v}{V}\Big{)}^{N_{\epsilon}^{(2)}+N_{\epsilon}^{(3) }}X,\qquad X\equiv\sum_{i,j,k,\ell(dist)}a_{ij}b_{jk}c_{k\ell}d_{\ell i}.\]
where
\[\begin{cases}a,b,c,d\in\{\widetilde{\Omega},W,\delta,(V/v)\tilde{r },\epsilon^{(1)},\eta\eta^{\mathsf{T}}\},\\ \text{number of $\eta_{i}\eta_{j}$ in the product is $N_{\epsilon}^{(2)}$},\\ \text{number of $\delta_{ij}$ in the product is $N_{\delta}+N_{\epsilon}^{(3)}$},\\ \text{number of any other term in the product is same as before.}\end{cases}\] (E.55)
Let \(x_{n}\) denote a sequence of real numbers such that \(\sqrt{\log(\|\theta\|_{1})}\ll x_{n}\ll\|\theta\|_{1}\). Mimicking the argument in (Jin et al., 2021c, Supplement,pg.121), it holds that
\[\mathbb{E}[Y^{2}]\lesssim\Big{(}\frac{x_{n}^{2}}{\|\theta\|_{1} ^{2}}\Big{)}^{N_{\epsilon}^{(2)}+N_{\epsilon}^{(3)}}\cdot\mathbb{E}[X^{2}]+o (1),\]
By (E.4), there exists a sequence \(\log(\|\theta\|_{1})\ll x_{n}\ll\|\theta\|_{1}/\|\theta\|_{2}^{2}\). Hence,
\[\mathbb{E}[Y^{2}]\lesssim\Big{(}\frac{1}{\|\theta\|_{2}^{2}}\Big{)}^{N_{ \epsilon}^{(2)}+N_{\epsilon}^{(3)}}\cdot\mathbb{E}[X^{2}]+o(1),\] (E.56)
Thus we focus on controlling \(\mathbb{E}[X^{2}]\).
Consider a new random variable \(X^{*}\) defined to be
\[X^{*}\equiv\sum_{i,j,k,\ell(dist)}a_{ij}^{*}b_{jk}c_{k\ell}^{*}d _{\ell i}^{*}\]
where
\[a^{*}=\begin{cases}\frac{1}{\|\theta\|_{2}^{2}}\cdot\theta\theta ^{\mathsf{T}}&\text{if $a=\epsilon^{(1)}$}\\ \theta\theta^{\mathsf{T}}&\text{if $a\in\{\tilde{\Omega},\eta\eta^{\mathsf{T}}\}$}\\ a&\text{otherwise}\end{cases}\]
\[b^{*}=\begin{cases}\frac{1}{\|\theta\|_{2}^{2}}\cdot\theta\theta^{\mathsf{T}}&\text{ if }b=\epsilon^{(1)}\\ \theta\theta^{\mathsf{T}}&\text{ if }b\in\{\tilde{\Omega},\eta\eta^{\mathsf{T}}\} \\ b&\text{ otherwise}\end{cases}\]
\[c^{*}=\begin{cases}\frac{1}{\|\theta\|_{2}^{2}}\cdot\theta\theta^{\mathsf{T}} &\text{ if }c=\epsilon^{(1)}\\ \theta\theta^{\mathsf{T}}&\text{ if }c\in\{\tilde{\Omega},\eta\eta^{\mathsf{T}}\} \\ c&\text{ otherwise}\end{cases}\]
\[d^{*}=\begin{cases}\frac{1}{\|\theta\|_{2}^{2}}\cdot\theta\theta^{\mathsf{T}} &\text{ if }d=\epsilon^{(1)}\\ \theta\theta^{\mathsf{T}}&\text{ if }\in\{\tilde{\Omega},\eta\eta^{\mathsf{T}}\} \\ d&\text{ otherwise }.\end{cases}\]
Also define
\[\tilde{X}=\sum_{ijk\ell(dist)}\tilde{a}_{ij}\tilde{b}_{jk}\tilde{c}_{k\ell} \tilde{d}_{\ell i}\]
where
\[\tilde{a}=\begin{cases}\theta\theta^{\mathsf{T}}&\text{ if }a\in\{ \epsilon^{(1)},\tilde{\Omega},\eta\eta^{\mathsf{T}}\}\\ a&\text{ otherwise}\end{cases}\]
\[\tilde{b}=\begin{cases}\theta\theta^{\mathsf{T}}&\text{ if }b\in\{ \epsilon^{(1)},\tilde{\Omega},\eta\eta^{\mathsf{T}}\}\\ b&\text{ otherwise}\end{cases}\]
\[\tilde{c}=\begin{cases}\theta\theta^{\mathsf{T}}&\text{ if }c\in\{\epsilon^{(1)}, \tilde{\Omega},\eta\eta^{\mathsf{T}}\}\\ c&\text{ otherwise}\end{cases}\]
\[\tilde{d}=\begin{cases}\theta\theta^{\mathsf{T}}&\text{ if }d\in\{ \epsilon^{(1)},\tilde{\Omega},\eta\eta^{\mathsf{T}}\}\\ d&\text{ otherwise }.\end{cases}\]
Note that \(X^{*}=\big{(}\frac{1}{\|\theta\|_{2}^{2}}\big{)}^{N_{\epsilon}^{(1)}}\tilde{X}\) and \(\tilde{a},\tilde{b},\tilde{c},\tilde{d}\in\{\theta\theta^{\mathsf{T}},W,\delta,(V/v)\tilde{r}\}\). Later we show that
\[\mathbb{E}[X^{2}]\lesssim\mathbb{E}[X^{*2}]\] (E.57)
First we bound \(\mathbb{E}[\tilde{X}^{2}]\) in the case when \(N_{W}+N_{\delta}+N_{\tilde{r}}=0\). Note that for all such terms in \(Q-Q^{*}\), we have \(N_{\epsilon}^{(1)}+N_{\epsilon}^{(2)}+N_{\epsilon}^{(3)}+N_{\tilde{\Omega}}=4\) and \(N_{\tilde{\Omega}}<4\). In particular, \(\tilde{X}\) and \(X^{*}\) are nonstochastic. If \(N_{\tilde{\Omega}}=3\), then by (E.22) and (E.24),
\[|\tilde{X}|=\big{|}\sum_{ijk\ell(dist)}\tilde{\Omega}_{ij}\tilde{\Omega}_{jk }\tilde{\Omega}_{k\ell}\theta_{i}\theta_{\ell}\big{|}\lesssim\frac{1}{\|\theta \|_{2}^{2}}\sum_{ijk\ell}\beta_{i}\theta_{i}^{2}\beta_{j}^{2}\theta_{j}^{2} \beta_{k}^{2}\theta_{k}^{2}\beta_{\ell}\theta_{\ell}^{2}\lesssim\|\beta\circ \theta\|_{2}^{6}\|\theta\|_{2}^{2}\]
If \(N_{\tilde{\Omega}}=2\), there are two cases. First,
\[|\tilde{X}|=\big{|}\sum_{ijk\ell(dist)}\tilde{\Omega}_{ij}\tilde{\Omega}_{jk }\theta_{k}\theta_{\ell}\theta_{\ell}\theta_{i}\big{|}\lesssim\sum_{ijk\ell} \beta_{i}\theta_{i}\beta_{j}^{2}\theta_{j}^{2}\beta_{k}\theta_{k}^{2}\theta_{ \ell}^{2}\theta_{i}\lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4},\]
and second
\[|\tilde{X}|=\big{|}\sum_{ijk\ell(dist)}\tilde{\Omega}_{ij}\theta_{j}\theta_{k }\tilde{\Omega}_{k\ell}\theta_{\ell}\theta_{i}\big{|}\lesssim\sum_{ijk\ell} \beta_{i}\theta_{i}^{2}\beta_{j}\theta_{j}^{2}\beta_{k}\theta_{\ell}^{2}\theta _{\ell}^{2}\lesssim\|\beta\circ\theta\|_{2}^{4}\|\theta\|_{2}^{4}\]
Finally if \(N_{\tilde{\Omega}}=1\),
\[|\tilde{X}|=\big{|}\sum_{ijk\ell(dist)}\tilde{\Omega}_{ij}\theta_{j}\theta_{k }^{2}\theta_{\ell}^{2}\theta_{i}\big{|}\lesssim\sum_{ijk\ell}\tilde{\beta}_{i} \theta_{i}^{2}\beta_{j}\theta_{j}^{2}\theta_{k}^{2}\theta_{\ell}^{2}\lesssim\| \beta\circ\theta\|_{2}^{2}\|\theta\|_{2}^{6}.\]
Note that when \(N_{W}+N_{\delta}+N_{\tilde{r}}=0\)
\[|X|\lesssim|X^{*}|\]
by (E.22), (E.20), and (E.53). By the bounds above, we conclude that
\[|Y|\lesssim\big{(}\frac{1}{\|\theta\|_{2}^{2}}\big{)}^{N_{\epsilon}^{(1)}+N_{ \epsilon}^{(2)}+N_{\epsilon}^{(3)}}|\tilde{X}|\lesssim\max_{1\leq k\leq 3}\| \beta\circ\theta\|_{2}^{2k}\|\theta\|_{2}^{2(4-k)}\lesssim|\tilde{\lambda}|^{3}.\] (E.58)
Next we bound \(\mathbb{E}[\tilde{X}^{2}]\) in the case when \(N_{W}+N_{\delta}+N_{\tilde{r}}>0\). By Lemma E.2 and the definition of \(f\in\mathbb{R}^{2}\) there, we have \(\tilde{\Omega}_{ij}=\alpha_{i}\alpha_{j}\theta_{i}\theta_{j}\) where \(\alpha=\Pi f\). Observe that in Lemmas E.7-E.11, we bound the mean and variance of all terms of the form
\[Z\equiv\sum_{i,j,k,\ell(dist)}a_{ij}b_{jk}c_{k\ell}d_{\ell i},\qquad\text{ where}\quad a,b,c,d\in\{\widetilde{\Omega},W,\delta,(V/v)\tilde{r}\}.\]
As a result, the proofs of Lemmas E.7-E.11 produce a function \(F\) such that
\[\mathbb{E}[Z^{2}]\leq F(\theta,\beta;N_{\tilde{\Omega}},N_{W},N_{\delta},N_{ \tilde{r}}),\]
where recall that \(|\alpha_{i}|\leq\beta_{i}\).
Note that in what follows, we use \({}^{\prime}\) to denote a new variable rather than the transpose. As a direct corollary to the proofs of Lemmas E.7-E.11, if we define a new matrix \(\tilde{\Omega}^{\prime}=\alpha^{\prime}_{i}\alpha^{\prime}_{j}\theta_{i}\theta _{j}\) where \(\alpha^{\prime}\) is a vector with a coordinate-wise bound of the form \(|\alpha^{\prime}_{i}|\leq\beta^{\prime}_{i}\), then
\[Z^{\prime}\equiv\sum_{i,j,k,\ell(dist)}a_{ij}b_{jk}c_{k\ell}d_{\ell i},\qquad \text{where}\quad a,b,c,d\in\{\widetilde{\Omega}^{\prime},W,\delta,(V/v) \tilde{r}\}\]
satisfies
\[\mathbb{E}[Z^{{}^{\prime}2}]\leq F(\theta,\beta^{\prime};N^{\prime}_{\Omega^{ \prime}},N^{\prime}_{W},N^{\prime}_{\delta},N^{\prime}_{\tilde{r}}),\]
where, for example, \(N^{\prime}_{\delta}\) counts the number of appearances of \(\delta\) in \(Z^{\prime}\). This can be verified by tracing each calculation in Lemmas E.7-E.11 line by line, replacing all occurences of \(\tilde{\Omega}\) with \(\tilde{\Omega}^{\prime}\), and replacing every usage of the bound \(|\alpha_{i}|\leq\beta_{i}\) with \(|\alpha^{\prime}_{i}|\leq\beta^{\prime}_{i}\) instead. In other words, our proofs make no use of the specific value of \(\alpha=\Pi f\).
In particular, if \(\alpha=\mathbf{1}\), then \(\tilde{\Omega}^{\prime}=\theta\theta^{\mathsf{T}}\). In this case we may set \(\beta=\mathbf{1}\). Observe that \(\tilde{X}\) has the form of \(Z^{\prime}\) with this choice of \(\tilde{\Omega}^{\prime}\). Hence,
\[\mathbb{E}[\tilde{X}^{2}]\leq F(\theta,\mathbf{1};\tilde{N}_{\tilde{\Omega}^{ \prime}},\tilde{N}_{W},\tilde{N}_{\delta},\tilde{N}_{\tilde{r}}).\] (E.59)
By careful inspection of the bounds in Lemmas E.7-E.11, we see that
\[F(\theta,\mathbf{1};N_{\tilde{\Omega}^{\prime}},N_{W},N_{\delta},N_{\tilde{r} })\lesssim\|\theta\|_{2}^{12}.\] (E.60)
In (Jin et al., 2021c, Supplement, Section G.3) it is shown that all terms in the decomposition of \(Q-Q^{*}\) satisfy \(N_{\epsilon}^{(1)}+N_{\epsilon}^{(2)}+N_{\epsilon}^{(3)}>0\). Using this fact along with (E.56), (E.57), (E.59) and (E.60),
\[\mathbb{E}[Y^{2}]\lesssim\Big{(}\frac{1}{\|\theta\|_{2}^{4}}\Big{)}^{N_{ \epsilon}^{(2)}+N_{\epsilon}^{(3)}}\cdot\big{(}\frac{1}{\|\theta\|_{2}^{2}} \big{)}^{2N_{\epsilon}^{(1)}}\cdot\mathbb{E}[\tilde{X}^{2}]+o(1)\lesssim\| \theta\|_{2}^{8}.\] (E.61)
Observe that (E.58) and (E.61) recover the bounds in Lemma E.12 under the alternative hypothesis, and the bounds under the null hypothesis transfer directly from (Jin et al., 2021c, Lemma G.12). Thus it only remains to justify (E.57) when \(N_{W}+N_{\delta}+N_{\tilde{r}}>0\). Let us write
\[X =\sum_{i_{1},\ldots,i_{m}}c_{i_{1},\ldots,i_{m}}G_{i_{1},\ldots,i_ {m}}\] \[X^{*} =\sum_{i_{1},\ldots,i_{m}}c_{i_{1},\ldots,i_{m}}^{*}G_{i_{1},\ldots,i_{m}}\]
in the form described in Section E.4.1, where now
* \(c_{i_{1},\ldots,i_{m}}=\prod_{(s,s^{\prime})\in A}\Gamma_{i_{s},i_{s^{\prime}}} ^{(s,s^{\prime})}\) is a nonstochastic term where \(A\subset[m]\times[m]\) and \[\Gamma^{(s,s^{\prime})}\in\{\tilde{\Omega},\eta^{*}\mathbf{1}^{\mathsf{T}}, \eta\mathbf{1}^{\mathsf{T}},\mathbf{1}\mathbf{1}^{\mathsf{T}},\epsilon^{(1)}, \eta\eta^{\mathsf{T}}\}\]
* \(c_{i_{1},\ldots,i_{m}}^{*}=\prod_{(s,s^{\prime})\in A}\Gamma_{i_{s},i_{s^{ \prime}}}^{(s,s^{\prime})}\) is a nonstochastic term where \(A\subset[m]\times[m]\) and \[\Gamma^{(s,s^{\prime})}\in\{\eta^{*}\mathbf{1}^{\mathsf{T}},\eta\mathbf{1}^{ \mathsf{T}},\mathbf{1}\mathbf{1}^{\mathsf{T}},\theta\theta^{\mathsf{T}}/\| \theta\|_{2}^{2},\theta\theta^{\mathsf{T}}\}\]
* \(G_{i_{1},\ldots,i_{m}}=\prod_{(s,s^{\prime})\in B}W_{i_{s},i_{s^{\prime}}}\) where \(B\subset[m]\times[m]\).
If \(\Gamma^{(s,s^{\prime})}\in\{\theta\theta^{\mathsf{T}},\theta\theta^{\mathsf{T}}/ \|\theta\|_{2}^{2}\}\), we simply let \(\overline{\Gamma^{(s,s^{\prime})}}=\Gamma^{(s,s^{\prime})}\) and define
\[\overline{c_{i_{1},\ldots,i_{m}}^{*}}=\prod_{(s,s^{\prime})\in A}\overline{ \Gamma^{(s,s^{\prime})}_{i_{s},i_{s^{\prime}}}}\]
as in Section E.4.1. We also define the canonical upper bound \(\overline{\mathbb{E}X^{*}}\) on \(|\mathbb{E}X^{*}|\) and the canonical upper bound \(\overline{\mathrm{Var}(X^{*})}\) on \(\mathrm{Var}(X^{*})\) similarly to Section E.4.1. By the discussion above and (E.59),
\[\overline{\mathbb{E}[X^{*}]}\equiv\big{(}\frac{1}{\|\theta\|_{2}^{2}}\big{)} ^{N_{i}^{(1)}}\sqrt{F(\theta,\mathbf{1};\tilde{N}_{\Omega^{\prime}},\tilde{N }_{W},\tilde{N}_{\delta},\tilde{N}_{\tilde{r}})},\]
and
\[\overline{\mathrm{Var}(X^{*})}\equiv\big{(}\frac{1}{\|\theta\|_{2}^{2}}\big{)} ^{2N_{i}^{(1)}}F(\theta,\mathbf{1};\tilde{N}_{\Omega^{\prime}},\tilde{N}_{W}, \tilde{N}_{\delta},\tilde{N}_{\tilde{r}}).\]
Next observe that
\[|c_{i_{1},\ldots,i_{m}}|\lesssim|c_{i_{1},\ldots,i_{m}}^{*}| \lesssim|\overline{c_{i_{1},\ldots,i_{m}}^{*}}|.\]
By a mild extension of Lemma E.13 it follows that
\[|\mathbb{E}X| \lesssim\overline{\mathbb{E}X^{*}}\] \[\mathrm{Var}(X) \lesssim\overline{\mathrm{Var}(X^{*})},\]
which verifies (E.57) and completes the proof.
### Calculations in the SBM setting
We compute the order of \(\lambda_{1}\) and \(\tilde{\lambda}_{1}=\lambda_{2}\) in the SBM setting (which are the two nonzero eigenvalues of \(\Omega\)). By basic algebra, \(\lambda_{1},\lambda_{2}\) are also the two nonzero eigenvalues of the following matrix
\[\left[\begin{array}{cc}N&0\\ 0&n-N\end{array}\right]^{1/2}\times\left[\begin{array}{cc}a&b\\ b&c\end{array}\right]\times\left[\begin{array}{cc}N&0\\ 0&n-N\end{array}\right]^{1/2}=\left[\begin{array}{cc}aN&\sqrt{N(n-N)}b\\ \sqrt{N(n-N)}b&(n-N)c\end{array}\right],\]
where \(b\) is given by (H.1). By direct calculations and pluging the definitions of \(b\),
\[\lambda_{1}= \frac{aN+(n-N)c+\sqrt{(aN-(n-N)c)^{2}+4N(n-N)b^{2}}}{2}\] \[= \frac{aN+(n-N)c+|(n-N)c-aN|\frac{n}{n-2N}}{2}.\]
Recall that
\[b=\frac{nc-N(a+c)}{n-2N}.\]
It is required that \(b\geq 0\). Therefore,
\[nc-(a+c)N\geq 0,\qquad\text{and so}\qquad(n-N)c\geq aN.\] (E.62)
By direct calculations, it follows that
\[\lambda_{1}=\frac{(n-N)^{2}c-aN^{2}}{n-2N}=\frac{(n-N)c((n-N)-\frac{aN}{(n-N)c }N)}{n-2N}\sim\frac{(n-N)c(n-N)}{n-2N}\sim nc\]
where in the last two \(\asymp\), we have used \((n-N)c\geq aN\) and \(N=o(n)\). Similarly,
\[\lambda_{2}=\frac{aN+(n-N)c-\sqrt{(aN-(n-N)c)^{2}+4N(n-N)b^{2}}}{2}=\frac{(a- c)N(n-N)}{n-2N}\sim N(a-c).\]
Proof of Theorem 2.3 (Powerlessness of \(\chi^{2}\) test)
We compare the SgnQ test with the \(\chi^{2}\) test. Recall we assume \(\theta_{i}=\mathbf{1}_{n}\). The \(\chi^{2}\) test statistic is defined to be
\[X_{n}=\frac{1}{\hat{\alpha}(1-\hat{\alpha})(n-1)}\sum_{i=1}^{n}\big{(}(A \mathbf{1}_{n})_{i}-\hat{\alpha}n\big{)}^{2},\qquad\text{ where }\hat{\alpha}=\frac{1}{n(n-1)}\sum_{i\neq j}A_{ij}.\]
We also define an idealized \(\chi^{2}\) test statistic by
\[\tilde{X}_{n}=\frac{1}{\alpha(1-\alpha)(n-1)}\sum_{i=1}^{n}\big{(}(A\mathbf{ 1}_{n})_{i}-\alpha n\big{)}^{2},\qquad\text{ where }\alpha=\frac{1}{n(n-1)}\sum_{i\neq j} \Omega_{ij}.\]
The \(\chi^{2}\) test is defined to be
\[\chi^{2}_{n}=\mathbf{1}\Big{[}\frac{|X_{n}-n|}{\sqrt{2n}}>z_{\gamma/2}\Big{]},\]
where \(z_{\gamma}\) is such that \(\mathbb{P}[|N(0,1)|\geq z_{\gamma}]=\gamma\). Similarly, the idealized \(\chi^{2}\) test is defined by
\[\tilde{\chi}^{2}_{n}=\mathbf{1}\Big{[}\frac{|\tilde{X}_{n}-n|}{\sqrt{2n}}>z_{ \gamma/2}\Big{]},\]
In certain degree-homogeneous settings, the \(\chi^{2}\) test is known to have full power Arias-Castro & Verzelen (2014); Cammarata & Ke (2022).
We prove the following, which directly implies Theorem 2.3.
**Theorem F.1**.: _Suppose that (2.7) holds and that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\), and recall that under these conditions, the power of the SgnQ test goes to \(1\). Next suppose that the following regularity conditions hold under the null and alternative:_
1. \(\theta=\mathbf{1}_{n}\)__
2. \(\alpha\to 0\)__
3. \(\alpha^{2}n\to\infty\)__
4. \(\sum_{ij}(\Omega_{ij}-\alpha)^{2}=o(\alpha n^{3/2})\)_._
_Then the power of both the \(\chi^{2}\)-test and idealized \(\chi^{2}\)-test goes to \(\gamma\) (which is the prescribed level of the test)._
Note that the previous theorem implies Theorem 2.3. By Theorem 2.2, SgnQ has full power even without the extra regularity conditions (i)-(iv). On the other hand, for any fixed alternative DCBM satisfying (i)-(iv), Theorem F.1 implies that \(\chi^{2}\) has power \(\kappa\).
Proof of Theorem F.1.: Theorem 2.2 confirms that SgnQ has full power provided that (2.7) holds and that \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\to\infty\). It remains to justify the powerlessness of the \(\chi^{2}\) test.
Consider an SBM in the alternative such that \(\Omega\mathbf{1}=(\alpha n)\mathbf{1}\) and \(|\tilde{\lambda}|/\sqrt{\lambda_{1}}\asymp N(a-c)/\sqrt{nc}\to\infty\). To do this we select an integer \(N>0\) to be the size of the smaller community and set \(b=\frac{cn-(a+c)N}{n-2N}\). The remaining regularity conditions are satisfied if \(c\to 0\) and \(cn\ll N(a-c)^{2}\ll cn^{3/2}\). We show that both \(\tilde{X}_{n}\) and \(\tilde{X}_{n}\) are asymptotically normal under the specified alternative, which is enough to imply Theorem F.1.
In Cammarata & Ke (2022) it is shown that
\[\hat{T}_{n}\equiv[(n-1)\hat{\alpha}(1-\hat{\alpha})](X_{n}-n)=\sum_{i,j,k\text { (dist.)}}(A_{ik}-\hat{\alpha})(A_{jk}-\hat{\alpha}).\] (F.1)
We introduce an idealized version \(T_{n}\) of \(\hat{T}_{n}\), which is
\[T_{n}=\sum_{i,j,k\text{ (dist.)}}(A_{ik}-\alpha)(A_{jk}-\alpha),\]
Following Cammarata & Ke (2022), we have
\[\frac{X_{n}-n}{\sqrt{2n}}=\left(\frac{n-2}{n-1}\right)^{1/2}U_{n}V_{n}Z_{n}.\] (F.2)
where
\[U_{n}=\frac{\alpha_{n}(1-\alpha_{n})}{\hat{\alpha}_{n}(1-\hat{ \alpha}_{n})}, V_{n}=\frac{\hat{T}_{n}}{T_{n}}, Z_{n}=\frac{\frac{T_{n}}{(n-1)\alpha_{n}(1-\alpha_{n})}}{\sqrt{ \frac{2n(n-2)}{(n-1)}}}.\]
Since the terms of \(\hat{\alpha}\) are bounded, the law of large numbers implies that \(U_{n}\stackrel{{\mathbb{P}}}{{\rightarrow}}1\). Furthermore, since \(\alpha n\rightarrow\infty\) by assumption that \(\alpha^{2}n\rightarrow\infty\), a straightforward application of the Berry-Esseen theorem implies that
\[\sqrt{\frac{n(n-1)}{2}}\frac{\hat{\alpha}_{n}-\alpha_{n}}{\sqrt{ \alpha_{n}(1-\alpha_{n})}}\Rightarrow\mathcal{N}(\mu,1).\]
With the previous fact, mimicking the argument in (Cammarata & Ke, 2022, pg.32), it also follows that
\[V_{n}\stackrel{{\mathbb{P}}}{{\rightarrow}}1,\]
_provided we can show that \(Z_{n}\Rightarrow N(0,1)\)._ We omit the details since the argument is very similar. Thus it suffices to study \(Z_{n}\). We first analyze \(T_{n}\), which we decompose as
\[T_{n} =\sum_{i,j,k\,(dist.)}(A_{ik}-\Omega_{ik})(A_{jk}-\Omega_{jk})+2 \sum_{ijk(dist)}(\Omega_{ik}-\alpha)(A_{jk}-\Omega_{jk})\] \[\quad+\sum_{ijk(dist)}(\Omega_{ik}-\alpha)(\Omega_{jk}-\alpha) \equiv T_{n1}+T_{n2}+T_{n3}.\]
Observe that \(T_{n3}\) is non-stochastic. The second and third term are negligible compared to \(T_{n1}\). Define \(\overline{\Omega}=\Omega-\alpha 1\mathbf{1}^{\prime}\). By direct calculations,
\[\mathbb{E}T_{n2}=0,\]
and
\[\mathrm{Var}(T_{n2})=8\sum_{j<k(dist)}\bigl{(}\sum_{i\notin\{j,k \}}\overline{\Omega}_{ik}\bigr{)}^{2}\Omega_{jk}(1-\Omega_{jk})=8\sum_{j<k( dist)}\bigl{(}\overline{\Omega}_{jk}+\overline{\Omega}_{kk}\bigr{)}^{2} \Omega_{jk}(1-\Omega_{jk})\lesssim\alpha n^{2}.\]
Next,
\[|T_{n3}| =\big{|}\sum_{ijk}\overline{\Omega}_{ik}\overline{\Omega}_{jk}- \sum_{ijk(not\,dist.)}\overline{\Omega}_{ik}\overline{\Omega}_{jk}\big{|}= \big{|}\sum_{ijk(not\,dist.)}\overline{\Omega}_{ik}\overline{\Omega}_{jk}\big{|}\] \[\lesssim\big{|}\sum_{ij}\overline{\Omega}_{ii}\overline{\Omega}_{ ji}\big{|}+\big{|}\sum_{ik}\overline{\Omega}_{ik}^{2}\big{|}+\big{|}\sum_{i} \overline{\Omega}_{ii}^{2}\big{|}=0+o(\alpha n^{3/2})+n=o(\alpha n^{3/2}),\]
where we apply the third regularity condition.
Now we focus on \(T_{n1}\). By direct calculations
\[\mathbb{E}T_{n1}=0,\]
and
\[\mathrm{Var}\,T_{n1} =2\sum_{i,j,k(dist)}\Omega_{ik}(1-\Omega_{ik})\Omega_{jk}(1- \Omega_{jk})\] \[=2\sum_{i,j,k}\Omega_{ik}(1-\Omega_{ik})\Omega_{jk}(1-\Omega_{jk}) -2\sum_{i,j,k(not\,dist.)}\Omega_{ik}(1-\Omega_{ik})\Omega_{jk}(1-\Omega_{jk})\] \[=2\mathbf{1}^{\prime}\Omega^{2}\mathbf{1}-2\sum_{i,j,k(not\,dist.)}\Omega_ {ik}(1-\Omega_{ik})\Omega_{jk}(1-\Omega_{jk})\]
Note that
\[2\mathbf{1}^{\prime}\Omega^{2}\mathbf{1}\sim 2n(n-1)(n-2)\alpha^{2}\]
since \(\alpha\to 0\). Moreover, with some simple casework we can show
\[\sum_{i,j,k(not\,dist.)}\Omega_{ik}(1-\Omega_{ik})\Omega_{jk}(1-\Omega_{jk}) \lesssim\alpha n^{2}=o(\alpha^{2}n^{3}),\]
where we use that \(\alpha n\to\infty\) (because \(\alpha^{2}n\to\infty\)). Hence
\[\mathrm{Var}\,T_{n1}\sim 2n(n-1)(n-2)\alpha^{2}(1-\alpha)^{2}\sim 2n(n-1)(n-2) \alpha^{2}(1-\alpha)^{2}.\]
To study \(T_{n1}\) we apply the martingale central limit theorem using a similar argument to Cammarata & Ke (2022)). Define \(W_{ij}=A_{ij}-\Omega_{ij}\) and
\[T_{n,m} =\sum_{(i,j,k)\in I_{m}}W_{ik}W_{jk},\qquad\text{ and }\qquad T_{n,0}=0,\] \[Z_{n,m} =\sqrt{\frac{n-1}{2n(n-2)}}\frac{T_{n,m}}{(n-1)\alpha_{n}(1- \alpha_{n})},\qquad\text{ and }\qquad Z_{n,0}=0.\]
where
\[I_{m}=\{(i,j,k)\in[m]^{3}\text{ s.t. }i,j,k\text{ are distinct}\},\]
and \(m\leq n\). Define a filtration \(\{\mathcal{F}_{n,m}\}\) where \(\mathcal{F}_{n,m}=\sigma\{W_{ij},(i,j)\in[m]^{2}\}\) for all \(m\in[n]\), and let \(\mathcal{F}_{n,0}\) be the trivial \(\sigma\)-field. It is straightforward to verify that \(T_{n,m}\) and \(Z_{n,m}\) are martingales with respect to this filtration. We further define a martingale difference sequence
\[X_{n,m}=Z_{n,m}-Z_{n,m-1}\]
for all \(m\in[n]\).
If we can show that the following conditions hold
\[\text{(a) }\sum_{m=1}^{n}\mathbb{E}[X_{n,m}^{2}|\mathcal{F}_{n,m-1}] \xrightarrow{\mathbb{P}}1,\] (F.3) \[\text{(b) }\forall\epsilon>0,\sum_{m=1}^{n}\mathbb{E}[X_{n,m}^{2} \mathbf{1}\{|X_{n,m}>\epsilon|\}|\mathcal{F}_{n,m-1}]\xrightarrow{\mathbb{P }}0,\] (F.4)
then the Martingale Central Limit Theorem implies that \(Z_{n}\Rightarrow\mathcal{N}(0,1)\).
Our argument follows closely Cammarata & Ke (2022). First consider (F.3). It suffices to show that
\[\mathbb{E}\left[\sum_{m=1}^{n}\mathbb{E}[X_{n,m}^{2}|\mathcal{F}_{n,m-1}] \right]\xrightarrow{n\to\infty}1,\] (F.5)
and
\[\mathrm{Var}\left(\sum_{m=1}^{n}\mathbb{E}[X_{n,m}^{2}|\mathcal{F}_{n,m-1}] \right)\xrightarrow{n\to\infty}0.\] (F.6)
For notational brevity, define
\[C_{n}:=(n-1)\alpha_{n}(1-\alpha_{n})\sqrt{\frac{2n(n-2)}{n-1}}.\]
Mimicking the argument in (Cammarata & Ke, 2022, pgs.33-34) shows the following. Note that all sums below are indexed up to \(m-1\).
\[\mathbb{E}[C_{n}^{2}X_{n,m}^{2}| \mathcal{F}_{n,m-1}]=4\sum_{k\neq j;\ i\neq l}W_{jk}W_{il}\mathbb{ E}\left[W_{mk}W_{mi}\right]+4\sum_{k\neq j;\ i\neq l}W_{jk}\mathbb{E}\left[W_{im}W_{km}W_{ lm}\right]\] \[+\sum_{i\neq j;\ k\neq l}\mathbb{E}\left[W_{im}W_{jm}W_{km}W_{ lm}\right].\] (F.7)
Continuing, we have
\[\mathbb{E}[C_{n}^{2}X_{n,m}^{2}|\mathcal{F}_{n,m-1}] =4\sum_{i}\sum_{j\neq i,l\neq i}W_{ij}W_{il}\Omega_{mi}(1-\Omega_{ mi})+2\sum_{i,j(dist)}\Omega_{im}(1-\Omega_{im})\Omega_{jm}(1-\Omega_{jm})\] \[=4\sum_{ij\ell(dist)}W_{ij}W_{il}\Omega_{mi}(1-\Omega_{mi})+4 \sum_{i,j(dist)}W_{ij}^{2}\Omega_{mi}(1-\Omega_{mi})\] \[\qquad+2\sum_{i,j(dist)}\Omega_{im}(1-\Omega_{im})\Omega_{jm}(1- \Omega_{jm}).\] (F.8)
Computing expectations,
\[\mathbb{E}[\mathbb{E}[C_{n}^{2}X_{n,m}^{2}|\mathcal{F}_{n,m-1}]]\] \[=4\sum_{i,j(dist)}\Omega_{ij}(1-\Omega_{ij})\Omega_{mi}(1-\Omega _{mi})+2\sum_{i,j(dist)}\Omega_{im}(1-\Omega_{im})\Omega_{jm}(1-\Omega_{jm})\]
Summing over \(m\) and a simple combinatorial argument yields
\[C_{n}^{2}\mathbb{E}\big{[}\sum_{m=1}^{n}\mathbb{E}[X_{n,m}^{2}|\mathcal{F}_{n,m-1}]\big{]}=2\sum_{i,j,k(dist)}\Omega_{ik}(1-\Omega_{ik})\Omega_{jk}(1-\Omega _{jk})\sim C_{n}^{2}.\]
Using the identity
\[W_{ij}^{2}=(1-2\Omega_{ij})W_{ij}+\Omega_{ij}(1-\Omega_{ij}),\]
we have
\[\mathbb{E}[C_{n}^{2}X_{n,m}^{2}|\mathcal{F}_{n,m-1}] =4\sum_{ij\ell(dist)}W_{ij}W_{il}\Omega_{mi}(1-\Omega_{mi})+4\sum _{i,j(dist)}W_{ij}^{2}\Omega_{mi}(1-\Omega_{mi})\] \[=24\sum_{i<j<\ell}W_{ij}W_{il}\Omega_{mi}(1-\Omega_{mi})+8\sum_{i <j}W_{ij}(1-2\Omega_{ij})\Omega_{mi}(1-\Omega_{mi})\] \[\quad+4\sum_{i<j}\Omega_{ij}(1-\Omega_{ij})\Omega_{mi}(1-\Omega _{mi}).\]
Thus
\[\sum_{m=1}^{n}\mathbb{E}[C_{n}^{2}X_{n,m}^{2}|\mathcal{F}_{n,m-1}] =24\sum_{i<j<\ell}\big{(}\sum_{m>\max(i,j,\ell)}\Omega_{mi}(1- \Omega_{mi})\ \big{)}W_{ij}W_{i\ell}\] \[\quad+8\sum_{i<j}\big{(}\sum_{m>\max(i,j,\ell)}\Omega_{mi}(1- \Omega_{mi})\ \big{)}(1-2\Omega_{ij})W_{ij}.\]
All terms above are uncorrelated. Hence,
\[\mathrm{Var}\left(\sum_{m=1}^{n}\mathbb{E}[C_{n}^{2}X_{n,m}^{2}| \mathcal{F}_{n,m-1}]\right) =24^{2}\sum_{i<j<\ell}\big{(}\sum_{m>\max(i,j,\ell)}\Omega_{mi}( 1-\Omega_{mi})\ \big{)}^{2}\Omega_{ij}(1-\Omega_{ij})\Omega_{i\ell}(1-\Omega_{i\ell})\] \[\quad+64\sum_{i<j}\big{(}\sum_{m>\max(i,j,\ell)}\Omega_{mi}(1- \Omega_{mi})\ \big{)}^{2}(1-2\Omega_{ij})^{2}\Omega_{ij}(1-\Omega_{ij})\] \[\lesssim n^{2}\cdot C_{n}^{2},\]
whence,
\[\mathrm{Var}\left(\sum_{m=1}^{n}\mathbb{E}[X_{n,m}^{2}|\mathcal{F}_{n,m-1}] \right)\lesssim\frac{n^{2}}{C_{n}^{2}}\asymp\frac{n^{2}}{\alpha^{2}n^{3}}\to 0\]
since \(\alpha^{2}n\to\infty\). Thus we have shown (F.5) and (F.6), which together prove (F.3).
Next we prove (F.4), again following the argument in Cammarata & Ke (2022). In (Cammarata & Ke, 2022, pg.36) it is shown that it suffices to prove
\[\sum_{m=1}^{n}\mathbb{E}[X_{n,m}^{4}]\xrightarrow{n\to\infty}0.\] (F.9)
Further in (Cammarata & Ke, 2022, pg.37), it is shown that
\[\mathbb{E}[C_{n}^{4}X_{n,m}^{4}]= 16\biggl{[}\sum_{i<j}\mathbb{E}[W_{jm}^{4}]\mathbb{E}[(W_{ij}+W_{ im})^{4}]\] \[+3\sum_{\begin{subarray}{c}i<j,u<v\\ i\neq u,j\neq v\end{subarray}}\mathbb{E}[W_{jm}^{2}]\,\mathbb{E}[(W_{ij}+W_{ im})^{2}]\,\mathbb{E}[W_{em}^{2}]\,\mathbb{E}[(W_{uv}+W_{um})^{2}]\] \[+3\sum_{\begin{subarray}{c}i<j,v\\ j\neq v\end{subarray}}\mathbb{E}[W_{jm}^{2}]\,\mathbb{E}[W_{vm}^{2}]\, \mathbb{E}[(W_{ij}+W_{im})^{2}(W_{iv}+W_{im})^{2}]\] \[+3\sum_{\begin{subarray}{c}i,u<j\\ i\neq u\end{subarray}}\mathbb{E}[(W_{ij}+W_{im})^{2}]\,\mathbb{E}[(W_{uj}+W_{ um})^{2}]\,\mathbb{E}[W_{jm}^{4}]\biggr{]}.\]
Going through term by term, we have for \(n\) sufficiently large
\[\sum_{i<j}\mathbb{E}[W_{jm}^{4}]\mathbb{E}[(W_{ij}+W_{im})^{4}]\lesssim\sum_ {i,j}\Omega_{jm}\bigl{(}\Omega_{ij}+\Omega_{im}\bigr{)}\lesssim\alpha^{2}n^{2}\]
Next
\[\sum_{\begin{subarray}{c}i<j,u<v\\ i\neq u,j\neq v\end{subarray}}\mathbb{E}[W_{jm}^{2}] \mathbb{E}[(W_{ij}+W_{im})^{2}]\,\mathbb{E}[W_{vm}^{2}]\, \mathbb{E}[(W_{ij}+W_{im})^{2}]\lesssim\sum_{ijuv}\Omega_{jm}(\Omega_{ij}+ \Omega_{jm})\Omega_{vm}(\Omega_{uv}+\Omega_{um})\] \[=\sum_{ijuv}\Omega_{jm}\Omega_{ij}\Omega_{vm}\Omega_{uv}+\sum_{ ijuv}\Omega_{jm}\Omega_{ij}\Omega_{vm}\Omega_{um}+\sum_{ijuv}\Omega_{jm}^{2} \Omega_{vm}\Omega_{uv}\] \[+\sum_{ijuv}\Omega_{jm}^{2}\Omega_{vm}\Omega_{um}\] \[\lesssim\alpha^{4}n^{4}+\alpha^{3}n^{3}\]
With a similar argument, we also have, for \(n\) sufficiently large,
\[\sum_{\begin{subarray}{c}i<j,v\\ j\neq v\end{subarray}}\mathbb{E}[W_{jm}^{2}]\,\mathbb{E}[W_{vm}^{2}]\, \mathbb{E}[(W_{ij}+W_{im})^{2}(W_{iv}+W_{im})^{2}]\lesssim\alpha^{2}n^{2}+ \alpha^{3}n^{3}\] \[\sum_{\begin{subarray}{c}i,u<j\\ i\neq u\end{subarray}}\mathbb{E}[(W_{ij}+W_{im})^{2}]\,\mathbb{E}[(W_{uj}+W_{ um})^{2}]\,\mathbb{E}[W_{jm}^{4}]\biggr{]}\lesssim\alpha^{3}n^{3}+\alpha^{2}n^{2}.\]
Thus
\[\sum_{m=1}^{n}\mathbb{E}[X_{n,m}^{4}]\lesssim\frac{\alpha^{4}n^{5}}{C_{n}^{4}} \sim\frac{\alpha^{4}n^{5}}{\alpha^{4}n^{6}}\to 0,\]
which verifies (F.9). Since (F.9) implies (F.4), this completes the proof.
## Appendix G Proof of Theorem 2.4 (Statistical lower bound)
Let \(f_{0}(A)\) be the density under the null hypothesis. Let \(\mu(\Pi)\) be the density of \(\Pi\), and let \(f_{1}(A|\Pi)\) be the conditional density of \(A\) given \(\Pi\). The \(L_{1}\) distance between two hypotheses is
\[\ell^{*}\equiv\frac{1}{2}\mathbb{E}_{A\sim f_{0}}\bigl{|}\mathbb{E}_{\Pi\sim \mu}L(A,\Pi)-1\bigr{|},\qquad L(A,\Pi)=f_{1}(A|\Pi)/f_{0}(A).\]
Define
\[\mathcal{M}=\bigl{\{}\Pi:\Pi\text{ is an eligible membership matrix and }\sum_{i}\pi_{i}(1)\leq 2n\epsilon\bigr{\}}.\] (G.1)
Write \(L^{\mathcal{M}}(A,\Pi)=L(A,\Pi)\cdot 1\{\Pi\in\mathcal{M}\}\) and define \(L^{\mathcal{M}^{\varepsilon}}(A,\Pi)\) similarly. By direct calculations, we have
\[\ell^{*}=\frac{1}{2}\mathbb{E}_{A\sim f_{0}}\bigl{|}\mathbb{E}_{\Pi\sim\mu}L^{ \mathcal{M}}(A,\Pi)-1+\mathbb{E}_{\Pi\sim\mu}L^{\mathcal{M}^{\varepsilon}}(A, \Pi)\bigr{|}\]
\[\leq\frac{1}{2}\mathbb{E}_{A\sim f_{0}}\big{|}\mathbb{E}_{\Pi\sim\mu} L^{\mathcal{M}}(A,\Pi)-1\big{|}+\frac{1}{2}\mathbb{E}_{A\sim f_{0}}\mathbb{E}_{\Pi \sim\mu}L^{\mathcal{M}^{c}}(A,\Pi)\] \[\equiv\frac{1}{2}\ell_{0}+\frac{1}{2}\ell_{1}.\] (G.2)
Note that \(\mathbb{E}_{A\sim f_{0}}\mathbb{E}_{\Pi\sim\mu}L^{\mathcal{M}^{c}}(A,\Pi)= \int_{\Pi\in\mathcal{M}^{c}}f_{1}(A|\Pi)\mu(\Pi)d\Pi dA=\int_{\Pi\in\mathcal{M }^{c}}\mu(\Pi)d\Pi=\mu(\mathcal{M}^{c})\). We bound the probability of \(\mu\in\mathcal{M}^{c}\). Note that \(\pi_{i}(1)\) are independent Bernoulli variables with mean \(\epsilon\), where \(\epsilon\asymp n^{-1}N\). It follows by Bernstein inequality that if \(t=100\sqrt{N\log N}\), the we have conservatively,
\[\mathbb{P}\Big{(}\Big{|}\sum_{i}\pi_{i}(1)-N\Big{|}>t\Big{)}\leq 2 \exp\biggl{(}-\frac{t^{2}/2}{n\varepsilon+t/3}\biggr{)}\leq 2\exp\biggl{(}- \frac{100^{2}N(\log N)/2}{200N}\biggr{)}\lesssim N^{-c}=o(1)\] (G.3)
for some \(c>0\). It follows that
\[\ell_{1}=\mu(\mathcal{M}^{c})=o(1).\] (G.4)
By Cauchy-Schwarz inequality,
\[\ell_{0}^{2} \leq\mathbb{E}_{A\sim f_{0}}\big{|}\mathbb{E}_{\Pi\sim\mu}L^{ \mathcal{M}}(A,\Pi)-1\big{|}^{2}\] \[=\mathbb{E}_{A\sim f_{0}}\big{(}\mathbb{E}_{\Pi\sim\mu}L^{ \mathcal{M}}(A,\Pi))^{2}-2\mathbb{E}_{A\sim f_{0}}\mathbb{E}_{\Pi\sim\mu}L^{ \mathcal{M}}(A,\Pi)+1\] \[=\mathbb{E}_{A\sim f_{0}}\big{(}\mathbb{E}_{\Pi\sim\mu}L^{ \mathcal{M}}(A,\Pi))^{2}-2\big{[}1-\mathbb{E}_{A\sim f_{0}}\mathbb{E}_{\Pi\sim \mu}L^{\mathcal{M}^{c}}(A,\Pi)\big{]}+1\] \[\leq\mathbb{E}_{A\sim f_{0}}\big{(}\mathbb{E}_{\Pi\sim\mu}L^{ \mathcal{M}}(A,\Pi))^{2}-1+o(1),\]
where the third line is from \(\mathbb{E}_{A\sim f_{0}}\mathbb{E}_{\Pi\sim\mu}L(A,\Pi)=1\) and the last line is from (G.4). We plug it into (G.2) to get
\[\ell^{*}\leq\sqrt{\ell_{2}-1}+o(1),\qquad\text{where}\quad\ell_{2}\equiv \mathbb{E}_{A\sim f_{0}}\big{(}\mathbb{E}_{\Pi\sim\mu}L^{\mathcal{M}}(A,\Pi) )^{2}.\] (G.5)
It suffices to prove that \(\ell_{2}\leq 1+o(1)\).
Below, we study \(\ell_{2}\). Let \(\widetilde{\Pi}\) be an independent copy of \(\Pi\). Define
\[S(A,\Pi,\widetilde{\Pi})=L(A,\Pi)\cdot L(\widetilde{\Pi},A).\]
It is easy to see that
\[\ell_{2}=\mathbb{E}_{A\sim f_{0},\Pi,\widetilde{\Pi}\sim\mu}\big{[}S(A,\Pi, \widetilde{\Pi})\cdot 1\{\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M}\}\big{]}.\] (G.6)
Denote by \(p_{ij}\) and \(q_{ij}(\Pi)\) the values of \(\Omega_{ij}\) under the null and the alternative, respectively. Write \(\delta_{ij}(\Pi)=(q_{ij}(\Pi)-p_{ij})/p_{ij}\). By definition,
\[S(A,\Pi,\widetilde{\Pi})=\prod_{i<j}\left[\frac{q_{ij}(\Pi)q_{ij}(\widetilde{ \Pi})}{p_{ij}^{2}}\right]^{A_{ij}}\left[\frac{(1-q_{ij}(\Pi))(1-q_{ij}( \widetilde{\Pi}))}{(1-p_{ij})^{2}}\right]^{1-A_{ij}}.\]
Write for short \(q_{ij}(\Pi)=q_{ij}\), \(q_{ij}(\widetilde{\Pi})=\tilde{q}_{ij}\), \(\delta_{ij}(\Pi)=\delta_{ij}\) and \(\delta_{ij}(\widetilde{\Pi})=\tilde{\delta}_{ij}\). By straightforward calculations, we have the following claims:
\[\mathbb{E}_{A\sim f_{0}}[S(A,\Pi,\widetilde{\Pi})]=\prod_{i<j} \Bigl{(}1+\frac{p_{ij}\delta_{ij}\tilde{\delta}_{ij}}{1-p_{ij}}\Bigr{)},\] (G.7)
and
\[\ln S(A,\Pi,\widetilde{\Pi})=\sum_{i<j}A_{ij}\ln\biggl{[}\frac{(1 +\delta_{ij})(1+\tilde{\delta}_{ij})}{(1-\frac{p_{ij}}{1-p_{ij}}\delta_{ij})(1 -\frac{p_{ij}}{1-p_{ij}}\tilde{\delta}_{ij})}\biggr{]}\] \[\qquad\qquad\qquad+\ln\biggl{[}\Bigl{(}1-\frac{p_{ij}}{1-p_{ij}} \delta_{ij}\Bigr{)}\Bigl{(}1-\frac{p_{ij}}{1-p_{ij}}\delta_{ij}\Bigr{)} \biggr{]}.\] (G.8)
The expression (G.8) may be useful for the case of \(Nc\to 0\). In the current case of \(Nc\to\infty\), we use (G.7). It follows from (G.6) that
\[\ell_{2}=\mathbb{E}_{\Pi,\widetilde{\Pi}\sim\mu}\left[\prod_{i<j} \Bigl{(}1+\frac{p_{ij}\delta_{ij}\tilde{\delta}_{ij}}{1-p_{ij}}\Bigr{)}\cdot 1 \{\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M}\}\right]\]
\[=\mathbb{E}_{\Pi,\widetilde{\Pi}\sim\mu}\bigg{[}\exp\biggl{(}\sum_{i<j} \ln\Bigl{(}1+\frac{p_{ij}\delta_{ij}\tilde{\delta}_{ij}}{1-p_{ij}}\Bigr{)} \biggr{)}\cdot 1\{\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M}\}\bigg{]}\] \[\leq\mathbb{E}_{\Pi,\widetilde{\Pi}\sim\mu}\bigg{[}\exp(X)\cdot 1 \{\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M}\}\bigg{]},\quad\text{with} \ X\equiv\sum_{i<j}\frac{p_{ij}\delta_{ij}\tilde{\delta}_{ij}}{1-p_{ij}}.\] (G.9)
where the last line is from the universal inequality of \(\ln(1+t)\leq t\).
We further work out the explicit expressions of \(p_{ij}\), \(\delta_{ij}\) and \(\tilde{\delta}_{ij}\). Let \(h=(\epsilon,1-\epsilon)^{\prime}\), and recall that \(\alpha_{0}=a\epsilon+b(1-\epsilon)\). The condition of \(b\) in (H.1) guarantees that
\[Ph=\alpha_{0}\mathbf{1}_{2},\qquad\alpha_{0}=a\epsilon+b(1-\epsilon).\]
By direct calculations,
\[\alpha_{0}=\frac{c(1-\epsilon)^{2}-a\epsilon^{2}}{1-2\epsilon}.\] (G.10)
It follows that
\[P=\alpha_{0}\mathbf{1}_{2}\mathbf{1}_{2}^{\prime}+M,\qquad\text{where}\quad M =\frac{a-c}{1-2\epsilon}\xi\xi^{\prime},\quad\xi=(1-\epsilon,-\epsilon)^{ \prime}.\] (G.11)
Write \(z_{i}=\pi_{i}-h\). Since \(Ph=\alpha_{0}\mathbf{1}_{2}\) and \(z_{i}^{\prime}\mathbf{1}_{2}=0\), we have
\[\Omega_{ij} =\theta_{j}\theta_{j}(h+z_{i})^{\prime}P(h+z_{i})\] \[=\theta_{i}\theta_{j}(h^{\prime}Ph+z_{i}^{\prime}Pz_{j})\] \[=\theta_{i}\theta_{j}(\alpha_{0}+z_{i}^{\prime}Pz_{j})\] \[=\theta_{i}\theta_{j}(\alpha_{0}+z_{i}^{\prime}Mz_{j})\] \[=\theta_{i}\theta_{j}\Big{[}\alpha_{0}+\frac{a-c}{1-2\epsilon}( \xi^{\prime}z_{i})(\xi^{\prime}z_{j})\Big{]}.\]
Let \(t_{i}\) be the indicator that node \(i\) belongs to the first community and write \(u_{i}=t_{i}-\frac{N}{n}\). Then, \(\pi_{i}=(t_{i},1-t_{i})\) and \(z_{i}=u_{i}(1,-1)^{\prime}\). It follows that \(\xi^{\prime}z_{i}=u_{i}\). Therefore,
\[\Omega_{ij}=\theta_{i}\theta_{j}\Big{[}\alpha_{0}+\frac{a-c}{1-2\epsilon}u_{i }u_{j}\Big{]},\qquad\text{where}\quad u_{i}\stackrel{{ iid}}{{\sim}}\text{Bernoulli}( \epsilon)-\epsilon.\] (G.12)
Consequently,
\[p_{ij}=\alpha_{0}\theta_{i}\theta_{j},\qquad\delta_{ij}(\Pi)=\frac{a-c}{(1-2 \epsilon)\alpha_{0}}u_{i}u_{j}.\]
We plug it into (G.9) to obtain
\[X=\sum_{i<j}\frac{\theta_{i}\theta_{j}}{1-\alpha_{0}\theta_{i}\theta_{j}} \frac{(a-c)^{2}}{(1-2\epsilon)^{2}\alpha_{0}}u_{i}u_{j}\tilde{u}_{i}\tilde{u} _{j}.\] (G.13)
Below, we use (G.13) to bound \(\ell^{2}\). Since \(\alpha_{0}\theta_{\max}^{2}=O(c\theta_{\max}^{2})=o(1)\), by Taylor expansion of \((1-\alpha_{0}\theta_{i}\theta_{j})^{-1}\), we have
\[X=\frac{(a-c)^{2}}{(1-2\epsilon)^{2}\alpha_{0}}\sum_{i<j}\sum_{s=1}^{\infty} \alpha_{0}^{s-1}\theta_{j}^{s}\theta_{i}^{s}u_{i}u_{j}\tilde{u}_{i}\tilde{u}_ {j}.\]
Let \(b_{i}=\theta_{i}\theta_{\max}^{-1}<1\). We re-write \(X\) as
\[X=\gamma\sum_{s=1}^{\infty}w_{s}X_{s},\]
where
\[\gamma=\frac{\theta_{\max}^{2}(a-c)^{2}}{(1-\alpha_{0}\theta_{\max}^{2})(1-2 \epsilon)^{2}\alpha_{0}},\ \ w_{s}=(1-\alpha_{0}\theta_{\max}^{2})\alpha_{0}^{s-1}\theta_{\max}^{2s-2}, \ \ \text{and}\ X_{s}=\sum_{i<j}b_{j}^{s}b_{i}^{s}u_{i}u_{j}\tilde{u}_{i}\tilde{u}_{j}.\] (G.14)
Let \(\tilde{\mathbb{E}}\) be the conditional expectation by conditioning on the event of \(\{\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M}\}\). It follows from (G.9) that
\[\ell_{2} =\mathbb{P}(\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M})\cdot \tilde{\mathbb{E}}[\exp(X)]\] \[=\mathbb{P}(\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M}) \cdot\tilde{\mathbb{E}}\Big{[}\exp\Bigl{(}\gamma\sum_{s=1}^{\infty}w_{s}X_{s} \Bigr{)}\Big{]}\] \[\leq\mathbb{P}(\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M}) \cdot\sum_{s=1}^{\infty}w_{s}\tilde{\mathbb{E}}[\exp(\gamma X_{s})]\] \[=\sum_{s=1}^{\infty}w_{s}\,\mathbb{E}\bigl{[}\exp(\gamma X_{s}) \cdot 1\{\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M}\}\bigr{]}.\] (G.15)
The third line follows using Jensen's inequality and that \(\sum_{s\geq 1}w_{s}=1\).
It suffices to bound the term in (G.15) for each \(s\geq 1\). Note that
\[X_{s}\leq Y_{s}^{2},\qquad Y_{s}=\sum_{i}b_{i}^{*}u_{i}\tilde{u}_{i}.\] (G.16)
We recall that \(u_{i}=t_{i}-\epsilon\), where \(t_{i}=\pi_{i}(1)\in\{0,1\}\). The event \(\{\Pi\in\mathcal{M},\widetilde{\Pi}\in\mathcal{M}\}\) translates to \(\max\{\sum_{i}t_{i},\;\sum_{i}\tilde{t}_{i}\}\leq 2n\epsilon\). Note that
\[u_{i}\tilde{u}_{i}=\begin{cases}(1-\epsilon)^{2},&\text{when }t_{i}+\tilde{t}_{i}=2,\\ -\epsilon(1-\epsilon),&\text{when }t_{i}+\tilde{t}_{i}=1,\\ \epsilon^{2},&\text{where }t_{i}+\tilde{t}_{i}=0.\end{cases}\]
It follows that \(|u_{i}\tilde{u}_{i}|\leq(t_{i}+\tilde{t}_{i})/2+\epsilon^{2}\). Note that \(\epsilon=O(N/n)\). Therefore, on this event,
\[|Y_{s}|\leq\sum_{i}[(t_{i}+\tilde{t}_{i})/2+\epsilon^{2}]\leq 2n\epsilon+n \epsilon^{2}\leq 3N.\]
We immediately have
\[\mathbb{E}\bigl{[}\exp(\gamma X_{s})\cdot 1\{\Pi\in\mathcal{M},\widetilde{\Pi} \in\mathcal{M}\}\bigr{]}\leq\mathbb{E}\Biggl{[}\exp(\gamma Y_{s}^{2})\cdot 1 \{|Y_{s}|\leq 3N\}\Biggr{]}.\] (G.17)
The following lemma is useful.
**Lemma G.1**.: _Let \(Z\) be a random variable satisfying that_
\[\mathbb{P}(|Z|>t)\leq 2\exp\Bigl{(}-\frac{t^{2}/2}{\sigma^{2}+bt}\Bigr{)}, \qquad\text{for all }t>0.\]
_Then, for any \(\gamma>0\) and \(B>0\) such that \(\gamma(\sigma^{2}+bB)<1/2\), we have_
\[\mathbb{E}\bigl{[}\exp(\gamma Z^{2})1\{|Z|\leq B\}\bigr{]}\leq 1+\frac{4\gamma( \sigma^{2}+bB)}{1-2\gamma(\sigma^{2}+bB)}.\]
Note that \(Y_{s}=\sum_{i}b_{i}^{*}u_{i}\tilde{u}_{i}\) is a sum of independent, mean-zero variables, where \(|b_{i}^{*}u_{i}\tilde{u}_{i}|\leq 2\) and \(\sum_{i}\operatorname{Var}(b_{i}^{*}u_{i}\tilde{u}_{i})\leq\sum_{i}b_{i}^{2*}2 \epsilon^{2}\leq 2n\epsilon^{2}\). It follows from Bernstein's inequality that
\[\mathbb{P}(|Y_{s}|>t)\leq\exp\biggl{(}-\frac{t^{2}/2}{2n\epsilon^{2}+2t} \biggr{)},\qquad\text{for all }t>0.\]
To apply Lemma G.1, we set
\[b=2,\qquad\sigma^{2}=2n\epsilon^{2}\leq 2n^{-1}N^{2},\qquad Z=Y_{s},\qquad B=3N,\]
and \(\gamma\) as in (G.14). The choice of \(B\) is in light of (G.17). Furthermore, by (G.10), we have \(\alpha_{0}\asymp c\). Also we have \(\theta_{\max}^{2}\alpha_{0}\to 0\). Hence,
\[\gamma=\frac{\theta_{\max}^{2}(a-c)^{2}}{(1-\alpha_{0}\theta_{\max}^{2})(1-2 \epsilon)^{2}\alpha_{0}}\leq C\cdot\bigl{(}\frac{\theta_{\max}^{2}(a-c)^{2}}{ c}\bigr{)}.\]
Thus by the hypothesis \(\frac{\theta_{\max}^{2}N(a-c)^{2}}{c_{1}}\to 0\), it holds that \(\gamma(\sigma^{2}+bB)<1/2\) for \(n\) sufficiently large. Applying Lemma G.1, we obtain
\[\mathbb{E}\big{[}\exp(\gamma X_{s})\cdot 1\{\Pi\in\mathcal{M}, \widetilde{\Pi}\in\mathcal{M}\}\big{]} \leq 1+C(\gamma(\sigma^{2}+bB))\] \[\leq 1+C\cdot\big{(}\frac{\theta_{\max}^{2}N(a-c)^{2}}{c}\big{)}\]
We further plug it into (G.15) to get
\[\ell_{2}\leq\sum_{s=1}^{\infty}w_{s}\Big{[}1+C\cdot\big{(}\frac{ \theta_{\max}^{2}N(a-c)^{2}}{c}\big{)}\Big{]}\leq 1+\big{(}\frac{\theta_{\max}^{2}N( a-c)^{2}}{c}\big{)},\]
where we use that \(\sum w_{s}=1\).
It follows immediately that
\[\ell_{2}\leq 1+o(1),\qquad\text{if}\quad\theta_{\max}\frac{\sqrt{N}(a-c)}{ \sqrt{c}}\to 0.\]
This proves the claim.
### Proof of Lemma g.1
Let \(X\) denote a nonnegative random variable, and define \(\overline{F}(x)=\mathbb{P}_{X}[X\geq x]\). For any positive number \(\beta>0\), we have
\[\mathbb{E}[\exp(\gamma X)1\{X<\beta\}] =\int_{0}^{\beta}e^{\gamma x}\,\mathrm{d}\mathbb{P}_{X}(x)\] \[=-e^{\gamma x}\bar{F}(x)\bigg{|}_{0}^{\beta}+\int_{0}^{\beta} \gamma e^{\gamma x}\bar{F}(x)dx\] \[=1-e^{\gamma\beta}\bar{F}(\beta)+\int_{0}^{\beta}\gamma e^{ \gamma x}\bar{F}(x)dx\] \[\leq 1+\int_{0}^{\beta}\gamma e^{\gamma x}\bar{F}(x)dx.\]
We apply it to \(X=Z^{2}\) and \(\beta=B^{2}\) to get
\[\mathbb{E}\big{[}\exp(\gamma Z^{2})1\{|Z|\leq B\}\big{]} \leq 1+\int_{0}^{B^{2}}\gamma\exp(\gamma x)\mathbb{P}(|Z|>\sqrt{x})dx\] \[\leq 1+2\gamma\int_{0}^{B^{2}}\exp(\gamma x)\exp\biggl{\{}-\frac{x }{2(\sigma^{2}+b\sqrt{x})}\biggr{\}}dx\] \[\leq 1+2\gamma\int_{0}^{\infty}\exp\biggl{\{}-\frac{1-2\gamma( \sigma^{2}+bB)}{2(\sigma^{2}+bB)}x\biggr{\}}dx\] \[\leq 1+\frac{4\gamma(\sigma^{2}+bB)}{1-2\gamma(\sigma^{2}+bB)}.\]
This proves the claim.
## Appendix H Proof of Theorem 2.5 (Tightness of the statistical lower bound)
Let \(\rho\in\mathbb{R}^{n}\). We consider the global testing problem in the DCBM model where
* \(P=\begin{pmatrix}1&b\\ b&1\end{pmatrix}\)
* \(b=\tilde{b}/\sqrt{ac}\),
* \(\theta_{i}=\rho_{i}\sqrt{a}\) for \(i\in S\),
* \(\theta_{i}=\rho_{i}\sqrt{c}\) for \(i\notin S\), and
* \(aN_{0}+\tilde{b}(n-N_{0})=\tilde{b}N_{0}+c(n-N_{0})\),
Recall that \(h=(N_{0}/n,1-N_{0}/n)^{\mathsf{T}}\), and \(N_{0}\) is the size of the smaller community in the alternative. Observe that the null model \(K=1\) is parameterized by setting \(a=c=\tilde{b}=1\).
Recall that \(\varepsilon=N/n\). We define
\[\alpha_{0}\equiv\frac{aN_{0}+\tilde{b}(n-N_{0})}{n}.\]
Note that by Assumption (\(E\)),
\[\tilde{b} =\frac{nc-(a+c)N_{0}}{n-2N_{0}}\] (H.1) \[a\epsilon =O(c),\text{ and}\] (H.2) \[c \sim\tilde{b}\sim\alpha_{0}.\] (H.3)
Our assumptions in this section are the following:
* There exists an absolute constant \(C_{\rho}>0\) such that \(\rho_{\max}\leq C_{\rho}\,\rho_{\min}\)
* \(\frac{\rho_{\max}^{2}\alpha\alpha n}{\sqrt{\log n}}\to\infty\)
* An integer \(N\) is known such that \(N_{0}=N[1+o(1)]\).
Note that since we tolerate a small error in the clique size by Assumption (\(c\)), our setting indeed matches that of the statistical lower bound, by (G.3).
Define the signed scan statistic
\[\phi_{sc}=\max_{D\subset[n];|D|=N}\mathbf{1}_{D}^{\prime}\big{(}A-\hat{\eta} \hat{\eta}^{\mathsf{T}}\big{)}\mathbf{1}_{D}.\] (H.4)
For notational brevity, define \(n^{(2)}=\binom{n}{2}\). Let
\[\hat{\gamma}=\frac{1}{n^{(2)}}\sum_{i,j}A_{ij}.\]
The estimator \(\hat{\gamma}\) provides a constant factor approximation of the edge density of the least-favorable null model. See Lemma H.1 for further details.
Next let
\[h(u)=(1+u)\log(1+u)-u,\] (H.5)
and note that this function is strictly increasing on \(\mathbb{R}_{\geq 0}\). Define a random threshold \(\hat{\tau}\) to be
\[\hat{\tau}=C^{*}\hat{\gamma}N^{2}h^{-1}\bigg{(}\frac{C^{*}N\log( \frac{n\varepsilon}{N})}{\hat{\gamma}N^{2}}\bigg{)}\] (H.6)
Let \(C^{*}>0\) denote a sufficiently large constant, to be determined, that depends only on \(C_{\rho}\) from Assumption (\(a\)). Finally define the scan test to be
\[\varphi_{sc}=\mathbf{1}\big{[}|\phi_{sc}|>\hat{\tau}\big{]}\]
Note that, if we assume \(a\geq c\), as in the main text, then \(b<1\). In this case, we can simply take
\[\varphi_{sc}=\mathbf{1}\big{[}\phi_{sc}>\hat{\tau}\big{]},\]
and the same guarantees hold. On the other hand, if \(b>1\), then the scan test skews negative, as our proof shows.
**Theorem H.1**.: _If_
\[h\bigg{(}\frac{\|\theta_{S}\|_{1}^{2}|1-b^{2}|}{\rho_{\max}^{2}\alpha_{0}N_{0}^{2 }}\bigg{)}\gg\frac{\log\frac{ne}{N_{0}}}{\rho_{\max}^{2}\alpha_{0}N_{0}},\] (H.7)
_then the type 1 and 2 error of \(\varphi_{sc}\) tend to \(0\) as \(n\to\infty\)._
We interpret the previous result in the following concrete settings.
**Corollary H.1**.: _If_
\[\frac{\rho_{\max}^{2}\alpha_{0}N_{0}}{\log\frac{ne}{N_{0}}}\to 0,\]
_then \(\varphi_{sc}\) has type 1 and 2 errors tending to \(0\) as \(n\to\infty\), provided that_
\[\frac{\rho_{\max}^{2}N_{0}(a-c)}{\log\frac{ne}{N_{0}}}\gg 1.\]
_If_
\[\frac{\rho_{\max}^{2}\alpha_{0}N_{0}}{\log\frac{ne}{N_{0}}}\to\infty,\]
_then \(\varphi_{sc}\) has type 1 and 2 errors tending to \(0\) as \(n\to\infty\), provided that_
\[\frac{\rho_{\max}^{2}N_{0}(a-c)}{\sqrt{\rho_{\max}^{2}N_{0}\alpha_{0}\log \frac{ne}{N_{0}}}}\gg 1.\]
Proof.: Note that
\[\|\theta_{S}\|_{1}^{2}|1-b^{2}|=\rho_{\max}^{2}N_{0}^{2}(a-\tilde{b}^{2}/\sqrt {c})\sim\rho_{\max}^{2}N_{0}^{2}(a-c).\]
In the first case,
\[h\bigg{(}\frac{\|\theta_{S}\|_{1}^{2}|1-b^{2}|}{\rho_{\max}^{2}\alpha_{0}N_{0} ^{2}}\bigg{)}\gg h\bigg{(}\frac{\log\frac{ne}{N_{0}}}{\rho_{\max}^{2}\alpha_{ 0}N_{0}}\bigg{)}\gtrsim\frac{\log\frac{ne}{N_{0}}}{\rho_{\max}^{2}\alpha_{0}N _{0}}.\]
We use the fact that \(h(u)\gtrsim u\) for \(u\geq 1\).
In the second case,
\[h\bigg{(}\frac{\|\theta_{S}\|_{1}^{2}|1-b^{2}|}{\rho_{\max}^{2} \alpha_{0}N_{0}^{2}}\bigg{)}\gg h\bigg{(}\frac{N_{0}\cdot\sqrt{\rho_{\max}^{ 2}N_{0}\alpha_{0}\log\frac{ne}{N_{0}}}}{\rho_{\max}^{2}\alpha_{0}N_{0}^{2}} \bigg{)}=h\bigg{(}\sqrt{\frac{\log\frac{ne}{N_{0}}}{\rho_{\max}^{2}\alpha_{0}N _{0}}}\bigg{)}\gtrsim\frac{\log\frac{ne}{N_{0}}}{\rho_{\max}^{2}\alpha_{0}N_{0 }}.\]
The upper bounds in the second part of Corollary H.1 is the best possible up to logarithmic factors. For example, suppose that \(\theta_{\max}\lesssim\theta_{\min}\) in Theorem 2.4. Then the upper bound for the second case of Corollary H.1 matches the lower bound of Theorem 2.4 up to logarithmic factors.
To prove Theorem 2.5, first we establish concentration of \(\hat{\gamma}\).
**Lemma H.1**.: _Recall_
\[\hat{\gamma}=\frac{1}{n^{(2)}}\sum_{i,j(dist)}A_{ij}.\]
_There exists an absolute constant \(C>0\) such that for all \(\delta>0\), it holds that_
\[|\hat{\gamma}-\mathbb{E}\hat{\gamma}|\leq\frac{C\sqrt{\rho_{\max}^{2}\alpha_{ 0}\log(1/\delta)}}{n}\]
_with probability at least \(1-\delta\)._
Proof.: As a preliminary, we claim that
\[(\Omega\mathbf{1})_{i}\asymp\rho_{\max}^{2}\alpha_{0}n.\] (H.8)
To see this, note that if \(i\in S\), then by (\(E\))
\[(\Omega\mathbf{1})_{i} =\sum_{j}\Omega_{ij}=\theta_{i}(\|\theta_{S}\|_{1}+b\|\theta_{S^{c }}\|_{1})\] \[\asymp\rho_{\max}\sqrt{a}\cdot\big{(}\sqrt{a}N\rho_{\max}+\frac{ \tilde{b}}{\sqrt{ac}}\cdot\sqrt{c}\rho_{\max}\big{)}=\rho_{\max}^{2}\alpha_{0}n.\]
The claim for \(i\notin S\) follows by a similar argument applying (\(E\)). It follows that
\[v_{0}=\mathbf{1}^{\mathsf{T}}\Omega\mathbf{1}\asymp\rho_{\max}^{2}\alpha_{0}n^ {2}\]
The expectation is
\[\mathbb{E}\hat{\gamma}=\frac{1}{n^{(2)}}\sum_{i,j(dist)}\Omega_{ij},\]
and the variance is
\[\operatorname{Var}(\hat{\gamma})=\frac{1}{(n^{(2)})^{2}}\sum_{i,j(dist)} \Omega_{ij}(1-\Omega_{ij}).\]
By Bernstein's inequality,
\[\mathbb{P}\big{[}n^{(2)}\big{|}\hat{\gamma}-\mathbb{E}\hat{\gamma}\big{|}>t \big{]}\leq 2\exp\bigg{(}-\frac{ct^{2}}{\sum_{i,j(dist)}\Omega_{ij}+t} \bigg{)}.\] (H.9)
By Assumptions (\(a\)) and (\(b\)),
\[\sum_{i,j(dist)}\Omega_{ij}\asymp\rho_{\max}^{2}\alpha_{0}n^{2}\gg n.\]
Setting
\[t=\tau\equiv C\sqrt{\rho_{\max}^{2}\alpha_{0}n^{2}\log(1/\delta)}\]
for a large enough absolute constant \(C>0\), (H.9) implies that
\[|\hat{\gamma}-\mathbb{E}\hat{\gamma}|\leq\frac{\tau}{n^{2}}\asymp\frac{\sqrt{ \rho_{\max}^{2}\alpha_{0}\log(1/\delta)}}{n}\]
with probability at least \(1-\delta\).
Next we control the error arising from the plug-in effect of approximating \(\eta^{*}\) by \(\hat{\eta}\).
**Lemma H.2**.: _Given \(D\subset[n]\), define_
\[L_{D}\equiv\mathbf{1}_{D}^{\mathsf{T}}(\eta^{*}\eta^{*\mathsf{T}}-\hat{\eta} \hat{\eta}^{\mathsf{T}})\mathbf{1}_{D}.\]
_Then under the null and alternative hypothesis,_
\[\max_{|D|=N}|L_{D}|\lesssim\sqrt{N_{0}^{3}\rho_{\max}^{2}\alpha_{0}\log(\frac {ne}{N_{0}})}\]
_with probability at least \(1-\binom{n}{N}^{-1}-2v_{0}^{-c_{1}}\), for an absolute constant \(c_{1}>0\)._
Proof.: In this proof, \(c>0\) is an absolute constant that may vary from line to line.
Given \(D\subset[n]\), let
\[L_{D}\equiv\mathbf{1}_{D}^{\mathsf{T}}(\eta^{*}\eta^{*\mathsf{T}}-\hat{\eta} \hat{\eta}^{\mathsf{T}})\mathbf{1}_{D}=\mathbf{1}_{D}^{\mathsf{T}}\eta^{*}( \eta^{*}-\hat{\eta})^{\mathsf{T}}\mathbf{1}_{D}+\mathbf{1}_{D}^{\mathsf{T}}( \eta^{*}-\hat{\eta})\hat{\eta}^{\mathsf{T}}\mathbf{1}_{D}\] (H.10)
Our first goal is to control
\[\big{|}\mathbf{1}_{D}^{\mathsf{T}}(\hat{\eta}-\eta^{*})\big{|}.\]
Define \(\overline{\Omega}=\Omega-\mathrm{diag}(\Omega)\). Note that
\[\hat{\eta}-\eta^{*}=\frac{A\mathbf{1}}{\sqrt{V}}-\frac{\Omega\mathbf{1}}{\sqrt {v_{0}}}=\big{(}\frac{A\mathbf{1}}{\sqrt{V}}-\frac{A\mathbf{1}}{\sqrt{v_{0}}} \big{)}+\big{(}\frac{A\mathbf{1}}{\sqrt{v_{0}}}-\frac{\overline{\Omega} \mathbf{1}}{\sqrt{v_{0}}}\big{)}+\big{(}\frac{\overline{\Omega}\mathbf{1}}{ \sqrt{v_{0}}}-\frac{\Omega\mathbf{1}}{\sqrt{v_{0}}}\big{)}\] (H.11)
We study each term of (H.11). First note that
\[(\overline{\Omega}\mathbf{1})_{i}=(\Omega\mathbf{1})_{i}-\Omega_{ii}=\rho_{ \max}^{2}\alpha_{0}n+O(1),\]
and thus
\[v_{0}=\sum_{i}(\Omega\mathbf{1})_{i}\sim\sum_{i}(\overline{\Omega }\mathbf{1})_{i}=v,\text{ and }\] \[|v_{0}-v|\lesssim 1\] (H.12)
Next note that
\[\mathrm{Var}\big{(}\mathbf{1}_{D}^{\mathsf{T}}\big{(}A\mathbf{1}-\overline{ \Omega}\mathbf{1}\big{)}\big{)}\lesssim\sum_{\begin{subarray}{c}i\in[n],j\in D \\ i\neq j\end{subarray}}\Omega_{ij}\lesssim|D|\rho_{\max}^{2}\alpha_{0}n.\]
By Bernstein's inequality,
\[\mathbb{P}\big{[}\big{|}\mathbf{1}_{D}^{\mathsf{T}}\big{(}A\mathbf{1}- \overline{\Omega}\mathbf{1}\big{)}\big{|}\geq t\big{]}\leq 2\exp\bigg{(}-\frac{ ct^{2}}{|D|\rho_{\max}^{2}\alpha_{0}n+t}\bigg{)}\] (H.13)
for all \(t>0\). Setting
\[t=\tau\equiv\sqrt{4/c}\cdot\sqrt{|D|\rho_{\max}^{2}\alpha_{0}n\log(1/\delta)},\]
we have
\[\frac{1}{\sqrt{v_{0}}}\big{|}\mathbf{1}_{D}^{\mathsf{T}}\big{(}A\mathbf{1}- \overline{\Omega}\mathbf{1}\big{)}\big{|}\lesssim\frac{\sqrt{|D|\rho_{\max}^ {2}\alpha_{0}n\log(1/\delta)}}{\sqrt{\rho_{\max}^{2}\alpha_{0}n^{2}}}=\sqrt{( |D|/n)\cdot\log(1/\delta)}\] (H.14)
with probability at least \(1-\delta\).
Next, it is shown in (Jin et al., 2021c, Supplement, pg.100) that for \(\sqrt{\log\|\theta\|_{1}}\ll x_{n}\ll\|\theta\|_{1}\),
\[\mathbb{P}\big{[}|V-v|>x_{n}\|\theta\|_{1}\big{]}=\mathbb{P}\bigg{[}|\sqrt{V}- \sqrt{v}|>\frac{x_{n}\|\theta\|_{1}}{\sqrt{V}+\sqrt{v}}\bigg{]}\leq 2\exp(-cx_{n} ^{2}).\]
Hence
\[\mathbb{P}\bigg{[}|\sqrt{V}-\sqrt{v}|>\frac{x_{n}\|\theta\|_{1}}{\sqrt{v}} \bigg{]}\leq 2\exp(-cx_{n}^{2}),\]
Note that by (H.2) and (H.3),
\[\frac{\|\theta\|_{1}}{\sqrt{v}}\asymp\frac{N_{0}\rho_{\max}\sqrt{a}+(n-N_{0}) \rho_{\max}\sqrt{c}}{\rho_{\max}\sqrt{\alpha_{0}}n}\asymp 1.\]
By (H.12), we have
\[\mathbb{P}\bigg{[}|\sqrt{V}-\sqrt{v_{0}}|>\frac{x_{n}\|\theta\|_{1}}{\sqrt{v}} \bigg{]}\leq 2\exp(-cx_{n}^{2}).\] (H.15)
Hence with probability at least \(1-2\exp(-cx_{n}^{2})\),
\[V\gtrsim v_{0}.\]
It follows that
\[\mathbb{P}\bigg{[}\Big{|}\frac{1}{\sqrt{V}}-\frac{1}{\sqrt{v_{0}}}\Big{|}\geq \frac{x_{n}\|\theta\|_{1}}{v_{0}\sqrt{v}}\bigg{]}=\mathbb{P}\bigg{[}\frac{| \sqrt{V}-\sqrt{v_{0}}|}{\sqrt{V}\cdot v_{0}}\geq\frac{x_{n}\|\theta\|_{1}}{v_{0} \sqrt{v}}\bigg{]}\leq 2\exp(-cx_{n}^{2}).\]
Hence with probability at least \(1-\delta-2\exp(-cx_{n}^{2})\),
\[\left|\mathbf{1}_{D}^{\mathsf{T}}(\frac{A\mathbf{1}}{\sqrt{V}}- \frac{A\mathbf{1}}{\sqrt{v_{0}}})\right| \leq\frac{x_{n}\cdot\left(|D|\rho_{\max}^{2}\alpha_{0}n+\sqrt{|D| \rho_{\max}^{2}\alpha_{0}n\log(1/\delta)}\right)}{v_{0}}\] \[\asymp\frac{x_{n}\cdot\left(|D|\rho_{\max}^{2}\alpha_{0}n+\sqrt{|D |\rho_{\max}^{2}\alpha_{0}n\log(1/\delta)}\right)}{\rho_{\max}^{2}\alpha_{0}n^ {2}}.\] (H.16)
For the last term of (H.11),
\[\mathbf{1}_{D}^{\mathsf{T}}\big{(}\frac{\overline{\Omega}\mathbf{ 1}}{\sqrt{v_{0}}}-\frac{\Omega\mathbf{1}}{\sqrt{v_{0}}}\big{)} =\frac{\sum_{i\in D}\Omega_{ii}}{\sqrt{v_{0}}}\asymp\frac{\rho_{ \max}^{2}a|D\cap S|+\rho_{\max}^{2}c|D\cap S^{c}|}{\sqrt{\rho_{\max}^{2}\alpha _{0}n^{2}}}\] \[\lesssim\rho_{\max}a\epsilon/\sqrt{\alpha_{0}}\lesssim\rho_{ \max}\sqrt{c}\lesssim 1.\] (H.17)
Next we control \(\mathbf{1}_{D}^{\mathsf{T}}\hat{\eta}\). By (H.13) and (H.15),
\[|\mathbf{1}_{D}^{\mathsf{T}}\hat{\eta}|=\frac{|\mathbf{1}_{D}^{ \mathsf{T}}\mathbf{A}\mathbf{1}|}{\sqrt{V}}\lesssim\frac{|D|\rho_{\max}^{2} \alpha_{0}n+\sqrt{|D|\rho_{\max}^{2}\alpha_{0}n\log(1/\delta)}}{\sqrt{v_{0}}- cx_{n}}\] (H.18)
with probability at least \(1-\delta-2\exp(-cx_{n}^{2})\). It also holds that
\[|\mathbf{1}_{D}^{\mathsf{T}}\eta^{*}|=\big{|}\frac{\mathbf{1}_{D}^{\mathsf{T} }\Omega\mathbf{1}|}{\sqrt{v_{0}}}=\frac{|D|\rho_{\max}^{2}\alpha_{0}n}{\rho_{ \max}\sqrt{\alpha_{0}}n}=|D|\rho_{\max}\sqrt{\alpha_{0}}.\] (H.19)
Next we set \(x_{n}=\sqrt{\log\|\theta\|_{1}}\asymp\sqrt{\log v_{0}}\). Then from (H.16) and (H.18),
\[\left|\mathbf{1}_{D}^{\mathsf{T}}(\frac{A\mathbf{1}}{\sqrt{V}}- \frac{A\mathbf{1}}{\sqrt{v_{0}}})\right| \asymp\frac{\sqrt{\log v_{0}}\cdot\left(|D|\rho_{\max}^{2} \alpha_{0}n+\sqrt{|D|\rho_{\max}^{2}\alpha_{0}n\log(1/\delta)}\right)}{\rho_{ \max}^{2}\alpha_{0}n^{2}}\] \[\asymp\sqrt{\log v_{0}}\cdot\big{(}(|D|/n)+\frac{\sqrt{(|D|/n) \log(1/\delta)}}{\rho_{\max}\sqrt{\alpha_{0}}n}\big{)},\] (H.20)
and
\[|\mathbf{1}_{D}^{\mathsf{T}}\hat{\eta}| \lesssim\frac{|D|\rho_{\max}^{2}\alpha_{0}n+\sqrt{|D|\rho_{\max} ^{2}\alpha_{0}n\log(1/\delta)}}{\sqrt{v_{0}}}\] \[\asymp\frac{|D|\rho_{\max}^{2}\alpha_{0}n+\sqrt{|D|\rho_{\max}^{ 2}\alpha_{0}n\log(1/\delta)}}{\rho_{\max}\sqrt{\alpha_{0}}n}\] \[\asymp|D|\rho_{\max}\sqrt{\alpha_{0}}+\sqrt{(|D|/n)\cdot\log(1/ \delta)}\] (H.21)
with probability at least \(1-\delta-2v_{0}^{-c_{1}}\).
By (H.14),(H.17), (H.19), (H.20), and (H.21)
\[|L_{D}| \leq\big{|}\mathbf{1}_{D}^{\mathsf{T}}\eta^{*}(\eta^{*}-\hat{\eta })^{\mathsf{T}}\mathbf{1}_{D}\big{|}+\big{|}\mathbf{1}_{D}^{\mathsf{T}}(\eta^{* }-\hat{\eta})\hat{\eta}^{\mathsf{T}}\mathbf{1}_{D}\big{|}\] \[\lesssim\big{(}|D|\rho_{\max}\sqrt{\alpha_{0}}+\sqrt{(|D|/n)\cdot \log(1/\delta)}\big{)}\cdot\big{(}\sqrt{\log v_{0}}(|D|/n)+\sqrt{(|D|/n)\log(1 /\delta)}+1\big{)}.\]
with probability at least \(1-\delta-2v_{0}^{-c_{1}}\).
It follows that, setting \(\delta=1/{n\choose N}^{2}\) above and applying the union bound,
\[\max_{|D|=N}|L_{D}|\lesssim\big{(}N\rho_{\max}\sqrt{\alpha_{0}}+\sqrt{N \epsilon\cdot\log(\frac{ne}{N})}\big{)}\cdot\big{(}\epsilon\sqrt{\log v_{0}}+ \sqrt{N\epsilon\cdot\log(\frac{ne}{N})}+1\big{)}\]
with probability at least \(1-{n\choose N}^{-1}-2v_{0}^{-c_{1}}\to 1\). Note that
\[\frac{n\log\frac{ne}{N}}{\log v_{0}}\asymp\frac{n\log\frac{ne}{N}}{\log(\rho_{ \max}^{2}\alpha_{0}n^{2})}\gtrsim 1\Rightarrow\]
\[\frac{N^{2}}{n}\log\frac{ne}{N} \gtrsim\frac{N^{2}}{n^{2}}\log(\rho_{\max}^{2}\alpha_{0}n^{2})\Rightarrow\] \[\sqrt{N\epsilon\cdot\log(\frac{ne}{N})} \gtrsim\epsilon\sqrt{\log v_{0}}.\]
Further, since \((N/n)\log\frac{ne}{N}\ll 1\) and \(\rho_{\max}^{2}\alpha_{0}n\rightarrow\infty\) by Assumption (\(b\)),
\[N\log\frac{ne}{N} \lesssim\rho_{\max}^{2}\alpha_{0}n^{2}\Rightarrow\] \[\frac{N}{n}\sqrt{\log\frac{ne}{N}} \lesssim\sqrt{N\rho_{\max}^{2}\alpha_{0}}\Rightarrow\] \[N\epsilon\log\frac{ne}{N} \lesssim\sqrt{N^{3}\rho_{\max}^{2}\alpha_{0}\log\frac{ne}{N}}.\]
Hence
\[\max_{|D|=N}|L_{D}|\lesssim\sqrt{N^{3}\rho_{\max}^{2}\alpha_{0}\log(\frac{ne}{ N})}+N\epsilon\log(\frac{ne}{N})\lesssim\sqrt{N^{3}\rho_{\max}^{2}\alpha_{0} \log(\frac{ne}{N})}\]
with probability at least \(1-\binom{n}{N}^{-1}-2v_{0}^{-c_{1}}\). Recalling that \(N=N_{0}[1+o(1)]\) yields the statement of the lemma.
Next we study an ideal version of \(\phi_{sc}\).
**Lemma H.3**.: _Define the ideal scan statistic_
\[\tilde{\phi}_{sc}=\max_{|D|=N}\mathbf{1}_{D}^{\mathsf{T}}(A-\eta^{*}\eta^{* \mathsf{T}})\mathbf{1}_{D},\]
_and corresponding test_
\[\tilde{\varphi}_{sc}=\mathbf{1}\bigg{[}\tilde{\phi}_{sc}>\tilde{\tau}\bigg{]},\]
_where_
\[\tilde{\tau}\equiv\tilde{C}\hat{\gamma}N^{2}h^{-1}\bigg{(}\frac{\tilde{C}N \log(\frac{ne}{N})}{\hat{\gamma}N^{2}}\bigg{)},\]
_and \(\tilde{C}>0\) is a sufficiently large absolute constant that depends only on \(C_{\rho}\) from Assumption (\(a\)). Then under the null hypothesis,_
\[\mathbb{P}\big{[}|\tilde{\phi}_{sc}|>\tilde{\tau}\big{]}\leq n^{-c_{0}}+\exp \big{(}-N\log\frac{ne}{N}\big{)}\]
_and under the alternative hypothesis,_
\[\mathbb{P}\big{[}|\tilde{\phi}_{sc}|\leq\tilde{\tau}\big{]}\leq n^{-c_{0}}+ \big{(}\frac{N}{ne}\big{)}^{10}\]
_for \(n\) sufficiently large, where \(c_{0}\) is an absolute constant._
Proof.: In this proof, \(c>0\) is an absolute constant that may vary form line to line.
Define the ideal scan statistic
\[\tilde{\phi}_{sc}=\max_{|D|=N}\mathbf{1}_{D}^{\mathsf{T}}(A-\eta^{*}\eta^{* \mathsf{T}})\mathbf{1}_{D}.\]
Also define
\[Z_{D}\equiv\sum_{i,j\in D(dist)}(A_{ij}-\Omega_{ij})\]
First consider the type 1 error. Under the null hypothesis, we have \(\eta^{*}=\theta=\rho\) and \(\alpha_{0}=1\). Observe that
\[\sigma_{D}^{2}\equiv\operatorname{Var}(Z_{D})=\operatorname{Var}\big{(}\sum_{ i,j\in D(dist)}(A_{ij}-\theta_{i}\theta_{j})\big{)}\lesssim\|\theta_{D}\|_{1}^{2} \asymp\rho_{\max}^{2}N^{2}\sim\rho_{\max}^{2}N_{0}^{2}\]
By the Bennett inequality, (Vershynin, 2018, Theorem 2.9.2),
\[\mathbb{P}\big{[}\sum_{i,j\in D}(A_{ij}-\theta_{i}\theta_{j})>t\big{]}\leq\exp \bigg{(}-\sigma_{D}^{2}\,h\bigg{(}\frac{t}{\sigma_{D}^{2}}\bigg{)}\bigg{)},\] (H.22)
where \(h(u)=(1+u)\log(1+u)-u\).
Next, by Lemma H.1,
\[|\hat{\gamma}-\mathbb{E}\hat{\gamma}|\lesssim\frac{\sqrt{\log n}}{n}\]
with probability \(n^{-c_{0}}\). Also recall that
\[\mathbb{E}\,\hat{\gamma}=\frac{1}{n^{(2)}}\sum_{i,j(dist)}\Omega_{ij}\asymp \rho_{\max}^{2}\alpha_{0}=\rho_{\max}^{2}\gg\frac{\sqrt{\log n}}{n}\]
by Assumptions (\(a\)) and (\(b\)). It follows that there exist absolute constants \(c_{0},c_{\gamma},C_{\gamma}>0\) such that
\[c_{\gamma}\rho_{\max}^{2}<\hat{\gamma}<C_{\gamma}\rho_{\max}^{2}\] (H.23)
with probability \(n^{-c_{0}}\). Let \(\mathcal{E}\) denote this event. Under \(\mathcal{E}\), we have that for \(\tilde{C}\) sufficiently large,
\[\tilde{C}\hat{\gamma}N^{2}h^{-1}\bigg{(}\frac{\tilde{C}N\log(\frac{ne}{N})}{ \hat{\gamma}N^{2}}\bigg{)}\geq\sigma_{D}^{2}h^{-1}\bigg{(}\frac{2N\log\frac{ne }{N}}{\sigma_{D}^{2}}\bigg{)}\]
It follows from this, the union bound, and the Bennett inequality,
\[\mathbb{P}\bigg{[}|\tilde{\phi}_{sc}|>\tilde{C}\hat{\gamma}N^{2}h ^{-1}\bigg{(}\frac{\tilde{C}N\log(\frac{ne}{N})}{\hat{\gamma}N^{2}}\bigg{)} \bigg{]} \leq\mathbb{P}[\mathcal{E}^{c}]+\mathbb{P}\bigg{[}|\tilde{\phi}_ {sc}|>\tilde{C}\hat{\gamma}N^{2}h^{-1}\bigg{(}\frac{\tilde{C}N\log(\frac{ne}{ N})}{\hat{\gamma}N^{2}}\bigg{)},\ \mathcal{E}\bigg{]}\] \[\leq n^{-c_{0}}+\sum_{|D|=N}\mathbb{P}\bigg{[}|Z_{D}|>\tilde{C} \hat{\gamma}N^{2}h^{-1}\bigg{(}\frac{\tilde{C}N\log(\frac{ne}{N})}{\hat{\gamma }N^{2}}\bigg{)}\bigg{]}\] \[\leq n^{-c_{0}}+\sum_{|D|=N}\mathbb{P}\bigg{[}|Z_{D}|>\sigma_{D}^ {2}h^{-1}\bigg{(}\frac{2N\log\frac{ne}{N}}{\sigma_{D}^{2}}\bigg{)}\bigg{]}\] \[\leq n^{-c_{0}}+\big{(}\frac{ne}{N}\big{)}^{N}\exp\big{(}-2N\log \frac{ne}{N}\big{)}.\]
This shows that the type 1 error for the ideal scan statistic is \(o(1)\).
Next consider the type 2 error. We have by Lemma (E.2),
\[\mathbf{1}_{S}^{\mathsf{T}}(A-\eta^{*}\eta^{*\mathsf{T}})\mathbf{1}_{S}=\sum_{ i,j\in S(dist)}(A_{ij}-\Omega_{ij})+\mathbf{1}_{S}^{\mathsf{T}}\tilde{\Omega} \mathbf{1}_{S}=Z_{S}+\|\theta_{S}\|_{1}^{2}(1-b^{2})\cdot\frac{\|\theta_{S^{c }}\|_{1}^{2}}{v_{0}}.\]
Note that by (H.12)
\[\|\theta_{S}\|_{1}^{2}(1-b^{2})\cdot\frac{\|\theta_{S^{c}}\|_{1}^{2}}{v_{0}} \sim\|\theta_{S}\|_{1}^{2}(1-b^{2}).\]
Next,
\[\mathrm{Var}(Z_{S})=\sum_{i,j\in S(dist)}\Omega_{ij}(1-\Omega_{ij})\lesssim\| \theta_{S}\|_{1}^{2}\asymp\rho_{\max}^{2}Na\sim\rho_{\max}^{2}N_{0}a\]
By Bernstein's inequality,
\[|Z_{S}|\lesssim\sqrt{\|\theta_{S}\|_{1}^{2}\log(1/\delta)}\lor\log(1/\delta) \leq\|\theta_{S}\|_{1}\log(1/\delta)\]
with probability at least \(1-\delta\). Setting \(\delta=(\frac{N}{ne})^{10}\), we have
\[|Z_{S}|\lesssim\|\theta_{S}\|_{1}\log\big{(}\frac{ne}{N}\big{)}\]
with probability at least \(1-(\frac{N}{ne})^{10}\).
Next we show that
\[\|\theta_{S}\|_{1}|1-b^{2}|\gtrsim\log\frac{ne}{N}\] (H.24)
using (H.7), which we rewrite as
\[\|\theta_{S}\|_{1}^{2}|1-b^{2}|\gg\gamma N_{0}^{2}h^{-1}\bigg{(} \frac{\log\frac{ne}{N_{0}}}{\gamma N_{0}}\bigg{)}\sim\gamma N^{2}h^{-1}\bigg{(} \frac{\log\frac{ne}{N}}{\gamma N}\bigg{)}\] (H.25)
where \(\gamma=\rho_{\max}^{2}\alpha_{0}\). Recall that \(\alpha_{0}=1\) under the null, and \(\alpha_{0}\sim c\) under the alternative. Let
\[u=\frac{\log\frac{ne}{N}}{\gamma N}.\]
Consider two cases: (i) \(u\leq 0.01\), and (ii) \(u\geq 0.01\). For \(u^{\prime}\leq h^{-1}(0.01)\), we have \(h(u^{\prime})\asymp(u^{\prime})^{2}\), and therefore \(h^{-1}(u)\asymp u^{2}\) for \(u\leq 0.01\). In this case (H.25) implies
\[\|\theta_{S}\|_{1}^{2}|1-b^{2}|\gg\gamma N^{2}\sqrt{\frac{\log \frac{ne}{N}}{\gamma N}}=\sqrt{\gamma N^{3}\log\frac{ne}{N}}.\]
In addition,
\[\|\theta_{S}\|_{1}=N\sqrt{a}\rho_{\max},\]
so that
\[\|\theta_{S}\|_{1}(1-b^{2})\gg\sqrt{\frac{\gamma N\log\frac{ne}{N}}{a\rho_{ \max}^{2}}}\gtrsim\log\frac{ne}{N}\]
since \(u\leq 0.01\) and \(a\rho_{\max}^{2}\lesssim 1\). Thus in case (i), (H.24) is satisfied for \(n\) sufficiently large.
Now consider case (ii) where \(u\geq 0.01\). Note that \(h(u)\leq(u+1)\log(u+1)\), and thus
\[\frac{1}{2}(u+1)\leq u\leq h^{-1}((u+1)\log(u+1)).\]
Let \(\varphi\equiv(u+1)\log(u+1)\geq u\) and observe that
\[u+1=\frac{\varphi}{\log(u+1)}\geq\frac{\varphi}{\log\varphi}.\]
Hence
\[h^{-1}((u+1)\log(u+1))\geq\frac{1}{2}\cdot\frac{(u+1)\log(u+1)}{\log\big{[}(u +1)\log(u+1)\big{]}}.\]
Applying (H.25),
\[\|\theta_{S}\|_{1}^{2}|1-b^{2}|\gg\gamma N^{2}\cdot\frac{(\frac{ \log\frac{ne}{N}}{\gamma N}+1)\log(\frac{\log\frac{ne}{N}}{\gamma N}+1)}{\log \big{[}(\frac{\log\frac{ne}{N}}{\gamma N}+1)\log(\frac{\log\frac{ne}{N}}{ \gamma N}+1)\big{]}}\gtrsim N\log\frac{ne}{N}.\]
Hence
\[\|\theta_{S}\|_{1}|1-b^{2}|\gg\frac{\log\frac{ne}{N}}{\sqrt{a}\rho_{\max}} \gtrsim\log\frac{ne}{N}.\]
Thus in case (ii), (H.24) is also satisfied.
Next we have,
\[\mathbb{P}\bigg{[}|\tilde{\phi}_{sc}|\leq\tilde{C}\hat{\gamma}N^{ 2}h^{-1}\bigg{(}\frac{\tilde{C}N\log(\frac{ne}{N})}{\hat{\gamma}N^{2}}\bigg{)} \bigg{]}\] \[\qquad\qquad\qquad\leq n^{-c_{0}}+\mathbb{P}\bigg{[}|\tilde{\phi}_ {sc}|\leq\tilde{C}\hat{\gamma}N^{2}h^{-1}\bigg{(}\frac{\tilde{C}N\log(\frac{ ne}{N})}{\hat{\gamma}N^{2}}\bigg{)},\,\mathcal{E}\bigg{]}\]
\[\leq n^{-c_{0}}+\mathbb{P}\bigg{[}\left|\|\theta_{S}\|_{1}^{2}(1-b^{2} )+Z_{S}\right|\leq C\gamma N^{2}h^{-1}\bigg{(}\frac{CN\log(\frac{ne}{N})}{\gamma N ^{2}}\bigg{)}\bigg{]}\] \[\leq n^{-c_{0}}+\mathbb{P}\bigg{[}\big{|}Z_{S}|\geq\big{|}\|\theta _{S}\|_{1}^{2}(1-b^{2})\big{|}-C\gamma N^{2}h^{-1}\bigg{(}\frac{CN\log(\frac{ne} {N})}{\gamma N^{2}}\bigg{)}\bigg{]},\]
where \(C>0\) is a sufficiently large absolute constant. In the second line and third lines we use the event \(\mathcal{E}\) from (H.23), and in the last line we use the triangle inequality. By (H.7), we have conservatively that
\[\big{|}\|\theta_{S}\|_{1}^{2}(1-b^{2})\big{|}-C\gamma N^{2}h^{-1}\bigg{(}\frac {CN\log(\frac{ne}{N})}{\gamma N^{2}}\bigg{)}\geq\frac{1}{2}\big{|}\|\theta_{S }\|_{1}^{2}(1-b^{2})\big{|}\gg\|\theta_{S}\|_{1}\log\frac{ne}{N}\]
for \(n\) sufficiently large. Thus for \(n\) sufficiently large,
\[\mathbb{P}\bigg{[}\lvert\tilde{\phi}_{sc}\rvert\leq\tilde{C} \dot{\gamma}N^{2}h^{-1}\bigg{(}\frac{\tilde{C}N\log(\frac{ne}{N})}{\dot{\gamma }N^{2}}\bigg{)}\bigg{]} \leq n^{-c_{0}}+\mathbb{P}\bigg{[}\big{|}Z_{S}|\geq\frac{1}{2} \big{|}\,\|\theta_{S}\|_{1}^{2}(1-b^{2})\big{|}\,\bigg{]}\] \[\leq n^{-c_{0}}+\big{(}\frac{N}{ne}\big{)}^{10}.\]
Therefore the type 2 error for the ideal scan statistic is also \(o(1)\).
**Lemma H.4**.: _Let \(\phi_{sc}\) denote the scan statistic defined in (H.4), and let \(\hat{\tau}\) denote the random threshold defined in (H.6). Then under the null hypothesis,_
\[\mathbb{P}\big{[}\lvert\phi_{sc}\rvert>\hat{\tau}\big{]}\leq\binom{n}{N}^{-1} +v_{0}^{-c_{1}}+n^{-c_{0}}+\exp\big{(}-N\log\frac{ne}{N}\big{)},\]
_and under the alternative hypothesis,for \(n\) sufficiently large we have_
\[\mathbb{P}\big{[}\lvert\phi_{sc}\rvert<\hat{\tau}\big{]}\leq\binom{n}{N}^{-1 }+v_{0}^{-c_{1}}+n^{-c_{0}}+\big{(}\frac{N}{ne}\big{)}^{10}.\]
Proof.: We show that the plug-in effect is negligible compared to the threshold and signal-strength. By Lemma H.2,
\[\max_{\lvert D\rvert=N}\left|L_{D}\right|\lesssim\sqrt{N_{0}^{3}\gamma\log( \frac{ne}{N_{0}})}\]
with high probability. Since \(h(u)\leq u^{2}\) for \(u\geq 0\), it follows that
\[h\bigg{(}\frac{\sqrt{N_{0}^{3}\gamma\log(\frac{ne}{N_{0}})}}{ \gamma N_{0}^{2}}\bigg{)} \leq\frac{N_{0}^{3}\gamma\log(\frac{ne}{N_{0}})}{\gamma^{2}N_{0} ^{3}}=\frac{\log\frac{ne}{N_{0}}}{\gamma N_{0}}\Rightarrow\] \[\sqrt{N_{0}^{3}\gamma\log(\frac{ne}{N_{0}})} \leq\gamma N_{0}^{2}h^{-1}\bigg{(}\frac{\log\frac{ne}{N_{0}}}{ \gamma N_{0}}\bigg{)}\Rightarrow\] \[\sqrt{N^{3}\gamma\log(\frac{ne}{N})} \leq[1+o(1)]\gamma N^{2}h^{-1}\bigg{(}\frac{\log\frac{ne}{N}}{ \gamma N}\bigg{)}.\]
Under the null, we have by Lemma H.3 that
\[\mathbb{P}\big{[}\lvert\phi_{sc}\rvert\geq\hat{\tau}\big{]} \leq\mathbb{P}\big{[}\,\lvert\tilde{\phi}_{sc}\rvert\geq\hat{\tau}- \max_{\lvert D\rvert=N}\left|L_{D}\right|\big{]}\] \[\leq\binom{n}{N}^{-1}+v_{0}^{-c_{1}}+\mathbb{P}\bigg{[}\lvert \tilde{\phi}_{sc}\rvert\geq C^{*}\hat{\gamma}N^{2}h^{-1}\bigg{(}\frac{C^{*}N \log(\frac{ne}{N})}{\dot{\gamma}N^{2}}\bigg{)}-\gamma N^{2}h^{-1}\bigg{(}\frac{ \log\frac{ne}{N}}{\gamma N}\bigg{)}\bigg{]}\] \[\leq\binom{n}{N}^{-1}+v_{0}^{-c_{1}}+n^{-c_{0}}+\exp\big{(}-N\log \frac{ne}{N}\big{)}\]
for \(C^{*}>0\) a sufficiently large absolute constant. It suffices to take \(C^{*}\geq 2\tilde{C}\).
Under the alternative hypothesis, we have by Lemma H.3 that
\[\mathbb{P}\big{[}|\phi_{sc}|\leq\hat{\tau}\big{]} \leq\mathbb{P}\big{[}\,|\tilde{\phi}_{sc}|\leq\hat{\tau}+\max_{|D|=N }|L_{D}|\,\big{]}\] \[\leq\binom{n}{N}^{-1}+v_{0}^{-c_{1}}+\mathbb{P}\bigg{[}\,|\tilde{ \phi}_{sc}|\leq C^{*}\hat{\gamma}N^{2}h^{-1}\bigg{(}\frac{C^{*}N\log(\frac{ne}{ N})}{\hat{\gamma}N^{2}}\bigg{)}+\gamma N^{2}h^{-1}\bigg{(}\frac{\log\frac{ne}{ N}}{\gamma N}\bigg{)}\bigg{]}\] \[\leq\binom{n}{N}^{-1}+v_{0}^{-c_{1}}+\mathbb{P}\bigg{[}\,|\tilde{ \phi}_{sc}|\leq 2C^{*}\hat{\gamma}N^{2}h^{-1}\bigg{(}\frac{C^{*}N\log(\frac{ne}{ N})}{\hat{\gamma}N^{2}}\bigg{)}\bigg{]}\] \[\leq\binom{n}{N}^{-1}+v_{0}^{-c_{1}}+n^{-c_{0}}+\big{(}\frac{N}{ ne}\big{)}^{10}\]
for \(n\) sufficiently large.
Observe that Theorem 2.5 follows directly from Lemma H.4.
## Appendix I Proof of Theorem 2.6 (Computational lower bound)
In this section, we provide the proof of Theorem 2.6. For convenience, we denote \(b=\frac{nc-(a+c)N}{n-2N}\), \(d=\frac{c(n-N)^{2}-aN^{2}}{n(n-2N)}\). Under \(H_{0}\), all upper triangular entries \(A\) are i.i.d. Bernoulli distributed with probability \(d\). Then an orthonormal basis of the adjacency matrix of graph \(D\) is
\[f_{\Gamma}(A)=\prod_{i<j:(i,j)\in\Gamma}\frac{A_{ij}-d}{\sqrt{d(1-d)}}.\]
Here, \(\Gamma\subseteq\{(i,j):1\leq i<j\leq n\}\) takes all subsets of all upper triagonal entries of \(A\). Denote \(|\Gamma|\) as the cardinality of \(\Gamma\) and \(B(D)=\{\Gamma\subseteq\{\text{unordered pairs }(i,j):i\neq j,i,j\in[n]\},\Gamma\neq\emptyset,|\Gamma|\leq D\}\) as all subsets of off-diagonal entries of \(A\) of cardinality at most \(D\). By Proposition I.1 and the property of the orthonormal basis function of \(A\),
\[\begin{split}&\text{supp}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \
\[c-d=-\frac{N}{n-N}\left(b-d\right)=\left(\frac{N}{n-N}\right)^{2}\left(a-d\right).\] (I.1)
Since \(b=\frac{c(n-N)-aN}{n-2N}\geq 0\) and \(N\leq n/3\), we know \(a\leq c(n-N)/N\) and
\[c\geq d=\frac{c(n-N)^{2}-aN^{2}}{n(n-2N)}\geq\frac{c(n-N)^{2}-N(n-N)c}{n(n-2N) }\geq(n-N)/n\cdot c\geq 2/3\cdot c.\]
Under the asymptotic regime of this theorem, we have \(d=\frac{c(n-N)^{2}-aN^{2}}{n(n-2N)}\) and
\[p_{1}=\frac{(n-N)^{2}(a-c)}{n(n-2N)\sqrt{d(1-d)}}\asymp\frac{a-c}{\sqrt{c}},\] (I.2)
i.e., there exists constant \(\delta>1\) such that \(\delta^{-1}c\leq p_{1}\leq\delta c\). By (I.1), we have \(p_{3}=-N/(n-N)p_{2}=N^{2}/(n-N)^{2}p_{1}\). For any fixed \(\Gamma\subseteq\{(i,j):1\leq i<j\leq n\}\),
\[\mathbb{E}_{H_{1}}\prod_{(i,j)\in\Gamma}\frac{A_{ij}-d}{\sqrt{d( 1-d)}}=\mathbb{E}_{\Pi}\left\{\mathbb{E}\left\{\prod_{(i,j)\in\Gamma}\frac{A _{ij}-d}{\sqrt{d(1-d)}}\right|\text{ \ $A$ has two communities assigned by $\Pi$ }\right\}\right\}\] \[= \mathbb{E}_{\Pi}p_{1}^{|\Gamma\cap K\otimes K|}\cdot p_{2}^{| \Gamma\cap K\otimes K^{c}|}\cdot p_{3}^{|\Gamma\cap K^{c}\otimes K^{c}|}= \mathbb{E}_{\Pi}\prod_{(i,j)\in\Gamma}\left\{p_{1}\cdot\left(-N/(n-N)\right)^ {\pi_{i}+\pi_{j}-2}\right\}\] \[= p_{1}^{|\Gamma|}\cdot\left(\frac{-N}{n-N}\right)^{\sum_{(i,j) \in\Gamma}(\pi_{i}+\pi_{j}-2)}=p_{1}^{|\Gamma|}\cdot\left(\frac{-N}{n-N}\right) ^{\sum_{(i,j)\in\Gamma}(\pi_{i}+\pi_{j}-2)}\] \[= p_{1}^{|\Gamma|}\cdot\prod_{i=1}^{n}\left(\frac{-N}{n-N}\right) ^{(\pi_{i}-1)\cdot|\{j^{\prime}:(i,j^{\prime})\in\Gamma\}|}\overset{(a)}{=}p_ {1}^{|\Gamma|}\cdot\prod_{i=1}^{n}\left\{\left(\frac{N}{n}\right)+\frac{n-N}{ n}\left(\frac{-N}{n-N}\right)^{|\{j^{\prime}:(i,j^{\prime})\in\Gamma\}|}\right\}.\]
Here, (a) is because \(\mathbb{P}(\pi_{i}=1)=N/n\); \(\mathbb{P}(\pi_{i}=2)=(n-N)/n\). Thus, the following fact holds: if there exists a node \(i\) that appears exactly one time in \(\Gamma\), i.e., \(|\{j^{\prime}:(i,j^{\prime})\in\Gamma\}|=1\), \(\mathbb{E}_{H_{i}}\prod_{(i,j)\in\Gamma}\frac{A_{ij}-d}{\sqrt{d(1-d)}}=0\). On the other hand, for all \(\Gamma\) that each node appear zero times or at least two times, we have
\[\mathbb{E}_{H_{1}}\prod_{(i,j)\in\Gamma}\frac{A_{ij}-d}{\sqrt{d( 1-d)}}\leq p_{1}^{|\Gamma|}\cdot\left\{\frac{N}{n}+\frac{n-N}{n}\left(\frac{-N }{n-N}\right)^{2}\right\}^{|\{i:i\text{ appears at least 2 times in }\Gamma\}|}\] \[\leq p_{1}^{|\Gamma|}\cdot\left(\frac{2N}{n}\right)^{|\{i:i\text{ appears at least 2 times in }\Gamma\}|}.\]
Finally, we denote
\[B_{0}(D)=\left\{\Gamma\in B(D):\text{ each node in }[n]\text{ appears zero time or at least 2 times}\right\},\]
\[m(\Gamma)=|\{i:i\text{ appears in some pair of }\Gamma\}|.\]
For any \(\Gamma\in B_{0}(D)\), we must have \(m(\Gamma)\leq|\Gamma|\leq m(\Gamma)(m(\Gamma)-1)/2\). Then,
\[\sum_{\Gamma\in B(D)}\left(\mathbb{E}_{H_{1}}\prod_{(i,j)\in \Gamma}\frac{A_{ij}-d}{\sqrt{d(1-d)}}\right)^{2}=\sum_{\Gamma\in B_{0}(D)} \left(\mathbb{E}_{H_{1}}\prod_{(i,j)\in\Gamma}\frac{A_{ij}-d}{\sqrt{d(1-d)}} \right)^{2}\] \[= \sum_{\Gamma\in B_{0}(D)}p_{1}^{2|\Gamma|}\cdot\left(\frac{2N}{n} \right)^{2|\{\text{\tiny$\left\{i:\text{\tiny$ appears at least 2 times in \Gamma\}$}\right\}|\}}\leq\sum_{\Gamma\in B_{0}(D)}p_{1}^{2|\Gamma|}\cdot\left( \frac{2N}{n}\right)^{2m(\Gamma)}\] \[= \sum_{m=2}^{D}\sum_{g=m}^{m(m-1)/2}\sum_{\begin{subarray}{c} \Gamma\in B_{0}(D)\\ |\Gamma|=g\end{subarray}}p_{1}^{2g}\left(\frac{2N}{n}\right)^{2m}\overset{(a )}{\leq}\sum_{m=2}^{D}\sum_{g=m}^{\frac{m(m-1)}{2}}\binom{n}{m}m^{g}p_{1}^{g} \left(\frac{2N}{n}\right)^{m}\] \[\leq \sum_{m=2}^{D}\sum_{g=m}^{\frac{m(m-1)}{2}}\frac{m^{g}p_{1}^{2g}( 2N)^{2m}}{m!\cdot n^{m}}\leq\sum_{m=2}^{D}\frac{D\max\left\{(mp_{1}^{2})^{m}, (mp_{1}^{2})^{D\wedge m(m-1)/2}\right\}\cdot(2N)^{2m}}{n^{m}}\] \[= D\sum_{m=2}^{D}\left(\frac{\max\{mp_{1}^{2},(mp_{1}^{2})^{M}\} \cdot(2N)^{2}}{n}\right)^{m}\overset{(b)}{=}o(1)\]
Here, \(M=\max_{m\geq 1}\frac{D\wedge m(m-1)/2}{m}\leq\sqrt{D/2-1}\); (a) is because the number of \(\Gamma\in B_{0}(D)\) with \(m(\Gamma)=m\) and \(|\Gamma|=g\) is at most \(\binom{n}{m}\cdot m^{g}\); (b) is due to the asymptotic assumption and (I.2), which leads to
\[\frac{N}{\sqrt{n}}\left(p_{1}\lor p_{1}^{M}\right)\leq n^{-\varepsilon}.\]
We have thus finished the proof of this theorem. \(\qed\)
**Proposition I.1** (Proposition 1.15 of Kunisky et al. (2019)).: _Given data \(A\), consider the simple hypothesis testing problem: \(H_{0}\) versus \(H_{1}\). Let the likelihood ratio function be \(LR(A)=\frac{p_{H_{1}}(A)}{p_{H_{0}}(A)}\). Define \(\|f\|=\sqrt{\mathbb{E}_{H_{0}}f^{2}(A)}\) and \(f^{\leq D}\) as the projection of any function \(f\) to the subspace of polynomials of degree at most \(D\), i.e., \(f^{\leq D}=\text{argmin}_{\begin{subarray}{c}g\text{ is polynomial}\\ degree(g)\leq D\end{subarray}}\|f-g\|\). Then for any positive integer \(D\), we have_
\[\|LR^{\leq D}(A)-1\|=\max_{\begin{subarray}{c}f\text{-degree}(f)\leq D\\ \mathbb{E}_{H_{0}}f^{2}(A)=1\\ \mathbb{E}_{H_{0}}f(A)=0\end{subarray}}\mathbb{E}_{H_{1}}f(A);\]
\[\frac{LR^{\leq D}(A)-1}{\|LR^{\leq D}(A)-1\|}=\text{argmax}_{\begin{subarray}{ c}f\text{-degree}(f)\leq D\\ \mathbb{E}_{H_{0}}f^{2}(A)=1\\ \mathbb{E}_{H_{0}}f(A)=0\end{subarray}}\mathbb{E}_{H_{1}}f(A).\]
## Appendix J Proof of Theorem 2.7 (Power of EST)
The EST statistic is defined to be
\[\phi_{EST}^{(v)}\equiv\sup_{|S|\leq v}\sum_{i,j\in S}A_{ij},\]
and the EST is defined to be
\[\varphi_{EST}=\mathbf{1}\big{[}\phi_{EST}^{(r)}\geq e\big{]},\]
where \(v,e\) are relatively prime and satisfy
\[\frac{\omega}{1-\beta}<\frac{v}{e}<\delta.\]
Such \(v\) and \(e\) exist because
\[\frac{\omega}{1-\beta}<\delta,\]
by assumption. Furthermore, we have
\[v<e\]
since \(\omega,\delta\in(0,1)\).
To prove the statement, we require some preliminaries. Let \(G(n,p)\) denote an Erdos-Renyi graph with parameter \(p\). A graph \(H\) with \(v\) vertices and \(e\) edges is said to be _balanced_ if for all (not necessarily induced) subgraphs \(H^{\prime}\subset H\) with \(v^{\prime}\) vertices and \(e^{\prime}\) edges, it holds that
\[e/v>e^{\prime}/v^{\prime}.\]
Next, the power of EST hinges on two well-known facts from probabilistic combinatorics. The first concerns the appearance of an arbitrary graph \(H\) in \(G(n,p)\).
**Theorem J.1** (Adapted from Theorem 4.4.2. of Alon & Spencer (2016)).: _Let \(H\) denote a graph with \(v\) vertices and \(e\) edges. Then if \(p\ll n^{-v/e}\), the random graph \(G(n,p)\) does not have \(H\) as a subgraph, with high probability as \(n\to\infty\)._
_On the other hand, if \(H\) is balanced and \(p\gg n^{-v/e}\), the random graph \(G(n,p)\) contains \(H\) as a subgraph, with high probability as \(n\to\infty\)._
**Theorem J.2** (Rucinski & Vince (1986); Catlin et al. (1988)).: _There exists a balanced graph with \(v\) vertices and \(e\) edges if and only if \(1\leq v-1\leq e\leq\binom{v}{2}\)._
Now we continue the proof. Recall that \(v\) and \(e\) are integers chosen such that \(\frac{\omega}{1-\beta}<v/e<\delta\).
_Type 1 error:_ Observe that
\[b=\frac{cn-(a+c)N}{n-2N}=c\cdot\frac{n-N}{n-2N}-a\cdot\frac{N}{n-2N},\]
and thus
\[\alpha =a\varepsilon+b(1-\varepsilon)=a\varepsilon+(1-\varepsilon)\big{(} c\cdot\frac{n-N}{n-2N}-a\cdot\frac{N}{n-2N}\big{)}\] \[=a\bigg{(}\frac{N}{n}-(1-\varepsilon)\frac{N}{n-2N}\bigg{)}+(1- \varepsilon)\cdot\frac{n-N}{n-2N}\cdot c=-a\cdot\frac{N^{2}}{n(n-2N)}+(1- \varepsilon)\cdot\frac{n-N}{n-2N}\cdot c\sim c,\]
where above we use that \(a\varepsilon\leq c\).
Thus under the alternative, \(A\) is distributed as Erdos-Renyi with parameter
\[\alpha\sim c=n^{-\delta}\ll n^{-v/e},\]
by our choice of \(v\) and \(e\). By the first part of Theorem J.1, no subset of size \(v\) of \(A\) contains more than \(e\) edges, with high probability as \(n\to\infty\).
To be more precise, there are a finite number of graphs \(H_{1},\ldots,H_{L}\) with \(v\) vertices and at least \(e\) edges, where \(L\) is a constant depending only on \(v\). For each graph \(H_{i}\), Theorem J.1 contains \(H_{i}\) as a subgraph with probability tending \(0\) as \(n\to\infty\). The type 1 error of EST thus vanishes by the union bound.
_Type 2 error:_ Let \(H\) denote a balanced graph on \(v\) vertices and \(e\) edges, whose existence is guaranteed by Theorem J.2. Consider the induced subgraph on \(\mathcal{C}_{1}\), the smaller community, which is an Erdos-Renyi random graph on \(N\) vertices with parameter \(a=n^{-\omega}\). By our choice of \(v\) and \(e\), we have
\[a=n^{-\omega}=N^{-\frac{\omega}{1-\beta}}\gg N^{-v/e}.\]
By Theorem J.1, \(\mathcal{C}_{1}\) contains a copy of \(H\) with high probability. Since \(H\) has \(e\) edges, we conclude that \(\phi_{EST}^{(v)}\geq e\), and thus the null is rejected with high probability as \(n\to\infty\).
|
2310.12460 | Linear Source Apportionment using Generalized Least Squares | Motivated by applications to water quality monitoring using fluorescence
spectroscopy, we develop the source apportionment model for high dimensional
profiles of dissolved organic matter (DOM). We describe simple methods to
estimate the parameters of a linear source apportionment model, and show how
the estimates are related to those of ordinary and generalized least squares.
Using this least squares framework, we analyze the variability of the
estimates, and we propose predictors for missing elements of a DOM profile. We
demonstrate the practical utility of our results on fluorescence spectroscopy
data collected from the Neuse River in North Carolina. | Jordan Bryan, Peter Hoff | 2023-10-19T04:36:56Z | http://arxiv.org/abs/2310.12460v1 | # Linear Source Apportionment using Generalized Least Squares
###### Abstract
Motivated by applications to water quality monitoring using fluorescence spectroscopy, we develop the source apportionment model for high dimensional profiles of dissolved organic matter (DOM). We describe simple methods to estimate the parameters of a linear source apportionment model, and show how the estimates are related to those of ordinary and generalized least squares. Using this least squares framework, we analyze the variability of the estimates, and we propose predictors for missing elements of a DOM profile. We demonstrate the practical utility of our results on fluorescence spectroscopy data collected from the Neuse River in North Carolina.
_Keywords:_ dependent data, latent variable model, linear model, source separation.
Introduction
Increasing land development and the growth of large-scale agricultural operations have led to concerns about water pollution and a need for quantitative methods for water quality monitoring. The water quality of a river basin is affected by the water quality of the streams that feed into it, which in turn are affected by the land-use features of their local watersheds. As a result, the water at a particular point of a river will contain a mixture of dissolved organic matter (DOM) whose sources are determined by the upstream land use. For example, the DOM profile of the water at a point downstream from both a poultry farm and a community septic system will resemble a mixture of the DOM profiles of water near the farm and of water near the septic system.
In order to monitor pollution and the sources of DOM in the Neuse River basin in Eastern North Carolina, researchers at North Carolina State University obtained 202 water samples, each one being representative of one of nine different categories of land use. Fluorescence spectroscopy was used to obtain a multivariate DOM profile for each water sample. Taken together, these 202 profiles make up a "dictionary" to which the DOM profile of a water sample obtained downstream can be compared (Osburn et al., 2016). In particular, it is of interest to estimate in what proportions each of the nine source categories contribute to the DOM profile of a downstream water sample. Such estimates can identify water quality issues and provide information about land use and drainage patterns in the river basin.
A DOM profile is often represented as a matrix, having elements that record the fluorescence intensity spectra emitted by a water sample when it is excited with light of a
Figure 1: From left to right, three EEMs from the Neuse River dictionary and one hypothetical downstream EEM. As is typical, the lower-right region of each EEM is excluded due to Rayleigh scatter (Andersen and Bro, 2003). Each of the three left EEMs represents a DOM profile from a particular land use source. The right-most EEM, which is meant to represent an EEM from a downstream water sample, is computed by taking the average of the fluorescence intensities of the left three EEMs.
range of frequencies (see Figure 1). In the water chemistry literature it is common practice to stack these "excitation-emission matrices" (EEMs) to form a three-way array, and then to analyze the data array using multiway statistical methods, in particular, the PARAFAC model. Osburn et al. (2016) developed a PARAFAC-based method called FluorMod to estimate the source proportions of a downstream water sample from its DOM profile and the dictionary of profiles from the nine source categories. While providing promising results, FluorMod is somewhat numerically complicated, involving both simulation and iterative estimation of the non-linear PARAFAC model. These complexities are a barrier to adoption of the method by potential users, such as managers of drinking water and wastewater treatment facilities, who may only be familiar with or have the software to implement simple linear models.
As alternatives, Bryan et al. (2023) considered two simple methods of source estimation that can be implemented using only the tools of simple linear regression and vector summation. The first method, which we refer to as "average-then-regress" (ATR), proceeds by averaging the dictionary DOM profiles by source category, and then regressing the downstream profile on these average profiles. The second method, called "regress-then-sum" (RTS), instead regresses the downstream profile on all of the dictionary profiles, then sums the coefficients by category. In multiple simulation studies, it was observed that the RTS method provided notably superior estimates than either the FluorMod method or the ATR method. Some heuristic explanation of this phenomenon was given, but no theory was provided.
In this article we formalize the ATR and RTS methods in the context of a latent variable model for the downstream DOM profile, which we refer to as the _source apportionment model_. Marginalizing over the latent variables, this model can be expressed as a linear regression model where both the mean and covariance of the downstream DOM profile are affected by the proportion of DOM arising from each source. We show that the ATR estimate corresponds to a feasible ordinary least-squares (OLS) estimate, whereas the RTS estimate corresponds to a type of feasible generalized least squares (GLS) estimate. This result explains the observed superior performance of the RTS estimate, as GLS estimates have lower mean squared error than OLS estimates in general. Additionally, we show how this GLS framework may be used to obtain standard errors for the coefficient estimates, as well as feasible predictors of missing data in the downstream DOM profile.
While our discussion focuses on fluorescence spectroscopy data, we note that the source
algorithment model may be applied to other kinds of data with similar structure, such as hyperspectral images, audio spectrograms, and power meter readings collected over time. For each of these, the estimated coefficients may represent, respectively, proportions of land cover (tree, water, street) in a given pixel (Bioucas-Dias et al., 2012), proportions of note amplitudes sounding at a given time (Benetos et al., 2019), and proportions of several appliances used over the period of a given month (Wytock and Kolter, 2013). However, the source apportionment model is distinct from other models that are commonly applied to these data for the purpose of _source separation_. In the task of source separation, the estimands of interest are the unobserved source signals themselves, not the coefficients representing the contributions of these signals to the total.
The remainder of this article is as follows: In the next section, we formulate the source apportionment model and inference problem, and describe the ATR and RTS estimates of the source proportions. In Section 3 we show how the ATR and RTS estimates can be interpreted as OLS and GLS estimates, respectively, in a linear regression model. We then extend this analogy to propose ATR and RTS predictors for missing data. Section 4 discusses the relative variability of the ATR and RTS estimates and also develops a method to obtain standard errors for the RTS estimates. Finally, Section 5 illustrates the results in a numerical study using the dictionary of 202 DOM profiles originally described in Osburn et al. (2016). Directions for further research are discussed in Section 6.
The source apportionment model
Let \(\mathbf{y}\) be a \(p\)-dimensional vector representing the DOM profile of a downstream water sample of unknown composition. For such a profile obtained using fluorescence spectroscopy, it is reasonable to assume that \(\mathbf{y}\) is a weighted sum of \(K\)_latent source profiles_: \(\mathbf{x}_{1}^{*},\ldots,\mathbf{x}_{K}^{*}\),
\[\mathbf{y}=\theta_{1}\mathbf{x}_{1}^{*}+\cdots+\theta_{K}\mathbf{x}_{K}^{*}. \tag{1}\]
The latent source profiles represent the DOM profiles of the component water samples from each of the \(K\) source categories, which contribute to the combined water sample with profile \(\mathbf{y}\). The vector \(\boldsymbol{\theta}\) represents the proportions of each of the \(K\) sources that contribute to \(\mathbf{y}\). If \(\mathbf{x}_{1}^{*},\ldots,\mathbf{x}_{K}^{*}\) were known, \(\boldsymbol{\theta}\) could be determined exactly as the solution to a least squares regression. However, the latent source profiles cannot be observed directly, as only the total downstream DOM profile \(\mathbf{y}\) can be measured by the spectrometer.
As a substitute for direct observation, we assume each latent source profile is a random vector arising from a source-specific distribution, so that \(\mathbf{x}_{1}^{*}\sim P_{1},\ldots,\mathbf{x}_{K}^{*}\sim P_{K}\) with \(\mathbf{x}_{1}^{*},\ldots,\mathbf{x}_{K}^{*}\) being jointly independent. We further assume that data information about \(P_{1},\ldots,P_{K}\) is available in the form of a dictionary of \(n\) DOM profiles \(\mathbf{X}\in\mathbb{R}^{p\times n}\). The dictionary profiles may be mixtures of known source proportions in general (see comment at the conclusion of Section 4), but for now, we assume each dictionary DOM profile is representative of exactly one source category, so that \(n=\sum_{k=1}^{K}n_{k}\). Letting \(\mathbf{x}_{i,k}\) be the DOM profile of the \(i\)th dictionary water sample from source category \(k\), the \(\mathbf{x}_{i,k}\)'s along with the latent \(\mathbf{x}_{k}^{*}\)'s are modeled as random samples from the source-specific distributions
\[\mathbf{x}_{1,k},\ldots,\mathbf{x}_{n_{k},k},\mathbf{x}_{k}^{*}\sim\text{i.i. d.}\ P_{k},\ k=1,\ldots,K, \tag{2}\]
with these profiles additionally being independent across source categories. We refer to the linear model (1) together with the sampling model (2) as the _source apportionment model_, and refer to the task of estimating \(\mathbf{\theta}\) from \(\mathbf{y}\) and the dictionary profiles as the _source apportionment problem_. In what follows, we consider the source apportionment problem in source apportionment models with \(n<p\) and \(P_{1},\ldots,P_{K}\) non-degenerate, so that \(\mathbf{X}\) is full-rank with probability 1.
The source apportionment model bears some resemblance to a latent factor model. It may also be viewed a linear regression model with correlated errors. To make these connections, we first write the model in matrix form: Let \(\mathbf{X}^{*}\in\mathbb{R}^{p\times K}\) be the matrix formed by column-binding \(\mathbf{x}_{1}^{*},\ldots,\mathbf{x}_{K}^{*}\), and let \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{K})^{\top}\). Then the sampling model (1) is \(\mathbf{y}=\mathbf{X}^{*}\mathbf{\theta}\). This looks similar to a linear latent factor model, which expresses a data vector as a factor loading matrix multiplied by a latent factor vector, both of which are unobserved. However, the source apportionment model differs from the latent factor model in terms of both the target of inference, and the data available for estimation. In particular, in the source apportionment problem there is only one outcome vector \(\mathbf{y}\), the matrix \(\mathbf{X}^{*}\) is viewed as random, and the target of inference is \(\mathbf{\theta}\). In contrast, in factor analysis we would have multiple \(\mathbf{y}\)-vectors observed, \(\mathbf{\theta}\) would be viewed as random (typically \(\mathbf{\theta}\sim N_{K}(\mathbf{0},\mathbf{I}_{K})\)), and the target of inference would be \(\mathbf{X}^{*}\).
Now let \(\mathbf{\mu}_{k}=\mathrm{E}[\mathbf{x}_{k}^{*}],\Sigma_{k}=\mathrm{Var}[\mathbf{x} _{k}^{*}]\), \(k=1,\ldots,K\), be the mean vectors and covariance matrices of the distributions \(P_{1},\ldots,P_{K}\), and let \(\mathbf{M}\in\mathbb{R}^{p\times K}\) be the matrix obtained by
column-binding \(\mathbf{\mu}_{1},\ldots,\mathbf{\mu}_{K}\). Marginalizing over the latent profile vectors \(\mathbf{X}^{*}\), we have
\[\mathrm{E}[\mathbf{y}] =\mathrm{E}[\mathbf{X}^{*}\mathbf{\theta}]=\mathbf{M}\mathbf{\theta}\] \[\mathrm{Var}[\mathbf{y}] =\mathrm{Var}[\theta_{1}\mathbf{x}_{1}^{*}+\cdots+\theta_{K} \mathbf{x}_{K}^{*}]\] \[=\theta_{1}^{2}\Sigma_{1}+\cdots+\theta_{K}^{2}\Sigma_{K}\equiv \Sigma_{\theta}.\]
If \(\mathbf{M}\) were known, then the above two equations specify a linear regression model for \(\mathbf{y}\), in which case the OLS estimate of \(\mathbf{\theta}\) would be \((\mathbf{M}^{\top}\mathbf{M})^{-1}\mathbf{M}^{\top}\mathbf{y}\), and the GLS estimate would be \((\mathbf{M}^{\top}\Sigma_{\theta}^{-1}\mathbf{M})^{-1}\mathbf{M}^{\top} \Sigma_{\theta}^{-1}\mathbf{y}\). Of course, the latter can only be computed if additionally \(\mathbf{\theta}\) were known, which if it were, would make estimation unnecessary. If instead
\[\Sigma_{1}=\cdots=\Sigma_{K}:=\Sigma, \tag{3}\]
then \(\mathrm{Var}[\mathbf{y}]\propto\Sigma\), and the GLS estimate can be computed without knowledge of \(\mathbf{\theta}\) because it is invariant to re-scaling of the error covariance matrix. Note that here and in what follows \(\mathbf{A}\propto\mathbf{B}\) means \(\mathbf{A}=c\mathbf{B}\) for some constant \(c\). In the next section, we use assumption (3) to develop the ATR and RTS estimates and discuss their respective connections to OLS and GLS estimates.
## 3 Linear estimators of source proportions
The authors in Bryan et al. (2023) proposed two estimates, the ATR and RTS estimates, as solutions to the source apportionment problem. Both estimates can be motivated by the idea that elements of the DOM profile dictionary \(\mathbf{X}\in\mathbb{R}^{p\times n}\) or functions thereof may
serve as surrogates for the latent profiles \(\mathbf{X}^{*}\). Let \(\mathbf{A}\) be the \(n\times K\) matrix with entries
\[A_{ik}=\left\{\begin{array}{ll}1&\quad\text{if DOM profile $i$ is from source category $k$}\\ 0&\quad\text{otherwise.}\end{array}\right. \tag{4}\]
Then the ATR and RTS estimates may be written as
\[\hat{\boldsymbol{\theta}}_{\text{ATR}} =\mathbf{A}^{\top}\mathbf{A}(\mathbf{A}^{\top}\mathbf{X}^{\top} \mathbf{X}\mathbf{A})^{-1}\mathbf{A}^{\top}\mathbf{X}^{\top}\mathbf{y},\] \[\hat{\boldsymbol{\theta}}_{\text{RTS}} =\mathbf{A}^{\top}(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{ \top}\mathbf{y}.\]
The ATR estimate is obtained by regressing \(\mathbf{y}\) on the matrix of DOM profile averages from each source category, which can be written as \(\mathbf{X}\mathbf{A}(\mathbf{A}^{\top}\mathbf{A})^{-1}\). The RTS estimate is obtained by first regressing \(\mathbf{y}\) on the matrix containing all dictionary profiles and then summing the resulting coefficients by source category. Intuition suggests that the ATR estimate should perform well when each of the latent DOM profiles resembles the average dictionary profile from the corresponding source category. The RTS estimate, on the other hand, may perform well even if each latent profile only resembles one of the dictionary profiles from its source category.
In the context of the source apportionment model, the ATR and RTS estimates can be more formally understood as OLS and GLS estimates of \(\boldsymbol{\theta}\), where \(\mathbf{X}\) has been used to obtain feasible substitutes for the unknowns \(\mathbf{M}\) and \(\Sigma\). According to the sampling model (2), we have that \(\text{E}[\mathbf{X}]=\mathbf{M}\mathbf{A}^{\top}\). Furthermore, by independence of the DOM profiles within and across source categories, along with assumption (3), we have \(\text{Var}[\mathbf{X}]\propto\mathbf{I}_{n}\otimes\Sigma\), where \(\otimes\) denotes the Kronecker product. Let \(\hat{\mathbf{M}}=\mathbf{X}\mathbf{A}(\mathbf{A}^{\top}\mathbf{A})^{-1}\), so that \(\hat{\mathbf{M}}\) is the OLS estimate of \(\mathbf{M}\) based on \(\mathbf{X}\). Next, let
\[\mathbf{S}=(\mathbf{X}-\hat{\mathbf{M}}\mathbf{A}^{\top})(\mathbf{X}-\hat{ \mathbf{M}}\mathbf{A}^{\top})^{\top}=\mathbf{X}(\mathbf{I}_{n}-\mathbf{P}_{ \mathbf{A}})\mathbf{X}^{\top},\]
where \(\mathbf{P_{A}}=\mathbf{A}(\mathbf{A}^{\top}\mathbf{A})^{-1}\mathbf{A}^{\top}\), so that \(\mathbf{S}\) is the \(p\times p\) residual sum of squares matrix from the OLS fit. Finally, define a mean-zero "residual" matrix \(\mathbf{E}=\mathbf{X}\mathbf{N}\), where \(\mathbf{N}\in\mathbb{R}^{n\times(n-K)}\) is an orthonormal basis for the null space of \(\mathbf{A}^{\top}\). Then \(\mathbf{N}^{\top}\mathbf{N}=\mathbf{I}_{n-K}\), \(\mathbf{N}\mathbf{N}^{\top}=\mathbf{I}_{n}-\mathbf{P_{A}}\), \(\mathbf{S}=\mathbf{E}\mathbf{E}^{\top}\), and
\[\mathrm{E}[\hat{\mathbf{M}}]=\mathrm{E}[\mathbf{X}]\mathbf{A}( \mathbf{A}^{\top}\mathbf{A})^{-1}=\mathbf{M}\] \[\mathrm{E}[\mathbf{S}]=\mathrm{E}[\mathbf{E}\mathbf{E}^{\top}]= \Sigma\times(n-K).\]
Hence, a reasonable, feasible OLS estimate of \(\boldsymbol{\theta}\) is \((\hat{\mathbf{M}}^{\top}\hat{\mathbf{M}})^{-1}\hat{\mathbf{M}}^{\top}\mathbf{y}\), which is precisely the ATR estimator, as the columns of \(\hat{\mathbf{M}}\) are the average DOM profiles from each source category. Since \(n<p\), \(\mathbf{S}\) will be singular, so we consider estimating \(\Sigma\) as
\[\hat{\Sigma}_{\gamma}\propto\mathbf{S}+\gamma\mathbf{I}_{p}\]
for some regularization parameter \(\gamma\geq 0\). This leads to the feasible GLS estimate
\[\hat{\boldsymbol{\theta}}_{\gamma}=(\hat{\mathbf{M}}^{\top}\hat{\Sigma}_{ \gamma}^{-1}\hat{\mathbf{M}})^{-1}\hat{\mathbf{M}}^{\top}\hat{\Sigma}_{\gamma }^{-1}\mathbf{y}. \tag{5}\]
To see the relationship between this estimate and the RTS estimate, consider that the RTS estimate can be written as \(\hat{\boldsymbol{\theta}}_{\text{RTS}}=\mathbf{A}^{\top}(\mathbf{X}^{\top} \mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{y}\). This is just \(\mathbf{A}^{\top}\hat{\boldsymbol{\beta}}\), where \(\hat{\boldsymbol{\beta}}\) is the OLS estimate for \(\boldsymbol{\beta}\) in the linear model \(\mathrm{E}[\mathbf{y}]=\mathbf{X}\boldsymbol{\beta}\). We can reparameterize this linear model as
\[\mathbf{X}\boldsymbol{\beta} =\mathbf{X}\mathbf{P_{A}}\boldsymbol{\beta}+\mathbf{X}(\mathbf{I} _{n}-\mathbf{P_{A}})\boldsymbol{\beta}\] \[=\hat{\mathbf{M}}\boldsymbol{\theta}+\mathbf{E}\boldsymbol{\eta}\] \[\equiv\mathbf{Z}\boldsymbol{\psi},\]
where \(\mathbf{\theta}=\mathbf{A}^{\top}\mathbf{\beta}\), \(\mathbf{\eta}=\mathbf{N}^{\top}\mathbf{\beta}\), \(\mathbf{Z}=[\hat{\mathbf{M}}\ \mathbf{E}]\), and \(\mathbf{\psi}=(\mathbf{\theta}^{\top}\mathbf{\eta}^{\top})^{\top}\). This reparameterization relates the linear model with all DOM profiles as regressors to the linear model with two sets of regressors: the average profiles \(\hat{\mathbf{M}}\) and the "residual" profiles \(\mathbf{E}\). The following proposition then relates the coefficient estimates from these two models.
**Proposition 1**.: _Let \(\hat{\mathbf{\beta}}=(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{y}\), and let \(\hat{\mathbf{\psi}}=(\mathbf{Z}^{\top}\mathbf{Z})^{-1}\mathbf{Z}^{\top}\mathbf{y}\), where \(\mathbf{Z}\) is defined as above. Then_
\[\mathbf{A}^{\top}\hat{\mathbf{\beta}}=\hat{\mathbf{\psi}}[1:K]\]
Hence, \(\hat{\mathbf{\theta}}_{\mathrm{RTS}}=\hat{\mathbf{\psi}}[1:K]\). Together with previous results, Proposition 1 implies that while the ATR estimate is equivalent to the OLS estimate of the \(\hat{\mathbf{M}}\) coefficients in a regression of \(\mathbf{y}\) on \(\hat{\mathbf{M}}\), the RTS estimate is equivalent to the OLS estimate of the \(\hat{\mathbf{M}}\) coefficients in an expanded linear model, one which controls for the variation described by \(\mathbf{E}\). On the other hand, applying Seber and Lee (2003) Theorem 3.6(i) to \(\hat{\mathbf{\psi}}[1:K]\), we find another equivalence
\[\hat{\mathbf{\theta}}_{\mathrm{RTS}}=(\hat{\mathbf{M}}^{\top}(\mathbf{I}_{p}- \mathbf{P}_{\mathbf{E}})\hat{\mathbf{M}})^{-1}\hat{\mathbf{M}}^{\top}(\mathbf{ I}_{p}-\mathbf{P}_{\mathbf{E}})\mathbf{y},\]
where \(\mathbf{P}_{\mathbf{E}}=\mathbf{E}(\mathbf{E}^{\top}\mathbf{E})^{-1}\mathbf{ E}^{\top}\). This shows that the RTS estimate can also be expressed as a type of feasible GLS estimate, in which \(\mathbf{I}_{p}-\mathbf{P}_{\mathbf{E}}\) plays the role of an inverse covariance matrix. The next proposition makes this notion precise, showing that the RTS estimate is a limiting case of the feasible GLS estimate in (5).
**Proposition 2**.: _Let \(\hat{\mathbf{\theta}}_{\gamma}=(\hat{\mathbf{M}}^{\top}\hat{\Sigma}_{\gamma}^{-1} \hat{\mathbf{M}})^{-1}\hat{\mathbf{M}}^{\top}\hat{\Sigma}_{\gamma}^{-1} \mathbf{y}\). Then_
\[\lim_{\gamma\to 0}\ \hat{\mathbf{\theta}}_{\gamma}=\hat{\mathbf{\theta}}_{ \mathrm{RTS}}.\]
_Also_
\[\lim_{\gamma\to\infty}\ \hat{\boldsymbol{\theta}}_{\gamma}=\hat{\boldsymbol{ \theta}}_{\rm ATR}.\]
The least squares connections developed above can also be extended to the problem of prediction in the source apportionment model. Suppose that instead of observing all elements of the \(p\)-dimensional DOM profile \(\mathbf{y}\), we only observe a \(q\)-dimensional subvector. In a practical setting, this might happen if a downstream DOM profile is measured only on a subset of the excitation frequencies used to measure the profiles in the dictionary. Partition \(\mathbf{y}\) into observed and unobserved components \(\mathbf{y}_{0}\in\mathbb{R}^{p-q},\mathbf{y}^{\prime}\in\mathbb{R}^{q}\), and consider the problem of predicting \(\mathbf{y}^{\prime}\) from \(\mathbf{X}\) and \(\mathbf{y}_{0}\).
Assume that \(\mathbf{y}\) follows the partitioned source apportionment model
\[(\mathrm{E}[\mathbf{y}_{0}],\mathrm{E}[\mathbf{y}^{\prime}]) =(\mathbf{M}_{0}\boldsymbol{\theta},\mathbf{M}^{\prime}\boldsymbol {\theta})\] \[\mathrm{Var}[\mathbf{y}] =\left[\begin{array}{cc}\Sigma_{0}&\Delta\\ \Delta^{\top}&\Sigma^{\prime}\end{array}\right],\]
where \(\Sigma_{0}\) is \((p-q)\times(p-q)\), \(\Delta\) is \(q\times(p-q)\), and \(\Sigma^{\prime}\) is \(q\times q\). The best linear unbiased predictor for the unobserved portion of the downstream profile is then
\[\hat{\mathbf{y}}^{\prime}=\mathbf{M}^{\prime}\hat{\boldsymbol{ \theta}}+\Delta^{\top}\Sigma_{0}^{-1}(\mathbf{y}_{0}-\mathbf{M}_{0}\hat{ \boldsymbol{\theta}}), \tag{6}\]
where \(\hat{\boldsymbol{\theta}}=(\mathbf{M}_{0}^{\top}\Sigma_{0}^{-1}\mathbf{M}_{0}) ^{-1}\mathbf{M}_{0}^{\top}\Sigma_{0}^{-1}\mathbf{y}_{0}\)(Kariya and Kurata, 2004). As before, we can obtain a feasible version of \(\hat{\mathbf{y}}^{\prime}\) by using the dictionary of DOM profiles to create substitutes for the unknowns in (6). Partition the dictionary in the same manner as \(\mathbf{y}\), and let
\[\hat{\mathbf{M}} =(\hat{\mathbf{M}}_{0},\hat{\mathbf{M}}^{\prime})\] \[\hat{\Sigma}_{\gamma} =\left[\begin{array}{cc}\hat{\Sigma}_{0\gamma}&\hat{\Delta}\\ \hat{\Delta}^{\top}&\hat{\Sigma}_{\gamma}^{\prime}\end{array}\right]\]
be the corresponding partitions of \(\hat{\mathbf{M}},\hat{\Sigma}_{\gamma}\). Then define the feasible predictor
\[\hat{\mathbf{y}}^{\prime}_{\gamma}=\hat{\mathbf{M}}^{\prime}\hat{ \boldsymbol{\theta}}_{0\gamma}+\hat{\Delta}^{\top}\hat{\Sigma}_{0\gamma}^{-1}( \mathbf{y}_{0}-\hat{\mathbf{M}}_{0}\hat{\boldsymbol{\theta}}_{0\gamma}),\]
where \(\hat{\boldsymbol{\theta}}_{0\gamma}=(\hat{\mathbf{M}}_{0}^{\top}\hat{\Sigma}_{0 \gamma}^{-1}\hat{\mathbf{M}}_{0})^{-1}\hat{\mathbf{M}}_{0}^{\top}\hat{\Sigma}_ {0\gamma}^{-1}\mathbf{y}_{0}\). In analogy to Proposition 2, we obtain simple limiting expressions for \(\hat{\mathbf{y}}^{\prime}_{\gamma}\).
**Proposition 3**.: \[\lim_{\gamma\to 0}\ \hat{\mathbf{y}}^{\prime}_{\gamma}=\mathbf{X}^{ \prime}(\mathbf{X}_{0}^{\top}\mathbf{X}_{0})^{-1}\mathbf{X}_{0}^{\top} \mathbf{y}_{0}.\]
_Also,_
\[\lim_{\gamma\to\infty}\ \hat{\mathbf{y}}^{\prime}_{\gamma}=\hat{\mathbf{M}}^{ \prime}(\hat{\mathbf{M}}_{0}^{\top}\hat{\mathbf{M}}_{0})^{-1}\hat{\mathbf{M}} _{0}^{\top}\mathbf{y}_{0}.\]
We call these limiting predictors the RTS and ATR predictors, respectively. Like the ATR and RTS estimates, the ATR and RTS predictors can be explained in simple terms and can be computed using standard linear regression tools.
## 4 Variability of the RTS estimates
Because the RTS estimate is a feasible version of the actual GLS estimate in the source apportionment model, it is not, in general, optimal in terms of mean squared error, nor is it necessarily unbiased. However, the same can be said of the ATR estimate. The properties of each depend on the extent to which the feasible approximations \(\hat{\mathbf{M}}\approx\mathbf{M}\) and \(\hat{\Sigma}_{\gamma}\approx\Sigma\) hold. In this section, we first analyze the variability of the ATR and RTS estimates assuming an idealized model where the mean and variance of the downstream DOM profile can be described exactly using the dictionary profiles. Doing so offers some insight into
when the RTS estimate may be less variable than the ATR estimate and allows us to derive simple standard errors for the RTS estimate. We then discuss what happens to the RTS standard errors in the general case. The variability we consider here is with respect to the variability in the latent DOM profiles only, as we assume the dictionary profiles to be fixed at their observed values.
Suppose that \(\mathbf{y}\) follows a source apportionment model given by
\[\begin{split}\mathrm{E}[\mathbf{y}]&=\hat{\mathbf{ M}}\boldsymbol{\theta}\\ \mathrm{Var}[\mathbf{y}]&=\|\boldsymbol{\theta}\|_ {2}^{2}\hat{\Sigma}_{\gamma},\end{split} \tag{7}\]
where \(\hat{\mathbf{M}},\hat{\Sigma}_{\gamma}\) are functions, as defined in the previous section, of a non-random dictionary \(\mathbf{X}\) and a design-like matrix \(\mathbf{A}\). In this model, both \(\hat{\boldsymbol{\theta}}_{\mathrm{ATR}}\) and \(\hat{\boldsymbol{\theta}}_{\mathrm{RTS}}\) are unbiased since they are both of the form \(\mathbf{C}^{\top}\mathbf{y}\) for some matrix \(\mathbf{C}\in\mathbb{R}^{p\times K}\) such that \(\mathbf{C}^{\top}\hat{\mathbf{M}}=\mathbf{I}_{K}\). However, their variances differ. Recalling the definition of \(\hat{\Sigma}_{\gamma}\), we have
\[\begin{split}\mathrm{Var}[\hat{\boldsymbol{\theta}}_{\mathrm{ATR}}]& \propto(\hat{\mathbf{M}}^{\top}\hat{\mathbf{M}})^{-1}\hat{\mathbf{M}}^{ \top}\mathbf{S}\hat{\mathbf{M}}(\hat{\mathbf{M}}^{\top}\hat{\mathbf{M}})^{-1} +\gamma(\hat{\mathbf{M}}^{\top}\hat{\mathbf{M}})^{-1}\\ \mathrm{Var}[\hat{\boldsymbol{\theta}}_{\mathrm{RTS}}]& \propto\gamma\mathbf{A}^{\top}(\mathbf{X}^{\top}\mathbf{X})^{-1} \mathbf{A},\end{split} \tag{8}\]
with respect to the same constant of proportionality. In the expression for the variance of the RTS estimate, the additional term involving \(\mathbf{S}\) has vanished because \(\mathbf{S}=\mathbf{E}\mathbf{E}^{\top}\), and
\[\mathbf{A}^{\top}(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{E }=\mathbf{A}^{\top}(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{ X}\mathbf{N}=\mathbf{A}^{\top}\mathbf{N}=\mathbf{0}.\]
Looking at (8), it is clear that, at least in this idealized model, the variance of the RTS estimate can become arbitrarily small as \(\gamma\to 0\). However, as a consequence of the matrix Cauchy-Schwarz inequality (Marshall and Olkin, 1990) (alternatively, a consequence of the Gauss-Markov Theorem (Aitken, 1936)), we have the following correspondence in the
Loewner partial order
\[(\hat{\mathbf{M}}^{\top}\hat{\mathbf{M}})^{-1}\preceq\mathbf{A}^{\top}(\mathbf{X} ^{\top}\mathbf{X})^{-1}\mathbf{A},\]
so the variance of the RTS estimate actually becomes greater than that of the ATR estimate as \(\gamma\rightarrow\infty\). The next proposition gives the interval of values for \(\gamma\) in which the RTS estimate outperforms the ATR estimate as a function of the various matrices in (8).
**Proposition 4**.: _Let \(\gamma>0\), and assume \(\mathbf{y}\) follows the source apportionment model in (7). Let_
\[\mathbf{V}_{1} =\mathbf{A}^{\top}(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{A}-( \hat{\mathbf{M}}^{\top}\hat{\mathbf{M}})^{-1}\] \[\mathbf{V}_{2} =(\hat{\mathbf{M}}^{\top}\hat{\mathbf{M}})^{-1}\hat{\mathbf{M}}^ {\top}\mathbf{S}\hat{\mathbf{M}}(\hat{\mathbf{M}}^{\top}\hat{\mathbf{M}})^{- 1}.\]
_Then \(\mathrm{Var}[\hat{\boldsymbol{\theta}}_{\mathrm{RTS}}]\preceq\mathrm{Var}[\hat {\boldsymbol{\theta}}_{\mathrm{ATR}}]\) if and only if \(\gamma\leq\lambda_{\min}(\mathbf{V}_{1}^{-1}\mathbf{V}_{2})\), where \(\lambda_{\min}\) denotes the minimum eigenvalue._
While the condition in Proposition 4 cannot be checked directly because \(\gamma\) is unknown, one can determine the range of \(\gamma\) values favorable to the RTS estimate because \(\lambda_{\min}(\mathbf{V}_{1}^{-1}\mathbf{V}_{2})\) may be computed from the dictionary. The matrix \(\mathbf{V}_{1}\) quantifies the gap between the RTS and ATR variance in the case of entirely isotropic error, and as the scale of this term grows, the region favorable to the RTS estimate shrinks. Recalling that \(\mathbf{S}=\mathbf{X}(\mathbf{I}_{n}-\mathbf{P}_{\mathbf{A}})\mathbf{X}^{\top}\), it can be shown that computing the matrix \(\mathbf{V}_{2}\) is equivalent to first computing ATR coefficients on each dictionary element and then taking the sum of the source-wise covariance matrices of these coefficients. As the scale of \(\mathbf{V}_{2}\) increases, the region favorable to the RTS estimate grows.
The expression for \(\mathrm{Var}[\hat{\boldsymbol{\theta}}_{\mathrm{RTS}}]\) in (8) suggests that the matrix of squared standard errors
\[\mathrm{SSE}[\hat{\boldsymbol{\theta}}_{\mathrm{RTS}}]=\frac{(\mathbf{y}^{ \top}(\mathbf{I}_{p}-\mathbf{P}_{\mathbf{X}})\mathbf{y})}{p-n}\mathbf{A}^{ \top}(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{A} \tag{9}\]
may be used to estimate the variability of the RTS estimate. Assuming (7), SSE[\(\hat{\mathbf{\theta}}_{\text{RTS}}\)] is unbiased for \(\text{Var}[\hat{\mathbf{\theta}}_{\text{RTS}}]\) because \(\frac{(\mathbf{y}^{\top}(\mathbf{I}-\mathbf{P}_{\mathbf{X}})\mathbf{y})}{p-n}\) is an unbiased estimate of the magnitude of the isotropic component of the variance of \(\mathbf{y}\). Specifically,
\[\text{E}[(\mathbf{I}_{p}-\mathbf{P}_{\mathbf{X}})\mathbf{y}] =(\mathbf{I}_{p}-\mathbf{P}_{\mathbf{X}})\hat{\mathbf{M}}\mathbf{ \theta}=\mathbf{0}\] \[\text{Var}[(\mathbf{I}_{p}-\mathbf{P}_{\mathbf{X}})\mathbf{y}] =\|\mathbf{\theta}\|_{2}^{2}(\mathbf{I}_{p}-\mathbf{P}_{\mathbf{X}}) \hat{\Sigma}_{\gamma}=c\gamma\|\mathbf{\theta}\|_{2}^{2}(\mathbf{I}_{p}-\mathbf{P }_{\mathbf{X}}),\]
for some constant \(c>0\). Therefore, \(\text{E}[(\mathbf{y}^{\top}(\mathbf{I}_{p}-\mathbf{P}_{\mathbf{X}})\mathbf{y}) /(p-n)]=c\gamma\|\mathbf{\theta}\|_{2}^{2}\). When (7) does not hold, the difference between SSE[\(\hat{\mathbf{\theta}}_{\text{RTS}}\)] and \(\text{Var}[\hat{\mathbf{\theta}}_{\text{RTS}}]\) will depend on the relationship between the dictionary profiles and the unknowns \(\mathbf{M},\Sigma\). The next proposition characterizes the average behavior of SSE[\(\hat{\mathbf{\theta}}_{\text{RTS}}\)] in a general source apportionment model in terms of three mutually orthogonal subspaces of \(\mathbb{R}^{p}\).
**Proposition 5**.: _Assume that \(\mathbf{y}\) follows a general source apportionment model_
\[\text{E}[\mathbf{y}] =\mathbf{M}\mathbf{\theta}\] \[\text{Var}[\mathbf{y}] =\|\mathbf{\theta}\|_{2}^{2}\Sigma.\]
_Let \(v_{k}\) be the \(k^{\text{th}}\) diagonal entry of \(\text{Var}[\hat{\mathbf{\theta}}_{\text{RTS}}]\) and let \(\hat{v}_{k}\) be the \(k^{\text{th}}\) diagonal entry of SSE[\(\hat{\mathbf{\theta}}_{\text{RTS}}\)]. Also let_
* \(\mathbf{U}_{1}\) _be the_ \(p\times(n-k)\) _matrix whose columns are the left singular vectors of_ \(\mathbf{E}\)_._
* \(\mathbf{U}_{2}\) _be the_ \(p\times k\) _matrix whose columns are the left singular vectors of_ \(\mathbf{X}(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{A}\)_._
* \(\mathbf{U}_{3}\) _be the_ \(p\times(p-n)\) _matrix whose columns are the left singular vectors of_ \(\mathbf{P}_{\mathbf{X}}\)
_Then_
\[\mathrm{E}[v_{k}-\hat{v}_{k}]/\|\boldsymbol{\theta}\|_{2}^{2}\leq \mathbf{a}_{k}^{\top}(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{a}_{k}[\lambda_{ \mathrm{max}}(\mathbf{U}_{2}^{\top}\Sigma\mathbf{U}_{2})-\bar{\lambda}(\mathbf{ U}_{3}^{\top}\Sigma\mathbf{U}_{3})] \tag{10}\] \[\mathrm{E}[v_{k}-\hat{v}_{k}]/\|\boldsymbol{\theta}\|_{2}^{2}\geq \mathbf{a}_{k}^{\top}(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{a}_{k}[\lambda_ {\mathrm{min}}(\mathbf{U}_{2}^{\top}\Sigma\mathbf{U}_{2})-\bar{\lambda}( \mathbf{U}_{3}^{\top}\Sigma\mathbf{U}_{3})-\bar{\lambda}(\mathbf{U}_{3}^{ \top}\mathbf{M}\mathbf{M}^{\top}\mathbf{U}_{3})] \tag{11}\]
_where \(\lambda_{\mathrm{min}}\) denotes the minimum eigenvalue, \(\lambda_{\mathrm{max}}\) denotes the maximum eigenvalue, and \(\bar{\lambda}\) denotes the average eigenvalue._
To interpret Proposition 5, first note that the columns of \(\mathbf{U}_{1}\) form an orthonormal basis for the subspace spanned by the first \(n-k\) singular vectors of \(\hat{\Sigma}_{\gamma}\). The magnitude of \(\Sigma\) lying in this subspace, the principal subspace of the feasible approximation to \(\Sigma\), contributes nothing to the bias of the squared standard errors. Instead, the bias depends on the magnitude of \(\Sigma\) lying along the remaining, orthogonal, directions described by the columns of \(\mathbf{U}_{2}\) and \(\mathbf{U}_{3}\). The term \(\bar{\lambda}(\mathbf{U}_{3}^{\top}\mathbf{M}\mathbf{M}^{\top}\mathbf{U}_{3})\) is the average squared residual between \(\mathbf{M}\) and its projection onto the column space of \(\mathbf{X}\). The more the columns of \(\mathbf{M}\) lie in the column space of the dictionary DOM profiles, the more this term approaches 0. The upper bound in (10) can be interpreted as measuring both the non-isotropy of \(\mathbf{U}_{2}^{\top}\Sigma\mathbf{U}_{2}\) and the difference between the magnitudes of \(\mathbf{U}_{2}^{\top}\Sigma\mathbf{U}_{2}\) and \(\mathbf{U}_{3}^{\top}\Sigma\mathbf{U}_{3}\). The lower bound in (11) (excluding \(\bar{\lambda}(\mathbf{U}_{3}^{\top}\mathbf{M}\mathbf{M}^{\top}\mathbf{U}_{3})\)) has the same interpretation. If the variability described by \(\Sigma\) in the directions orthogonal to \(\mathbf{U}_{1}\) is isotropic and the columns of \(\mathbf{M}\) can be written as linear combinations of the columns of \(\mathbf{X}\), then SSE\([\hat{\boldsymbol{\theta}}_{\mathrm{RTS}}]\) will be unbiased for Var\([\hat{\boldsymbol{\theta}}_{\mathrm{RTS}}]\). This is of course the case when \(\mathbf{M}=\hat{\mathbf{M}}\) and \(\Sigma=\hat{\Sigma}_{\gamma}\).
To conclude, we note that none of the analyses in this section nor those in the previous section depend on the particular structure of \(\mathbf{A}\) described in (4). In the Neuse River dataset, each dictionary EEM comes from a single land use source, so the corresponding \(\mathbf{A}\) contains
only zeros and ones. However, experimental conditions in other source apportionment problems may permit the collection of a dictionary with mixed elements of known source proportions. The rows of the corresponding \(\mathbf{A}\) will then be composed of these known mixing proportions, and the corresponding "RTS" coefficients will be weighted sums of regression coefficients.
## 5 Source apportionment in practice
While least squares theory guarantees the superiority of the GLS estimate and predictor over those of OLS, the results from Section 4 show that the relative performance of the corresponding RTS and ATR quantities is model- and dictionary-dependent. In this section, we provide numerical evidence that the RTS estimate and predictor are indeed superior to the ATR estimate and predictor in the context of fluorescence spectroscopy measurements of DOM. The approach we take is to evaluate the properties of the ATR and RTS methods in the context of a realistic source apportionment model, for which the population-level quantities \(\mathbf{M}\) and \(\Sigma\) are derived from the 202 DOM profiles in the Neuse River dataset.
Let \(\mathbf{X}\) be the \(4891\times 202\) matrix whose columns are the DOM profiles in the Neuse River dataset. The numerical results in this section are computed with respect to a source apportionment model for a downstream DOM profile that has mean and covariance
\[\mathbf{M}\boldsymbol{\theta} =\mathbf{X}\mathbf{A}(\mathbf{A}^{\top}\mathbf{A})^{-1}\boldsymbol {\theta}\] \[\|\boldsymbol{\theta}\|_{2}^{2}\Sigma =\|\boldsymbol{\theta}\|_{2}^{2}\left[\frac{\nu^{*}}{n-K}\mathbf{ X}(\mathbf{I}_{n}-\mathbf{P}_{\mathbf{A}})\mathbf{X}^{\top}+\gamma^{*}\mathbf{I}_{p} \right],\]
where \(\nu^{*}\) and \(\gamma^{*}\) are positive scalars chosen to make \(\Sigma\) equal to the optimal covariance estimator defined in Ledoit and Wolf (2004) Eqn. 14. Within this model, we evaluate the
ATR and RTS methods for each of 250 values of \(\mathbf{\theta}\), and for each of 4 possible dictionaries, yielding a total of 1000 distinct points of evaluation. The \(\mathbf{\theta}\) values, pictured in Figure 2, are simulated independently from a Dirichlet\((\mathbf{1}_{K}/K)\) distribution, which has coverage near the boundaries of the \(K\)-dimensional probability simplex. Each dictionary is constructed by selecting a fraction, \(\alpha\), of the DOM profiles from the total Neuse River dataset. The profiles are sampled uniformly at random from each source category, and then column-bound to form a dictionary matrix \(\mathbf{X}_{\alpha}\) for each of \(\alpha\in\{0.25,0.5,0.75,0.95\}\).
As \(\alpha\) increases, the dictionary matrix \(\mathbf{X}_{\alpha}\) explains more of the variation in the population mean and covariance of the source apportionment model considered in this study. The different values of \(\alpha\) therefore allow us to observe what happens as the feasible ATR and RTS estimates approach the oracle OLS and GLS estimates, which require knowledge of \(\mathbf{M}\) and \(\Sigma\). To be precise, as \(\alpha\to 1\) the properties of the RTS estimate actually approach those attained in the ideal model (7). However, as the optimal \(\gamma^{*}\) in this study is quite
Figure 2: Depiction of \(\mathbf{\theta}\) values at which ATR and RTS estimates and predictors are evaluated. There are \(K=9\) source categories, each corresponding to a land use source. Darker/lighter gray signifies higher/lower coefficient weight. The \(\mathbf{\theta}\)’s all have positive entries that sum to 1 and are arranged in increasing order of entropy, from left to right.
small, these are very close to the properties of the oracle GLS estimates.
As seen in Figure 3, the square root of the mean squared error (RMSE), \(\sqrt{\text{E}[\|\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}\|_{2}^{2}]}\), of the ATR and RTS estimates gets closer to the RMSE attained by their oracle counterparts as \(\alpha\) increases. Importantly, the RMSE of the RTS estimate is lower than that of the ATR estimate for all values of \(\boldsymbol{\theta}\), even when \(\alpha=0.25\), suggesting that the RTS estimate should be preferred to the ATR estimate when applied to fluorescence profiles of DOM. The fact that the points in Figure 3 seem to lie nearly along straight lines suggests that the RMSE of all of these estimates is dominated by the variance component (the variances of the ATR, RTS, oracle OLS, and oracle GLS estimates are all proportional to \(\|\boldsymbol{\theta}\|\)). This
Figure 3: Performance of the ATR and RTS estimates in the numerical study as measured by RMSE. Each point corresponds to a different value of \(\boldsymbol{\theta}\) and \(\alpha\). The RMSE of the RTS estimates is lower than that of the ATR estimates for all values of \(\boldsymbol{\theta}\) and \(\alpha\). As \(\alpha\) increases, the RTS RMSE continues to decrease to that of oracle GLS, while the ATR RMSE stabilizes around that of oracle OLS.
suggests that the superiority of the RTS estimate relative to the ATR estimate is due primarily to a reduction in variance, which is consistent with its connections to GLS.
A similar phenomenon is seen regarding the ATR and RTS predictors. To assess these, we imagine that a downstream DOM profile is only scanned at 28 out of the 43 excitation wavelengths used to create the Neuse River dataset (see Figure 4, left). As described in Section 3, the prediction task is then to reconstruct the full profile from the partially observed downstream profile and the fully observed dictionary profiles. Using the same evaluation points as before, we compute prediction RMSE \(\sqrt{\text{E}[\|\mathbf{y}^{\prime}-\hat{\mathbf{y}}^{\prime}\|_{2}^{2}]}\) for each value of \(\boldsymbol{\theta}\) and \(\alpha\). While visually there are only minor differences between the ATR and RTS reconstructions in the left panel of Figure 4, it is clear from the right panel of Figure 4
Figure 4: Performance of the ATR and RTS predictors. On the left, raster images of an example partial, ATR/RTS reconstructed, and true simulated EEM. On the right, the \(y\)-axis is the difference between the prediction RMSE of the ATR predictor and that of the RTS predictor. The \(x\)-axis is the prediction RMSE of the RTS predictor. Each point corresponds to a different value of \(\boldsymbol{\theta}\) and \(\alpha\).
that the RTS predictor has lower prediction RMSE than the ATR predictor for nearly all values of \(\mathbf{\theta}\) and \(\alpha\).
The results concerning our proposed standard errors for the RTS estimate are pictured in Figure 5. The standard errors are biased downwards, meaning they tend to underestimate the standard deviation of the RTS estimate, for all values of \(\alpha\). At \(\alpha=0.25\), the scale of the bias is quite large relative to the standard deviation. However, at \(\alpha=0.75\) and \(\alpha=0.95\) the standard errors begin to give a more accurate sense of the true variability of the RTS estimate. When \(\alpha=1\) (not pictured), the standard errors are unbiased, as discussed in Section 4.
Figure 5: Expected standard error of the RTS estimate versus its standard deviation. There is one point per value of \(\mathbf{\theta}\), \(\alpha\), and source category, which produces the effect of having \(K=9\) visually distinct trajectories for each value of \(\alpha\). Our standard errors are biased downwards for all values of \(\alpha\), though the bias decreases as \(\alpha\) increases.
Discussion
The source apportionment model is a latent variable model for DOM profiles collected downstream of known land-use sources. In the context of the source apportionment model, least squares theory implies the existence of an optimal linear estimate of source proportions, the GLS estimates, which requires knowledge of the source-specific mean DOM profiles and covariance matrices. Given a dictionary of DOM profiles collected from the same land-use sources that contribute to the downstream profile, a feasible version of this optimal estimate, the RTS estimate, may be computed using the tools of simple linear regression. While the RTS estimate is not guaranteed to be optimal in the source apportionment model, our numerical results suggest that the RTS estimate has similar behavior to its oracle GLS counterpart when applied to fluorescence spectroscopy measurements of DOM. Similarly, the RTS predictor behaves like the oracle GLS predictor.
As discussed in Section 4, the bias in our proposed RTS squared standard errors results from a discrepancy between the matrices \(\mathbf{M}\) and \(\Sigma\) and their dictionary-derived approximations. A promising direction for debiasing these squared standard errors is to try to estimate the bias components in Proposition 5 from the dictionary, perhaps using disjoint subsets of the dictionary profiles. However, a full account of such an approach should consider the randomness in the dictionary profiles, and remains a direction for future research.
The classical least squares framework used in this article provides a prescription for how to use a DOM profile dictionary to solve the source apportionment problem. However, it ignores the non-negative nature of fluorescence spectroscopy data, and places no non-negativity restrictions on the estimated source proportions. Another interesting research
direction is to study the properties of positive analogues of the OLS, GLS, ATR, and RTS estimates computed using non-negative least squares regression (Lawson and Hanson, 1995), and to determine the extent to which the results of classical regression still apply.
The Neuse River dataset, proofs of the propositions in this article, and software to replicate the figures in this article are available as supplementary files.
|
2303.12558 | Wasserstein Auto-encoded MDPs: Formal Verification of Efficiently
Distilled RL Policies with Many-sided Guarantees | Although deep reinforcement learning (DRL) has many success stories, the
large-scale deployment of policies learned through these advanced techniques in
safety-critical scenarios is hindered by their lack of formal guarantees.
Variational Markov Decision Processes (VAE-MDPs) are discrete latent space
models that provide a reliable framework for distilling formally verifiable
controllers from any RL policy. While the related guarantees address relevant
practical aspects such as the satisfaction of performance and safety
properties, the VAE approach suffers from several learning flaws (posterior
collapse, slow learning speed, poor dynamics estimates), primarily due to the
absence of abstraction and representation guarantees to support latent
optimization. We introduce the Wasserstein auto-encoded MDP (WAE-MDP), a latent
space model that fixes those issues by minimizing a penalized form of the
optimal transport between the behaviors of the agent executing the original
policy and the distilled policy, for which the formal guarantees apply. Our
approach yields bisimulation guarantees while learning the distilled policy,
allowing concrete optimization of the abstraction and representation model
quality. Our experiments show that, besides distilling policies up to 10 times
faster, the latent model quality is indeed better in general. Moreover, we
present experiments from a simple time-to-failure verification algorithm on the
latent space. The fact that our approach enables such simple verification
techniques highlights its applicability. | Florent Delgrange, Ann Nowé, Guillermo A. Pérez | 2023-03-22T13:41:42Z | http://arxiv.org/abs/2303.12558v2 | # Wasserstein Auto-encoded MDPs
###### Abstract
Although deep reinforcement learning (DRL) has many success stories, the large-scale deployment of policies learned through these advanced techniques in safety-critical scenarios is hindered by their lack of formal guarantees. Variational Markov Decision Processes (VAE-MDPs) are discrete latent space models that provide a reliable framework for distilling formally verifiable controllers from any RL policy. While the related guarantees address relevant practical aspects such as the satisfaction of performance and safety properties, the VAE approach suffers from several learning flaws (posterior collapse, slow learning speed, poor dynamics estimates), primarily due to the absence of abstraction and representation guarantees to support latent optimization. We introduce the Wasserstein auto-encoded MDP (WAE-MDP), a latent space model that fixes those issues by minimizing a penalized form of the optimal transport between the behaviors of the agent executing the original policy and the distilled policy, for which the formal guarantees apply. Our approach yields bisimulation guarantees while learning the distilled policy, allowing concrete optimization of the abstraction and representation model quality. Our experiments show that, besides distilling policies up to 10 times faster, the latent model quality is indeed better in general. Moreover, we present experiments from a simple time-to-failure verification algorithm on the latent space. The fact that our approach enables such simple verification techniques highlights its applicability.
## 1 Introduction
_Reinforcement learning_ (RL) is emerging as a solution of choice to address challenging real-word scenarios such as epidemic mitigation and prevention strategies (Libin et al., 2020), multi-energy management (Ceusters et al., 2021), or effective canal control (Ren et al., 2021). RL enables learning high performance controllers by introducing general nonlinear function approximators (such as neural networks) to scale with high-dimensional and continuous state-action spaces. This introduction, termed _deep-RL_, causes the loss of the conventional convergence guarantees of RL (Tsitsiklis, 1994) as well as those obtained in some continuous settings (Nowe, 1994), and hinders their wide roll-out in critical settings. This work _enables_ the _formal verification of any_ such policies, learned by agents interacting with unknown, continuous environments modeled as _Markov decision processes_ (MDPs). Specifically, we learn a _discrete_ representation of the state-action space of the MDP, which yield both a (smaller, explicit) _latent space model_ and a distilled version of the RL policy, that are tractable for _model checking_(Baier and Katoen, 2008). The latter are supported by _bisimulation guarantees_: intuitively, the agent behaves similarly in the original and latent models. The strength of our approach is not simply that we verify that the RL agent meets a _predefined_ set of specifications, but rather provide an abstract model on which the user can reason and check _any_ desired agent property.
_Variational MDPs_ (VAE-MDPs, Delgrange et al. 2022) offer a valuable framework for doing so. The distillation is provided with PAC-verifiable bisimulation bounds guaranteeing that the agent behaves similarly (i) in the original and latent model (_abstraction quality_); (ii) from all original states embedded to the same discrete state (_representation quality_). Whilst the bounds offer a confidence metric that enables the verification of performance and safety properties, VAE-MDPs suffer from several learning flaws. First, training a VAE-MDP relies on variational proxies to the bisimulation
bounds, meaning there is no learning guarantee on the quality of the latent model via its optimization. Second, _variational autoencoders_ (VAEs) (Kingma & Welling, 2014; Hoffman et al., 2013) are known to suffer from _posterior collapse_ (e.g., Alemi et al. 2018) resulting in a deterministic mapping to a unique latent state in VAE-MDPs. Most of the training process focuses on handling this phenomenon and setting up the stage for the concrete distillation and abstraction, finally taking place in a second training phase. This requires extra regularizers, setting up annealing schemes and learning phases, and defining prioritized replay buffers to store transitions. Distillation through VAE-MDPs is thus a meticulous task, requiring a large step budget and tuning many hyperparameters.
Building upon _Wasserstein_ autoencoders (Tolstikhin et al., 2018) instead of VAEs, we introduce _Wasserstein auto-encoded MDPs_ (VAE-MDPs), which overcome those limitations. Our WAE relies on the _optimal transport_ (OT) from trace distributions resulting from the execution of the RL policy in the real environment to that reconstructed from the latent model operating under the distilled policy. In contrast to VAEs which rely on variational proxies, we derive a novel objective that directly incorporate the bisimulation bounds. Furthermore, while VAEs learn stochastic mappings to the latent space which need be determinized or even entirely reconstructed from data at the deployment time to obtain the guarantees, our WAE has no such requirements, and learn _all the necessary components to obtain the guarantees during learning_, and does not require such post-processing operations.
Those theoretical claims are reflected in our experiments: policies are distilled up to \(10\) times faster through WAE- than VAE-MDPs and provide better abstraction quality and performance in general, without the need for setting up annealing schemes and training phases, nor prioritized buffer and extra regularizer. Our distilled policies are able to recover (and sometimes even outperform) the original policy performance, highlighting the representation quality offered by our new framework: the distillation is able to remove some non-robustness of the input RL policy. Finally, we formally verified _time-to-failure_ properties (e.g., Pnueli 1977) to emphasize the applicability of our approach.
**Other Related Work.** Complementary works approach safe RL via formal methods (Junges et al., 2016; Alshiekh et al., 2018; Jansen et al., 2020; Simao et al., 2021), aimed at formally ensuring safety _during RL_, all of which require providing an abstract model of the safety aspects of the environment. They also include the work of Alamdari et al. (2020), applying synthesis and model checking on policies distilled from RL, without quality guarantees. Other frameworks share our goal of verifying deep-RL policies (Bacci & Parker, 2020; Carr et al., 2020) but rely on a known environment model, among other assumptions (e.g., deterministic or discrete environment). Finally, _DeepSynth_(Hasanbeig et al., 2021) allows learning a formal model from execution traces, with the different purpose of guiding the agent towards sparse and non-Markovian rewards.
On the latent space training side, WWAEs (Zhang et al., 2019) reuse OT as latent regularizer discrepancy (in Gaussian closed form), whereas we derive two regularizers involving OT. These two are, in contrast, optimized via the dual formulation of Wasserstein, as in _Wasserstein-GANs_(Arjovsky et al., 2017). Similarly to _VQ-VAEs_(van den Oord et al., 2017) and _Latent Bernoulli AEs_(Fajdl et al., 2020), our latent space model learns discrete spaces via deterministic encoders, but relies on a smooth approximation instead of using the straight-through gradient estimator.
Works on _representation learning_ for RL (Gelada et al., 2019; Castro et al., 2021; Zhang et al., 2021; Zang et al., 2022) consider bisimulation metrics to optimize the representation quality, and aim at learning (continuous) representations which capture bisimulation, so that two states close in the representation are guaranteed to provide close and relevant information to optimize the performance of the controller. In particular, as in our work, _DeepMDPs_(Gelada et al., 2019) are learned by optimizing _local losses_, by assuming a deterministic MDP and without verifiable confidence measurement.
## 2 Background
In the following, we write \(\Delta(\mathcal{X})\) for the set of measures over (complete, separable metric space) \(\mathcal{X}\). The index of all the notations introduced along the paper is available at the end of the Appendix.
**Markov decision processes** (MDPs) are tuples \(\mathcal{M}=\langle\mathcal{S},\mathcal{A},\mathbf{P},\mathcal{R},\ell, \mathbf{AP},s_{\mathit{I}}\rangle\) where \(\mathcal{S}\) is a set of _states_; \(\mathcal{A}\), a set of _actions_; \(\mathbf{P}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\), a _probability transition function_ that maps the current state and action to a _distribution_ over the next states; \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\), a _reward function_; \(\ell:\mathcal{S}\to 2^{\mathbf{AP}}\), a _labeling function_ over a set of atomic propositions \(\mathbf{AP}\); and \(s_{\mathit{I}}\in\mathcal{S}\), the _initial state_. If \(|\mathcal{A}|=1\), \(\mathcal{M}\) is a fully stochastic process called a _Markov chain_ (MC). We write \(\mathcal{M}_{s}\) for
the MDP obtained when replacing the initial state of \(\mathcal{M}\) by \(s\in\mathcal{S}\). An agent interacting in \(\mathcal{M}\) produces _trajectories_, i.e., sequences of states and actions \(\tau=\langle s_{{}_{0:T}},a_{{}_{0:T-1}}\rangle\) where \(s_{0}=s_{I}\) and \(s_{t+1}\sim\mathbf{P}(\cdot\mid s_{t},a_{t})\) for \(t<T\). The set of infinite trajectories of \(\mathcal{M}\) is _Traj_. We assume \(\mathbf{AP}\) and labels being respectively one-hot and binary encoded. Given \(\mathsf{T}\subseteq\mathbf{AP}\), we write \(s\models\mathsf{T}\) if \(s\) is labeled with \(\mathsf{T}\), i.e., \(\ell(s)\cap\mathsf{T}\neq\emptyset\), and \(s\models\mathsf{-T}\) for \(s\not\models\mathsf{T}\). We refer to MDPs with continuous state or action spaces as _continuous MDPs_. In that case, we assume \(\mathcal{S}\) and \(\mathcal{A}\) are complete separable metric spaces equipped with a Borel \(\sigma\)-algebra, and \(\ell^{-1}(\mathsf{T})\) is Borel-measurable for any \(\mathsf{T}\subseteq\mathbf{AP}\).
**Policies and stationary distributions.** A (_memoryless_) policy \(\pi\colon\mathcal{S}\to\Delta(\mathcal{A})\) prescribes which action to choose at each step of the interaction. The set of memoryless policies of \(\mathcal{M}\) is \(\Pi\). The MDP \(\mathcal{M}\) and \(\pi\in\Pi\) induce an MC \(\mathcal{M}_{\pi}\) with unique probability measure \(\mathbb{P}_{\pi}^{\mathcal{M}}\) on the Borel \(\sigma\)-algebra over measurable subsets \(\varphi\subseteq\textit{Traj}\)(Puterman, 1994). We drop the superscript when the context is clear. Define \(\xi_{\pi}^{t}(s^{\prime}\mid s)=\mathbb{P}_{\pi}^{\mathcal{M}_{\pi}}(\{s_{{}_{0: \infty}},a_{{}_{0:\infty}}\mid s_{t}=s^{\prime}\})\) as the distribution giving the probability of being in each state of \(\mathcal{M}_{\pi}\) after \(t\) steps. \(B\subseteq\mathcal{S}\) is a _bottom strongly connected component_ (BSCC) of \(\mathcal{M}_{\pi}\) if (i) \(B\) is a maximal subset satisfying \(\xi_{\pi}^{t}(s^{\prime}\mid s)>0\) for any \(s,s^{\prime}\in B\) and some \(t\geqslant 0\), and (ii) \(\mathbb{E}_{a\sim\pi(\cdot\mid s)}\,\mathbf{P}(B\mid s,a)=1\) for all \(s\in\mathcal{S}\). The unique stationary distribution of \(B\) is \(\xi_{\pi}\in\Delta(B)\). We write \(s,a\sim\xi_{\pi}\) for sampling \(s\) from \(\xi_{\pi}\) then \(a\) from \(\pi\). An MDP \(\mathcal{M}\) is _ergodic_ if for all \(\pi\in\Pi\), the state space of \(\mathcal{M}_{\pi}\) consists of a unique aperiodic BSCC with \(\xi_{\pi}=\lim_{t\to\infty}\xi_{\pi}^{t}(\cdot\mid s)\) for all \(s\in\mathcal{S}\).
**Value objectives.** Given \(\pi\in\Pi\), the _value_ of a state \(s\in\mathcal{S}\) is the expected value of a random variable obtained by running \(\pi\) from \(s\). For a discount factor \(\gamma\in[0,1]\), we consider the following objectives. (i) _Discounted return_: we write \(V_{\pi}(s)=\mathbb{E}_{\pi}^{\mathcal{M}_{\pi}}\big{[}\sum_{t=0}^{\infty}\gamma ^{t}\mathcal{R}(s_{t},a_{t})\big{]}\) for the expected discounted rewards accumulated along trajectories. The typical goal of an RL agent is to learn a policy \(\pi^{\star}\) that maximizes \(V_{\pi^{\star}}(s_{I})\) through interactions with the (unknown) MDP; (ii) _Reachability_: let \(\mathsf{C},\mathsf{T}\subseteq\mathbf{AP}\), the (_constrained_) _reachability_ event is \(\mathsf{C}\,\mathsf{U}\,\mathsf{T}=\{\,s_{{}_{0:\infty}},a_{{}_{0:\infty}}\mid \exists i\in\mathsf{N},\,\forall j<i,s_{j}\models\mathsf{C}\,s_{i}\models \mathsf{T}\}\subseteq\textit{Traj}\). We write \(V_{\pi}^{\varphi}(s)=\mathbb{E}_{\pi}^{\mathcal{M}_{\pi}}\big{[}\gamma^{t^{ \star}}\mathbf{1}_{(s_{{}_{0:\infty}},a_{{}_{0:\infty}})\in\varphi}\big{]}\) for the _discounted probability of satisfying \(\varphi=\mathsf{C}\,\mathsf{U}\,\mathsf{T}\)_, where \(t^{\star}\) is the length of the shortest trajectory prefix that allows satisfying \(\varphi\). Intuitively, this denotes the discounted return of remaining in a region of the MDP where states are labeled with \(\mathsf{C}\), until visiting _for the first time_ a _goal state_ labeled with \(\mathsf{T}\), and the return is the binary reward signal capturing this event. _Safety_ w.r.t. failure states \(\mathsf{C}\) can be expressed as the safety-constrained reachability to a destination \(\mathsf{T}\) through \(\neg\mathsf{C}\,\mathsf{U}\,\mathsf{T}\). Notice that \(V_{\pi}^{\varphi}(s)=\mathbb{P}_{\pi}^{\mathcal{M}_{\pi}}(\varphi)\) when \(\gamma=1\).
**Latent MDP.** Given the original (continuous, possibly unknown) environment model \(\mathcal{M}\), a _latent space model_ is another (smaller, explicit) MDP \(\overline{\mathcal{M}}=\langle\widetilde{\mathcal{S}},\overline{\mathcal{A}}, \overline{\mathbf{P}},\overline{\mathcal{R}},\overline{\ell},\overline{\ell}, \mathbf{AP},\vec{s}_{\mathit{I}}\rangle\) with state-action space linked to the original one via state and action _embedding functions_: \(\phi\colon\mathcal{S}\to\overline{\mathcal{S}}\) and \(\psi\colon\overline{\mathcal{S}}\times\overline{\mathcal{A}}\to\mathcal{A}\). We refer to \(\langle\overline{\mathcal{M}},\phi,\psi\rangle\) as a _latent space model_ of \(\mathcal{M}\) and \(\overline{\mathcal{M}}\) as its _latent MDP_. Our goal is to learn \(\langle\overline{\mathcal{M}},\phi,\psi\rangle\) by optimizing an _equivalence criterion_ between the two models. We assume that \(d_{\mathcal{S}}\) is a metric on \(\overline{\mathcal{S}}\), and write \(\overline{\Pi}\) for the set of policies of \(\overline{\mathcal{M}}\) and \(\overline{V}_{\pi}\) for the values of running \(\overline{\pi}\in\overline{\Pi}\) in \(\overline{\mathcal{M}}\).
_Remark 1_ (Latent flow).: The latent policy \(\overline{\pi}\) can be seen as a policy in \(\mathcal{M}\) (cf. Fig. 0(a)): states passed to \(\overline{\pi}\) are first embedded with \(\phi\) to the latent space, then the actions produced by \(\overline{\pi}\) are executed via \(\psi\) in the original environment. Let \(s\in\mathcal{S}\), we write \(\vec{a}\sim\overline{\pi}(\cdot\mid s)\) for \(\overline{\pi}(\cdot\mid\phi(s))\), then the reward and next state are respectively given by \(\mathcal{R}(s,\vec{a})=\mathcal{R}(s,\psi(\phi(s),\vec{a}))\) and \(s^{\prime}\sim\mathbf{P}(\cdot\mid s,\vec{a})=\mathbf{P}(\cdot\mid s,\psi(\phi(s ),\vec{a}))\).
**Local losses** allow quantifying the distance between the original and latent reward/transition functions _in the local setting_, i.e., under a given state-action distribution \(\xi\in\Delta\big{(}\mathcal{S}\times\overline{\mathcal{A}}\big{)}\):
\[L_{\mathcal{R}}^{\xi}=\mathop{\mathbb{E}}_{s,\overline{\delta}\sim\xi}\big{|} \mathcal{R}(s,\vec{a})-\overline{\mathcal{R}}(\phi(s),\vec{a})\big{|}\,,\qquad L_ {\mathbf{P}}^{\xi}=\mathop{\mathbb{E}}_{s,\overline{\delta}\sim\xi}D\big{(}\phi \mathbf{P}(\cdot\mid s,\vec{a}),\overline{\mathbf{P}}(\cdot\mid\phi(s),\vec{a}) \big{)}\]
where \(\phi\mathbf{P}(\cdot\mid s,\vec{a})\) is the distribution of drawing \(s^{\prime}\sim\mathbf{P}(\cdot\mid s,\vec{a})\) then embedding \(\vec{s}^{\prime}=\phi(s^{\prime})\), and \(D\) is a discrepancy measure. Fig 0(a) depicts the losses when states and actions are drawn from a stationary distribution \(\xi_{\pi}\) resulting from running \(\overline{\pi}\in\overline{\Pi}\) in \(\mathcal{M}\). In this work, we focus on the case where \(D\) is the _Wasserstein distance_\(W_{d_{\mathcal{S}}}\): given two distributions \(P,Q\) over a measurable set \(\mathcal{X}\) equipped with a metric \(d\), \(W_{d}\) is the solution of the _optimal transport_ (OT) from \(P\) to \(Q\), i.e, the minimum cost of changing \(P\) into \(Q\)(Villani, 2009): \(W_{d}\,(P,Q)=\inf_{\lambda\in\Delta(P,Q)}\mathbb{E}_{x,y\sim\lambda}\,d(x,y),\)\(\Lambda(P,Q)\) being the set of all _couplings_ of \(P\) and \(Q\). The _Kantorovich duality_ yields \(W_{d}\,(P,Q)=\sup_{f\in\mathcal{I}_{\pi}}\mathbb{E}_{x\sim P}\,f(x)-\mathbb{E}_{x \sim Q}\,f(y)\) where \(\mathcal{F}_{d}\) is the set of 1-Lipschitz functions. Local losses are related to a well-established _behavioral_ equivalence between transition systems, called _bisimulation_.
**Bisimulation.** A _bisimulation_\(\mathcal{B}\) on \(\mathcal{M}\) is a behavioral equivalence between states \(s_{1},s_{2}\in\mathcal{S}\) so that, \(s_{1}\,\mathcal{B}\,s_{2}\) iff (i) \(\mathbf{P}(T\mid s_{1},a)=\mathbf{P}(T\mid s_{2},a)\), (ii) \(\ell(s_{1})=\ell(s_{2})\), and (iii) \(\mathcal{R}(s_{1},a)=\mathcal{R}(s_{2},a)\) for each action \(a\in\mathcal{A}\) and (Borel measurable) equivalence class \(T\in\mathcal{S}/\mathcal{B}\). Properties of bisimulation include trajectory and value equivalence (Larsen and Skou, 1989; Givan et al., 2003). Requirements (ii) and (iii) can be respectively relaxed depending on whether we focus only on behaviors formalized through \(\mathbf{AP}\) or rewards. The relation can be extended to compare two MDPs (e.g., \(\mathcal{M}\) and \(\overline{\mathcal{M}}\)) by considering the disjoint union of their state space. We denote the largest bisimulation relation by \(\sim\).
Characterized by a logical family of functional expressions derived from a logic \(\mathcal{L}\), _bisimulation pseudometrics_(Desharnais et al., 2004) generalize the notion of bisimilariy. More specifically, given a policy \(\pi\in\Pi\), we consider a family \(\mathcal{F}\) of real-valued functions parameterized by a discount factor \(\gamma\) and defining the semantics of \(\mathcal{L}\) in \(\mathcal{M}_{\pi}\). Such functional expressions allow to formalize discounted properties such as reachability, safety, as well as general \(\omega\)-regular specifications (Chatterjee et al., 2010) and may include rewards as well (Ferns et al., 2014). The pseudometric \(\,\tilde{d}\,_{\pi}\) is defined as _the largest behavioral difference_\(\,\tilde{d}\,_{\pi}(s_{1},s_{2})=\sup_{f\in\mathcal{F}}|f(s_{1})-f(s_{2})|,\) and _its kernel is bisimilarity_: \(\,\tilde{d}\,_{\pi}(s_{1},s_{2})=0\) iff \(s_{1}\thicksim s_{2}\). In particular, _value functions are Lipschitz-continuous w.r.t._\(\,\tilde{d}\,_{\pi}\): \(|V^{\prime}_{\pi}(s_{1})-V^{\prime}_{\pi}(s_{2})|\leq K\,\tilde{d}\,_{\pi}(s_{1},s_{2})\), where \(K\) is \(\nicefrac{{1}}{{(1-\gamma)}}\) if rewards are included in \(\mathcal{F}\) and \(1\) otherwise. To ensure the upcoming bisimulation guarantees, we make the following assumptions:
**Assumption 2.1**.: _MDP \(\mathcal{M}\) is ergodic, \(\mathrm{Im}(\mathcal{R})\) is a bounded space scaled in \([-\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}}]\), and the embedding function preserves the labels, i.e., \(\phi(s)=\tilde{s}\implies\ell(s)=\tilde{\ell}(\tilde{s})\) for \(s\in\mathcal{S}\), \(\tilde{s}\in\bar{\mathcal{S}}\)._
Note that the ergodicity assumption is compliant with episodic RL and a wide range of continuous learning tasks (see Huang 2020; Delgrange et al. 2022 for detailed discussions on this setting).
**Bisimulation bounds (Delgrange et al., 2022).**\(\mathcal{M}\) being set over continuous spaces with possibly unknown dynamics, evaluating \(\,\tilde{d}\,\) can turn out to be particularly arduous, if not intractable. A solution is to evaluate the original and latent model bisimilarity via local losses: fix \(\overline{\pi}\in\overline{\Pi}\), assume \(\overline{\mathcal{M}}\) is discrete, then given the induced stationary distribution \(\xi_{\overline{\pi}}\) in \(\mathcal{M}\), let \(s_{1},s_{2}\in\mathcal{S}\) with \(\phi(s_{1})=\phi(s_{2})\):
\[\underset{s\sim\xi_{\overline{\pi}}}{\mathbb{E}}\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\
## 3 Wasserstein Auto-encoded MDPs
Fix \(\overline{\mathcal{M}}_{\theta}=\left\langle\bar{\mathcal{S}},\overline{\mathcal{A}},\overline{\mathbf{P}}_{\theta},\overline{\mathcal{R}}_{\theta},\tilde{\ell}, \mathbf{AP},\bar{s}_{\tilde{I}}\right\rangle\) and \(\left\langle\overline{\mathcal{M}}_{\theta},\phi_{\iota},\psi_{\theta}\right\rangle\) as a latent space model of \(\mathcal{M}\) parameterized by \(\iota\) and \(\theta\). Our method relies on learning a _behavioral model_\(\xi_{\theta}\) of \(\mathcal{M}\) from which we can retrieve the latent space model and distill \(\pi\). This can be achieved via the minimization of a suitable discrepancy between \(\xi_{\theta}\) and \(\mathcal{M}_{\pi}\). VAE-MDPs optimize a lower bound on the likelihood of the dynamics of \(\mathcal{M}_{\pi}\) using the _Kullback-Leibler divergence_, yielding (i) \(\overline{\mathcal{M}}_{\theta}\), (ii) a distillation \(\overline{\pi}_{\theta}\) of \(\pi\), and (iii) \(\phi_{\iota}\) and \(\psi_{\theta}\). Local losses are not directly minimized, but rather variational proxies that do not offer theoretical guarantees during the learning process. To control the local losses minimization and exploit their theoretical guarantees, we present a novel autoencoder that incorporates them in its objective, derived from the OT. Proofs of the claims made in this Section are provided in Appendix A.
### The Objective Function
Assume that \(\mathcal{S}\), \(\mathcal{A}\), and \(\mathrm{Im}(\mathcal{R})\) are respectively equipped with metrics \(d_{\mathcal{S}}\), \(d_{\mathcal{A}}\), and \(d_{\mathcal{R}}\), we define the _raw transition distance metric_\(\tilde{d}\) as the component-wise sum of distances between states, actions, and rewards occurring of along transitions: \(\tilde{d}(\langle s_{1},a_{1},r_{1},s^{\prime}_{1}\rangle\,,\langle s_{2},a_{2},r_{2},s^{\prime}_{2}\rangle)=d_{\mathcal{S}}(s_{1},s_{2})+d_{\mathcal{A}}(a_ {1},a_{2})+d_{\mathcal{R}}(r_{1},r_{2})+d_{\mathcal{S}}(s^{\prime}_{1},s^{ \prime}_{2})\). Given Assumption 2.1, we consider the OT between _local_ distributions, where traces are drawn from episodic RL processes or infinite interactions (we show in Appendix A.1 that considering the OT between trace-based distributions in the limit amounts to reasoning about stationary distributions). Our goal is to minimize \(W_{\tilde{d}}(\xi_{\pi},\xi_{\theta})\) so that
\[\xi_{\theta}\big{(}s,a,r,s^{\prime}\big{)}=\int_{\overline{\mathcal{S}}\times \overline{\mathcal{A}}\times\overline{\mathcal{S}}}P_{\theta}\big{(}s,a,r,s^{ \prime}\bigm{|}s,\vec{a},\vec{s}^{\prime}\big{)}\,d\bar{\xi}_{\pi_{\theta}} \big{(}\vec{s},\vec{s}^{\prime}\big{)}, \tag{2}\]
where \(P_{\theta}\) is a transition decoder and \(\bar{\xi}_{\pi_{\theta}}\) denotes the stationary distribution of the latent model \(\overline{\mathcal{M}}_{\theta}\). As proved by Bousquet et al. (2017), this model allows to derive a simpler form of the OT: instead of finding the optimal coupling of (i) the stationary distribution \(\xi_{\pi}\) of \(\mathcal{M}_{\pi}\) and (ii) the behavioral model \(\xi_{\theta}\), in the primal definition of \(W_{\tilde{d}}(\xi_{\pi},\xi_{\theta})\), it is sufficient to find an encoder \(q\) whose marginal is given by \(Q(\vec{s},\vec{a},\vec{s}^{\prime})=\mathbb{E}_{s,a,s^{\prime}\prec\xi_{\pi}} \,q(\vec{s},\vec{a}^{\prime}\mid s,a,s^{\prime})\) and identical to \(\xi_{\pi}\). This is summarized in the following Theorem, yielding a particular case of _Wasserstein-autoencoder_Tolstikhin et al. (2018):
**Theorem 3.1**.: _Let \(\xi_{\theta}\) and \(P_{\theta}\) be respectively a behavioral model and transition decoder as defined in Eq. 2, \(\mathcal{G}_{\theta}\colon\overline{\mathcal{S}}\to\mathcal{S}\) be a state-wise decoder, and \(\psi_{\theta}\) be an action embedding function. Assume \(P_{\theta}\) is deterministic with Dirac function \(G_{\theta}(\vec{s},\vec{a},\vec{s}^{\prime})=\left\langle\mathcal{G}_{\theta} (\vec{s}),\psi_{\theta}(\vec{s},\vec{a}),\overline{\mathcal{R}}_{\theta}(\vec{ s},\vec{a}),\mathcal{G}_{\theta}(\vec{s}^{\prime})\right\rangle\), then_
\[W_{\tilde{d}}(\xi_{\pi},\xi_{\theta})=\inf_{q:\,Q=\xi_{\pi_{\theta}}}\mathbb{E} _{s,a,r,s^{\prime}\prec\xi_{\pi}}\mathbb{E}_{s,\vec{a},\vec{s}^{\prime}\prec q (\mid s,a,s^{\prime})}\tilde{d}\Big{(}\big{\langle}s,a,r,s^{\prime}\big{\rangle} \,,G_{\theta}\big{(}\vec{s},\vec{a},\vec{s}^{\prime}\big{)}\big{)}.\]
Henceforth, fix \(\phi_{\iota}\colon\mathcal{S}\to\overline{\mathcal{S}}\) and \(\phi_{\iota}^{\iota}\colon\overline{\mathcal{S}}\times\mathcal{A}\to\Delta \big{(}\overline{\mathcal{A}}\big{)}\) as parameterized state and action encoders with \(\phi_{\iota}(\vec{s},\vec{a},\vec{s}^{\prime}\mid s,a,s^{\prime})=\mathbf{1}_ {\phi_{\iota}(s)=\vec{s}}\cdot\phi_{\iota}^{\iota}(\vec{a}\mid\vec{s},a)\cdot \mathbf{1}_{\phi_{\iota}(s^{\prime})=\vec{s}^{\prime}}\), and define the marginal encoder as \(Q_{\iota}=\mathbb{E}_{s,a,s^{\prime}\prec\xi_{\iota}}\,\phi_{\iota}(\cdot\mid s,a,s^{\prime})\). Training the model components can be achieved via the objective:
\[\min_{\iota,\theta}\mathbb{E}_{s,a,r,s^{\prime}\prec\xi_{\pi}}\mathbb{E}_{s, \vec{a},\vec{s}^{\prime}\prec\xi_{\iota}(\mid s,a,s^{\prime})}\tilde{d}\Big{(} \big{\langle}s,a,r,s^{\prime}\big{\rangle}\,,G_{\theta}\big{(}\vec{s},\vec{a}, \vec{s}^{\prime}\big{)}\Big{)}+\beta\cdot D\big{(}Q_{\iota},\bar{\xi}_{\pi_{ \theta}}\big{)},\]
where \(D\) is an arbitrary discrepancy metric and \(\beta>0\) a hyperparameter. Intuitively, the encoder \(\phi_{\iota}\) can be learned by enforcing its marginal distribution \(Q_{\iota}\) to match \(\bar{\xi}_{\pi_{\theta}}\) through this discrepancy.
_Remark 2_.: If \(\mathcal{M}\) has a discrete action space, then learning \(\overline{\mathcal{A}}\) is not necessary. We can set \(\overline{\mathcal{A}}=\mathcal{A}\) using identity functions for the action encoder and decoder (details in Appendix A.2).
When \(\pi\) is executed in \(\mathcal{M}\), observe that its _parallel execution_ in \(\overline{\mathcal{M}}_{\theta}\) is enabled by the action encoder \(\phi_{\iota}^{\mathcal{A}}\): given an original state \(s\in\mathcal{S}\), \(\pi\) first prescribes the action \(a\sim\pi(\cdot\mid s)\), which is then embedded in the latent space via \(\vec{a}\sim\phi_{\iota}^{\mathcal{A}}(\cdot\mid\phi_{\iota}(s),a)\) (cf. Fig. 1b). This parallel execution, along with setting \(D\) to \(W_{\tilde{d}}\), yield an upper bound on the latent regularization, compliant with the bisimulation bounds. A two-fold regularizer is obtained thereby, defining the foundations of our objective function:
**Lemma 3.2**.: _Define \(\mathcal{T}(\vec{s},\vec{a},\vec{s}^{\prime})=\mathbb{E}_{s,a\sim\xi_{\pi}}[ \mathbf{1}_{\phi_{\iota}(s)=\vec{s}^{\prime}}\cdot\phi_{\iota}^{\mathcal{A}}( \vec{a}\mid\vec{s},a)\cdot\overline{\mathbf{P}}_{\theta}(\vec{s}^{\prime}\mid \vec{s},\vec{a})]\) as the distribution of drawing state-action pairs from interacting with \(\mathcal{M}\), embedding them to the latent spaces, and finally letting them transition to their successor state in \(\overline{\mathcal{M}}_{\theta}\). Then, \(W_{\tilde{d}}\big{(}Q_{\iota},\bar{\xi}_{\pi_{\theta}}\big{)}\leqslant W_{\tilde {d}}\big{(}\bar{\xi}_{\pi_{\theta}},\mathcal{T}\big{)}+L_{\mathbf{P}}^{\xi_{\pi}}\)._
We therefore define the W\({}^{2}\)AE-MDP (_Wasserstein-Wasserstein auto-encoded MDP_) objective as:
\[\min_{\iota,\theta}\underset{s,\tilde{a},\tilde{a}^{\prime}\sim \xi_{\pi}}{\mathbb{E}}\left[d_{\mathcal{S}}(s,\mathcal{G}_{\theta}(\tilde{s}))+ d_{\mathcal{A}}(a,\psi_{\theta}(\tilde{s},\tilde{a}))+d_{\mathcal{S}} \big{(}s^{\prime},\mathcal{G}_{\theta}\big{(}\tilde{s}^{\prime})\big{)}\right]+L _{\mathcal{R}}^{\xi_{\pi}}+\beta\cdot(\mathcal{W}_{\xi_{\pi}}+L_{\mathbf{P}}^{ \xi_{\pi}}),\]
where \(\mathcal{W}_{\xi_{\pi}}=W_{\mathcal{J}}\big{(}\mathcal{T},\bar{\xi}_{\pi_{\theta }}\big{)}\) and \(L_{\mathbf{P}}^{\xi_{\pi}}\) are respectively called _steady-state_ and _transition_ regularizers. The former allows to quantify the distance between the stationary distributions respectively induced by \(\pi\) in \(\mathcal{M}\) and \(\bar{\pi}_{\theta}\) in \(\overline{\mathcal{M}_{\theta}}\), further enabling the distillation. The latter allows to learn the latent dynamics. Note that \(L_{\mathcal{R}}^{\xi_{\pi}}\) and \(L_{\mathbf{P}}^{\xi_{\pi}}\) -- set over \(\xi_{\pi}\) instead of \(\xi_{\pi_{\theta}}\) -- are not sufficient to ensure the bisimulation bounds (Eq. 1): running \(\pi\) in \(\overline{\mathcal{M}_{\theta}}\) depends on the parallel execution of \(\pi\) in the original model, which does not permit its (conventional) verification. Breaking this dependency is enabled by learning the distillation \(\bar{\pi}_{\theta}\) through \(\mathcal{W}_{\xi_{\pi}}\), as shown in Fig. 1b: minimizing \(\mathcal{W}_{\xi_{\pi}}\) allows to make \(\xi_{\pi}\) and \(\bar{\xi}_{\pi_{\theta}}\) closer together, further bridging the gap of the discrepancy between \(\pi\) and \(\bar{\pi}_{\theta}\). At any time, recovering the local losses along with the linked bisimulation bounds in the objective function of the W\({}^{2}\)AE-MDP is allowed by considering the latent policy resulting from this distillation:
**Theorem 3.3**.: _Assume that traces are generated by running a latent policy \(\bar{\pi}\in\overline{\Pi}\) in the original environment and let \(d_{\mathcal{R}}\) be the usual Euclidean distance, then the W\({}^{2}\)AE-MDP objective is_
\[\min_{\iota,\theta}\underset{s,s^{\prime}\sim\xi_{\pi}}{\mathbb{E}}\big{[}d_{ \mathcal{S}}(s,\mathcal{G}_{\theta}(\phi_{\iota}(s)))+d_{\mathcal{S}}\big{(}s ^{\prime},\mathcal{G}_{\theta}\big{(}\phi_{\iota}(s^{\prime})\big{)}\big{)} \big{]}+L_{\mathcal{R}}^{\xi_{\pi}}+\beta\cdot(\mathcal{W}_{\xi_{\pi}}+L_{ \mathbf{P}}^{\xi_{\pi}}).\]
**Optimizing the regularizers** is enabled by the dual form of the OT: we introduce two parameterized networks, \(\varphi_{\omega}^{\xi}\) and \(\varphi_{\omega}^{\mathbf{P}}\), constrained to be \(1\)-Lipschitz and trained to attain the supremum of the dual:
\[\mathcal{W}_{\xi_{\pi}}(\omega)=\max_{\omega}\underset{s,a,\xi_{ \pi}\sim\xi_{\pi}}{\mathbb{E}}\underset{\tilde{a}^{\prime}\sim\phi_{\iota}^{ \mathbf{P}}(\cdot|\phi_{\iota}(s),a)}{\mathbb{E}}\underset{\tilde{\pi}^{ \prime}\sim\bar{\mathbf{P}}_{\theta}(\cdot|\phi_{\iota}(s),\tilde{a})}{\mathbb{ E}}\varphi_{\omega}^{\xi}(\phi_{\iota}(s),\tilde{a},\tilde{s}^{\prime})- \underset{z,\tilde{a}^{\prime},z^{\prime}\sim\xi_{\pi_{\theta}}}{\mathbb{E}} \varphi_{\omega}^{\xi}\big{(}z,\tilde{a}^{\prime},z^{\prime}\big{)}\]
\[L_{\mathbf{P}}^{\xi_{\pi}}(\omega)=\max_{\omega}\underset{s,a,\xi_{\pi}^{ \prime}\sim\xi_{\pi}}{\mathbb{E}}\underset{\tilde{a},\tilde{a},\tilde{a}^{ \prime}\sim\phi_{\iota}^{\mathbf{P}}(\cdot|s,a,s^{\prime})}{\mathbb{E}} \underset{\tilde{a}^{\prime}\sim\bar{\mathbf{P}}_{\theta}(\cdot|\tilde{a}, \tilde{a},\tilde{s}^{\prime})}{\mathbb{E}}\varphi_{\omega}^{\mathbf{P}}(s,a, \tilde{s},\tilde{a},\tilde{s}^{\prime})\Big{]}\]
Details to derive this tractable form of \(L_{\mathbf{P}}^{\xi_{\pi}}(\omega)\) are in Appendix A.5. The networks are constrained via the gradient penalty approach of Gulrajani et al. (2017), leveraging that any differentiable function is \(1\)-Lipschitz iff it has gradients with norm at most \(1\) everywhere (we show in Appendix A.6 this is still valid for relaxations of discrete spaces). The final learning process is presented in Algorithm 1.
### Discrete Latent Spaces
To enable the verification of latent models supported by the bisimulation guarantees of Eq. 1, we focus on the special case of _discrete latent space models_. Our approach relies on continuous relaxation of discrete random variables, regulated by some _temperature_ parameter(s) \(\lambda\): discrete random variables are retrieved as \(\lambda\to 0\), which amounts to applying a rounding operator. For training, we use the temperature-controlled relaxations to differentiate the objective and let the gradient flow through the network. When we deploy the latent policy in the environment and formally check the latent model, the zero-temperature limit is used. An overview of the approach is depicted in Fig. 2.
**State encoder.** We work with a _binary representation_ of the latent states. First, this induces compact networks, able to deal with a large discrete space via a tractable number of parameter variables. But most importantly, this ensures that Assumption 2.1 is satisfied: let \(n=\log_{2}|\vec{\mathcal{S}}|\), we reserve \(|\mathbf{AP}|\) bits in \(\vec{\mathcal{S}}\) and each time \(s\in\mathcal{S}\) is passed to \(\phi_{i}\), \(n-|\mathbf{AP}|\) bits are produced and concatenated with \(\ell(s)\), ensuring a perfect reconstruction of the labels and further bisimulation bounds. To produce Bernoulli variables, \(\phi_{i}\) deterministically maps \(s\) to a latent code \(\mathbf{z}\), passed to the Heaviside \(H(\mathbf{z})=\mathbf{1}_{\mathbf{z}>0}\). We train \(\phi_{i}\) by using the smooth approximation \(H_{\lambda}(\mathbf{z})=\sigma(\nicefrac{{2s}}{{\lambda}})\), satisfying \(H=\lim_{\lambda\to 0}H_{\lambda}\).
**Latent distributions.** Besides the discontinuity of their latent image space, a major challenge of optimizing over discrete distributions is _sampling_, required to be a differentiable operation. We circumvent this by using _concrete distributions_(Jang et al., 2017; Maddison et al., 2017): the idea is to sample reparameterizable random variables from \(\lambda\)-parameterized distributions, and applying a differentiable, nonlinear operator in downstream. We use the _Gumbel softmax trick_ to sample from distributions over (one-hot encoded) latent actions (\(\phi_{i}^{\star}\), \(\pi_{\theta}\)). For binary distributions (\(\mathbf{\bar{P}}_{\theta}\), \(\bar{\xi}_{\pi_{\theta}}\)), each relaxed Bernoulli with logit \(\alpha\) is retrieved by drawing a logistic random variable located in \(\nicefrac{{\alpha}}{{\lambda}}\) and scaled to \(\nicefrac{{1}}{{\lambda}}\), then applying a sigmoid in downstream. We emphasize that this trick alone (as used by Corneil et al. 2018; Delgrange et al. 2022) is not sufficient: it yields independent Beronulls, being too restrictive in general, which prevents from learning sound transition dynamics (cf. Example 1).
_Example 1_.: Let \(\overline{\mathcal{M}}\) be the discrete MC of Fig. 3 (the labels of \(\overline{\mathcal{M}}\) are drawn next to their state). In one-hot, \(\mathbf{AP}=\{\textit{goal}:\langle 1,0\rangle\,,\textit{unsafe}:\langle 0,1\rangle\}\). We assume that 3 bits are used for the (binary) state space, with \(\vec{\mathcal{S}}=\{\bar{s}_{0}:\langle 0,0,0\rangle\,,\bar{s}_{1}:\langle 1,0,0 \rangle\,,\bar{s}_{2}:\langle 0,1,0\rangle\,,\bar{s}_{3}:\langle 0,1,1\rangle\}\) (the two first bits are reserved for the labels). Considering each bit as being independent is not sufficient to learn \(\mathbf{\bar{P}}\): the optimal estimation \(\mathbf{\bar{P}}_{\theta^{\star}}(\cdot\mid\bar{s}_{0})\) is in that case represented by the independent Bernoulli vector \(\mathbf{b}=\langle\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}},\nicefrac{{1}}{{4}}\rangle\),
Figure 3: Markov Chain with four states.
Figure 2: W\({}^{2}\)AE-MDP architecture. Distances are depicted by red dotted lines.
giving the probability to go from \(\bar{s}_{0}\) to each bit _independently_. This yields a poor estimation of the actual transition function: \(\overline{\mathbf{P}}_{\theta^{*}}(\bar{s}_{0}\mid\bar{s}_{0})=(1-\mathbf{b}_{1 })\cdot(1-\mathbf{b}_{2})\cdot(1-\mathbf{b}_{3})=\overline{\mathbf{P}}_{\theta ^{*}}(\bar{s}_{1}\mid\bar{s}_{0})=\mathbf{b}_{1}\cdot(1-\mathbf{b}_{2})\cdot(1- \mathbf{b}_{3})=\overline{\mathbf{P}}_{\theta^{*}}(\bar{s}_{2}\mid\bar{s}_{0} )=(1-\mathbf{b}_{1})\cdot\mathbf{b}_{2}\cdot(1-\mathbf{b}_{3})=\nicefrac{{3}}{{16}}\), \(\overline{\mathbf{P}}_{\theta^{*}}(\bar{s}_{3}\mid\bar{s}_{0})=(1-\mathbf{b} _{1})\cdot\mathbf{b}_{2}\cdot\mathbf{b}_{3}=\nicefrac{{1}}{{16}}\).
We consider instead relaxed multivariate Bernoulli distributions by decomposing \(P\in\Delta(\overline{\mathcal{S}})\) as a product of conditionals: \(P(\bar{s})=\prod_{i=1}^{n}P(\bar{s}_{i}\mid\bar{s}_{1:\ i-1})\) where \(\bar{s}_{i}\) is the \(i^{\text{th}}\) entry (bit) of \(\bar{s}\). We learn such distributions by introducing a _masked autoregressive flow_ (MAF, Papamakarios et al.2017) for relaxed Bernoulli via the recursion: \(\bar{s}_{i}=\sigma(\nicefrac{{l_{i}+\alpha_{i}}}{{\lambda}})\), where \(l_{i}\sim\operatorname{Logistic}(0,1)\), \(\alpha_{i}=f_{i}(\bar{s}_{1:\ i-1})\), and \(f\) is a MADE (Germain et al., 2015), a feedforward network implementing the conditional output dependency on the inputs via a mask that only keeps the necessary connections to enforce the conditional property. We use this MAF to model \(\overline{\mathbf{P}}_{\theta}\) and the dynamics related to the labels in \(\bar{\xi}_{\pi_{\theta}}\). We fix the logits of the remaining \(n-|\mathbf{AP}|\) bits to \(0\) to allow for a fairly distributed latent space.
## 4 Experiments
We evaluate the quality of latent space models learned and policies distilled through W\({}^{2}\)AE-MDPs. To do so, we first trained deep-RL policies (DQN, Mnih et al.2015 on discrete, and SAC, Haarnoja et al.2018 on continuous action spaces) for various OpenAI benchmarks (Brockman et al., 2016), which we then distill via our approach (Figure 4). We thus evaluate (a) the W\({}^{2}\)AE-MDP training metrics, (b) the abstraction and representation quality via _PAC local losses upper bounds_ (Delgrange et al., 2022), and (c) the distilled policy performance when deployed in the original environment. The confidence metrics and performance are compared with those of VAE-MDPs. Finally, we formally verify properties in the latent model. The exact setting to reproduce our results is in Appendix B.
**Learning metrics.** The objective (Fig. 3(a)) is a weighted sum of the reconstruction loss and the two Wasserstein regularizers. The choice of \(\beta\) defines the optimization direction. Posterior collapse is not observed, naturally avoided in WAEs (Tolstikhin et al., 2018), which reflects that the latent space is consistently distributed (see Appendix C for a discussion and a concrete illustration of collapsing issues occurring in VAE-MDPs). Optimizing the objective (Fig. 3(a)) effectively allows minimizing the local losses (Fig. 3(b)) and recovering the performance of the original policy (Fig. 3(c)).
**Local losses.** For V- and WAEs, we formally evaluate PAC upper bounds on \(L_{\mathcal{R}^{g}}^{\xi_{\pi_{\theta}}}\) and \(L_{\mathcal{P}}^{\xi_{\pi_{\theta}}}\) via the algorithm of Delgrange et al. (2022) (Fig 3(b)). The lower the local losses, the closer \(\mathcal{M}\) and
Figure 4: For each environment, we trained five different instances of the models with different random seeds: the solid line is the median and the shaded interval the interquartile range.
\(\overline{\mathcal{M}}_{\theta}\) are in terms of behaviors induced by \(\pi_{\theta}\) (cf. Eq. 1). In VAEs, the losses are evaluated on a transition function \(\hat{\mathbf{P}}\) obtained via frequency estimation of the latent transition dynamics (Delgrange et al., 2022), by reconstructing the transition model a posteriori and collecting data to estimate the transition probabilities (e.g., Bazille et al.2020; Corneil et al.2018). We thus also report the metrics for \(\hat{\mathbf{P}}\). Our bounds quickly converge to close values in general for \(\overline{\mathbf{P}}_{\theta}\) and \(\hat{\mathbf{P}}\), whereas for VAEs, the convergence is slow and unstable, with \(\hat{\mathbf{P}}\) offering better bounds. We emphasize that WAEs do not require this additional reconstruction step to obtain losses that can be leveraged to assess the quality of the model, in contrast to VAEs, where learning \(\overline{\mathbf{P}}_{\theta}\) was performed via overly restrictive distributions, leading to poor estimation in general (cf. Ex. 1). Finally, _when the distilled policies offer comparable performance_ (Fig. 4c), our bounds are either close to or better than those of VAEs.
**Distillation.** The bisimulation guarantees (Eq. 1) are only valid for \(\overline{\pi}_{\theta}\), the policy under which formal properties can be verified. It is crucial that \(\overline{\pi}_{\theta}\) achieves performance close to \(\pi\), the original one, when deployed in the RL environment. We evaluate the performance of \(\overline{\pi}_{\theta}\) via the undiscounted episode return \(\mathbf{R}_{\overline{\pi}_{\theta}}\) obtained by running \(\overline{\pi}_{\theta}\) in the original model \(\mathcal{M}\). We observe that \(\mathbf{R}_{\overline{\pi}_{\theta}}\) approaches faster the original performance \(\mathbf{R}_{\pi}\) for W- than VAEs: WAEs converge in a few steps for all environments, whereas the full learning budget is sometimes necessary with VAEs. The success in recovering the original performance emphasizes the representation quality guarantees (Eq. 1) induced by WAEs: when local losses are minimized, all original states that are embedded to the same representation are bisimilarly close. Distilling the policy over the new representation, albeit discrete and hence coarser, still achieves effective performance since \(\phi_{k}\) keeps only what is important to preserve behaviors, and thus values. Furthermore, the distillation can remove some non-robustness obtained during RL: \(\overline{\pi}_{\theta}\) prescribes the same actions for bisimilarly close states, whereas this is not necessarily the case for \(\pi\).
**Formal verification.** To formally verify \(\overline{\mathcal{M}}_{\theta}\), we implemented a _value iteration_ (VI) engine, handling the neural network encoding of the latent space for discounted properties, which is one of the most popular algorithms for checking property probabilities in MDPs (e.g., Baier and Katoen2008; Hensel et al.2021; Kwiatkowska et al.2022). We verify _time-to-failure_ properties \(\varphi\), often used to check the failure rate of a system (Pnueli, 1977) by measuring whether the agent fails _before the end of the episode_. Although simple, such properties highlight the applicability of our approach on reachability events, which are building blocks to verify MDPs (Baier and Katoen2008; cf. Appendix B.7 for a discussion). In particular, we checked whether the agent reaches an unsafe position or angle (CartPole, LunarLander), does not reach its goal position (MountainCar, Acrobot), and does not reach and stay in a safe region of the system (Pendulum). Results are in Table 1: for each environment, we select the distilled policy which gives the best trade-off between performance (episode return) and abstraction quality (local losses). As extra confidence metric, we report the value difference \(\|V_{\overline{\pi}_{\theta}}\|=|V_{\overline{\pi}_{\theta}}(s_{I})-\widehat{ V}_{\overline{\pi}_{\theta}}(\vec{s}_{I})|\) obtained by executing \(\overline{\pi}_{\theta}\) in \(\mathcal{M}\) and \(\overline{\mathcal{M}}_{\theta}\) (\(V_{\overline{\pi}_{\theta}}(\cdot)\) is averaged while \(\widehat{V}_{\overline{\pi}_{\theta}}(\cdot)\) is formally computed).
## 5 Conclusion
We presented WAE-MDPs, a framework for learning formally verifiable distillations of RL policies with bisimulation guarantees. The latter, along with the learned abstraction of the unknown continuous environment to a discrete model, enables the verification. Our method overcomes the limitations of VAE-MDPs and our results show that it outperforms the latter in terms of learning speed, model quality, and performance, in addition to being supported by stronger learning guarantees. As mentioned by Delgrange et al. (2022), distillation failure reveals the lack of robustness of original RL policies. In particular, we found that distilling highly noise-sensitive RL policies (such as robotics simulations, e.g., Todorov et al.2012) is laborious, even though the result remains formally verifiable.
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline Environment & step (\(10^{5}\)) & \(\mathcal{S}\) & \(\mathcal{A}\) & \(|\mathcal{S}|\) & \(|\mathcal{A}|\) & \(L_{\mathcal{R}}^{\mathbf{C_{\pi}}}\) (PAC) & \(L_{\mathcal{R}}^{\mathbf{C_{\pi}}}\) (PAC) & \(|V_{\overline{\pi}_{\theta}}|\) & \(\widehat{V}_{\overline{\pi}_{\theta}}^{\mathbf{o}}(\mathbb{\mathbb{\mathbb{ \mathbb{\mathbb{\mathbb{missingmissingmissingmissingmissing}}}}}})\) \\ \hline CartPole & \(1.2\) & \(\in\mathbb{R}^{4}\) & \(\{1,2\}\) & \(512\) & \(2\) & \(0.0049653\) & \(0.399636\) & \(3.712132\) & \(0.0316655\) \\ MountainCar & \(2.32\) & \(\in\mathbb{R}^{2}\) & \(\{1,2\}\) & \(1024\) & \(2\) & \(0.0141763\) & \(0.3823232\) & \(2.83714\) & \(0\) \\ Acrobot & \(4.3\) & \(\in\mathbb{R}^{6}\) & \(\{1,2,3\}\) & \(8192\) & \(3\) & \(0.0347698\) & \(0.649478\) & \(2.22006\) & \(0.0021911\) \\ LunarLander & \(3.2\) & \(\in\mathbb{R}^{8}\) & \(\{-1,1\}^{2}\) & \(16384\) & \(3\) & \(0.0207205\) & \(0.131357\) & \(0.0372883\) & \(0.0702039\) \\ Pendulum & \(3.7\) & \(\in\mathbb{R}^{3}\) & \(\{-2,2\}\) & \(8192\) & \(3\) & \(0.0260745\) & \(0.539508\) & \(4.33006\) & \(0.0384892\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Formal Verification of distilled policies. Values are computed for \(\gamma=0.99\) (lower is better).
We demonstrated the feasibility of our approach through the verification of reachability objectives, which are building blocks for stochastic model-checking (Baier & Katoen, 2008). Besides the scope of this work, the verification of general discounted \(\omega\)-regular properties is theoretically allowed in our model via the reachability to components of standard constructions based on automata products (e.g., Baier et al. 2016; Sickert et al. 2016), and discounted games algorithms (Chatterjee et al., 2010). Beyond distillation, our results, supported by Thm. 3.3, suggest that our WAE-MDP can be used as a _general latent space learner_ for RL, further opening possibilities to combine RL and formal methods _online_ when no formal model is a priori known, and address this way safety in RL with guarantees.
### Reproducibility Statement
We referenced in the main text the Appendix parts presenting the proofs or additional details of every claim, Assumption, Lemma, and Theorem occurring in the paper. In addition, Appendix B is dedicated to the presentation of the setup, hyperparameters, and other extra details required for reproducing the results of Section 4. We provide the source code of the implementation of our approach in Supplementary material 1, and we also provide the models saved during training that we used for model checking (i.e., reproducing the results of Table 1). Additionally, we present in a notebook (evaluation.html) videos demonstrating how our distilled policies behave in each environment, and code snippets showing how we formally verified the policies.
Footnote 1: available at [https://github.com/florentdelgrange/wae_mdp](https://github.com/florentdelgrange/wae_mdp)
#### Acknowledgments
This research received funding from the Flemish Government (AI Research Program) and was supported by the DESCARTES iBoF project. G.A. Perez is also supported by the Belgian FWO "SAILor" project (G030020N). We thank Raphael Avalos for his valuable feedback during the preparation of this manuscript.
## References
* Alamdari et al. (2020) Parand Alizadeh Alamdari, Guy Avni, Thomas A. Henzinger, and Anna Lukina. Formal methods with a touch of magic. In _2020 Formal Methods in Computer Aided Design, FMCAD 2020, Haifa, Israel, September 21-24, 2020_, pp. 138-147. IEEE, 2020. doi: 10.34727/2020/isbn.978-3-85448-042-6_21. URL [https://doi.org/10.34727/2020/isbn.978-3-85448-042-6_21](https://doi.org/10.34727/2020/isbn.978-3-85448-042-6_21).
* Alemi et al. (2018) Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. Fixing a broken ELBO. In Jennifer G. Dy and Andreas Krause (eds.), _Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmassan, Stockholm, Sweden, July 10-15, 2018_, volume 80 of _Proceedings of Machine Learning Research_, pp. 159-168. PMLR, 2018. URL [http://proceedings.mlr.press/v80/alemi18a.html](http://proceedings.mlr.press/v80/alemi18a.html).
* Alshiekh et al. (2018) Mohammed Alshiekh, Roderick Bloem, Rudiger Ehlers, Bettina Konighofer, Scott Niekum, and Ufuk Topcu. Safe reinforcement learning via shielding. In Sheila A. McIlraith and Kilian Q. Weinberger (eds.), _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_, pp. 2669-2678. AAAI Press, 2018. URL [https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17211](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17211).
* Arjovsky et al. (2017) Martin Arjovsky, Soumith Chintala, and Leon Bottou. Wasserstein generative adversarial networks. In Doina Precup and Yee Whye Teh (eds.), _Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017_, volume 70 of _Proceedings of Machine Learning Research_, pp. 214-223. PMLR, 2017. URL [http://proceedings.mlr.press/v70/arjovsky17a.html](http://proceedings.mlr.press/v70/arjovsky17a.html).
* 18th International Conference, FORMATS 2020, Vienna, Austria, September 1-3, 2020, Proceedings_, volume 12288 of _LNCS_, pp. 231-248. Springer, 2020. doi: 10.1007/978-3-030-57628-8_14. URL [https://doi.org/10.1007/978-3-030-57628-8_14](https://doi.org/10.1007/978-3-030-57628-8_14).
- 28th International Conference, CAV 2016, Toronto, ON, Canada, July 17-23, 2016, Proceedings, Part I_, volume 9779 of _Lecture Notes in Computer Science_, pp. 23-42. Springer, 2016. doi: 10.1007/978-3-319-41528-4_2. URL [https://doi.org/10.1007/978-3-319-41528-4_2](https://doi.org/10.1007/978-3-319-41528-4_2).
* 32nd International Conference, CAV 2020, Los Angeles, CA, USA, July 21-24, 2020, Proceedings, Part II_, volume 12225 of _Lecture Notes in Computer Science_, pp. 304-326. Springer, 2020. doi: 10.1007/978-3-030-53291-8_17. URL [https://doi.org/10.1007/978-3-030-53291-8_17](https://doi.org/10.1007/978-3-030-53291-8_17).
* Bousquet et al. (2017) O. Bousquet, S. Gelly, I. Tolstikhin, Carl-Johann Simon-Gabriel, and B. Scholkopf. From optimal transport to generative modeling: the vegan cookbook. _arXiv: Machine Learning_, 2017.
* Brockman et al. (2016) Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. _CoRR_, abs/1606.01540, 2016. URL [http://arxiv.org/abs/1606.01540](http://arxiv.org/abs/1606.01540).
* Carr et al. (2020) Steven Carr, Nils Jansen, and Ufuk Topcu. Verifiable rnn-based policies for pomdps under temporal logic constraints. In Christian Bessiere (ed.), _Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020_, pp. 4121-4127. ijcai.org, 2020. doi: 10.24963/ijcai.2020/570. URL [https://doi.org/10.24963/ijcai.2020/570](https://doi.org/10.24963/ijcai.2020/570).
* Castro et al. (2021) Pablo Samuel Castro, Tyler Kastner, Prakash Panangaden, and Mark Rowland. Mico: Improved representations via sampling-based state similarity for markov decision processes. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), _Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual_, pp. 30113-30126, 2021. URL [https://proceedings.neurips.cc/paper/2021/hash/fd06b8ea02fe5b1c2496fe1700e9d16c-Abstract.html](https://proceedings.neurips.cc/paper/2021/hash/fd06b8ea02fe5b1c2496fe1700e9d16c-Abstract.html).
* Ceusters et al. (2021) Glenn Ceusters, Roman Cantu Rodriguez, Alberte Bouso Garcia, Rudiger Franke, Geert Deconinck, Lieve Helsen, Ann Nowe, Maarten Messagie, and Luis Ramirez Camargo. Model-predictive control and reinforcement learning in multi-energy system case studies. _Applied Energy_, 303:117634, 2021. ISSN 0306-2619. doi: [https://doi.org/10.1016/j.apenergy.2021.117634](https://doi.org/10.1016/j.apenergy.2021.117634). URL [https://www.sciencedirect.com/science/article/pii/S0306261921010011](https://www.sciencedirect.com/science/article/pii/S0306261921010011).
* Chatterjee et al. (2010) Krishnendu Chatterjee, Luca de Alfaro, Rupak Majumdar, and Vishwanath Raman. Algorithms for game metrics (full version). _Log. Methods Comput. Sci._, 6(3), 2010. URL [http://arxiv.org/abs/0809.4326](http://arxiv.org/abs/0809.4326).
* Corneil et al. (2018) Dane S. Corneil, Wulfram Gerstner, and Johann Brea. Efficient modelbased deep reinforcement learning with variational state tabulation. In Jennifer G. Dy and Andreas Krause (eds.), _Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmassan, Stockholm, Sweden, July 10-15, 2018_, volume 80 of _Proceedings of Machine Learning Research_, pp. 1057-1066. PMLR, 2018. URL [http://proceedings.mlr.press/v80/corneill8a.html](http://proceedings.mlr.press/v80/corneill8a.html).
* Delgrange et al. (2022) Florent Delgrange, Ann Nowe, and Guillermo A. Perez. Distillation of rl policies with formal guarantees via variational abstraction of markov decision processes. _Proceedings of the AAAI Conference on Artificial Intelligence_, 36(6):6497-6505, Jun. 2022. doi: 10.1609/aaai.v36i6.20602. URL [https://ojs.aaai.org/index.php/AAAI/article/view/20602](https://ojs.aaai.org/index.php/AAAI/article/view/20602).
* Desharnais et al. (2004) Josee Desharnais, Vineet Gupta, Radha Jagadeesan, and Prakash Panangaden. Metrics for labelled markov processes. _Theor. Comput. Sci._, 318(3):323-354, 2004. doi: 10.1016/j.tcs.2003.09.013. URL [https://doi.org/10.1016/j.tcs.2003.09.013](https://doi.org/10.1016/j.tcs.2003.09.013).
* Fajtl et al. (2020) Jiri Fajtl, Vasileios Argyriou, Dorothy Monekosso, and Paolo Remagnino. Latent bernoulli autoencoder. In _Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event_, volume 119 of _Proceedings of Machine Learning Research_, pp. 2964-2974. PMLR, 2020. URL [http://proceedings.mlr.press/v119/fajtl20a.html](http://proceedings.mlr.press/v119/fajtl20a.html).
* Essays Dedicated to Prakash Panangaden on the Occasion of His 60th Birthday_, volume 8464 of _LNCS_, pp. 319-342. Springer, 2014. doi: 10.1007/978-3-319-06880-0_17. URL [https://doi.org/10.1007/978-3-319-06880-0_17](https://doi.org/10.1007/978-3-319-06880-0_17).
* Gelada et al. (2019) Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, and Marc G. Bellemare. Deepmdp: Learning continuous latent space models for representation learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), _Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA_, volume 97 of _Proceedings of Machine Learning Research_, pp. 2170-2179. PMLR, 2019. URL [http://proceedings.mlr.press/v97/geladal9a.html](http://proceedings.mlr.press/v97/geladal9a.html).
* Germain et al. (2015) Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: masked autoencoder for distribution estimation. In Francis R. Bach and David M. Blei (eds.), _Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015_, volume 37 of _JMLR Workshop and Conference Proceedings_, pp. 881-889. JMLR.org, 2015. URL [http://proceedings.mlr.press/v37/germain15.html](http://proceedings.mlr.press/v37/germain15.html).
* Givan et al. (2003) Robert Givan, Thomas L. Dean, and Matthew Greig. Equivalence notions and model minimization in markov decision processes. _Artif. Intell._, 147(1-2):163-223, 2003. doi: 10.1016/S0004-3702(02)00376-4. URL [https://doi.org/10.1016/S0004-3702](https://doi.org/10.1016/S0004-3702)(02)00376-4.
* Gulrajani et al. (2017) Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of wasserstein gans. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_, pp. 5767-5777, 2017. URL [https://proceedings.neurips.cc/paper/2017/hash/892c3b1c6dccd52936e27cbd0ff683d6-Abstract.html](https://proceedings.neurips.cc/paper/2017/hash/892c3b1c6dccd52936e27cbd0ff683d6-Abstract.html).
* Haarnoja et al. (2018) Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Jennifer G. Dy and Andreas Krause (eds.), _Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmassan, Stockholm, Sweden, July 10-15, 2018_, volume 80 of _Proceedings of Machine Learning Research_, pp. 1856-1865. PMLR, 2018. URL [http://proceedings.mlr.press/v80/haarnoja18b.html](http://proceedings.mlr.press/v80/haarnoja18b.html).
* Hasanbeig et al. (2021) Mohammad Hasanbeig, Natasha Yogananda Jeppu, Alessandro Abate, Tom Melham, and Daniel Kroening. Deepsynth: Automata synthesis for automatic task segmentation in deep reinforcement learning. In _Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021_, pp. 7647-7656. AAAI Press, 2021. URL [https://ojs.aaai.org/index.php/AAAI/article/view/16935](https://ojs.aaai.org/index.php/AAAI/article/view/16935).
* Hensel et al. (2021) Christian Hensel, Sebastian Junges, Joost-Pieter Katoen, Tim Quatmann, and Matthias Volk. The probabilistic model checker storm. _International Journal on Software Tools for Technology Transfer_, 2021. ISSN 1433-2787. doi: 10.1007/s10009-021-00633-z. URL [https://doi.org/10.1007/s10009-021-00633-z](https://doi.org/10.1007/s10009-021-00633-z).
* Hoffman et al. (2013) Matthew D. Hoffman, David M. Blei, Chong Wang, and John W. Paisley. Stochastic variational inference. _J. Mach. Learn. Res._, 14(1):1303-1347, 2013. URL [http://dl.acm.org/citation.cfm?id=2502622](http://dl.acm.org/citation.cfm?id=2502622).
* Huang (2020) Bojun Huang. Steady state analysis of episodic reinforcement learning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_, 2020. URL [https://proceedings.neurips.cc/paper/2020/hash/69bfa2aa2b7b139ff581a06abf0a886-Abstract.html](https://proceedings.neurips.cc/paper/2020/hash/69bfa2aa2b7b139ff581a06abf0a886-Abstract.html).
* Jang et al. (2017) Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net, 2017. URL [https://openreview.net/forum?id=rkE3y85ee](https://openreview.net/forum?id=rkE3y85ee).
* Jansen et al. (2020) Nils Jansen, Bettina Konighofer, Sebastian Junges, Alex Serban, and Roderick Bloem. Safe Reinforcement Learning Using Probabilistic Shields (Invited Paper). In Igor Konnov and Laura Kovacs (eds.), _31st International Conference on Concurrency Theory (CONCUR 2020)_, volume 171 of _Leibniz International Proceedings in Informatics (LIPIcs)_, pp. 3:1-3:16, Dagstuhl, Germany, 2020. Schloss Dagstuhl-Leibniz-Zentrum fur Informatik. ISBN 978-3-95977-160-3. doi: 10.4230/LIPIcs.CONCUR.2020.3. URL [https://drops.dagstuhl.de/opus/volltexte/2020/12815](https://drops.dagstuhl.de/opus/volltexte/2020/12815).
* 22nd International Conference, TACAS 2016, Eindhoven, The Netherlands, April 2-8, 2016, Proceedings_, volume 9636 of _LNCS_, pp. 130-146. Springer, 2016. doi: 10.1007/978-3-662-49674-9_8. URL [https://doi.org/10.1007/978-3-662-49674-9_8](https://doi.org/10.1007/978-3-662-49674-9_8).
* Kingma and Welling (2014) Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun (eds.), _2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings_, 2014. URL [http://arxiv.org/abs/1312.6114](http://arxiv.org/abs/1312.6114).
* Kwiatkowska et al. (2022) Marta Kwiatkowska, Gethin Norman, and David Parker. Probabilistic model checking and autonomy. _Annual Review of Control, Robotics, and Autonomous Systems_, 5(1):385-410, 2022. doi: 10.1146/annurev-control-042820-010947. URL [https://doi.org/10.1146/annurev-control-042820-010947](https://doi.org/10.1146/annurev-control-042820-010947).
* Larsen and Skou (1989) Kim Guldstrand Larsen and Arne Skou. Bisimulation through probabilistic testing. In _Conference Record of the Sixteenth Annual ACM Symposium on Principles of Programming Languages, Austin, Texas, USA, January 11-13, 1989_, pp. 344-352. ACM Press, 1989. doi: 10.1145/75277.75307. URL [https://doi.org/10.1145/75277.75307](https://doi.org/10.1145/75277.75307).
* European Conference, ECML PKDD 2020, Ghent, Belgium, September 14-18, 2020, Proceedings, Part V_, volume 12461 of _Lecture Notes in Computer Science_, pp. 155-170. Springer, 2020. doi: 10.1007/978-3-030-67670-4_10. URL [https://doi.org/10.1007/978-3-030-67670-4_10](https://doi.org/10.1007/978-3-030-67670-4_10).
* Littman et al. (2017) Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Lee Isbell Jr., Min Wen, and James MacGlashan. Environment-independent task specifications via GLTL. _CoRR_, abs/1704.04341, 2017. URL [http://arxiv.org/abs/1704.04341](http://arxiv.org/abs/1704.04341).
* Maddison et al. (2017) Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net, 2017. URL [https://openreview.net/forum?id=S1jE5L5gl](https://openreview.net/forum?id=S1jE5L5gl).
* Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. _Nat._, 518(7540):529-533, 2015. doi: 10.1038/nature14236. URL [https://doi.org/10.1038/nature14236](https://doi.org/10.1038/nature14236).
* Nowe (1994) Ann Nowe. _Synthesis of "safe" fuzzy controllers based on reinforcement learning_. PhD thesis, Vrije Universiteit Brussel, 1994.
* Papamakarios et al. (2017) George Papamakarios, Iain Murray, and Theo Pavlakou. Masked autoregressive flow for density estimation. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_, pp. 2338-2347, 2017. URL [https://proceedings.neurips.cc/paper/2017/hash/6clda886822c67822bcf3679d04369fa-Abstract.html](https://proceedings.neurips.cc/paper/2017/hash/6clda886822c67822bcf3679d04369fa-Abstract.html).
* 1 November 1977_, pp. 46-57. IEEE Computer Society, 1977. doi: 10.1109/SFCS.1977.32. URL [https://doi.org/10.1109/SFCS.1977.32](https://doi.org/10.1109/SFCS.1977.32).
* Puterman (1994) Martin L. Puterman. _Markov Decision Processes: Discrete Stochastic Dynamic Programming_. Wiley Series in Probability and Statistics. Wiley, 1994. ISBN 978-0-47161977-2. doi: 10.1002/9780470316887. URL [https://doi.org/10.1002/9780470316887](https://doi.org/10.1002/9780470316887).
* Ren et al. (2021) Tao Ren, Jianwei Niu, Jiahe Cui, Zhenchao Ouyang, and Xuefeng Liu. An application of multi-objective reinforcement learning for efficient model-free control of canals deployed with iot networks. _Journal of Network and Computer Applications_, 182:103049, 2021. ISSN 1084-8045. doi: [https://doi.org/10.1016/j.jnca.2021.103049](https://doi.org/10.1016/j.jnca.2021.103049). URL [https://www.sciencedirect.com/science/article/pii/S1084804521000734](https://www.sciencedirect.com/science/article/pii/S1084804521000734).
* 28th International Conference, CAV 2016, Toronto, ON, Canada, July 17-23, 2016, Proceedings, Part II_, volume 9780 of _Lecture Notes in Computer Science_, pp. 312-332. Springer, 2016. doi: 10.1007/978-3-319-41540-6_17. URL [https://doi.org/10.1007/978-3-319-41540-6_17](https://doi.org/10.1007/978-3-319-41540-6_17).
* Simao et al. (2021) Thiago D. Simao, Nils Jansen, and Matthijs T. J. Spaan. Alwayssafe: Reinforcement learning without safety constraint violations during training. In Frank Dignum, Alessio Lomuscio, Ulle Endriss, and Ann Nowe (eds.), _AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021_, pp. 1226-1235. ACM, 2021. URL [https://dl.acm.org/doi/10.5555/3463952.3464094](https://dl.acm.org/doi/10.5555/3463952.3464094).
* Todorov et al. (2012) Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In _2012 IEEE/RSJ International Conference on Intelligent Robots and Systems_, pp. 5026-5033. IEEE, 2012.
* May 3, 2018, Conference Track Proceedings_. OpenReview.net, 2018. URL [https://openreview.net/forum?id=HKL7nl-0b](https://openreview.net/forum?id=HKL7nl-0b).
* Tsitsiklis (1994) John N. Tsitsiklis. Asynchronous stochastic approximation and q-learning. _Mach. Learn._, 16(3):185-202, 1994. doi: 10.1007/BF00993306. URL [https://doi.org/10.1007/BF00993306](https://doi.org/10.1007/BF00993306).
* van den Oord et al. (2017) Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA_, pp. 6306-6315, 2017. URL [http://papers.nips.cc/paper/7210-neural-discrete-representation-learning](http://papers.nips.cc/paper/7210-neural-discrete-representation-learning).
* Villani (2009) Cedric Villani. _Optimal Transport: Old and New_. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009. ISBN 978-3-540-71050-9. doi: 10.1007/978-3-540-71050-9_6. URL [https://doi.org/10.1007/978-3-540-71050-9_6](https://doi.org/10.1007/978-3-540-71050-9_6).
* Wells et al. (2020) Andrew M. Wells, Morteza Lahijanian, Lydia E. Kavraki, and Moshe Y. Vardi. Ltlf synthesis on probabilistic systems. In Jean-Francois Raskin and Davide Bresolin (eds.), _Proceedings 11th International Symposium on Games, Automata, Logics, and Formal Verification, GandALF 2020, Brussels, Belgium, September 21-22, 2020_, volume 326 of _EPTCS_, pp. 166-181, 2020. doi: 10.4204/EPTCS.326.11. URL [https://doi.org/10.4204/EPTCS.326.11](https://doi.org/10.4204/EPTCS.326.11).
* Zang et al. (2022) Hongyu Zang, Xin Li, and Mingzhong Wang. Simsr: Simple distance-based state representations for deep reinforcement learning. _Proceedings of the AAAI Conference on Artificial Intelligence_, 36(8):8997-9005, Jun. 2022. doi: 10.1609/aaai.v36i8.20883. URL [https://ojs.aaai.org/index.php/AAAI/article/view/20883](https://ojs.aaai.org/index.php/AAAI/article/view/20883).
* Zhang et al. (2021) Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, and Sergey Levine. Learning invariant representations for reinforcement learning without reconstruction. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net, 2021. URL [https://openreview.net/forum?id=-2FCwDRKREu](https://openreview.net/forum?id=-2FCwDRKREu).
* Zhang et al. (2019) Shunkang Zhang, Yuan Gao, Yuling Jiao, Jin Liu, Yang Wang, and Can Yang. Wasserstein-wasserstein auto-encoders. _CoRR_, abs/1902.09323, 2019. URL [http://arxiv.org/abs/1902.09323](http://arxiv.org/abs/1902.09323).
## Appendix A Theoretical Details on WAE-MDPs
### The Discrepancy Measure
We show that reasoning about discrepancy measures between stationary distributions is sound in the context of infinite interaction and episodic RL processes. Let \(P_{\theta}\) be a parameterized behavioral model that generate finite traces from the original environment (i.e., finite sequences of state, actions, and rewards of the form \(\left<s_{0:T},a_{0:T-1},r_{0:T-1}\right>\)), our goal is to find the best parameter \(\theta\) which offers the most accurate reconstruction of the original traces issued from the original model \(\mathcal{M}\) operating under \(\pi\). We demonstrate that, in the limit, considering the OT between trace-based distributions is equivalent to considering the OT between the stationary distribution of \(\mathcal{M}_{\pi}\) and the one of the behavioral model.
Let us first formally recall the definition of the metric on the _transitions_ of the MDP.
**Raw transition distance.** Assume that \(\mathcal{S}\), \(\mathcal{A}\), and \(\mathrm{Im}(\mathcal{R})\) are respectively equipped with metric \(d_{\mathcal{S}}\), \(d_{\mathcal{A}}\), and \(d_{\mathcal{R}}\), let us define the _raw transition distance metric_ over _transitions_ of \(\mathcal{M}\), i.e., tuples of the form \(\left<s,a,r,s^{\prime}\right>\), as \(\overline{d}\): \(\mathcal{S}\times\mathcal{A}\times\mathrm{Im}(\mathcal{R})\times\mathcal{S}\),
\[\overline{d}\big{(}\big{<}s_{1},a_{1},r_{1},s^{\prime}_{1}\big{>}\,,\big{<}s_ {2},a_{2},r_{2},s^{\prime}_{2}\big{>}\big{)}=d_{\mathcal{S}}(s_{1},s_{2})+d_{ \mathcal{A}}(a_{1},a_{2})+d_{\mathcal{R}}(r_{1},r_{2})+d_{\mathcal{S}}\big{(} s^{\prime}_{1},s^{\prime}_{2}\big{)}.\]
In a nutshell, \(\vec{d}\) consists of the sum of the distance of all the transition components. Note that it is a well defined distance metric since the sum of distances preserves the identity of indiscernible, symmetry, and triangle inequality.
**Trace-based distributions.** The raw distance \(\vec{d}\) allows to reason about _transitions_, we thus consider the distribution over _transitions which occur along traces of length \(T\)_ to compare the dynamics of the original and behavioral models:
\[\mathcal{D}_{\pi}\left[T\right]\big{(}s,a,r,s^{\prime}\big{)} =\frac{1}{T}\sum_{t=1}^{T}\xi_{\pi}^{t}(s\mid s_{I})\cdot\pi(a\mid s )\cdot\mathbf{P}\big{(}s^{\prime}\mid s,a\big{)}\cdot\mathbf{1}_{r=\mathcal{R }(s,a)},\text{ and}\] \[\mathcal{P}_{\theta}\big{[}T\big{]}\big{(}s,a,r,s^{\prime}\big{)} =\frac{1}{T}\sum_{t=1}^{T}\underset{s_{0:t},a_{0:t},r_{0:t-1}\sim P _{\theta}[t]}{\mathbb{E}}\mathbf{1}_{\left<s_{t-1},a_{t-1}r_{t-1},s_{t}\right> =\left<s,a,r,s^{\prime}\right>},\]
where \(P_{\theta}[T]\) denotes the distribution over traces of length \(T\), generated from \(P_{\theta}\). Intuitively, \(\nicefrac{{1}}{{T}}\cdot\sum_{t=1}^{T}\xi_{\pi}^{t}(s\mid s_{I})\) can be seen as the fraction of the time spent in \(s\) along traces of length \(T\), starting from the initial state Kulkarni (1995). Therefore, drawing \(\left<s,a,r,s^{\prime}\right>\sim\mathcal{D}_{\pi}\left[T\right]\) trivially follows: it is equivalent to drawing \(s\) from \(\nicefrac{{1}}{{T}}\cdot\sum_{t=1}^{T}\xi_{\pi}^{t}(\cdot\mid s_{I})\), then respectively \(a\) and \(s^{\prime}\) from \(\pi(\cdot\mid s)\) and \(\mathbf{P}(\cdot\mid s,a)\), to finally obtain \(r=\mathcal{R}(s,a)\). Given \(T\in\mathbb{N}\), our objective is to minimize the Wasserstein distance between those distributions: \(W_{\vec{d}}(\mathcal{D}_{\pi}[T],\mathcal{P}_{\theta}[T])\). The following Lemma enables optimizing the Wasserstein distance between the original MDP and the behavioral model when traces are drawn from episodic RL processes or infinite interactions (Huang, 2020).
**Lemma A.1**.: _Assume the existence of a stationary behavioral model \(\xi_{\theta}=\lim_{T\to\infty}\mathcal{P}_{\theta}[T]\), then_
\[\lim_{T\to\infty}W_{\vec{d}}\left(\mathcal{D}_{\pi}[T],\mathcal{P}_{\theta}[T] \right)=W_{\vec{d}}\left(\xi_{\pi},\xi_{\theta}\right).\]
Proof.: First, note that \(\nicefrac{{1}}{{T}}\cdot\sum_{t=1}^{T}\xi_{\pi}^{t}(\cdot\mid s_{I})\) weakly converges to \(\xi_{\pi}\) as \(T\) goes to \(\infty\) Kulkarni (1995). The result follows then from (Villani, 2009, Corollary 6.9).
### Dealing with Discrete Actions
When the policy \(\pi\) executed in \(\mathcal{M}\) already produces discrete actions, learning a latent action space is, in many cases, not necessary. We thus make the following assumptions:
**Assumption A.2**.: _Let \(\pi\colon\mathcal{S}\to\Delta(\mathcal{A}^{\star})\) be the policy executed in \(\mathcal{M}\) and assume that \(\mathcal{A}^{\star}\) is a (tractable) finite set. Then, we take \(\overline{\mathcal{A}}=\mathcal{A}^{\star}\) and \(\phi_{\iota}^{\star}\) as the identity function, i.e., \(\phi_{\iota}^{\star}\colon\overline{\mathcal{S}}\times\mathcal{A}^{\star}\to \mathcal{A}^{\star},\,\left<\bar{s},a^{\star}\right>\mapsto a^{\star}\)._
**Assumption A.3**.: _Assume that the action space of the original environment \(\mathcal{M}\) is a (tractable) finite set. Then, we take \(\psi_{\theta}\) as the identity function, i.e., \(\psi_{\theta}=\phi_{\iota}^{A}\)._
Concretely, the premise of Assumption A.2 typically occurs when \(\pi\) is a latent policy (see Rem. 1) _or_ when \(\mathcal{M}\) has already a discrete action space. In the latter case, Assumption A.2 and A.3 amount to setting \(\overline{\mathcal{A}}=\mathcal{A}\) and ignoring the action encoder and embedding function. Note that if a discrete action space is too large, or if the user explicitly aims for a coarser space, then the former is not considered as tractable, these assumptions do not hold, and the action space is abstracted to a smaller set of discrete actions.
### Proof of Lemma 3.2
**Notation.** From now on, we write \(\phi_{\iota}(\vec{s},\vec{a}\mid s,a)=\mathbf{1}_{\phi_{\iota}(s)=\vec{s}} \cdot\phi_{\iota}^{A}(\vec{a}\mid\vec{s},a)\).
**Lemma 3.2**.: _Define \(\mathcal{T}(\vec{s},\vec{a},\vec{s}^{\prime})=\mathbb{E}_{s,a\sim\xi_{\iota}} [\mathbf{1}_{\phi_{\iota}(s)=\vec{s}^{\prime}}\phi_{\iota}^{A}(\vec{a}\mid \vec{s},a)\cdot\overline{\mathbf{P}}_{\theta}(\vec{s}^{\prime}\mid\vec{s}, \vec{a})]\) as the distribution of drawing state-action pairs from interacting with \(\mathcal{M}\), embedding them to the latent spaces, and finally letting them transition to their successor state in \(\overline{\mathcal{M}}_{\theta}\). Then, \(W_{\vec{d}}\big{(}Q_{\iota},\bar{\xi}_{\pi_{\theta}}\big{)}\leqslant W_{\vec{ d}}\big{(}\bar{\xi}_{\pi_{\theta}},\mathcal{T}\big{)}+L_{\mathbf{P}}^{\xi_{\pi}}\)._
Proof.: Wasserstein is compliant with the triangular inequality (Villani, 2009), which gives us:
\[W_{\vec{d}}\big{(}Q_{\iota},\bar{\xi}_{\pi_{\theta}}\big{)}\leqslant W_{\vec{ d}}(Q_{\iota},\mathcal{T})+W_{d\vec{g}}\left(\mathcal{T},\bar{\xi}_{\pi_{ \theta}}\right),\]
where
\[W_{\vec{d}}\big{(}\mathcal{T},\bar{\xi}_{\pi_{\theta}}\big{)} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
which finally yields the result.
### Proof of Theorem 3.3
Before proving Theorem 3.3, let us introduce the following Lemma, that explicitly demonstrates the link between the transition regularizer of the W\({}^{2}\)AE-MDP objective and the local transition loss required to obtain the guarantees related to the bisimulation bounds of Eq. 1.
**Lemma A.4**.: _Assume that traces are generated by running \(\overline{\pi}\in\overline{\Pi}\) in the original environment, then_
\[\mathop{\mathbb{E}}_{s,a^{\star}\sim\xi_{\overline{\pi}}}\mathop{ \mathbb{E}}_{\overline{a}\sim\phi_{\epsilon}(\cdot|\phi_{\epsilon}(s),a^{ \star})}W_{d\mathcal{F}}\left(\phi_{\epsilon}\mathbf{P}(\cdot\mid s,a^{\star}), \overline{\mathbf{P}}_{\theta}(\cdot\mid\phi_{\epsilon}(s),\overline{a}) \right)=L_{\mathbf{P}}^{\xi_{\overline{\pi}}}.\]
Proof.: Since the latent policy \(\overline{\pi}\) generates latent actions, Assumption A.2 holds, which means:
\[\mathop{\mathbb{E}}_{s,a^{\star}\sim\xi_{\overline{\pi}}}\mathop{ \mathbb{E}}_{\overline{a}\sim\phi_{\epsilon}^{\star}(\cdot|\phi_{\epsilon}(s),a^{\star})}W_{d\mathcal{F}}\left(\phi_{\epsilon}\mathbf{P}(\cdot\mid s,a^{ \star}),\overline{\mathbf{P}}_{\theta}(\cdot\mid\phi_{\epsilon}(s),\overline{ a})\right)\] \[=\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}} \mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{ \mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}} \mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{ \mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}} \mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{ \mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}} \mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}} \mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}} \mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}} \mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{ \mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{\overline{\pi}}\mathop{\mathbb{E}}_{ \overline{\pi}}
Proof.: Our objective is to show that
\[\operatorname*{\mathbb{E}}_{x\sim\xi}\left[\sup_{y_{1}\sim P(\cdot|x )}\operatorname*{\mathbb{E}}_{y_{2}\sim Q(\cdot|x)}\varphi^{\star}(x)(y_{2}) \right] \tag{6}\] \[=\sup_{\varphi\colon\mathcal{X}\sim\mathcal{J}_{x}}\operatorname*{ \mathbb{E}}_{x\sim\xi}\left[\operatorname*{\mathbb{E}}_{y_{1}\sim P(\cdot|x)} \varphi(x)(y_{1})-\operatorname*{\mathbb{E}}_{y_{2}\sim Q(\cdot|x)}\varphi(x)(y _{2})\right] \tag{7}\]
We start with (6) \(\leqslant\) (7). Construct \(\varphi^{\star}\colon\mathcal{X}\to\mathcal{F}_{d}\) by setting for all \(x\in\mathcal{X}\)
\[\varphi^{\star}(x)=\arg\sup_{f\in\mathcal{F}_{d}}\operatorname*{ \mathbb{E}}_{y_{1}\sim P(\cdot|x)}f(y_{1})-\operatorname*{\mathbb{E}}_{y_{2} \sim Q(\cdot|x)}f(y_{2}).\]
This gives us
\[\operatorname*{\mathbb{E}}_{x\sim\xi}\left[\sup_{f\in\mathcal{F}_ {d}}\operatorname*{\mathbb{E}}_{y_{1}\sim P(\cdot|x)}f(y_{1})-\operatorname* {\mathbb{E}}_{y_{2}\sim Q(\cdot|x)}f(y_{2})\right]\] \[=\operatorname*{\mathbb{E}}_{x\sim\xi}\left[\operatorname*{ \mathbb{E}}_{y_{1}\sim P(\cdot|x)}\varphi^{\star}(x)(y_{1})-\operatorname*{ \mathbb{E}}_{y_{2}\sim Q(\cdot|x)}\varphi^{\star}(x)(y_{2})\right]\] \[\leqslant\sup_{\varphi\colon\mathcal{X}\sim\mathcal{F}_{d}} \operatorname*{\mathbb{E}}_{x\sim\xi}\left[\operatorname*{\mathbb{E}}_{y_{1} \sim P(\cdot|x)}\varphi(x)(y_{1})-\operatorname*{\mathbb{E}}_{y_{2}\sim Q( \cdot|x)}\varphi(x)(y_{2})\right].\]
It remains to show that (6) \(\geqslant\) (7). Take
\[\varphi^{\star}=\arg\sup_{\varphi\colon\mathcal{X}\sim\mathcal{F}_{d}} \operatorname*{\mathbb{E}}_{x\sim\xi}\left[\operatorname*{\mathbb{E}}_{y_{1} \sim P(\cdot|x)}\varphi(x)(y_{1})-\operatorname*{\mathbb{E}}_{y_{2}\sim Q( \cdot|x)}\varphi(x)(y_{2})\right].\]
Then, for all \(x\in\mathcal{X}\), we have \(\varphi^{\star}(x)\in\mathcal{F}_{d}\) which means:
\[\operatorname*{\mathbb{E}}_{y_{1}\sim P(\cdot|x)}\varphi^{\star} (x)(y_{1})-\operatorname*{\mathbb{E}}_{y_{2}\sim Q(\cdot|x)}\varphi^{\star}(x )(y_{2})\] \[\leqslant\sup_{f\in\mathcal{F}_{d}}\operatorname*{\mathbb{E}}_{y_ {1}\sim P(\cdot|x)}f(y_{1})-\operatorname*{\mathbb{E}}_{y_{2}\sim Q(\cdot|x)}f (y_{2})\]
This finally yields
\[\operatorname*{\mathbb{E}}_{x\sim\xi}\left[\operatorname*{\mathbb{ E}}_{y_{1}\sim P(\cdot|x)}\varphi^{\star}(x)(y_{1})-\operatorname*{\mathbb{E}}_{y_{2} \sim Q(\cdot|x)}\varphi^{\star}(x)(y_{2})\right]\] \[\leqslant\operatorname*{\mathbb{E}}_{x\sim\xi}\left[\sup_{f\in \mathcal{F}_{d}}\operatorname*{\mathbb{E}}_{y_{1}\sim P(\cdot|x)}f(y_{1})- \operatorname*{\mathbb{E}}_{y_{2}\sim Q(\cdot|x)}f(y_{2})\right].\]
**Corollary A.5.1**.: _Let \(\xi_{\pi}\) be a stationary distribution of \(\mathcal{M}_{\pi}\) and \(\mathcal{X}=\mathcal{S}\times\mathcal{A}\times\vec{\mathcal{S}}\times\vec{ \mathcal{A}}\), then_
\[L^{\xi_{\pi}}_{\mathbf{P}}=\sup_{\varphi\colon\mathcal{X}\sim\mathcal{J}_{d}} \operatorname*{\mathbb{E}}_{s,a,s^{\prime}\sim\xi_{\pi}}\operatorname*{\mathbb{ E}}_{\vec{s},\vec{a}\sim\phi_{\iota}(\cdot|s,a)}\left[\varphi(s,a,\vec{s}, \vec{a})\big{(}\phi_{\iota}\big{(}s^{\prime}\big{)}-\operatorname*{\mathbb{E}}_{ \vec{s}\sim\overline{\mathbf{P}}_{\delta}(\cdot|\vec{s},a)}\varphi(s,a,\vec{s},\vec{a})\big{(}\vec{s}^{\prime}\big{)}\right]\]
Consequently, we rewrite \(L^{\xi_{\pi}}_{\mathbf{P}}(\omega)\) as a tractable maximization:
\[L^{\xi_{\pi}}_{\mathbf{P}}(\omega)=\max_{\omega\colon\varphi^{\star}_{\omega} \in\mathcal{F}_{d}}\operatorname*{\mathbb{E}}_{s,a,s^{\prime}\sim\xi_{\pi}} \operatorname*{\mathbb{E}}_{\vec{s},\vec{a}\sim\phi_{\iota}(\cdot|s,a)}\left[ \varphi^{\mathbf{P}}_{\omega}\big{(}s,a,\vec{s},\vec{a},\phi_{\iota}\big{(}s^ {\prime}\big{)}-\operatorname*{\mathbb{E}}_{\vec{s}\sim\overline{\mathbf{P}}_{ \delta}(\cdot|\vec{s},\vec{a})}\varphi^{\mathbf{P}}_{\omega}\big{(}s,a,\vec{s},\vec{a},\vec{s}^{\prime}\big{)}\right].\]
### The Latent Metric
In the following, we show that considering the Euclidean distance for \(\vec{d}\) and \(d_{\vec{\mathcal{S}}}\) in the latent space for optimizing the regularizers \(\mathcal{W}_{\xi_{\pi}}\) and \(L^{\xi_{\pi}}_{\mathbf{P}}\) is Lipschitz equivalent to considering a continuous \(\lambda\)-relaxation of the _discrete metric_\(\mathbf{1}_{\mu}(\boldsymbol{x},\boldsymbol{y})=\mathbf{1}_{\boldsymbol{x} \neq\boldsymbol{y}}\). Consequently, this also means it is consistently sufficient to enforce \(1\)-Lipschitzness via the gradient penalty approach of Gulrajani et al. (2017) during training to maintain the guarantees linked to the regularizers in the zero-temperature limit, when the spaces are discrete.
**Lemma A.6**.: _Let \(d\) be the usual Euclidean distance and \(d_{\lambda}\colon[0,1]^{n}\times[0,1]^{n}\to[0,1[\), \(\langle\mathbf{x},\mathbf{y}\rangle\mapsto\frac{d(\mathbf{x},\mathbf{y})}{\lambda+d(\mathbf{x},\mathbf{y })}\) for \(\lambda\in\,]0,1]\) and \(n\in\mathbb{N}\), then \(d_{\lambda}\) is a distance metric._
Proof.: The function \(d_{\lambda}\) is a metric iff it satisfies the following axioms:
1. _Identity of indiscerimbles_: If \(\mathbf{x}=\mathbf{y}\), then \(d_{\lambda}(\mathbf{x},\mathbf{y})=\frac{d(\mathbf{x},\mathbf{y})}{\lambda+d(\mathbf{x},\mathbf{y})}= \frac{0}{\lambda+0}=0\) since \(d\) is a distance metric. Assume now that \(d_{\lambda}(\mathbf{x},\mathbf{y})=0\) and take \(\alpha=d(\mathbf{x},\mathbf{y})\), for any \(\mathbf{x},\mathbf{y}\). Thus, \(\alpha\in[0,+\infty[\) and \(0=\frac{\alpha}{\lambda+\alpha}\) is only achieved in \(\alpha=0\), which only occurs whenever \(\mathbf{x}=\mathbf{y}\) since \(d\) is a distance metric.
2. _Symmetry:_ \[d_{\lambda}(\mathbf{x},\mathbf{y}) =\frac{d(\mathbf{x},\mathbf{y})}{\lambda+d(\mathbf{x},\mathbf{y})}\] \[=\frac{d(\mathbf{y},\mathbf{x})}{\lambda+d(\mathbf{y},\mathbf{x})}\] (\[d\] is a distance metric) \[=d_{\lambda}(\mathbf{y},\mathbf{x})\]
3. _Triangle inequality_: Let \(\mathbf{x},\mathbf{y},\mathbf{z}\in[0,1]^{n}\), the triangle inequality holds iff \[d_{\lambda}(\mathbf{x},\mathbf{y})+d_{\lambda}(\mathbf{y},\mathbf{z}) \geqslant d_{\lambda}(\mathbf{x},\mathbf{z})\] (8) \[\equiv \frac{d(\mathbf{x},\mathbf{y})}{\lambda+d(\mathbf{x},\mathbf{y})}+\frac{d(\mathbf{y},\mathbf{z})}{\lambda+d(\mathbf{y},\mathbf{z})} \geqslant\frac{d(\mathbf{x},\mathbf{z})}{\lambda+d(\mathbf{x},\mathbf{z})}\] \[= \frac{\lambda d(\mathbf{x},\mathbf{y})+\lambda d(\mathbf{y},\mathbf{z})+2d(\mathbf{x},\mathbf{y})d(\mathbf{y},\mathbf{z})}{\lambda^{2}+\lambda d(\mathbf{x},\mathbf{y})+\lambda d(\bm {y},\mathbf{z})+d(\mathbf{x},\mathbf{y})d(\mathbf{y},\mathbf{z})} \geqslant\frac{d(\mathbf{x},\mathbf{z})}{\lambda+d(\mathbf{x},\mathbf{z})}\] \[= \lambda^{2}d(\mathbf{x},\mathbf{y})+\lambda^{2}d(\mathbf{y},\mathbf{z})+2\lambda d (\mathbf{x},\mathbf{y})d(\mathbf{y},\mathbf{z})+\] \[\lambda d(\mathbf{x},\mathbf{y})d(\mathbf{x},\mathbf{z})+\lambda d(\mathbf{y},\mathbf{z}) d(\mathbf{x},\mathbf{z})+2d(\mathbf{x},\mathbf{y})d(\mathbf{y},\mathbf{z})d(\mathbf{x},\mathbf{z})\] \[\geqslant \lambda^{2}d(\mathbf{x},\mathbf{z})+\lambda d(\mathbf{x},\mathbf{y})d(\mathbf{x},\bm {z})+\lambda d(\mathbf{y},\mathbf{z})d(\mathbf{x},\mathbf{z})+d(\mathbf{x},\mathbf{y})d(\mathbf{y},\mathbf{z}) d(\mathbf{x},\mathbf{z})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
**Corollary 7.7.1**.: _For all \(\beta\geq\nicefrac{{1}}{{\lambda}}\), \(s\in\mathcal{S}\), \(a\in\mathcal{A}\), \(\bar{s}\in\mathcal{\bar{S}}\), and \(\bar{a}\in\mathcal{\overline{A}}\), we have_
1. \(W_{d_{\lambda}}\left(\mathcal{T},\bar{\xi}_{\bar{\pi}_{\theta}}\right)\leq\beta \cdot W_{d}\left(\mathcal{T},\bar{\xi}_{\bar{\pi}_{\theta}}\right)\)__
2. \(W_{d_{\lambda}}\left(\phi_{\iota}\mathbf{P}(\cdot\mid s,a),\mathbf{\overline{P} }_{\theta}(\cdot\mid\bar{s},\bar{a})\right)\leq\beta\cdot W_{d}\left(\phi_{ \iota}\mathbf{P}(\cdot\mid s,a),\mathbf{\overline{P}}_{\theta}(\cdot\mid\bar{ s},\bar{a})\right)\)__
Proof.: By Lipschitz equivalence, taking \(\beta\geq\nicefrac{{1}}{{\lambda}}\) ensures that \(\forall n\in\mathbb{N}\), \(\forall\mathbf{x},\mathbf{y}\in[0,1]^{n}\), \(d_{\lambda}(\mathbf{x},\mathbf{y})\leq\beta\cdot d(\mathbf{x},\mathbf{y})\). Moreover, for any distributions \(P,Q\), \(W_{d_{\lambda}}\left(P,Q\right)\leq\beta\cdot W_{d}\left(P,Q\right)\) (cf., e.g., Gelada et al. 2019, Lemma A.4 for details).
In practice, taking the hyperparameter \(\beta\geq\nicefrac{{1}}{{\lambda}}\) in the W\({}^{2}\)AE-MDP ensures that minimizing the \(\beta\)-scaled regularizers w.r.t. \(d\) also minimizes the regularizers w.r.t. the \(\lambda\)-relaxation \(d_{\lambda}\), being the discrete distribution in the zero-temperature limit. Note that optimizing over two different \(\beta_{1},\beta_{2}\) instead of a unique scale factor \(\beta\) is also a good practice to interpolate between the two regularizers.
## Appendix B Experiment Details
The code for conducting and replicating our experiments is available at [https://github.com/florentdelgrange/wae_mdp](https://github.com/florentdelgrange/wae_mdp).
### Setup
We used TensorFlow 2.7.0 (Abadi et al., 2015) to implement the neural network architecture of our W\({}^{2}\)AE-MDP, TensorFlow Probability 0.15.0 (Dillon et al., 2017) to handle the probabilistic components of the latent model (e.g., latent distributions with reparameterization tricks, masked autoregressive flows, etc.), as well as TF-Agents 0.11.0 (Guadarrama et al., 2018) to handle the RL parts of the framework.
Models have been trained on a cluster running under CentOS Linux 7 (Core) composed of a mix of nodes containing Intel processors with the following CPU microarchitectures: (i) 10-core INTEL E5-2680v2, (ii) 14-core INTEL E5-2680v4, and (iii) 20-core INTEL Xeon Gold 6148. We used \(8\) cores and \(32\) GB of memory for each run.
### Stationary Distribution
To sample from the stationary distribution \(\xi_{\pi}\) of episodic learning environments operating under \(\pi\in\Pi\), we implemented the _recursive \(\epsilon\)-perturbation trick_ of Huang (2020). In a nutshell, the reset of the environment is explicitly added to the state space of \(\mathcal{M}\), which is entered at the end of each episode and left with probability \(1-\epsilon\) to start a new one. We also added a special atomic proposition reset into \(\mathbf{AP}\) to label this reset state and reason about episodic behaviors. For instance, this allows verifying whether the agent behaves safely during the entire episode, or if it is able to reach a goal before the end of the episode.
### Environments with initial distribution
Many environments do not necessarily have a single initial state, but rather an initial distribution over states \(d_{I}\in\Delta(\mathcal{S})\). In that case, the results presented in this paper remain unchanged: it suffices to add a dummy state \(s^{\star}\) to the state space \(\mathcal{S}\cup\{s^{\star}\}\) so that \(s_{I}=s^{\star}\) with the transition dynamics \(\mathbf{P}(s^{\prime}\mid s^{\star},a)=d_{I}(s^{\prime})\) for any action \(a\in\mathcal{A}\). Therefore, each time the reset of the environment is triggered, we make the MDP entering the initial state \(s^{\star}\), then transitioning to \(s^{\prime}\) according to \(d_{I}\).
### Latent space distribution
As pointed out in Sect. 4, posterior collapse is naturally avoided when optimizing W\({}^{2}\)AE-MDP. To illustrate that, we report the distribution of latent states produced by \(\phi_{\iota}\) during training (Fig. 5). The plots reveal that the latent space generated by mapping original states drawn from \(\xi_{\pi}\) during training to \(\mathcal{\bar{S}}\) via \(\phi_{\iota}\) is fairly distributed, for each environment.
### Distance Metrics: state, action, and reward reconstruction
The choice of the distance functions \(d_{\mathcal{S}}\), \(d_{\mathcal{A}}\), and \(d_{\mathcal{R}}\), plays a role in the success of our approach. The usual Euclidean distance is often a good choice for all the transition components, but the scale, dimensionality, and nature of the inputs sometimes require using scaled, normalized, or other kinds of distances to allow the network to reconstruct each component. While we did not observe such requirements in our experiments (where we simply used the Euclidean distance), high dimensional observations (e.g., images) are an example of data which could require tuning the state-distance function in such a way, to make sure that the optimization of the reward or action reconstruction will not be disfavored compared to that of the states.
### Value difference
In addition to reporting the quality guarantees of the model along training steps through local losses (cf. Figure 3(b)), our experiments revealed that the absolute value difference \(\|V_{\pi_{\#}}\|\) between the original and latent models operating under the latent policy quickly decreases and tends to converge to values in the same range (Figure 6). This is consistent with the fact that minimizing local losses lead to close behaviors (cf. Eq. 1) and that the value function is Lipschitz-continuous w.r.t. \(\widetilde{d}_{\pi_{\#}}\) (cf. Section 2).
### Remark on formal verification
Recall that _our bisimulation guarantees come by construction of the latent space._ Essentially, our learning algorithm spits out a distilled policy and a latent state space which already yields a guaranteed bisimulation distance between the original MDP and the latent MDP. This is the crux of how we enable verification techniques like model checking. In particular, bisimulation guarantees mean that _reachability probabilities in the latent MDP compared to those in the original one are close_.
Figure 5: Latent space distribution along training steps. The intensity of the blue hue corresponds to the frequency of latent states produced by \(\phi_{t}\) during training.
Figure 6: Absolute value difference \(\|V_{\pi_{\#}}\|\) reported along training steps.
Furthermore, the value difference of (omega-regular) properties (formulated through mu-calculus) obtained in the two models is bounded by this distance (cf. Sect. 2 and Chatterjee et al. 2010).
**Reachability is the key ingredient** to model-check MDPs. Model-checking properties is in most cases performed by reduction to the reachability of components or regions of the MDP: it either consists of (i) iteratively checking the reachability of the parts of the state space satisfying path formulae that comprise the specification, through a tree-like decomposition of the latter (e.g., for (P,R-)CTL properties, cf. Baier & Katoen 2008), or (ii) checking the reachability to the part of the state space of a product of the MDP with a memory structure or an automaton that embeds the omega-regular property -- e.g., for LTL (Baier et al., 2016; Sickert et al., 2016), LTLf (Wells et al., 2020), or GLTL (Littman et al., 2017), among other specification formalisms. The choice of specification formalism is up to the user and depends on the case study. The scope of this work is focusing on learning to distill RL policies with bisimulation guarantees _so that model checking can be applied_, in order to reason about the behaviors of the agent. That being said, _reachability is all we need_ to show that model checking can be applied.
### Hyperparameters
**W\({}^{2}\)AE-MDP parameters.** All components (e.g., functions or distribution locations and scales, see Fig. 2) are represented and inferred by neural networks (multilayer perceptrons). All the networks share the same architecture (i.e., number of layers and neurons per layer). We use a simple uniform experience replay of size \(10^{6}\) to store the transitions and sample them. The training starts when the agent has collected \(10^{4}\) transitions in \(\mathcal{M}\). We used minibatches of size \(128\) to optimize the objective and we applied a minibatch update every time the agent executing \(\pi\) has performed \(16\) steps in \(\mathcal{M}\). We use the recursive \(\epsilon\)-perturbation trick of Huang (2020) with \(\epsilon=\nicefrac{{3}}{{4}}\): when an episode ends, it restarts from the initial state with probability \(\nicefrac{{1}}{{4}}\); before re-starting an episode, the time spent in the reset state labeled with reset follows then the geometric distribution with expectation \(\nicefrac{{\epsilon}}{{1-\epsilon}}=3\). We chose the same latent state-action space size than Delgrange et al. (2022), except for LunarLander that we decreased to \(\log_{2}\left|\overline{\mathcal{S}}\right|=14\) and \(\left|\overline{\mathcal{A}}\right|=3\) to improve the scalability of the verification.
**VAE-MDPs parameters.** For the comparison of Sect. 4, we used the exact same VAE-MDP hyperparameter set as prescribed by Delgrange et al. (2022), except for the state-action space of LunarLander that we also changed for scalability and fair comparison purpose.2
Footnote 2: The code for conducting the VAE-MDPs experiments is available at [https://github.com/florentdelgrange/vae_mdp](https://github.com/florentdelgrange/vae_mdp) (GNU General Public License v3.0).
**Hyperparameter search.** To evaluate our W\({}^{2}\)AE-MDP, we realized a search in the parameter space defined in Table 2. The best parameters found (in terms of trade-off between performance and latent quality) are reported in Table 3. We used two different optimizers for minimizing the loss (referred to as the minimizer) and computing the Wasserstein terms (referred to as the maximizer). We used Adam (Kingma & Ba, 2015) for the two, but we allow for different learning rates Adam\({}_{\alpha}\) and exponential decays Adam\({}_{\beta_{1}},\) Adam\({}_{\beta_{2}}\). We also found that polynomial decay for Adam\({}_{\alpha}\) (e.g., to \(10^{-5}\) for \(4\cdot 10^{5}\) steps) is a good practice to stabilize the experiment learning curves, but is not necessary to obtain high-quality and performing distillation. Concerning the continuous relaxation of discrete distributions, we used a different temperature for each distribution, as Maddison et al. (2017) pointed out that doing so is valuable to improve the results. We further followed the guidelines of Maddison et al. (2017) to choose the interval of temperatures and did not schedule any annealing scheme (in contrast to VAE-MDPs). Essentially, the search reveals that the regularizer scale factors \(\beta\). (defining the optimization direction) as well as the encoder and latent transition temperatures are important to improve the performance of distilled policies. For the encoder temperature, we found a nice spot in \(\lambda_{\phi_{e}}=\nicefrac{{2}}{{3}}\), which provides the best performance in general, whereas the choice of \(\lambda_{\pi_{\theta}}\) and \(\beta\). are (latent-) environment dependent. The importance of the temperature parameters for the continuous relaxation of discrete distributions is consistent with the results of (Maddison et al., 2017), revealing that the success of the relaxation depends on the choice of the temperature for the different latent space sizes.
**Labeling functions.** We used the same labeling functions as those described by Delgrange et al. (2022). For completeness, we recall the labeling function used for each environment in Table 4.
\begin{table}
\begin{tabular}{l l} \hline \hline Parameter & Range \\ \hline Adam\({}_{\alpha}\) (minimizer) & \(\{\,0.0001,0.0002,0.0003,0.001\,\}\) \\ Adam\({}_{\alpha}\) (maximizer) & \(\{\,0.0001,0.0002,0.0003,0.001\,\}\) \\ Adam\({}_{\beta_{1}}\) & \(\{\,0.05,0.9\,\}\) \\ Adam\({}_{\beta_{2}}\) & \(\{\,0.9,0.999\,\}\) \\ neurons per layer & \(\{\,64,128,256,512\,\}\) \\ number of hidden layers & \(\{\,1,2,3\,\}\) \\ activation & \(\{\,\)ReLU, Leaky ReLU, tanh, \(\frac{\text{s}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h }\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h }\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h }\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h }\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h }\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h }\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e} \text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e} \text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e} \text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e} \text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h} \text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h} \text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e} \text{h}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h} \text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{h} \text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h} \text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e} \text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h}\text{e}\text{h} \text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h} \text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h} \text{e}\text{h}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h} \text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{h}\text{e}\text{h}\text{e}\text{h}
**Time to failure properties.** Based on the labeling described in Table 4, we formally detail the time to failure properties checked in Sect. 4 whose results are listed in Table 1 for each environment. Let \(\mathsf{Reset}=\{\,\mathsf{reset}\,\}=\langle 0,\ldots,1\rangle\) (we assume here that the last bit indicates whether the current state is a reset state or not) and define \(s\models\mathsf{L}_{1}\wedge\mathsf{L}_{2}\) iff \(s\models\mathsf{L}_{1}\) and \(s\models\mathsf{L}_{2}\) for any \(s\in\mathcal{S}\), then
* _CartPole_: \(\varphi=-\mathsf{Reset}\,\mathcal{U}\)\(\mathsf{Unsafe}\), where \(\mathsf{Unsafe}=\langle 1,1,0\rangle\)
* _MountainCar_: \(\varphi=-\mathsf{Goal}\,\mathcal{U}\)\(\mathsf{Reset}\), where \(\mathsf{Goal}=\langle 1,0,0,0\rangle\)
* _Acrobot_: \(\varphi=-\mathsf{Goal}\,\mathcal{U}\)\(\mathsf{Reset}\), where \(\mathsf{Goal}=\langle 1,0,\ldots,0\rangle\)
* _LunarLander_: \(\varphi=-\mathsf{Safe}\mathsf{Landing}\,\mathcal{U}\)\(\mathsf{Reset}\), where \(\mathsf{Safe}\mathsf{Landing}=\mathsf{GroundContact}\)\(\wedge\)\(\mathsf{MotorsOff}\), \(\mathsf{GroundContact}=\langle 0,1,0,0,0,0,0\rangle\), and \(\mathsf{MotorsOff}=\langle 0,0,0,0,0,1,0\rangle\)
* _Pendulum_: \(\varphi=\langle-\mathsf{Safe}\,\wedge\,\bigcirc\rangle\mathsf{Reset}\), where \(\mathsf{Safe}=\langle 1,0,0,0,0\rangle\), \(\Diamond\mathsf{T}=-\Diamond\,\mathcal{U}\,\mathsf{T}\), and \(s_{i}\models\bigcirc\mathsf{T}\) iff \(s_{i+1}\models\mathsf{T}\), for any \(\mathsf{T}\subseteq\mathbf{AP},s_{i:\infty},a_{i:\infty}\in\mathit{Tray}\). Intuitively, \(\varphi\) denotes the event of ending an episode in an unsafe state, just before resetting the environment, which means that either the agent never reached the safe region or it reached and left it at some point. Formally, \(\varphi=\{\,s_{\neg\alpha},a_{\neg\alpha}\,\,\,\,|\,\exists i\in\mathsf{N},s_ {i}\models\mathsf{Safe}\,\wedge\,s_{i+1}\models\mathsf{Reset}\,\}\subseteq \mathit{Tray}\).
## Appendix C On the curse of Variational Modeling
_Posterior collapse_ is a well known issue occurring in variational models (see, e.g., Alemi et al., 2018; Tolstikhin et al., 2018; He et al., 2019; Dong et al., 2020) which intuitively results in a degenerate local optimum where the model learns to ignore the latent space and use only the reconstruction functions (i.e., the decoding distribution) to optimize the objective. VAE-MDPs are no exception, as pointed out in the original paper (Delgrange et al., 2022, Section 4.3 and Appendix C.2).
\begin{table}
\begin{tabular}{l l l l} \hline \hline Environment & \(\mathcal{S}\subseteq\) & Description, for \(\boldsymbol{s}\in\mathcal{S}\) & \(\ell(\boldsymbol{s})=\langle p_{1},\ldots,p_{n},p_{\mathsf{reset}}\rangle\) \\ \hline \multirow{3}{*}{CartPole} & \multirow{3}{*}{\(\mathbb{R}^{4}\)} & * \(\boldsymbol{s}_{1}\): cart position * \(\boldsymbol{s}_{2}\): cart velocity * \(\boldsymbol{s}_{3}\): pole angle (rad) * \(\boldsymbol{s}_{4}\): pole velocity at tip & * \(p_{1}=\boldsymbol{1}_{s_{1}>1.5}\): unsafe cart position * \(p_{2}=\boldsymbol{1}_{s_{2}>0.15}\): unsafe pole angle \\ \hline \multirow{3}{*}{MountainCar} & \multirow{3}{*}{\(\mathbb{R}^{2}\)} & * \(\boldsymbol{s}_{1}\): position * \(\boldsymbol{s}_{2}\): velocity * \(\boldsymbol{s}_{3}=\boldsymbol{1}_{s_{2}>0}\): car going forward \\ \cline{3-4} & & * \(\boldsymbol{s}_{1}=\cos(\theta_{1})\) * \(\boldsymbol{s}_{2}=\sin(\theta_{1})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\cos(\theta_{2})\) * \(\boldsymbol{s}_{3}=\cos(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\cos(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\) * \(\boldsymbol{s}_{3}=\sin(\theta_{2})\): positive angular velocity (1) * \(\boldsymbol{s}_{3}=\cos(\theta_{2})\): positive angular velocity (2) \\ \hline \multirow{3}{*}{Pendulum} & \multirow{3}{*}{\(\mathbb{R}^{3}\)} & Let \(\theta\in[0,2\pi]\) be the joint angle * \(\boldsymbol{s}_{1}=\cos(\theta)\) * \(\boldsymbol{s}_{1}=\cos(\theta)\) * \(\boldsymbol{s}_{2}=\sin(\theta)\) * \(\boldsymbol{s}_{2}=\sin(\theta)\) * \(\boldsymbol{s}_{3}=\sin(\theta)\) * \(\boldsymbol{s}_{3}=\sin(\theta)\) * \(\boldsymbol{s}_{3}=\sin(\theta)\) * \(\boldsymbol{s}_{3}=\cos(\theta)\): positive angular velocity \\ \hline \multirow{3}{*}{LunarLander} & \multirow{3}{*}{\(\mathbb{R}^{8}\)} & * \(\boldsymbol{s}_{1}\): horizontal coordinates * \(\boldsymbol{s}_{2}\): vertical coordinates * \(\boldsymbol{s}_{3}\): horizontal speed * \(\boldsymbol{s}_{4}\): vertical speed * \(\boldsymbol{s}_{5}\): ship angle * \(\boldsymbol{s}_{6}\): angular speed * \(\boldsymbol{s}_{7}\): left location * \(\boldsymbol{s}_{8}\): left leg contact * \(\boldsymbol{s}_{9}\): right leg contact \\ \hline \hline \end{tabular}
\end{table}
Table 4: Labeling functions for the OpenAI environments considered in our experiments (Delgrange et al., 2022). We provide a short description of the state space and the meaning of each atomic proposition. Recall that labels are binary encoded, for \(n=|\mathbf{AP}|-1\) (one bit is reserved for reset) and \(p_{\mathsf{reset}}=1\) iff \(\boldsymbol{s}\) is a reset state (cf. Appendix B.2).
Formally, VAE- and WAE-MDPs optimize their objective by minimizing two losses: a _reconstruction cost_ plus a _regularizer term_ which penalizes a discrepancy between the encoding distribution and the dynamics of the latent space model. In VAE-MDPs, the former corresponds to the the _distortion_, and the later to the _rate_ of the variational model (further details are given in Alemi et al.2018; Delgrange et al.2022), while in our WAE-MDPs, the former corresponds to the raw transition distance and the later to both the steady-state and transition regularizers. Notably, the rate minimization of VAE-MDPs involves regularizing a _stochastic_ embedding function \(\phi_{i}(\cdot\mid s)\)_point-wise_, i.e., for all different input states \(s\in\mathcal{S}\) drawn from the interaction with the original environment. In contrast, the latent space regularization of the WAE-MDP involves the marginal embedding distribution \(Q_{i}\) where the embedding function \(\phi_{i}\) is not required to be stochastic. Alemi et al. (2018) showed that _posterior collapse occurs in VAEs when the rate of the variational model is close to zero_, leading to low-quality representation.
**Posterior collapse in VAE-MDPs.** We illustrate the sensitivity of VAE-MDPs to the posterior collapse problem in Fig. 7, through the CartPole environment3: minimizing the distortion and the rate as is yields an embedding function which maps deterministically every input state to the same _sink_ latent state (cf. Fig. 6(a)). Precisely, there is a latent state \(\mathbf{\tilde{s}}\in\mathcal{\bar{S}}\) so that \(\phi_{i}(\mathbf{\tilde{s}}\mid s)\approx 1\) and \(\mathbf{\tilde{P}}_{\theta}(\mathbf{\tilde{s}}\mid\mathbf{\tilde{s}},\mathbf{ \tilde{a}})\approx 1\) whatever the state \(s\in\mathcal{S}\) and action \(\mathbf{\tilde{a}}\in\mathcal{\bar{A}}\). This is a form of posterior collapse, the resulting rate quickly drops to zero (cf. Fig 6(b)), and the resulting latent representation yields no information at all. This phenomenon is handled in VAE-MDPs by using (i) prioritized replay buffers that allow to focus on inputs that led to bad representation, and (ii) modifying the objective
Figure 7: Comparison of the VAE-MDP in the CartPole environment (i) when the distortion and the rate are minimized as is (_vanilla model_) and (ii) when it makes use of annealing schemes, entropy regularization, and prioritized experience replay to avoid posterior collapse (cf. Delgrange et al.2022). While the former clearly fails to learn a useful latent representation, the later does so meticulously and smoothly in two distinguishable phases: first, \(\phi_{i}\) focuses on fairly distributing the latent space, setting up the stage to the concrete optimization occurring from step \(4\cdot 10^{5}\), where the entropy of \(\phi_{i}\) is lowered, which allows to get the rate of the variational model away from zero. Five instances of the models are trained with different random seeds, with the same hyperparameters than in Sect. 4.
function for learning the latent space model -- the so-called evidence lower bound (Hoffman et al., 2013; Kingma and Welling, 2014), or ELBO for short -- and set up annealing schemes to eventually recover the ELBO at the end of the training process. Consequently, the resulting learning procedure focuses primarily on fairly distributing the latent space, to avoid it to collapse to a single latent state, to the detriment of learning the dynamics of the environment and the distillation of the RL policy. Then, the annealing scheme allows to make the model learn to finally smoothly use the latent space to maximize the ELBO, and achieve consequently a lower distortion at the "price" of a higher rate.
**Impact of the resulting learning procedure.** The aforementioned annealing process, used to avoid that every state collapses to the same representation, possibly induces a high entropy embedding function (Fig. 7d), which further complicates the learning of the model dynamics and the distillation in the first stage of the training process. In fact, in this particular case, one can observe that the entropy reaches its maximal value, which yields a fully random state embedding function. Recall that the VAE-MDP latent space is learned through _independent_ Bernoulli distributions. Fig. 7d reports values centered around \(4.188\) in the first training phase, which corresponds to the entropy of the state embedding function when \(\phi_{i}(\cdot\mid s)\) is uniformly distributed over \(\bar{\mathcal{S}}\) for any state \(s\in\bar{\mathcal{S}}\): \(H(\phi_{i}(\cdot\mid s))=\sum_{i=0}^{\log_{2}|\bar{\mathcal{S}}|-|\mathbf{AP} |=6}-p_{i}\log\,p_{i}-(1-p_{i})\log(1-p_{i})=4.188\), where \(p_{i}=\nicefrac{{1}}{{2}}\) for all \(i\). The rate (Fig. 7b) drops to zero since the divergence pulls the latent dynamics towards this high entropy (yet another form of posterior collapse), which hinders the latent space model to learn a useful representation. However, the annealing scheme increases the rate importance along training steps, which enables the optimization to eventually leave this local optimum (here around \(4\cdot 10^{5}\) training steps). This allows the learning procedure to leave the zero-rate spot, reduce the distortion (Fig. 7c), and finally distill the original policy (Fig. 7e).
As a result, the whole engineering required to mitigate posterior collapse slows down the training procedure. This phenomenon is reflected in Fig. 4: VAE-MDPs need several steps to stabilize and set up the stage to the concrete optimization, whereas WAE-MDPs have no such requirements since they naturally do not suffer from collapsing issues (cf. Fig. 5), and are consequently faster to train.
**Lack of representation guarantees.** On the theoretical side, since VAE-MDPs are optimized via the ELBO and the local losses via the related variational proxies, VAE-MDPs _do not leverage the representation quality guarantees_ induced by local losses (Eq. 1) during the learning procedure (as explicitly pointed out by Delgrange et al., 2022, Sect. 4.1.): in contrast to WAE-MDPs, when two original states are embedded to the same latent, abstract state, the former are not guaranteed to be bisimilarly close (i.e., the agent is not guaranteed to behave the same way from those two states by executing the policy), meaning those proxies do not prevent original states having distant values collapsing together to the same latent representation.
## Index of Notations
\(\mathbf{1}_{[cond]}\) indicator function: \(1\) if the statement [_cond_] is true, and \(0\) otherwise
\(\mathcal{F}_{d}\) Set of \(1\)-Lipschitz functions w.r.t. the distance metric \(d\)
\(\sigma\) Sigmoid function, with \(\sigma(x)=\nicefrac{{1}}{{1+\exp(-x)}}\)
\(f_{\theta}\) A function \(f_{\theta}:\mathcal{X}\rightarrow\mathbb{R}\) modeled by a neural network, parameterized by \(\theta\), where \(\mathcal{X}\) is any measurable set
**Latent Space Model**
\(\overline{\mathcal{M}}=\left\langle\bar{\mathcal{S}},\overline{\mathcal{A}}, \overline{\mathbf{P}},\overline{\mathcal{R}},\tilde{\ell},\mathbf{AP},\tilde {s}_{I}\right\rangle\) Latent MDP with state space \(\overline{\mathcal{S}}\), action space \(\overline{\mathcal{A}}\), reward function \(\overline{\mathcal{R}}\), labeling function \(\tilde{\ell}\), atomic proposition space \(\mathbf{AP}\), and initial state \(\tilde{s}_{I}\).
\(\left\langle\overline{\mathcal{M}},\phi,\psi\right\rangle\) Latent space model of \(\mathcal{M}\)
\(\bar{a}\) Latent action in \(\overline{\mathcal{A}}\)
\(\bar{\pi}\) Latent policy \(\bar{\pi}:\bar{\mathcal{S}}\rightarrow\mathcal{A}\); can be executed in \(\mathcal{M}\) via \(\phi\): \(\bar{\pi}(\cdot\mid\phi(s))\)
\(d_{\bar{\mathcal{S}}}\) Distance metric over \(\overline{\mathcal{S}}\)
\(\phi\) State embedding function, from \(\mathcal{S}\) to \(\overline{\mathcal{S}}\)
\(\psi\) Action embedding function, from \(\vec{\mathcal{S}}\times\vec{\mathcal{A}}\) to \(\mathcal{A}\)
\(\phi\mathbf{P}\) Distribution of drawing \(s^{\prime}\sim\mathbf{P}(\cdot\mid s,a)\), then embedding \(\vec{s}^{\prime}=\phi(s^{\prime})\), for any state \(s\in\mathcal{S}\) and action \(a\in\mathcal{A}\)
\(L^{\xi}_{\mathbf{R}}\) Local reward loss under distribution \(\xi\)
\(L^{\xi}_{\mathbf{P}}\) Local transition loss under distribution \(\xi\)
\(\vec{\Pi}\) Set of (memoryless) latent policies
\(\vec{s}\) Latent state in \(\vec{\mathcal{S}}\)
\(\vec{V}_{\pi}\) Latent value function
**Markov Decision Processes**
\(\mathcal{M}=\langle\mathcal{S},\mathcal{A},\mathbf{P},\mathcal{R},\ell, \mathbf{A}\mathbf{P},s_{I}\rangle\) MDP \(\mathcal{M}\) with state space \(\mathcal{S}\), action space \(\mathcal{A}\), transition function \(\mathbf{P}\), labeling function \(\ell\), atomic proposition space \(\mathbf{A}\mathbf{P}\), and initial state \(s_{I}\).
\(a\) Action in \(\mathcal{A}\)
\(\vec{d}_{\pi}\) Bisimulation pseudometric
\(\gamma\) Discount factor in \([0,1]\)
\(d_{\mathcal{A}}\) Metric over the action space
\(d_{\mathcal{R}}\) Metric over \(\mathrm{Im}(\mathcal{R})\)
\(d_{\mathcal{S}}\) Metric over the state space
\(\xi^{t}_{\pi}\) Limiting distribution of the MDP defined as \(\xi^{t}_{\pi}(s^{\prime}\mid s)=\mathbb{P}^{\mathcal{M}_{s}}_{\pi}\big{(}\{s_ {0:\infty},a_{0:\infty}\mid s_{t}=s^{\prime}\,\}\big{)}\), for any source state \(s\in\mathcal{S}\)
\(\Pi\) Set of memoryless policies of \(\mathcal{M}\)
\(\pi\) Memoryless policy \(\pi\colon\mathcal{S}\to\Delta(\mathcal{A})\)
\(\mathbb{P}^{\mathcal{M}}_{\pi}\) Unique probability measure induced by the policy \(\pi\) in \(\mathcal{M}\) on the Borel \(\sigma\)-algebra over measurable subsets of _Traj_
\(\mathcal{C}\,\mathcal{U}\,\mathsf{T}\) Constrained reachability event
\(\mathcal{M}_{s}\) MDP obtained by replacing the initial state of \(\mathcal{M}\) by \(s\in\mathcal{S}\)
\(s\) State in \(\mathcal{S}\)
\(\xi_{\pi}\) Stationary distribution of \(\mathcal{M}\) induced by the policy \(\pi\)
\(\vec{d}\) Raw transition distance, i.e., metric over \(\mathcal{S}\times\mathcal{A}\times\mathrm{Im}(\mathcal{R})\times\mathcal{S}\)
_Traj_ Set of infinite trajectories of \(\mathcal{M}\)
\(\tau=\langle s_{0:T},a_{0:T-\downarrow}\rangle\) Trajectory
\(V_{\pi}\) Value function for the policy \(\pi\)
**Probability / Measure Theory**
\(D\) Discrepancy measure; \(D(P,Q)\) is the discrepancy between distributions \(P,Q\in\Delta(\mathcal{X})\)
\(\Delta(\mathcal{X})\) Set of measures over a complete, separable metric space \(\mathcal{X}\)
\(\mathrm{Logistic}(\mu,s)\) Logistic distribution with location parameter \(\mu\) and scale parameter \(s\)
\(W_{d}\) Wasserstein distance w.r.t. the metric \(d\); \(W_{d}\left(P,Q\right)\) is the Wasserstein distance between distributions \(P,Q\in\Delta(\mathcal{X})\)
**Wasserstein Auto-encoded MDP**
\(\xi_{\theta}\) Behavioral model: distribution over \(\mathcal{S}\times\mathcal{A}\times\mathrm{Im}(\mathcal{R})\times\mathcal{S}\)
\(G_{\theta}\) Mapping \(\langle\vec{s},\vec{a},\vec{s}^{\prime}\rangle\mapsto\big{\langle}\mathcal{G} _{\theta}(\vec{s}),\psi_{\theta}(\vec{s},\vec{a}),\overline{\mathcal{R}}_{ \theta}(\vec{s},\vec{a}),\mathcal{G}_{\theta}(\vec{s}^{\prime})\big{\rangle}\)
\(\phi^{\mathcal{A}}_{\iota}\) Action encoder mapping \(\vec{\mathcal{S}}\times\mathcal{A}\) to \(\Delta(\vec{\mathcal{A}})\)
\(\mathcal{G}_{\theta}\) State-wise decoder, from \(\vec{\mathcal{S}}\) to \(\mathcal{S}\)
\begin{tabular}{p{42.7pt} p{340.1pt}} \(Q_{i}\) & Marginal encoding distribution over \(\overline{\mathcal{S}}\times\overline{\mathcal{A}}\times\overline{\mathcal{S}} :\mathbb{E}_{s,a,s^{\prime}\sim\xi_{\pi}}\,\phi_{i}(\cdot\mid s,a,s^{\prime})\) \\ \(\tilde{\xi}_{\pi_{\theta}}\) & Stationary distribution of the latent model \(\overline{\mathcal{M}}_{\theta}\), parameterized by \(\theta\) \\ \(\mathcal{W}_{\xi_{\pi}}\) & Steady-state regularizer \\ \(\varphi_{\omega}^{\xi}\) & Steady-state Lipschitz network \\ \(\lambda\) & Temperature parameter \\ \(\mathcal{T}\) & Distribution of drawing state-action pairs from interacting with \(\mathcal{M}\), embedding them to the latent spaces, and finally letting them transition to their successor state in \(\overline{\mathcal{M}}_{\theta}\), in \(\Delta(\overline{\mathcal{S}}\times\overline{\mathcal{A}}\times\overline{ \mathcal{S}})\) \\ \(\varphi_{\omega}^{\mathbf{P}}\) & Transition Lipschitz network \\ \end{tabular}
|
2303.15237 | Cascaded variational quantum eigensolver algorithm | We present a cascaded variational quantum eigensolver algorithm that only
requires the execution of a set of quantum circuits once rather than at every
iteration during the parameter optimization process, thereby increasing the
computational throughput. This algorithm uses a quantum processing unit to
probe the needed probability mass functions and a classical processing unit
perform the remaining calculations, including the energy minimization. The
ansatz form does not restrict the Fock space and provides full control over the
trial state, including the implementation of symmetry and other physically
motivated constraints. | Daniel Gunlycke, C. Stephen Hellberg, John P. T. Stenger | 2023-03-27T14:21:01Z | http://arxiv.org/abs/2303.15237v3 | # Cascaded variational quantum eigensolver algorithm
###### Abstract
We present a cascaded variational quantum eigensolver algorithm that only requires the execution of a set of quantum circuits once rather than at every iteration during the parameter optimization process, thereby reducing the number of needed circuit executions. This algorithm lets a quantum processing unit probe all the needed probability mass functions and a classical processing unit perform all the remaining calculations, including the variational optimization. The ansatz form does not restrict the solution space and provide full control over the parameter space, including the implementation of symmetry and other physically motivated constraints.
Quantum computing (QC) offers inherent advantages over classical computing (CC) for solving certain mathematical tasks [1; 2; 3; 4; 5; 6; 7; 8]. One of the most promising application areas is the simulation of quantum-mechanical systems [7; 9; 10]. Because the size of the Hilbert space comprising the quantum states of a quantum-mechanical system increases exponentially with the system size, performing operations on this space is an intractable task for conventional classical computers for all but the smallest systems. A quantum computer, on the other hand, can process exponentially large vector spaces, including Hilbert spaces, by mapping them to the Hilbert space of a quantum register--the size of which increases exponentially with the number of qubits--and then performing quantum gate operations on this register.
The two main algorithms for QC calculations of quantum-mechanical systems are the quantum phase estimation algorithm [11] and the variational quantum eigensolver (VQE) algorithm [12]. By recruiting classical computers for computationally efficient tasks, the latter algorithm requires relatively few gate operations, which limits the decoherence during the computations. As less exposure to decoherence allows for higher computational fidelities, this algorithm has a reduced need for quantum error correction, making it ideal for current noisy intermediate-scale quantum (NISQ) computers [13]. Since its introduction, the VQE algorithm has been applied to calculate the ground-state energy of a number of systems in chemistry and physics [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. A disadvantage of the VQE algorithm is that it requires many quantum circuit executions as part of its iterative energy minimization process. To prepare the needed trial states, the quantum circuit includes the implementation of an ansatz described by some operator \(\hat{A}\) that depends on a collection of variational parameters \(\theta\) generated by means of CC. This dependence necessarily leads to a lot of back-and-forth communication between the quantum processing unit (QPU) and the classical processing unit (CPU).
To address this challenge, we propose a cascaded variational quantum eigensolver (CVQE) algorithm that separates the quantum circuit executions from the parameter optimization. This variant is centered around an ansatz described by the ansatz operator in the form
\[\hat{A}(\theta)=e^{i\hat{\lambda}(\theta)}\hat{U}, \tag{1}\]
where \(\hat{\lambda}(\theta)\) is any diagonal operator and \(\hat{U}\) is any unitary operator, which in contrast to the commonly applied unitary coupled cluster ansatz [14; 15; 12; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46], the hardware-efficient ansatz [47; 48; 49; 50; 51; 52; 53; 54], and those used in various adaptive or trainable VQE algorithms [55; 56; 57; 58; 59; 60; 61; 62], is independent of \(\theta\).
To show the method enabling the CVQE algorithm, consider a system of identical fermions described by some Hamiltonian \(\hat{H}\). Our goal is to get an upper bound for the ground-state energy \(E_{\text{g}}\) of the system by applying the variational method of quantum mechanics, which can be stated as
\[E_{\text{g}}\leq\min_{\theta}E(\theta), \tag{2}\]
where \(E(\theta)\) denotes an energy expectation value.
Our method begins with the vacuum state \(\ket{0}\) in some antisymmetric Fock space \(\mathcal{F}\) that serves as a representation space for the quantum states of our chosen fermionic system. We then apply a unitary operator \(\hat{U}\) that acts on \(\mathcal{F}\) to produce the initial state
\[\ket{\Psi_{0}}=\hat{U}\ket{0} \tag{3}\]
for the variational optimization. The energy expectation value of the ansatz state \(\ket{\Psi(\theta)}=\hat{A}(\theta)\ket{0}\) can then be expressed as
\[E(\theta)=\frac{\bra{\Psi_{0}}e^{-i\hat{\lambda}^{\dagger}(\theta)}\hat{H}e^{ i\hat{\lambda}(\theta)}\ket{\Psi_{0}}}{\bra{\Psi_{0}}e^{-i\hat{\lambda}^{ \dagger}(\theta)}e^{i\hat{\lambda}(\theta)}\ket{\Psi_{0}}}. \tag{4}\]
To make further progress, we need a basis for the Fock space \(\mathcal{F}\). However, first we introduce the basis \(\{\ket{\psi_{q}}\}\) indexed by \(\mathcal{Q}=\{1,2,...,Q\}\) for the \(Q\)-dimensional, one-fermion Hilbert space \(\mathcal{H}\). For all \(q\in\mathcal{Q}\), we let \(n_{q}\) in the set \(\mathbb{B}=\{0,1\}\) be equal to \(0\) and \(1\) when the one-fermion basis state \(\ket{\psi_{q}}\) is unoccupied and occupied, respectively. A basis \(\{\ket{n}\}\) for \(\mathcal{F}=\mathcal{F}(\mathcal{H})\) labeled by the fermionic configurations \(n=(n_{q})_{q\in\mathcal{Q}}\) of occupation numbers can then be formed by the Fock states \(\ket{n}=C_{n}^{\dagger}\ket{0}\), for all
\(n\in\mathbb{B}^{Q}\), where
\[C_{n}^{\dagger}=\prod_{q\in\mathcal{Q}}\left(c_{q}^{\dagger}\right)^{n_{q}}, \tag{5}\]
where \(c_{q}^{\dagger}\) denotes fermionic creation operators. Using this basis, we can express the diagonal operator in Eq. (1) as
\[\hat{\lambda}(\theta)=\sum_{n\in\mathbb{B}^{Q}}\lambda_{n}(\theta)\left|n \right\rangle\!\left\langle n\right|, \tag{6}\]
where \(\lambda_{n}(\theta)\in\mathbb{C}\) are given coefficients that, together with those of \(\hat{U}\), define a specific ansatz operator \(\hat{A}(\theta)\).
As the dimension of the Fock space \(\mathcal{F}\) is \(2^{Q}\), and thus increases exponentially with \(Q\), the energy expectation value in Eq. (4) cannot in general be calculated efficiently using a CPU alone. A QPU, on the other hand, is able to process such large spaces. Looking for an isomorphism between \(\mathcal{F}\) and the Hilbert space for the qubit register in the QPU, we consider a register comprising \(Q\) qubits. As the qubits are distinguishable, the register Hilbert space can be expressed as the tensor power \(\mathtt{H}^{\otimes Q}\), where \(\mathtt{H}\) is the two-dimensional one-qubit Hilbert space. We can take advantage of tensor power structure by, for each qubit \(q\in\mathcal{Q}\), letting \(\{\left|0\right\rangle,\left|1\right\rangle\}\) be a basis for its space \(\mathtt{H}_{q}\) and mapping the occupation number \(n_{q}\) to it. Thus, each Fock state \(\left|n\right\rangle\) in \(\mathcal{F}\) is automatically mapped to the tensor product state
\[\left|n\right\rangle=\bigotimes_{q\in\mathcal{Q}}\left|n_{q}\right\rangle \tag{7}\]
in \(\mathtt{H}^{\otimes Q}\).
The probability that a measurement on a QPU in the basis \(\{\left|n\right\rangle\}\) collapses the quantum state \(\hat{R}_{m}\left|\Psi_{0}\right\rangle\), where \(\hat{R}_{m}\) is a unitary operator, to the state \(\left|n\right\rangle\) associated with a particular outcome \(n\) is given by the probability mass function
\[\mathrm{P}[\hat{R}_{m}\Psi_{0}\!\mapsto\!n]=\big{|}\langle\Psi_{0}|\hat{R}_{m }^{\dagger}|n\rangle\big{|}^{2}. \tag{8}\]
By performing a number of identical measurements of \(\hat{R}_{m}\left|\Psi_{0}\right\rangle\) and recording the outcome \(n_{s}^{m}\) of each shot \(s\) in some set \(\mathcal{S}\), we obtain a set of fermionic configurations \(\{n_{s}^{m}\!:\!s\in\mathcal{S}\}\). Given this set, we can then approximate the expectation value of a function \(f(n)\) with the arithmetic mean, yielding
\[\sum_{n\in\mathbb{B}^{Q}}f(n)\,\mathrm{P}[\hat{R}_{m}\Psi_{0}\!\mapsto\!n] \approx\frac{1}{S}\sum_{s\in\mathcal{S}}f(n_{s}^{m}), \tag{9}\]
where the number of shots \(S=\left|\mathcal{S}\right|\) is chosen such that desired statistical accuracy is attained.
Therefore, by performing a set of measurements of \(\left|\Psi_{0}\right\rangle\) on a QPU to produce \(\{n_{s}\!:\!s\in\mathcal{S}\}\) and then applying the mean approximation above to the denominator in Eq. (4), we obtain
\[\langle\Psi_{0}|e^{-i\hat{\lambda}^{\dagger}(\theta)}e^{i\hat{\lambda}(\theta )}|\Psi_{0}\rangle\approx\frac{1}{S}\sum_{s\in\mathcal{S}}e^{-2\operatorname{ Im}\lambda_{n_{s}}(\theta)}, \tag{10}\]
where the summation on the right-hand side is performed on a CPU.
To calculate the corresponding expectation value in the numerator in Eq. (4), we first decompose the Hamiltonian, which without loss of generality, can be expressed as
\[\hat{H}=\sum_{l\in\mathcal{L}}h_{l}\,C_{n_{l}^{+}}^{\dagger}C_{n_{l}^{-}}, \tag{11}\]
where the set \(\mathcal{L}\) contains the indices of all nonzero interactions, each specified by a coefficient \(h_{l}\in\mathbb{C}\) and two tuples \(n_{l}^{\pm}=(n_{lq}^{\pm})_{q\in\mathcal{Q}}\). For each interaction \(l\in\mathcal{L}\), we can divide the index set \(\mathcal{Q}\) into two complementary subsets \(\mathcal{Q}_{l}=\{q\in\mathcal{Q}:\delta_{n_{lq}^{+}n_{lq}^{-}}=0\}\) and \(\bar{\mathcal{Q}}_{l}=\mathcal{Q}\setminus\mathcal{Q}_{l}\), where \(\delta\) is the Kronecker delta. As only those \(q\in\mathcal{Q}\) with either a creation operator \(c_{q}^{\dagger}\) or annihilation operator \(c_{q}\) in the interaction are placed in \(\mathcal{Q}_{l}\), this division separates out the one-fermion basis states \(\left|\psi_{q}\right\rangle\) that are coupled to others by the interaction. Number operators \(\hat{n}_{q}=c_{q}^{\dagger}c_{q}\) do not couple different states and their indices are thus placed in \(\bar{\mathcal{Q}}_{l}\). As far as the interaction \(l\) goes, the one-fermion Hilbert space can therefore be decomposed as the internal direct sum \(\mathcal{H}=\mathcal{H}_{l}\oplus\bar{\mathcal{H}}_{l}\) with the bases \(\{\left|\psi_{q}\right\rangle\}_{q\in\mathcal{Q}_{l}}\) and \(\{\left|\psi_{q}\right\rangle\}_{q\in\bar{\mathcal{Q}}_{l}}\) for the two subspaces \(\mathcal{H}_{l}\) and \(\bar{\mathcal{H}}_{l}\), respectively. As there is no coupling between the subspaces, we can split the system into two subsystems with the representation spaces \(\mathcal{F}_{l}=\mathcal{F}_{l}(\mathcal{H}_{l})\) and \(\bar{\mathcal{F}}_{l}=\bar{\mathcal{F}}_{l}(\bar{\mathcal{H}}_{l})\), and then define the isomorphism between \(\mathcal{F}\) and \(\mathcal{F}_{l}\otimes\bar{\mathcal{F}}_{l}\) by the bijection \(n\mapsto(\nu,\bar{\nu})\), denoted \(\nu\bar{\nu}\) below for brevity, for all \(n\in\mathbb{B}^{Q}\), where \(\nu=(n_{q})_{q\in\mathcal{Q}_{l}}\) and \(\bar{\nu}=(n_{q})_{q\in\bar{\mathcal{Q}}_{l}}\) are subfamilies of \(n\). We assume that the number of interaction terms is bound by some even power of \(Q\) with the exponent being independent of the system size (e.g., \(Q^{4}\), for systems described by two-fermion interactions). The cardinality \(Q_{l}=\left|\mathcal{Q}_{l}\right|\) must therefore be independent of \(Q\), and consequently, only the dimension of \(\bar{\mathcal{F}}_{l}\), and not that of \(\mathcal{F}_{l}\), increases exponentially with the system size.
To be able to apply the measurement approach described above again, we need the state \(\left|\Psi_{0}\right\rangle\) to only appear within modulus squares in the form shown in Eq. (8). This requires that we diagonalize every term in the Hamiltonian. As each interaction \(l\in\mathcal{L}\) on \(\bar{\mathcal{F}}_{l}\) is already diagonal, we focus on the space \(\mathcal{F}_{l}\). From the definition of \(\mathcal{Q}_{l}\), it follows that \(n_{lq}^{-}=1-n_{lq}^{+}\), for all \(q\in\mathcal{Q}_{l}\). As a result, the interaction on \(\mathcal{F}_{l}\) is represented by a single element on the counter diagonal. We can thus expand this interaction in a basis \(\{\hat{V}_{m}\}\) composed of \(2^{Q_{l}}\) counter-diagonal unitary operators. One such basis can be formed by the operators
\[\hat{V}_{m}=\hat{\Pi}_{l}\bigotimes_{q\in\mathcal{Q}_{l}}\hat{\sigma}_{m_{q}} \bigotimes_{q\in\bar{\mathcal{Q}}_{l}}\hat{\sigma}_{0}, \tag{12}\]
on \(\mathtt{H}^{\otimes Q}\), for all \(m\in\mathcal{M}_{l}\), where \(\mathcal{M}_{l}=\{x,y\}^{Q_{l}}\), and where \(\hat{\Pi}_{l}\) is a permutation operator that places the operators in the tensor products in the correct order given by \(\mathcal{Q}\)
and \(\hat{\sigma}_{x}\) and \(\hat{\sigma}_{y}\), along with \(\hat{\sigma}_{z}\) appearing below, are Pauli operators, and \(\hat{\sigma}_{0}\) is the identity operator. Each unitary basis operator can furthermore always be transformed such that \(\hat{V}_{m}=\hat{R}_{m}^{\dagger}\hat{D}_{m}\hat{R}_{m}\), where \(\hat{R}_{m}\) is another unitary operator and \(\hat{D}_{m}\) is a diagonal unitary operator. Because \(\hat{V}_{m}\) is a tensor product, each operator can be diagonalized separately by the rotation operators
\[\hat{R}_{x}=\frac{\hat{\sigma}_{0}+i\hat{\sigma}_{y}}{\sqrt{2}},\hskip 28.452756pt \hat{R}_{y}=\frac{\hat{\sigma}_{0}-i\hat{\sigma}_{x}}{\sqrt{2}}, \tag{13}\]
which describe single-qubit rotations around the \(y\) and \(x\) axes by \(-\pi/2\) and \(\pi/2\), respectively, so that \(\hat{\sigma}_{x}=\hat{R}_{x}^{\dagger}\hat{\sigma}_{z}\hat{R}_{x}\) and \(\hat{\sigma}_{y}=\hat{R}_{y}^{\dagger}\hat{\sigma}_{z}\hat{R}_{y}\), for all \(q\in\mathcal{Q}_{l}\). We thus find that one transformation of \(\texttt{H}^{\otimes\mathcal{Q}}\) is given by
\[\hat{R}_{m}=\hat{\Pi}_{l}\bigotimes_{q\in\mathcal{Q}_{l}}\hat{R}_{x}^{\hat{ \sigma}_{m_{x}}}\hat{R}_{y}^{\hat{\sigma}_{m_{q}y}}\bigotimes_{q\in\mathcal{Q} _{l}}\hat{\sigma}_{0}, \tag{14}\]
for all \(m\in\mathcal{M}_{l}\), and \(\hat{D}_{m}=\hat{D}\), where
\[\hat{D}=\hat{\Pi}_{l}\bigotimes_{q\in\mathcal{Q}_{l}}\hat{\sigma}_{z}\bigotimes _{q\in\mathcal{Q}_{l}}\hat{\sigma}_{0}. \tag{15}\]
Now that we have decomposed the Hamiltonian in the required form, we prepare the state \(\hat{R}_{m}\ket{\Psi_{0}}\) and perform a set of measurements on the QPU, for every \(m\in\mathcal{M}_{l}\). This produces sets of fermionic configurations \(\{\nu_{s}^{m}\hat{\nu}_{s}^{m}:s\in\mathcal{S}\}\) that we then use to calculate the numerator in Eq. (4), which we find to be
\[\bra{\Psi_{0}}e^{-i\hat{\lambda}^{\dagger}(\theta)}\hat{H}e^{i \hat{\lambda}(\theta)}\ket{\Psi_{0}} \approx\frac{1}{S}\sum_{l\in\mathcal{L}}\frac{h_{l}}{2^{Q_{l}}} \sum_{m\in\mathcal{M}_{l}}\sum_{s\in\mathcal{S}}\Upsilon_{lm\nu_{s}^{m}\hat{\nu }_{s}^{m}}\] \[\times e^{-i\lambda_{\nu_{s}^{+}}\hat{\nu}_{s}^{m}(\theta)}e^{i \lambda_{\nu_{s}^{-}}\hat{\nu}_{s}^{m}(\theta)}, \tag{16}\]
where
\[\Upsilon_{lm\nu_{s}^{m}\hat{\nu}_{s}^{m}}=(-1)^{\Pi_{l}\hat{\nu}_{s}^{m}}\prod _{q\in\mathcal{Q}_{l}}[(-1)^{\nu_{u}^{+}}i]^{\delta_{m_{q}y}}(-1)^{\nu_{u_{q}} ^{-}}\prod_{q\in\mathcal{Q}_{l}}\delta_{\rho_{qq}^{m}1}^{\tilde{\rho}_{l_{q}^{ m}}^{+}} \tag{17}\]
are phase factors or zero, where \(\Pi_{l\hat{\nu}_{s}^{m}}\) is the number of permutations of creation and annihilation operators performed to separate the two subsystems.
Once we have collected the needed sets of fermionic configurations \(\{n_{s}:s\in\mathcal{S}\}\) and \(\{n_{s}^{m}:s\in\mathcal{S}\}\) from the QPU, we can calculate the energy expectation value \(E(\theta)\) in Eq. (4) using Eqs. (10), (16), and (17), for any collection of variational parameters \(\theta\). This lets us perform the optimization of \(\theta\) in the CVQE algorithm, in contrast to other variants, using a CPU, without further involvement of the QPU, until the energy has been minimized. See Fig. 1, for a schematic of this process.
The ansatz operator \(\hat{A}(\theta)\) in Eq. (1) is both general and versatile. The generality follows from the isomorphism between the vector space of diagonal operators on \(\mathbb{C}\) and the Fock space \(\mathcal{F}\). Provided a surjective function \(\hat{\lambda}(\theta)\) and a unitary operator \(\hat{U}\) such that all the coefficients \(u_{n}=\bra{n}\hat{U}\ket{0}\in\mathbb{C}\) are nonzero, any quantum state is the image of a unique diagonal operator \(\hat{\lambda}=\hat{\lambda}(\theta)\) and that image is the ansatz state
\[\ket{\Psi(\theta)}=\sum_{n\in\mathbb{B}^{\otimes}}e^{i\lambda_{n}(\theta)}u_{ n}\ket{n}. \tag{18}\]
Furthermore, there exists an operator \(\hat{\lambda}=\hat{\lambda}(\theta)\), for which the ansatz state is the ground state and the energy of this state, and its equivalent unit-normalized state \(\ket{\Psi(\theta)}\), is given by the equality in Eq. (2).
In practice, however, because of limited CC resources, for a sufficiently large system, there is no feasible parametric equation \(\hat{\lambda}=\hat{\lambda}(\theta)\), for which the ansatz state is a surjective function. Therefore, we will somehow need to restrict the parameter space \(\Theta\). The ansatz form herein makes that process physically intuitive. By letting \(\lambda_{n}\to i\infty\), we can eliminate any Fock state \(\ket{n}\) that is not in the same irreducible representation of the symmetry group on the Fock space as the ground state. In addition, we can enforce symmetry-adapted linear combinations of Fock states by constraints on the parameter space. See Fig. 2, for an illustration. We can even choose our parametric equation to be function of physically motivated many-fermion operators (e.g., the double-occupancy operator in the Hubbard model). An illustration of this approach is described in a forthcoming paper [63] that implements the Gutzwiller ansatz [64] within the cascaded VQE algorithm.
Another challenge, as system sizes increase, is going to be the limited number of shots \(S\). If \(S\ll 2^{Q}\), our mean approximation of \(E(\theta)\) will be independent of the majority of coordinates \(\lambda_{n}\), and concomitantly, the associated Fock states \(\ket{n}\) in the expansion of \(\ket{\Psi(\theta)}\) will have its
Figure 1: Schematic of the cascaded variational quantum eigensolver algorithm. The QPU executes quantum circuits to generate quantum states \(\hat{R}_{m}\hat{U}\ket{0}\) that when measured yield a collection of occupation numbers \((n_{1},n_{2},...,n_{Q})\) recorded as the tuple \(n_{s}^{m}\). Repeating the same measurements multiple times over the set of shots \(\mathcal{S}\) produces sets of fermionic configurations \(\{n_{s}^{m}:s\in\mathcal{S}\}\), which are passed on as input to the CPU. The CPU uses these sets together with a generated collection of variational parameters \(\theta\) to compute the expectation value of the Hamiltonian \(\hat{H}\) for the ansatz state \(\hat{A}(\theta)\ket{0}\). The CPU then completes the parameter optimization to obtain the sought minimum of the energy expectation value \(E(\theta)\).
coefficients \(u_{n}e^{i\lambda_{n}(\theta)}\) fixed by \(\hat{U}\). Therefore, the closer \(|\Psi(0)\rangle=|\Psi_{0}\rangle\) is to the sought ground state, the better. For a gradually refinement of the state \(|\Psi_{0}\rangle\), we can execute the CVQE algorithm herein iteratively. For each iteration \(i\in\{0,1,...,I\}\), in this refinement algorithm, we: (i) define and implement \(\tilde{U}_{i}\) such that \(|\tilde{\Psi}_{i}(0)\rangle=\tilde{U}_{i}\,|0\rangle\); (ii) minimize the energy to obtain \(|\Psi_{i}(\theta)\rangle\); (iii) normalize \(|\Psi_{i}(\theta)\rangle\) to get \(|\tilde{\Psi}_{i}(\theta)\rangle\); and (iv) let \(|\tilde{\Psi}_{i+1}(0)\rangle=|\tilde{\Psi}_{i}(\theta)\rangle\). Moreover, as part of each iteration, we can also redefine the parametric equation, which would allow for the exploration of new parts of the Fock space. Even without employing this iterative algorithm, it can be useful to normalize \(|\Psi(\theta)\rangle\) and then implement a new \(\hat{U}\) for the normalized state, as this would allow the normalized, optimized ansatz state to be prepared in its entirety on the QPU, which would allow for the calculation of other observables using the QPU.
This work has been supported by the Office of Naval Research (ONR) through the U.S. Naval Research Laboratory (NRL). J.P.T.S. thanks the National Research Council Research Associateship Programs for support during his postdoctoral tenure at NRL. We acknowledge quantum computing resources from IBM through a collaboration with the Air Force Research Laboratory (AFRL).
|
2305.09590 | Determination of optimal experimental conditions for accurate 3D
reconstruction of the magnetization vector via XMCD-PEEM | In this work we present a detailed analysis on the performance of X-ray
magnetic circular dichroism photo-emission electron microscopy (XMCD-PEEM) as a
tool for vector reconstruction of the magnetization. For this, we choose
360$^{\circ}$ domain wall ring structures which form in a synthetic
antiferromagnet as our model to conduct the quantitative analysis. We assess
how the quality of the results is affected depending on the number of
projections that are involved in the reconstruction process, as well as their
angular distribution. For this we develop a self-consistent error metric, which
indicates that the main factor of improvement comes from selecting the
projections evenly spread out in space, over having a larger number of these
spanning a smaller angular range. This work thus poses XMCD-PEEM as a powerful
tool for vector imaging of complex 3D magnetic structures. | Miguel A. Cascales Sandoval, A. Hierro-Rodríguez, S. Ruiz-Gómez, L. Skoric, C. Donnelly, M. A. Niño, D. McGrouther, S. McVitie, S. Flewett, N. Jaouen, R. Belkhou, M. Foerster, A. Fernández-Pacheco | 2023-05-16T16:40:00Z | http://arxiv.org/abs/2305.09590v4 | # 3D reconstruction of the magnetization vector via XMCD-PEEM
###### Abstract
In this work we present a detailed analysis on the performance of X-ray magnetic circular dichroism photo-emission electron microscopy (XMCD-PEEM) as a tool for vector reconstruction of the magnetization. For this, we choose 360\({}^{\circ}\) domain wall ring structures which form in a synthetic antiferromagnet as our model to conduct the quantitative analysis. We assess how the quality of the results is affected depending on the number of projections that are involved in the reconstruction process, as well as their angular distribution. For this we develop a self-consistent error metric, which indicates that the main factor of improvement comes from selecting the projections evenly spread out in space, over having a larger number of these spanning a smaller angular range. This work thus poses XMCD-PEEM as a powerful tool for vector imaging of complex 3D magnetic structures.
3D vector reconstruction, XMCD-PEEM, Nanomagnetism, 360\({}^{\circ}\) domain wall rings.
## I Introduction
The field of nanomagnetism has rapidly evolved over the last few decades, due to significant advances and developments in fabrication and synthesis methods [1]. These improvements enable to fabricate different natured magnetic systems with complex 3D configurations of the magnetization vector, as opposed to the traditional simple mono-domain magnetic devices. The increase in complexity of magnetic systems [2; 3] requires the adaptation and development of versatile characterization methods, where high magnetic sensitivity, spatial and temporal resolutions are some of the most important attributes.
Diverse laboratory-based modern characterization techniques are utilized to study the properties of materials via magnetic imaging, such as: magnetic force microscopy (MFM) [4], the different Lorentz transmission electron microscopy (L-TEM) modes [5; 6], electron holography [7], scanning electron microscopy with polarization analysis (SEMPA) [8; 9], spin-polarized low energy electron microscopy (SPLEEM) [10; 11], and the techniques which exploit the magneto-optical Kerr effect (MOKE) to perform wide-field [12; 13; 14] or scanning microscopy [12].
Analogous to MOKE, although in the X-ray regime, synchrotron-based characterization techniques exploit the strong coupling that exists between photons and magnetism. X-rays offer high lateral resolution due to the short wavelengths, as well as element specificity that arises from the need to tune the photon energy to the absorption edge of the element in question. Imaging setups may be divided in two geometries: transmission and electron yield [15]. Transmission X-ray microscopy (TXM) [16; 17], scanning transmission X-ray microscopy (STXM) [18] and coherent diffractive imaging (CDI) techniques such as pychography [19] and holography [20], all analyze the X-rays after passing through the magnetic material. Different strategies may be followed for tomographic reconstruction of the 3D magnetization vector [21; 22; 23; 24; 25], depending on the geometry and properties of the sample under investigation. This differs from photoemission electron microscopy (PEEM), or electron yield, where X-rays which have interacted with the material under investigation are not directly collected, but rather the photoelectrons emitted as a consequence of such interaction. Due to the short electron mean free path, PEEM is an excellent candidate for investigating very thin structures close to the surface, _e.g._, the top layers of a multilayer heterostructure.
Previous works have utilized X-ray magnetic circular dichroism PEEM (XMCD-PEEM) to reconstruct the spatially resolved magnetization vector, by combining images taken at different relative X-ray/sample orientations [15; 26; 27; 28; 29; 30; 31]. Here, we perform a detailed investigation on how the quality of the reconstructed 3D magnetization vector changes depending on the number of projections involved, as well as their angular distribution. For this, 360\({}^{\circ}\) domain wall (DW) ring structures are chosen as the model to perform the reconstruction, given their small size which pushes the microscope's resolution, and the complex winding sense of the magnetization. These textures are found to form in a synthetic antiferromagnet (SAF) multilayer heterostructure which shows Interlayer Dzyaloshinskii-Moriya interactions (IL-DMI) [32]. For further
details on their formation refer to [33].
In order to carry out this analysis, the algorithm first aligns the different projections with respect to each other, in such a way that they hold the same spatial orientation. Then, a thorough analysis which consists of running the reconstruction algorithm for different combinations of XMCD projections measured at different angles is performed, applying to the resulting magnetization vectors an error metric that quantitatively gives account of the quality of the reconstruction. Results evidence that having a larger number of projections is not the main factor of improvement, but rather selecting the azimuthal angles of these projections evenly spread out through the 360\({}^{\circ}\).
## II Methods
### Experimental set-up
The SAF layered structure investigated in this work consists of [Si/Ta (4 nm)/Pt (10 nm)/Co (1 nm)/Pt (0.5 nm)/Ru (1 nm)/Pt (0.5 nm)/CoFeB (2 nm)/Pt (2 nm)/Ta (4 nm) [32]; where the ferromagnetic layers are asymmetric in material and in thickness. The Co layer has dominating out of plane (OOP) anisotropy enhanced by the Pt layers at the interfaces, whereas the CoFeB layer's thickness has been tuned slightly above its spin reorientation transition (SRT), showing moderately low in plane (IP) anisotropy.
Prior to performing the synchrotron experiments, a series of repeating Pt\({}_{x}\)C\({}_{1-x}\) patterns consisting of rectangles and squares were deposited via focused electron beam induced deposition (FEBID) on top of the film surface. Respectively, the size of the squares and rectangles are \(1\mu m\times 1\mu m\) and \(2\mu m\times 1\mu m\), both being 50 \(nm\) thick. These are arranged in a square fashion, located at the midpoint of the sides of a 7\(\mu m\) square as schematically shown in figure 1. They serve the purpose of providing a non-magnetic signal reference within the field of view (FOV), given that the magnetism dependent photoelectrons do not possess the sufficient energy to escape the sample's surface through this additional bit of material. The non-magnetic signal reference is crucial for properly computing the final XMCD images as there might be slight flux differences and flux spatial distribution when changing polarization, which would alter the amount of emitted photoelectrons inducing ficticious magnetic contrast. Thus, these corrections and references are crucial in order to be quantitative with PEEM.
The microscopy measurements were taken at the PEEM endstation of CIRCE beamline in ALBA Synchrotron [34]. The sample is transferred to the PEEM chamber mounted on a holder with a dipolar electromagnet, providing the capability of applying IP uniaxial magnetic fields [35]. It is mounted in such a way that the nominal easy axis (given by the Pt\({}_{x}\)C\({}_{1-x}\) rectangle's long axis) is aligned with the external magnetic field direction (\(\vec{B}_{ext}\)). The system allows rotation of the sample with respect to the surface normal, effectively changing the projection of the incoming X-ray beam onto the sample's directions, as evidenced by figure 1. Measuring at different X-ray/sample relative orientations provides sensitivity to different components of the magnetization vector, given that in XMCD-PEEM magnetic contrast is given by \(\vec{k}\cdot\vec{m}\)[36], with \(\vec{k}\) and \(\vec{m}\) representing respectively the X-ray wave-vector and the magnetization vector.
### XMCD image measurement and post-processing
The procedure followed in this work to obtain XMCD images is very similar to the one discussed in [15]. After reaching the desired magnetic state, 256 images are recorded for each incoming X-ray circular polarization in order to perform posterior averaging and improve the signal-to-noise ratio. Prior to the subsequent averaging of the same polarization images, a normalization is performed where each individual image is divided in a pixel-wise operation by a largely defocused image in order to remove channelplate contributions. Once the channelplate contributions are removed, each polarization stack of images is individually aligned in order to correct for potential drifts during the time of measurement. For this, _python's scikit-image_ library [37] is used, where sub-pixel alignment is performed utilizing its Fourier-space cross-correlation algorithm. The alignment is done by selecting a region of interest (ROI) with a clear, sharp feature, which in this case is chosen to be one of the FEBID deposited landmarks within the FOV. It is crucial to perform the channelplate correction prior to the alignment of each stack, otherwise artifacts due to the translation would be induced. In addition to the image alignment, an
Figure 1: Diagram describing the sample rotation with respect to the X-ray beam for measurement of different XMCD-PEEM projections. The X-ray wave-vector \(\vec{k}\) is given by the black arrow, the circular X-ray polarization eigenmodes by the blue and red circular arrows, the magnetization vector (\(\vec{m}\)) by the dark blue arrow, the external magnetic field (\(\vec{B}_{ext}\)) by the orange arrow, \(\theta_{k}\) is the incidence angle with respect to the surface plane, and \(\varphi_{k_{0}}\) and \(\varphi_{k_{1}}\) are the different relative angles between X-ray beam and sample.
equalization in image brightness is performed per polarization stack. This is done to take into account and correct for potential X-ray flux variations during the time of measurement. The algorithm finds proportionality factors which equalize the intensity in the \(\mathrm{Pt_{x}C_{1-x}}\) deposits for each of the images within the stack, and applies them as global intensity factors to the full image.
The averaging of the two aligned stacks of images is now performed, giving as a result two averaged images. The cross-correlation algorithm is utilized again now for aligning these two images, and the intensity equalization is similarly done by finding a factor \(f\) which relates the intensity in the \(\mathrm{PtC_{x}}\) deposits, _i.e._, \(f=I_{CL}/I_{CR}\). The final XMCD image is computed as \(I_{XMCD}=(I_{CL}-f\cdot I_{CR})/(I_{CL}+f\cdot I_{CR})\)[36], where these are all pixel-wise operations.
### Magnetization vector reconstruction
To perform reconstruction of the 3 components of the magnetization vector, a minimum of three different projections are required in order to create a solvable system of equations with unique solutions. Experimentally, this is achieved by rotating the sample in the PEEM chamber about the sample normal and taking XMCD projections at different orientations, as sketched in figure 1. The XMCD images at each of the azimuthal angles are computed utilizing the procedure described in the previous section, although these host different spatial orientations due to the relative rotation between sample and camera. To correct for this, a new protocol which aligns the different azimuthal charge projections (computed as \(I_{CL}+f\cdot I_{CR}\)) to one another is developed. Charge images are used for this, given that their contrast is independent of the magnetic configuration and azimuthal orientation, unlike the XMCD signals.
First, a single projection's spatial orientation is chosen as a reference, with respect to which the rest of the projections are aligned to. For this, the algorithm finds the most suitable affine transformation parameters: rotation, translation, scale and shear, which take the distorted projection to the reference. Scale and shear adjustments are necessary to correct image deformations introduced by the electron optics upon sample rotation. The error metric defined for this consists of the pixel-wise squared distance between both charge images, and the effectiveness of the procedure is further enhanced by applying a combination of Sobel edge and high-pass filtering algorithms to give more weight to the edges, which serve as alignment features. The optimized affine transformation parameters, which are found from running the algorithm on the charge images, are in the end applied to the corresponding XMCD images.
With the different projections now aligned, the magnetization vector is reconstructed by fitting at each pixel the associated XMCD azimuthal profile to the model, as given by expression 1. \(\theta_{k}\) and \(\varphi_{k}\) are the independent (or known) parameters which describe the normalized X-ray wave-vector, corresponding respectively to the X-ray incidence angle with respect to the sample's surface, and the azimuthal rotation angle. These angles are known from the experimental setup. The remaining are the unknown (or fit) parameters: \(|\vec{m}|,\theta_{m}\) and \(\varphi_{m}\), which are the modulus, polar and azimuthal angles of the magnetization vector, respectively. Ten fits are done per pixel, where in each of these different random initial guesses are given to the fit parameters to avoid getting pinned in local minima due to the parameter landscape.
\[\mathrm{XMCD}(\theta_{k},\varphi_{k},|\vec{m}|,\theta_{m},\varphi_{m})=\vec{k }(\theta_{k},\varphi_{k})\cdot\vec{m}(|\vec{m}|,\theta_{m},\varphi_{m}) \tag{1}\]
### Error metric and analysis
The main objective of this work is to investigate how the quality of the reconstructed results varies depending on the data used, _i.e._, not only the amount of projections involved, but also if any particular combination of sample orientations are more beneficial than others. In order to be quantitative in this endeavour, an error metric needs to be defined. The procedure followed for this is sketched in figure 2, where 8 is the total number of available projections (since this is the amount measured experimentally). A combination of projections is picked, represented by the white circles (with a minimum of 3 and a maximum of 7), which are then fed to the fitting algorithm to obtain a spatially-resolved magnetization vector. With this vector configuration, the XMCD model is now applied in reverse, artificially generating the projections which were not involved in the reconstruction (black circles of the initial experimental projections). These artificially generated projections are now substracted with their corresponding experimental real XMCD images. The resulting difference images are squared and summed, normalizing the resulting quantity by the number of images involved. The pixelwise error metric corresponding to this process is mathematically described by \(\triangle^{2}=|I_{exp}-I_{arr}|^{2}\).
An intuitive way to interpret the meaning of this metric is the following: utilizing part of the available experimental information, the reconstruction algorithm is run. Since the ground truth or real magnetic configuration is not known to compare how accurate the reconstruction is, the only comparison that can be made with real data is with respect to the other experimental projections. In order to do that, these are generated artificially utilizing the XMCD model, and compared in a pixel-wise operation.
## III Results and discussion
In previous work, ring-like structures were observed to form within the FOV of the SAF after applying particular external magnetic field cycling procedures [33]. To perform vector reconstruction of the magnetization within these rings, 8 projections were measured at the Co \(L_{3}\) edge (775.2 eV) with \(\theta_{k}=16^{\circ}\) (large sensitivity to IP components). The signals obtained in this configuration are expected to come exclusively from the top CoFeB layer and not from the bottom Co, as the
layered structure prevents the signal from the Co bottom layer to reach the surface due to the short electron mean free path.
The 8 experimental projections are shown in figure 3 (a), after having applied the image processing and projection alignment algorithms described in methods. The magnetic signal in these images is determined to be coming mostly from IP components, given that it varies upon azimuthal rotation (OOP magnetization would be insensitive to an azimuthal rotation). The resulting 3D magnetization vector's spherical components obtained after applying the reconstruction fitting algorithm to the 8 projections are shown separately in figures 3 (b,c,d). The IP magnetization vector directions, figure 3 (b), reveal the presence of 360\({}^{\circ}\) DW rings separating the outer and inner domains, which point approximately along \(+x\). The OOP component, figure 3 (c), is very close to zero in the uniformly magnetized areas, although becomes significantly large in the DW area. A large uncertainty is expected for this component, mainly for two reasons. First, the very shallow angle of the incoming X-rays gives small sensitivity to OOP magnetization (proportional to sine of 16\({}^{\circ}\)). Second, in small lengthscales where the magnetization changes rapidly, the resulting magnetic signal measured by the microscope suffers a decrease in amplitude due to the microscope's natural resolution. Thus even if in reality the signal is coming
Figure 3: (a) Aligned experimental XMCD-PEEM projections, whose azimuthal rotation angles are given by the numbers in the inset. 0\({}^{\circ}\) and 90\({}^{\circ}\) are parallel to the \(x\) and \(y\) directions of the inset in (b). (b,c,d) Correspond respectively to the spatially resolved IP directions, modulus, and OOP component of the reconstructed magnetization vector obtained from all 8 experimental projections.
Figure 2: Schematic describing the work-flow of the error metric utilized for quantitatively assessing the quality of the reconstructed magnetization vector. A subset of the initial available experimental projections is taken, in this example 3, 6 and 8 are selected (left white circles). The reconstruction algorithm is applied obtaining the spatially resolved vector given by the matrix, which is then utilized to compute artificially the projections that were not involved in the reconstruction (right, dark gray circles). Finally, these artificially generated projections are substracted in a pixel-wise operation with the experimental ones (black), squaring and summing for all the pixels, and normalizing by the number of images involved (in this particular case 5). This error metric is represented by \(\triangle^{2}\).
from IP magnetization, the decrease in amplitude makes the XMCD profile much more susceptible to noise deforming the expected sinusoidal form, and preventing the algorithm from identifying it as such. The decrease in magnetic signal amplitude due to the microscope's resolution is clearly evident in the spatially resolved modulus component, figure 3 (d), which becomes significantly smaller in the 360\({}^{\circ}\) DW (20-30% relative to the outer uniformly magnetized area). In the ideal case where the microscope had infinite resolution, the modulus of the magnetization vector would be constant throughout the probed space, given that it is made up of the same magnetic material (except if there were inhomogeneities and/or defects which could alter the saturation magnetization). Also, misalignment has a larger negative effect in the quality of the reconstructed results in areas where the magnetic features are of smaller lengthscales, _e.g._, in the ring.
The previously described error metric, \(\triangle^{2}\), is now computed and represented in figure 4 (a) as a function of the projection azimuthal angle images displayed on the x-axis. The points in the graph represent the average value of \(\triangle^{2}\) for all the possible reconstruction combinations which exclude the projection at hand, whereas the error bars give the standard deviation or spread in \(\triangle^{2}\). This graph gives information regarding the quality of each individual XMCD projection, enabling to identify which of these are reliable, _i.e._, better levels of signal-to-noise, smaller misalignments and deformations... Overall, the value of the error metric is of the same order of magnitude for all projections, which implies that the noise level and alignment in between the different angles is quite similar, in all showing how the average error decreases as more projections are involved in the reconstruction. A particular case for these experiments concerns the case of the 45\({}^{\circ}\) projection, where the value of \(\triangle^{2}\) stands above all, having even a larger error for 7 projections than in the rest of the azimuthal angles with 3. This implies that the image quality at this angle in particular is worse than for the other angles, most probably due to imperfect correction and alignment with respect to the others. This error metric thus allows for detection of bad quality images which can be discarded from the final dataset if needed.
Now, the error metric is represented with respect to other different relevant quantities in figures 4 (b,c), as described hereafter. In figure 4 (b), the filled circle curve represents the averaged error for all the possible reconstruction combinations as a function of the number of projections involved in the algorithm. On the contrary, the empty circle curve represents the smallest error obtained for a single combination of projections, _i.e._, the best case for each projection number. Very clearly, the average error decreases significantly as the number of projections increases. For the best case, the error also decreases as the number of projections is made larger, although the improvement is not as pronounced. From the best case, 5 projections appear to be a good compromise between quality and time for measurements (each projection takes about 2 hours of measurement), as 5 projections improves the error in comparison to when using 3 projections by 42%, whereas 5 and 6 by 52% and 61%, respectively.
The fact that the improvement in error for the best case is not as significant as in the average is best understood by looking at the next plot, figure 4 (c). Here, \(\triangle^{2}\) is represented against the average relative angle in between the projections involved in the reconstruction, for cases considering 3 and 4 projections. A very clear trend is observed, which indicates how the error decreases as the spacing between projections becomes larger, converging to similar values for the largest separation possible. This is because the more spread out the projections are, the different components are probed more evenly, having a lower average error in the vector field. Thus, these results reveal that it is more effective to have fewer projections evenly spread in space over having numerous projections spanning a narrow angular range.
## IV Conclusions
In conclusion, we quantitatively assess how in XMCD-PEEM, the quality of a reconstructed 3D magnetization vector depends on the number of projections involved and their
Figure 4: (a) Representation of \(\triangle^{2}\) for the projections whose azimuthal angle is shown in the x-axis of the plot, for different number of projections involved in the algorithm. (b) Representation of the average (filled circle) and lowest (empty circle) value of the error metric \(\triangle^{2}\) as a function of the number of projections involved in the reconstruction. (c) Representation of \(\triangle^{2}\) average with respect the average relative angle in between projections involved for the reconstruction for 3 and 4 projections.
spatial orientation. For this, we use 360\({}^{\circ}\) DW ring structures forming in a SAF multilayer as the model to perform a detailed analysis. We have defined an error metric which uses part of the data for the vector reconstruction, and the remaining for quantitative comparison. Results show that the main factor of improvement is not the number of projections utilized for reconstructing, but rather having these evenly spread through the 360\({}^{\circ}\) angular range. From our results, 5 evenly spread projections are a good compromise between time invested and quality, improving by 42% the error obtained when utilizing 3 projections.
## V Acknowledgments
This work was supported by UKRI through an EPSRC studentship, EP/N509668/1 and EP /R513222/1, the European Community under the Horizon 2020 Program, Contract No. 101001290 (3DNANOMAG), the MCIN with funding from European Union NextGenerationEU (PPRTR-C17.11), and the Aragon Government through the Project Q-MAD. The raw data supporting the findings of this study will be openly available at the DIGITAL_CSIC repository.
A.H.-R. acknowledges the support by Spanish MICIN under grant PID2019-104604RB/AEI/10.13039/501100011033 and by Asturias FICYT under grant AVD/2021/51185 with the support of FEDER funds. S.R-G. acknowledges the financial support of the Alexander von Humboldt foundation. L.S. acknowledges support from the EPSRC Cambridge NanoOTC EP/L015978/1. C.D. acknowledges funding from the Max Planck Society Lise Meitner Excellence Program. The ALBA Synchrotron is funded by the Ministry of Research and Innovation of Spain, by the Generalitat de Catalunya and by European FEDER funds. S.M. acknowledges support from EPSRC project EP/T006811/1. M.A.N and M.F. acknowledge support from MICINN project ECLIPSE (PID2021-1229800B-C54).
|
2302.02462 | The Marriage of Effects and Rewrites | In the research on computational effects, defined algebraically, effect
symbols are often expected to obey certain equations. If we orient these
equations, we get a rewrite system, which may be an effective way of
transforming or optimizing the effects in a program. In order to do so, we need
to establish strong normalization, or termination, of the rewrite system. Here
we define a framework for carrying out such proofs, and extend the well-known
Recursive Path Ordering of Dershowitz to show termination of some effect
systems. | Ezra e. k. Cooper | 2023-02-05T19:12:07Z | http://arxiv.org/abs/2302.02462v1 | # The Marriage of Effects and Rewrites
###### Abstract
In the research on computational effects, defined algebraically, effect symbols are often expected to obey certain equations. If we orient these equations, we get a rewrite system, which may be an effective way of transforming or optimizing the effects in a program. In order to do so, we need to establish strong normalization, or termination, of the rewrite system. Here we define a framework for carrying out such proofs, and extend the well-known Recursive Path Ordering of Dershowitz to show termination of some effect systems.
term rewriting, strong normalization, termination, algebraic effects, functional programming 10.4230/LIPIcs... 1
## 1 Introduction
Plotkin and Power [15] introduced a view on computational effects as algebraic terms. Their operational semantics shows how a source term containing effects within it can reduce to a final effect term that represents a trace of the effects performed by the program, or indeed a tree of all possible linear traces. That line of work discusses equations between effect operators, which define their essence in relation to one another.
We add to that work with an observation: by orienting the equations as a system of _rewrite rules_ for the effect system, we can mechanically reduce an effect term (as a _trace_) to something more compact, a _state_. The rewrite rules in this case can be thought of as taking the place of the language implementation or indeed the hardware which makes the effect "take effect." Alternatively, such rewrite rules can be an elegant way of defining and implementing optimizations for effectful programs.
A general rewrite system is not the only way to work from an effectful term or trace to a final state. Ahman and Staton [1] give a Normalization By Evaluation strategy for doing just that. However, we find it interesting to study the behavior of generalized rewrite systems. The freedom to apply rewrite rules arbitrarily could be useful in a compiler implementation or other program-transformation engine.
Having applied general term-rewriting to algebraic effects, the researcher will want to know whether common properties apply, such as confluence and strong normalization. While an ad-hoc proof may be given, it would be preferable to factor the problem so that a proof about the termination of the term-rewrite system alone would easily lift to a proof about the system in the context of Moggi's computational metalanguage [13].
Now, in the literature of term-rewriting there are many techniques for showing strong normalization, for example the "recursive path ordering" or RPO [5]. With this technique, the practitioner merely exhibits an ordering on the function-symbols and shows that each rewrite rule obeys this ordering in a certain way. Thus an intricate inductive proof is replaced by some relatively simple (albeit recursive) checks on the rewrite rules. The RPO is extended to a calculus with \(\lambda\)-abstraction and with \(\beta\)-reduction in the literature on the "higher-order recursive path ordering," HORPO [9]. But that leaves us to wonder, what about a calculus
with the let-construct? If we can extend RPO or HORPO to the computational metalanguage, then we will have an easy way of proving termination of some systems of effects.
In this paper, we make the following contributions:
* We show how to interpret a variety of computational effects operationally as rewrite-rules, rather than equations, giving a more operational flavor to the workings of effect symbols,
* We show a new technique for proving strong normalization for rewriting algebraic-effect systems by lifting the recursive path ordering into Moggi's metalanguage.
* We use the technique to prove termination of an effect system for global state, which shows how to reduce its _traces_ to _states_.
Our central proof will not be surprising to anyone familiar with RPO, with the Tait-Girard proof of strong normalization [7] and the Lindley-Stark method for extending Tait-Girard to the computational metalanguage [11]. But by combining all these things, we get a compelling result that can be used directly to show termination of languages in the presence of algebraic effects, where the effects themselves are subject to rewrite rules.
## 2 Algebraic effects
Let's review the basic framework of algebraic effects, introduced by Plotkin and Power [15] (and Bauer [2]) as extended in Plotkin and Pretnar [17] with the let-construct. A programming language is defined with effect symbols representing individual atomic effects that can be performed. The symbols build on two kinds of syntactic roles, parameters \(p\) and arguments \(a\), as \(e_{\vec{p}}(\vec{a})\). The parameters represent data that is used by the effect, such as a message to print to the terminal, while the arguments represent possible continuation terms, depending on the _result_ of the effect. An effect \(readbit\) which reads a single bit from some input source would naturally have two argument positions, representing the behavior the program will follow if it reads a 0 or a 1, respectively: \(readbit(zeroContinuation,oneContinuation)\).
Another effect, \(print_{m}\) could be used to print a corresponding message:
\[readbit(print_{\mathtt{cold}}(30),\,print_{\mathtt{hot}}(70))\]
will print cold or hot correspondingly, then return a number, say 30 or 70 for a number of degrees celsius. Any pure (effect-free) term placed as an effect argument represents an ultimate return _value_ of the computation, dependent upon the path taken from root to leaf.
The Plotkin and Power semantics lets the effect symbols commute out of evaluation contexts, essentially floating to the top of the term at evaluation time, so that the normal forms are trees of effect symbols, representing possible traces, whose leaves are the corresponding return values. Thus we could apply the lambda-term \(\lambda x.x+5\) to the above effectful term and it would rewrite as follows:
\[(\lambda x.x+5)readbit(print_{\mathtt{cold}}(30),print_{\mathtt{ hot}}(70))\] \[\leadsto readbit((\lambda x.x+5)print_{\mathtt{cold}}(30),(\lambda x.x+5) print_{\mathtt{hot}}(70))\] (eff-assoc) \[\leadsto readbit(print_{\mathtt{cold}}((\lambda x.x+5)\,30),print_{ \mathtt{hot}}((\lambda x.x+5)\,70))\] (eff-assoc) \[\leadsto readbit(print_{\mathtt{cold}}(30+5),print_{\mathtt{hot}}(70+5)\] (abs- \[\beta\] ) \[\leadsto readbit(print_{\mathtt{cold}}(35),print_{\mathtt{hot}}(75)\] (abs- \[\beta\] )
The final row is a normal form, which shows a tree where the _readbit_ operation can choose either of two paths; on each path some specific message is printed; and finally each path terminates in a value, which was computed by applying the \(\lambda\)-abstraction to the _result_ of the original side-effecting expression.
In the present work, we explore what happens when these computation (effect) trees are further exposed to their own rewrite rules, which can be applied in source terms or in these final computation trees. Such reduction loses the "trace" nature of the tree, but gives us a way of simulating the machinery of the effects, something like an abstract machine for effects.
We do not make use of the effect parameters in our proofs, so we do not write them.
### Examples
When we are talking about the semantics of a programming language, the rewrite rules for the effects can be seen as implementing the machinery of the language implementation which reduces the individual effects to a state, itself represented as a normal form of an effect-term.
#### Example: Global State
Global state is modeled as a single global location which can hold a value of some type \(T\). The signature of the global-state effect system is
\(arity(assign_{i})=1\)
\(arity(get)=T\)
Note \(assign_{i}\) is parameterized by the value \(i\), to assign into the global variable. Plotkin and Power distinguish "parameters" and "arguments". The arguments of \(get\) are indexed by the values of the storage type \(T\): its "arity" is \(T\).
If the symbols are uninterpreted (subject to no rewrites) then the result of a rewrite sequence is just a computation tree, which acts as a tree of all possible traces of the program. But we may alternatively assign a meaning which is the actual final state of this computation, in cases where there is one. To that end, we can assign rewrite rules (adapted from Plotkin and Power [16]) that perform the trace-reduction:
\(assign_{i}(get(t_{1},\,\ldots,\,t_{n}))\leadsto assign_{i}(t_{i})\)
\(assign_{i}(assign_{j}(s))\leadsto assign_{j}(s)\)
\(get(t_{1},\,\ldots,\,t_{i},\,\ldots,\,t_{n})\leadsto get(t_{1},\,\ldots,\,s_{i}, \,\ldots,\,t_{n})\)
\(\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \
### Example: a looping effect
This effect has not been proposed in the literature to our knowledge, but to motivate our work, we explore the idea of something that looks like an "effect" but has some complex rewriting behavior.
In the practice of programming with external services (for example, database servers, or web-based APIs), one frequently wants to make one's own service robust in the face of a brief interruption to the external service. To that end, the programmer builds a finite number of retries into their system. If the external service begins functioning during the retries, the program will continue normally, but if the finite retries are exhausted, an error is returned to the user.
We model such a system using a pair of effects, request and retry. Each time the program makes a _request_ to the service, that request may fail (a possibility whose continuation is represented by a first parameter, \(t\)), or it may return a meaningful value (represented by an indexed set of parameters, \(s_{1}\),..., \(s_{n}\)). So we introduce an effect \(request(t\), \(s_{1}\),..., \(s_{n}\)). We also introduce an effect \(retry(u\), \(r)\) which represents the effect of retrying the computation \(r\) a number of times indicated by \(u\). In \(u\) we will find a number represented through rewrite symbols \(zero()\) and \(succ(u)\), i.e. Peano numerals (we use Peano numerals to make the arithmetic amenable to rewriting).
\[retry(zero(),request(t,s_{1},\,\ldots,\,s_{n})) \rightsquigarrow t\] \[retry(succ(u),request(t,s_{1},\,\ldots,\,s_{n})) \rightsquigarrow request(retry(u,t^{\prime}),s_{1},\,\ldots,\,s_{n})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \mbox{where $t^{\prime}=request(t,s_{1},\,\ldots,\,s_{n})$}\]
This effect-rewrite system produces something more like a trace than a final state, since it replicates the \(request\) effect \(u\) times in the computation tree. To evaluate the trace, we could choose further rewrite rules that make \(request\) act like \(get\) in the global-state example, flattening successive \(request\)s and choosing a single outcome for the whole set.
This may be a contrived model for a retry-loop of effects, but it demonstrates our technique on a slightly more complex system than the other examples.
### Example: Parallelism
A binary effect, _par_, represents parallel evaluation of two streams of effects. We assume it is used in combination with other effects. The following rewrite rules are replicated for each other effect \(e\) in the system:
\[par(e(s_{1},\,\ldots,\,s_{n}),t)\rightsquigarrow e(par(s_{1},t),\, \ldots,\,par(s_{n},t))\] \[par(s,e(t_{1},\,\ldots,\,t_{n}))\rightsquigarrow e(par(s,t_{1}), \,\ldots,\,par(s,t_{n}))\]
These rules are not in general confluent, so several different final states can be derived from a single source term. That is of course in the nature of parallelism.
In the tradition of fork-join parallelism, we could also add an effect _join_ which brings together the two results in one result term, for further computation. Here \(\langle\cdot,\cdot\rangle\) represents ordinary data pairing into a product type (\(S\times T\)):
\[join(par(v,w))\rightsquigarrow\mbox{\it pure}(\langle v,w\rangle)\]
## 3 Two kinds of effectful metalanguage
We must pause to reconcile two syntactic treatments of effects in the literature. One marks the monad type explicitly, the other leaves it implicit. Both treatments appear in Moggi's
early work and in the literature are often referred to as \(\lambda_{ml}\) and \(\lambda_{c}\).
The first approach (\(\lambda_{ml}\)) uses a computation type while the other (\(\lambda_{c}\)) treats effect operations as transparent to the type system. The latter notation predominates in Plotkin and Power [15] and other algebraic-effects research. Sabry and Wadler [19] establish a close correspondence between them.
The key typing rule for each is given below. We write \(\mathsf{E}(T)\) for the type of an effectful computation giving result type \(T\). (A single such effect constructor implies one global monad for effects throughout the system.)
Explicit computation types (\(\lambda_{ml}\))
\[\begin{array}{c}\Gamma\vdash t:\mathsf{E}(S)\qquad\Gamma,x:S\vdash u: \mathsf{E}(T)\\ \hline\Gamma\vdash\mathsf{let}\,x\Leftarrow t\,\mathsf{in}\,u:\mathsf{E}(T) \end{array}\qquad\qquad\begin{array}{c}\Gamma\vdash t:S\qquad\Gamma,x:S \vdash u:T\\ \hline\Gamma\vdash\mathsf{let}\,x\Leftarrow s\,\mathsf{in}\,u:T\end{array}\]
In \(\lambda_{ml}\), we perform beta-reduction on lets with explicitly-constructed pure subjects:
\[\mathsf{let}\,x\Leftarrow pure(t)\,\mathsf{in}\,u\leadsto u\{t/x\}\]
In \(\lambda_{c}\), beta-reduction is triggered by the syntactic class of a value in the subject position (assume \(v\) describes a syntactic class of values):
\[\mathsf{let}\,x\Leftarrow v\,\mathsf{in}\,u\leadsto u\{v/x\}\]
Values \(v\) in \(\lambda_{c}\) are defined by a grammar, which prohibits \(\mathsf{let}\) and effect application, at least when not embedded in a \(\lambda\)-body.
The main proofs in this paper use \(\lambda_{ml}\) as the substrate.
## 4 A Metalanguage With Explicit Effects
Now we define our core object: a metalanguage for computational effects, based on the basic syntax of Moggi [13] with the algebraic-effect rules of Plotkin and Pretnar [17]. As a blend of those languages, it includes explicit effect symbols and a let-construct.
Unlike some later work on algebraic effects, we don't use the _fine-grain call-by-value_ of Levy, et al. [10], and in fact do not assume a call-by-value evaluation order, because we want to cast as wide a net as possible for the interesting rewrite systems that can be proven strongly-normalizing with our technique. In fact, one of our motivating examples (NRC) benefits from allowing rewrites in arbitrary position and from allowing arbitrary term-term applications (in distinction to FGCBV).
Here is the grammar of our metalanguage:
\[s,t,u ::= x\mid\lambda x.u\mid\text{{pure}}(t)\mid st\mid\gamma(t_{1},...,t_ {n})\mid\mathsf{let}\,x\Leftarrow t\,\mathsf{in}\,u\] \[\gamma ::= e\mid f\] (rewritable symbols) \[e,e^{\prime},e^{\prime\prime}\] (effect symbols) \[f,g\] (function symbols)
We distinguish two classes of rewritable symbols: the effect symbols and function symbols. These two classes have their own typing rules, but are often treated the same in the rewrite theory, so we use \(\gamma\) to range over both, \(e\) (and its primes) to range over effect symbols, and \(f,g,...\) to range over non-effect function symbols. Both subclasses are subject to some metalanguage rewrites as motivated by the Plotkin-Power framework.
\[\begin{array}{r@{\hspace{1cm}}l}\Gamma,x:S\vdash u:T\hspace{1cm}\Gamma\vdash \lambda x.u:S\to T\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm} \hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1cm}\hspace{1}\hspace
Thus, the _symbolic_ rewrite system is ignorant of the let-construct, of lambda abstractions, and of applications.
### Rewriting contexts
For this work, we assume that all the rewrite relations (including those marked \(\leadsto\), \(\leadsto_{ml}\), \(\leadsto\) and \(\dot{\cdot}\)) are compatibly-closed, so they can be applied in any term context. This is a fairly standard assumption for rewrite systems.
In some application areas, one may wish to constrain the eligible rewrite contexts, for example to a call-by-value evaluation order. Doing so is normal in the study of computational effects. But by allowing rewrites anywhere, our result remains more general.
## 5 The Recursive Path Ordering Defined
Now we define the recursive path ordering \(\succ\), which relates terms of the symbolic part of the language, and which is the key tool of the normalization proof.
The RPO is defined with respect to an ordering \(>_{\Sigma}\) on rewritable symbols. We will write \(>\) for the symbol ordering when it is clear from context. The relation \(\succ\) is extended to a lexicographical ordering on a sequence of terms by writing \(\succ_{lex}\). (The (HO)RPO usually allows symbols whose args are ordered either by a multiset ordering or a lexicographical ordering. Presently we only defines the lexicographical one.) We use \(\succeq\) for the union of \(\succ\) with \(=\).
[RPO] Define \(s\succ t\) to hold when one of the following does:
1. \(s=\gamma(s_{1},\,\dots,\,s_{n})\), \(t=\gamma(t_{1},\,\dots,\,t_{n})\) and \(\vec{s}\succ_{lex}\vec{t}\) and for all \(j\), \(s\succ t_{j}\).
2. \(s=\gamma(s_{1},\,\dots,\,s_{m})\), \(t=\gamma^{\prime}(t_{1},\,\dots,\,t_{n})\) and \(\gamma>\gamma^{\prime}\) and for all \(j\), \(s\succ t_{j}\).
3. \(s=\gamma(s_{1},\,\dots,\,s_{m})\) and for some \(i\), \(s_{i}\succeq t\).
It is easy to miss that this relation is inductively defined, and there is a base case hidden in case (3), in the \(=\) part of the \(\succeq\) relation. All derivations of the RPO end in leaves which are assertions of the right-hand term being equal to an immediate subterm of the left-hand term.
Let's take a moment to understand the purpose of this step intuitively, and where it fits in the larger proof. The \(\succ\) relation essentially captures a large class of terminating rewrite systems that could be defined for a given effect-signature \(\Sigma\) and an ordering among the symbols. The ordering will be specific to the particular rewrite system, but the \(\succ\) relation abstracts slightly from the rewrite rules themselves. It is usually a superset of the relation (\(\leadsto\)) of interest, so (\(\leadsto\)) \(\subseteq\) (\(\succ\)). The user must also check their rewrite rules (\(\leadsto\)) do in fact meet the above criteria (qualifying as a (\(\succ\)) relation), but this is often easy to do, and one then gets a big termination proof "for free," as it were.
The \(\succ\) relation abstracts only the symbol-rewriting rules; to extend it through the metalanguage, we define
\[(\dot{\succ})\triangleq(\succ)\cup(\leadsto_{ml}).\]
And this \(\dot{\succ}\) is the relation for which we will prove strong normalization. As a result, the target calculus where \((\leadsto\)) \(\cup(\leadsto_{ml})\) is the relation of interest must also be strongly normalizing.
We can also present the RPO in terms of two powerful inference rules:
\[\frac{s_{i}\succeq t\text{ for some }s_{i}}{\gamma(s_{1},\,\dots,\,s_{n}) \succ t}\qquad\qquad\frac{(\gamma,\vec{s})\succ_{RPO}(\gamma^{\prime},\vec{t })}{\gamma(s_{1},\,\dots,\,s_{n})\succ\gamma^{\prime}(t_{1},\,\dots,\,t_{m})}\]
Where the \(>_{RPO}\) ordering is defined as a lexicographical ordering with the components \(>_{\Sigma}\) and \(\succ_{lex}\).
In what follows, a _reduct_ of \(t\) is a term \(t^{\prime}\) for which \(t\mathrel{\dot{\succ}}t^{\prime}\), and we write \(SN(t)\) if \(t\) strongly normalizes under the relation \(\dot{\succ}\). When a term is strongly normalizing, we can perform induction on its reduction tree (we only use this for the reduction tree under \((\dot{\succ})\), not the other term relations); to invoke this principle we will write "induction on \(t\), ordered by \(\dot{\succ}\)." When we have several normalizing terms handy, we might use simultaneous induction on all of them, where the proposition is assumed to hold for the group where any one is reduced.
### Continuations
A key difficulty in the proof is showing strong normalization in the presence of the let-assoc rule, which reorganizes the term in a progress-making way, but does not make it smaller. Thus we need a construct to allow tracking and inducting on that progress. In the let-assoc rule, \(\mathsf{let}\,x\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow \!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\arrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\arrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\!\arrow\!\Leftarrow\!\Leftarrow\!\Leftarrow\! \Leftarrow\!\Leftarrow\!\arrow\!\Leftarrow\!\arrow\!\Leftarrow\!\arrow\!\Leftarrow\!\arrow\! \Leftarrow\!\Leftarrow\!\arrow\!\Leftarrow\!\arrow\!\Leftarrow\!\arrow\arrow\!\Leftarrow\! \arrow\!\Leftarrow\!\arrow\!\Leftarrow\!\arrow\!\Leftarrow\!\arrow\!\Leftarrow\!\arrow\! \Leftarrow\!\arrow\!\Leftarrow\!\arrow\!\Leftarrow\!\arrow\!\arrow\!\Leftarrow\!\arrow\! \Leftarrow\!\arrow\!\Leftarrow\!\arrow\!\arrow\!\Leftarrow\!\arrow\!\arrow\!\Leftarrow\!\arrow\! \arrow\!\arrow\!\Leftarrow\!\arrow\!\arrow\!\Leftarrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\arrow\!\arrow\!\arrow\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\!\arrow\!\arrow\!\arrow\!\arrow\!\!\arrow\!\arrow\!\arrow\! \arrow\!\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\! \arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!\arrow\!
* case \(B\). A term which is a free variable satisfies this.
* case \(S\to T^{\prime}\). By IH we have a term \(t\in\mathsf{Red}_{T^{\prime}}\) and so \(\lambda x.t\in\mathsf{Red}_{S\to T^{\prime}}\).
* case \(\mathsf{E}(T^{\prime})\). By IH we have a term \(t\in\mathsf{Red}_{T^{\prime}}\) and so \(pure(t)\in\mathsf{Red}_{(T^{\prime})}\).
For any \(T\), \(t\), if \(t\in\mathsf{Red}_{T}\) then \(t\) strongly normalizes.
Proof.: By induction on \(T\) and appeal to the \(\mathsf{Red}_{T}\) definition. In the \(\mathsf{E}(T^{\prime})\) case, \(t\) is a subterm of something directly asserted to be SN. In the \(T_{1}\to T_{2}\) case, the IH gives strong normalization of a term which has \(t\) as a subterm. We need that \(\mathsf{Red}_{S}\) is inhabited, which Lemma 3.
For any \(s\in\mathsf{Red}_{T}\) with \(s\mathbin{\dot{\succ}}s^{\prime}\), we have \(s^{\prime}\in\mathsf{Red}_{T}\).
Proof.: By induction on \(T\).
* case \(B\). \(s^{\prime}\) merely needs to be SN, and it is by virtue of being a reduct of \(s\).
* case \(T_{1}\to T_{2}\). To show that \(s^{\prime}t\in\mathsf{Red}_{T_{2}}\) for any \(t\in\mathsf{Red}_{S_{1}}\). We have that \(st\in\mathsf{Red}_{T_{2}}\). But \(st\mathbin{\dot{\succ}}s^{\prime}t\) so the conclusion follows from the IH.
* case \(\mathsf{E}(T^{\prime})\). To show that \(K@s^{\prime}\in SN\) for \(K\in\mathsf{Red}_{T}^{\top}\). Again, \(K@s\) reduces to \(K@s^{\prime}\) and since the former is SN, the latter is too.
Given a neutral \(s\), if each of its reducts is in \(\mathsf{Red}_{T}\) then \(s\) is in \(\mathsf{Red}_{T}\).
Proof.: By induction on the structure of \(T\).
* case \(B\) Since the reducts are in \(\mathsf{Red}_{B}\), they are in SN, and this satisfies the definition of \(\mathsf{Red}_{B}\).
* case \(S\to T^{\prime}\) To show that \(st\in\mathsf{Red}_{T^{\prime}}\) for each \(t\in\mathsf{Red}_{S}\). We have that \(t\) is SN; proceed by induction on the reduction tree of \(t\). Examine reductions of \(st\). Since \(s\) is neutral, it is not a \(\lambda\)-abstraction, so there is no \(\beta\)-reduction at the head. The only reducts are \(s^{\prime}t\) (where \(s\mathbin{\dot{\succ}}s^{\prime}\)) and \(st^{\prime}\) (where \(t\mathbin{\dot{\succ}}t^{\prime}\)). In the first case, the lemma hypothesis is sufficient. In the second case, the inner IH is sufficient.
* case \(\mathsf{E}(T^{\prime})\) Given \(K\in\mathsf{Red}_{T^{\prime}}^{\top}\), we want to show \(SN(K@s)\). By induction on \(K\). Since \(s\) is neutral, the only reducts are \(K^{\prime}@s\) (where \(K\mathbin{\dot{\succ}}K^{\prime}\)) and \(K@s^{\prime}\) (where \(s\mathbin{\dot{\succ}}s^{\prime}\)). (Note there is no metalanguage rule rewriting the frame \(F\) into the application \(st\) in \(F[st]\) and one cannot be supplied by the symbol-rewrites.) In the first case, the inner IH is sufficient. In the second case, the lemma hypothesis is sufficient.
If \(SN(u\{t/x\})\) then \(SN(u)\).
Proof.: Constructively, every reduction in the reduction tree of \(u\) has an analogue in that of \(u\{t/x\}\). As a consequence, the tree for \(u\) can be no larger than that of the other term, and cannot be divergent when the latter is convergent.
Now we show that each term-former can construct a reducible term, given appropriate conditions.
If \(\mathsf{Red}_{S\to T}(s)\) and \(\mathsf{Red}_{S}(t)\) then \(\mathsf{Red}_{T}(st)\).
Proof.: Immediate from the definition of \(\mathsf{Red}_{S\to T}\).
If \(\mathsf{Red}_{T}(u\{t/x\})\) for every \(t\) in \(\mathsf{Red}_{S}\) then \(\mathsf{Red}_{S\to T}(\lambda x.u)\).
Proof.: Since \((\lambda x.u)t\) is neutral, it is sufficient to show that all its reducts are reducible. We have that \(t\) is SN by virtue of being in \(\mathsf{Red}_{S}\). We have that \(u\) is SN by Lemma 3, and therefore we can apply simultaneous induction on the two rewrite trees. The inductive hypotheses are that \((\lambda x.u^{\prime})t\) is reducible, for any \(u\mathbin{\dot{\succ}}u^{\prime}\), and that \((\lambda x.u)t^{\prime}\) is reducible, for any \(t\mathbin{\dot{\succ}}t^{\prime}\). Now we take those cases on the reducts of \((\lambda x.u)t\).
* case \((\lambda x.u)t\dot{\ \ }\dot{\ }(\lambda x.u^{\prime})t\) for \(u\dot{\ \ }u^{\prime}\); this is reducible by IH.
* case \((\lambda x.u)t\dot{\ \ }\dot{\ }(\lambda x.u)t^{\prime}\) for \(t\dot{\ }t^{\prime}\); this is reducible by IH.
* case \((\lambda x.u)t\dot{\ \ }u\{t/x\}\); this is reducible by lemma hypothesis.
If \(K\dot{\ \ }K^{\prime}\) then \(|K|\geq|K^{\prime}|\).
Proof.: By structural induction on K. If \(K=\epsilon\), there is no reduction. If \(K=K_{0}\circ F\), we have reductions \(K_{0}\dot{\ \ }K_{0}^{\prime}\) and \(F\dot{\ \ }F^{\prime}\), which conserve length (the former by IH).
Via (let-assoc), we also have \(K=K_{0}\circ F_{1}\circ(\mathsf{let}\,x\!\Leftarrow\![\ ]\mathsf{in}\,u)\circ K_{1}\) and \(K^{\prime}=K_{0}\circ(\mathsf{let}\,x\!\Leftarrow\![\ ]\mathsf{in}\,F_{1}[u])\circ K_{1}\). And this is one frame shorter.
If \(t\in\mathsf{Red}_{T}\) then \(\mathit{pure}(t)\in\mathsf{Red}_{E(T)}\).
Proof.: To show: that \(K@\mathit{pure}(t)\in\mathsf{Red}_{E(T)}\) for any \(K\in\mathsf{Red}_{T}^{\top}\). But this is immediate from the definition of \(\mathsf{Red}_{T}^{\top}\).
If \(s\in SN\) and \(K@(u\{s/x\})\in SN\), then \(K@(\mathsf{let}\,x\!\Leftarrow\!pure(s)\mathsf{in}\,u)\in SN\).
Proof.: By induction on \((|K|,\,(s,\,u,\,K))\) ordered by \((>,\,(\dot{\ \ }_{lex}))\). Proceed by showing all reducts of \(K@\mathsf{let}\,x\!\Leftarrow\!pure(s)\mathsf{in}\,u\) are in \(SN\).
* case \(K^{\prime}@\mathsf{let}\,x\!\Leftarrow\!pure(s)\mathsf{in}\,F[u]\) where \(K^{\prime}\circ F=K\), by let-assoc. To apply the IH, we need to show that \(K\) and \(F[u]\) meet the lemma premises, that \(K^{\prime}@(F[u]\{s/x\})\in SN\). Note \(K^{\prime}@F[u]=K@u\) and \(x\) cannot be free in \(F\), by the let-assoc side condition, therefore \(K^{\prime}@(F[u]\{s/x\})=K^{\prime}@(F[u\{s/x\}]])\).) Furthermore \(|K^{\prime}|<|K|\), so the metric decreases.
* case The reduct is \(K@u\{s/x\}\). By hypothesis.
* case The reduct is \(K@\mathsf{let}\,x\!\Leftarrow\!pure(s)\mathsf{in}\,u\) where \(s\dot{\ \ }s^{\prime}\). By IH.
* case The reduct is \(K@\mathsf{let}\,x\!\Leftarrow\!pure(s)\mathsf{in}\,u^{\prime}\) where \(u\dot{\ \ }u^{\prime}\). By IH.
* case The reduct is \(K^{\prime}@\mathsf{let}\,x\!\Leftarrow\!pure(s)\mathsf{in}\,u\) where \(K\dot{\ \ }K^{\prime}\). By IH.
If \(s\in\mathsf{Red}_{E(S)}\) and \(u\) is such that for all \(s^{\prime}\in\mathsf{Red}_{S}\) we have \(u\{s^{\prime}/x\}\in\mathsf{Red}_{E(T)}\), then \(\mathsf{let}\,x\!\Leftarrow\!s\mathsf{in}\,u\in\mathsf{Red}_{E(T)}\).
Proof.: We show that \(SN(K@\mathsf{let}\,x\!\Leftarrow\!s\mathsf{in}\,u)\) for any \(K\in\mathsf{Red}_{T}^{\top}\). First, we show \(K^{\prime}=K\circ(\mathsf{let}\,x\!\Leftarrow\![\ ]\mathsf{in}\,u)\in\mathsf{Red}_{S}^{\top}\), which in other words says that \(SN(K@\mathsf{let}\,x\!\Leftarrow\!pure(s^{\prime\prime})\mathsf{in}\,u)\) for any \(s^{\prime\prime}\in\mathsf{Red}_{S}\). This we get from Lemma 3 (by hypothesis, \(u\{s^{\prime\prime}/x\}\in\mathsf{Red}_{T}\) and further \(K^{\prime}@(u\{s^{\prime\prime}/x\})\in SN\) as required by Lemma 3). Now it follows by the definition of \(s\in\mathsf{Red}_{S}\) that \(K^{\prime}@s=K@\mathsf{let}\,x\!\Leftarrow\!s\mathsf{in}\,u\in SN\).
The next lemma shows a property of the undotted \(\succ\), that is, the raw RPO relation, which will be used as a subroutine in some inductive proofs to follow. Since this is an extraction of an inductive step, it is stated in terms of a "lemma hypothesis" which will align with some outer induction hypothesis in the cases where it is used.
To make the lemma appropriately general, we define _contexts_ to encompass the various kinds of settings in which terms can be placed to prove reducibility:
\[C\ \ ::=\ \ K\ \big{|}\ \big{|}\ \big{|}t\ \big{|}\ \big{|}\]
And write \(C[s]\) to denote filling the context with a term:
\[C[s]=\begin{cases}K@s&\text{when $C=K$}\\ st&\text{when $C=[\ \ ]$}t\\ s&\text{when $C=[\ \ ]$}\end{cases}\]
**Lemma 13** (Rpo step).: _Given some context \(C\), and \(\gamma(s_{1},\,\ldots,\,s_{n})=s\succ t\), with each \(C[s_{i}]\in SN\), and a "lemma hypothesis" that \(\vspace{-0.1cm}\) Given any \((\gamma^{\prime},\;\vec{t})\) having \((\gamma,\,\vec{s})\) greater than \((\gamma^{\prime},\,\vec{t})\) under the lexicographic ordering \(((>_{\Sigma}),\,(\dot{\succ}_{\mathit{lex}}))\), we have \(C[\gamma^{\prime}(t_{1},\,\ldots,\,t_{n})]\in SN\), then \(C[t]\in SN\)._
Proof.: We show that \(C[t]\) is SN by induction on the size of \(t\), and take cases on the RPO rule that proves \(s\succ t\):
* case (1) \(t=\gamma(t_{1},\,\ldots,\,t_{n})\). First we show \(C[t_{i}]\) is SN, which is given by the induction hypothesis (noting \(t_{i}\) is smaller than \(t\)). Then by the lemma hypothesis, \(C[t]=C[\gamma^{\prime}(t_{1},\,\ldots,\,t_{n})]\) is SN. Satisfying the ordering required by the lemma hypothesis, \(\gamma\) has not changed and the RPO rule has offered \(\vec{s}\succ_{\mathit{lex}}\vec{t}\), in turn implying \(\vec{s}\dot{\succ}_{\mathit{lex}}\vec{t}\). Also note that the lemma hypothesis itself is preserved, when inducting.
* case (2) \(t=\gamma^{\prime}(t_{1},\,\ldots,\,t_{m})\) and \(\gamma\succ_{\Sigma}\gamma^{\prime}\). First we show \(C[t_{i}]\) is SN, which is given by the induction hypothesis (noting \(t_{i}\) is smaller than \(t\)). Then by the lemma hypothesis, \(C[t]=C[\gamma^{\prime}(t_{1},\,\ldots,\,t_{m})]\) is SN. The lemma hypothesis is satisfied by \(\gamma>_{\Sigma}\gamma^{\prime}\).
* case (3) \(s_{i}\succeq t\). Here \(C[t]\) is in the reduction tree of \(C[s_{i}]\), which was assumed SN, so then \(C[t]\in SN\).
**Lemma 14**.: _Let \(s=\gamma(s_{1},\,\ldots,\,s_{n})\). Given a context \(C\), if each \(s_{i}\) has \(C[s_{i}]\in SN\), then \(C[s]\in SN\)._
Proof.: By cases on the type of \(s\). In each case, we study the possible terms \(C[s]\) and show that their reducts are all SN, and thus that \(C[s]\) is.
* case \(B\). It must be that \(C=[\;\;]\) so we just study the term \(s\) itself. By induction on the tuple \((\gamma,\,\vec{s})\) lexicographically ordered by \(((>_{\Sigma}),\,(\dot{\succ}_{\mathit{lex}}))\). By cases on the reducts of \(s\):
* case \(\gamma(s_{1},\,\ldots,\,s_{i}^{\prime},\,\ldots,\,s_{n})\) for some index \(i\) and \(s_{i}\dot{\succ}\,s_{i}^{\prime}\). The inner inductive hypothesis applies because \(\gamma\) is unchanged, while \(\vec{s}\) has decreased under \(\dot{\succ}_{\mathit{lex}}\).
* case \(t\) where \(s\succ t\). (Under the un-dotted \(\succ\) relation.) Lemma 13 applies. Our inner induction hypothesis implies the "lemma hypothesis" of Lemma 13.
* case \(T_{1}\to T_{2}\). The context \(C\) is either \([\;\;]\) or \([\;\;]s^{\prime}\). Showing that \(ss^{\prime}\) is SN covers all the cases for \(s\) alone, so we only show those. Proceed by lexicographical induction on \((s^{\prime},\,\gamma,\,\vec{s})\), ordered by \((\dot{\succ},\,\succ_{\Sigma},\,\dot{\succ}_{\mathit{lex}})\).
* case a subterm reduces: either some \(s_{i}\dot{\succ}\,s_{i}^{\prime}\) or the applicand \(s^{\prime}\dot{\succ}\,s^{\prime\prime}\). By IH, with reduction sequence decreasing.
* case \(s\succ t\). Lemma 13 applies. Our induction hypothesis implies the "lemma hypothesis".
* case \(\mathsf{E}(T^{\prime})\). Here \(C=K\). Since \(K@s_{i}\in SN\), we know that \(K\) itself is SN. We proceed by an lexicographic induction on the tuple \((|K|,\,K,\gamma,\,\vec{s})\) ordered by \((>,\dot{\succ},\,>_{\Sigma},\,\dot{\succ}_{\mathit{lex}})\). We show that every reduct of \(K@s=K@\gamma(s_{1},\,\ldots,\,s_{n})\) is strongly-normalizing, and thus that the term itself is. By cases on those reducts:
* case \(K@\gamma(s_{1},\,\ldots,\,s_{i}^{\prime},\,\ldots,\,s_{n})\) for some index \(i\) and \(s_{i}\dot{\succ}\,s_{i}^{\prime}\). The IH applies: \(K\) and \(\gamma\) are unchanged and the arguments \(\vec{s}\) have lexicographically reduced under \(\dot{\succ}\).
* case \(K^{\prime}@s\) where \(K\dot{\ \succ}\ K^{\prime}\). The IH applies because the continuation has gotten no longer (Lemma 9) and \(K^{\prime}\) is in the reduction tree of \(K\).
* case \(K^{\prime}@\gamma(F[s_{1}],\,\dots,\,F[s_{n}])\) where \(K=K^{\prime}\circ F\) and \(\gamma\) is an effect symbol (eff-assoc). The IH applies because \(K\) has gotten shorter (as \(K^{\prime}\)). We also need that \(K^{\prime}@F[s_{i}]\) is SN, to satisfy the IH, but \(K^{\prime}@F[s_{i}]=K@s_{i}\), which we already know is SN.
* case \(K@t\) where \(s\succ t\). (Under the un-dotted \(\succ\) relation.) Lemma 13 applies.
For a rewritable symbol \(\gamma:S_{1}\times\dots\times S_{n}\to T\), if each \(s_{i}\in\mathsf{Red}_{S_{i}}\), then \(s=\gamma(s_{1},\,\dots,\,s_{n})\in\mathsf{Red}_{T}\).
Proof.: By cases on the type of \(s\).
* case \(B\). To show \(s\in\mathsf{Red}_{B}\) for which we only need that \(s\in SN\), and we get this from Lemma 14.
* case \(T_{1}\to T_{2}\). To show \(ss^{\prime}\in\mathsf{Red}_{T_{2}}\) for any \(s^{\prime}\in\mathsf{Red}_{T_{1}}\). Because \(ss^{\prime}\) is neutral, we can show just that all the reducts of \(ss^{\prime}\) are reducible. By induction on \(((\gamma,\vec{s}),s^{\prime})\) ordered by \(((\succ_{\Sigma},\dot{\succ}_{lex}),\dot{\succ}_{lex})_{lex}\). The only reductions of \(ss^{\prime}\) are in \(s\) or in \(s^{\prime}\). If in \(s^{\prime}\), the IH suffices. If in \(s\), there are two possibilities:
* case the reduction is of the form \(s=\gamma(s_{1},\,\dots s_{i},\,\dots,\,s_{n})\dot{\ \ \succ}\ \gamma(s_{1},\,\dots s_{i}^{\prime},\,\dots,\,s_{n})\) with \(s_{i}\dot{\ \succ}\ s_{i}^{\prime}\), in which case the IH suffices.
* case \(s\succ t\). If it reduces by \(s\succ t=\gamma^{\prime}(t_{1},\,\dots,\,t_{m})\) with \((\gamma,\vec{s})>_{RPO}(\gamma^{\prime},\vec{t})\), then IH proves the point. On the other hand, if it is \(s_{i}\succeq t\) then \(t\in\mathsf{Red}_{S_{i}}\) by virtue of the \(s_{i}\in\mathsf{Red}_{S_{i}}\) assumption.
* case \(\mathsf{E}(T^{\prime})\). To show \(K@s\in SN\) for any \(K\in\mathsf{Red}_{T^{\prime}}^{\top}\). Because \(s_{i}\in\mathsf{Red}_{S_{i}}\), we have \(K@s_{i}\in SN\), which satisfies the premises of Lemma 14, thus \(K@s\in SN\) as needed.
Write \(s\{\vec{s}/\vec{x}\}\) for the operation of simultaneously substituting each \(s_{i}\) for the free variable \(x_{i}\) within \(s\): \(s\{\vec{s}/\vec{x}\}=s\{s_{1}/x_{1},\,\dots,\,s_{n}/x_{n}\}\).
Given \(x_{1}:S_{1},\,\dots,\,x_{n}:S_{n}\vdash t:T\), for all \(\vec{s}\in\mathsf{Red}_{\vec{s}}\), we have \(t\{\vec{s}/\vec{x}\}\in\mathsf{Red}_{T}\).
Proof.: By structural induction on \(t\).
* case \(x_{i}\). Then \(S_{i}=T\). Now \(s_{i}\in\mathsf{Red}_{S_{i}}\) and \(x_{i}\{s_{i}/x_{i}\}=s_{i}\in\mathsf{Red}_{S_{i}}=\mathsf{Red}_{T}\).
* case \(s^{\prime}t^{\prime}\). Immediate from Lemma 7 and the IH.
* case \(\lambda x.u\). The type derivation has \(\Gamma\), \(x:S\vdash u:T^{\prime}\). Let \(s^{\prime}\) be in \(\mathsf{Red}_{S}\). By inductive hypothesis, using \(\Gamma\) extended with \(s^{\prime}\), we have \(u\{\vec{s}/\vec{x},s^{\prime}/x\}=u\{\vec{s}/\vec{x}\}\{s^{\prime}/x\}\in \mathsf{Red}_{T}\), and thence by Lemma 8, \(\lambda x.u\in\mathsf{Red}_{S\to T^{\prime}}\).
* case \(\mathit{pure}(M)\). Immediate from Lemma 10 and the IH.
* case \(\gamma(t_{1},t_{2},...,t_{n})\). Immediate from Lemma 15 and the IH.
* case \(\mathsf{let}\,x\mathop{\Leftarrow}t^{\prime}\,\mathsf{in}\,u\). The type derivation is such that \(\Gamma\vdash t^{\prime}:S\) and \(\Gamma,x:S\vdash u:T\). By IH, we have \(t^{\prime}\{\vec{s}/\vec{x}\}\in\mathsf{Red}_{S}\) and then \(u\{t^{\prime}/x\}\{\vec{s}/\vec{x}\}=u\{\vec{s}/\vec{x},t^{\prime}\{\vec{s}/ \vec{x}\}/x\}\in\mathsf{Red}_{T}\) and from there, Lemma 15.
## 7 Revisiting the Examples
So what does all this give us? Can we use this technique to show termination for some interesting calculi?
### Global state
Recall the rewrite rules of global state given earlier:
\[assign_{i}(get(t_{1},\,\ldots,\,t_{n})) \leadsto assign_{i}(t_{i})\] \[assign_{i}(assign_{j}(s)) \leadsto assign_{j}(s)\] \[get(t_{1},\,\ldots,\,t_{i},\,\ldots,\,t_{n}) \leadsto get(t_{1},\,\ldots,\,s_{i},\,\ldots,\,t_{n})\] \[\qquad\text{where $t_{i}=get(s_{1},\,\ldots,\,s_{n})$}\]
These are easily shown to be normalizing: each only needs the RPO(3) case once or twice.
### Nondeterminism
\[or(or(s,t),u))\leadsto or(s,or(t,u))\]
With one effect symbol, the \(>_{\Sigma}\) relation is empty. But we check the RPO conditions on the solitary rule. Because the function symbols match, we use RPO(1). We have to check the lexicographical ordering of the arguments: \(or(s,\,t)\succ s\) (by RPO(3)) so we don't need to check the second argument. Then each argument on the RHS must be less than the whole term on the left, which is easily done (recursively applying RPO(3) or (1)). Note that the lexicographical ordering was crucial to ensuring this rule makes progress toward termination.
### Parallelism
For parallelism, set \(par>_{\Sigma}e\) for each other effect symbol \(e\). Recursive comparisons can then be carried out. Note for example that \(par(e(s_{1},\ldots,s_{n}),\,t)\succ par(s_{i},\,t)\) under RPO(1), the arguments are lexicographically decreasing, and \(s_{i}\) and \(t\) can each be found as subterms.
### Request-retry
Restating the rewrite rules:
\[retry(zero(),request(t,s_{1},\,\ldots,\,s_{n})) \leadsto t\] \[retry(succ(u),request(t,s_{1},\,\ldots,\,s_{n})) \leadsto request(retry(u,t^{\prime}),s_{1},\,\ldots,\,s_{n})\] \[\qquad\qquad\text{where $t^{\prime}=request(t,s_{1},\,\ldots,\,s_{n})$}\]
Let \(retry>_{\Sigma}request.\) The first rule has \(t\) as a subterm of the left-hand side, so it is in RPO. The second rule has \(retry>_{\Sigma}request\) and then we need to show
\[retry(succ(u),request(t,s_{1},\,\ldots,\,s_{n}))\succ retry(u,t^{\prime})\]
and each \(s_{i}\) has
\[retry(succ(u),request(t,s_{1},\,\ldots,\,s_{n}))\succ s_{i}.\]
The latter is easy, via the subterm rule. For the former, the head symbol matches, and then the immediate subterms \(u\) and \(t^{\prime}\) can both be found as subterms of the left-hand side.
## 8 Related Work
Johann, et al. [8] give "a generic operational metatheory for algebraic effects". The authors work with computation trees, or traces, like those which are the normal forms of algebraic-effect systems in the absence of equations. An equivalence (in fact, a preorder), between computation trees is given for each kind of effect system, but it is given through a separate definition which simulates the operation of each effect on a separate state-representation (a kind of abstract machine). By contrast, we have explored what happens when the effects can be defined by rewrites on the effect symbols themselves. Gavazzo and Faggian [6] explain monadic effects in rewrite systems. This work interprets the rewrite relation _itself_ as monadic/effectful, so for example the rewrite relation can have a probabilistic distribution on its possible right-hand terms.
We have set out to show that a set of rewrite rules can be applied in any order and still normalize. But one may instead choose a _particular_ rewriting or normalization strategy. Normalization By Evaluation is one such approach, and Ahman and Staton [1] have shown how to perform NBE on a calculus with algebraic effects and a sequencing form (like our let).
We build on the long history of rewrite-rule orderings to prove termination. Dershowitz [5] gives the original RPO ordering and termination proof. That proof is entirely different from the reducibility method, which is necessitated by the difficulties of the let-assoc rule. Okada [14] is the hero of our present work, as it is the first paper to show a general proof for strong normalization of any SN rewrite system crossed with the syntax of simply-typed lambda-calculus. Showing such orthogonality between rewrites and other syntax features is the spirit of the present work.
## 9 Future Work
So far, we have only shown a modest improvement on existing strong-normalization systems. The symbols in the system have their own rewrite rules, but are allowed to interact with let in just one way, commuting out of the subject position. We hope to give similar strong-normalization proofs for systems in which the rewrite system can specify further interactions with let (although some restrictions may remain). Our grand "test cases" for the technique are the systems in Cooper [3] and Ricciotti and Cheney [18]: when we can prove these strongly-normalizing with only a symbol-ordering, we will have succeeded.
## Thanks
Thanks to Sam Lindley, Matija Pretnar, and Wilmer Ricciotti for helpful comments shaping this work.
|
2306.10591 | Quantum computer based Feature Selection in Machine Learning | The problem of selecting an appropriate number of features in supervised
learning problems is investigated in this paper. Starting with common methods
in machine learning, we treat the feature selection task as a quadratic
unconstrained optimization problem (QUBO), which can be tackled with classical
numerical methods as well as within a quantum computing framework. We compare
the different results in small-sized problem setups. According to the results
of our study, whether the QUBO method outperforms other feature selection
methods depends on the data set. In an extension to a larger data set with 27
features, we compare the convergence behavior of the QUBO methods via quantum
computing with classical stochastic optimization methods. Due to persisting
error rates, the classical stochastic optimization methods are still superior. | Gerhard Hellstern, Vanessa Dehn, Martin Zaefferer | 2023-06-18T15:58:34Z | http://arxiv.org/abs/2306.10591v1 | # Quantum computer based Feature Selection in Machine Learning
###### Abstract
The problem of selecting an appropriate number of features in supervised learning problems is investigated in this paper. Starting with common methods in machine learning, we treat the feature selection task as a quadratic unconstrained optimization problem (QUBO), which can be tackled with classical numerical methods as well as within a quantum computing framework. We compare the different results in small-sized problem setups. According to the results of our study, whether the QUBO method outperforms other feature selection methods depends on the data set. In an extension to a larger data set with 27 features, we compare the convergence behavior of the QUBO methods via quantum computing with classical stochastic optimization methods. Due to persisting error rates, the classical stochastic optimization methods are still superior.
## 1 Introduction and literature review
When using supervised machine learning to tackle real-world problems one is faced with the task of selecting the "right" features for the learning procedure. When too few features are chosen, this may lead to poor results for the algorithms. When too many features are selected, the training time is negatively affected, which may lead to unstable results. Moreover, the more features are used, the more effort is required to maintain and control these features for a deployed machine learning model. Just testing all possible combinations of features with a brute-force approach is usually not feasible, since the number of combinations grows exponentially with the number of features.
In the past, different methods have been proposed to tackle this problem. Brown et al. provide a compilation of several approaches where _mutual information_ is used as a dependency measure, introduced within an information-theoretic framework [6]. An extensive survey of feature selection methods that are applied to different data sets to demonstrate their behavior is given in [7] and [27].
The idea of using quantum computers for solving this problem is introduced in [20]: There, the feature selection problem has been reformulated as a quadratic unconstrained optimization problem (QUBO). In addition, this problem was solved via classical optimization methods as well as with a quantum annealing approach. The different methods were applied to a well-known data set, the "German Credit Data" of UC Irvine [10].
The annealing approach of solving the QUBO problem for feature selection was also employed in [22], where the authors investigate feature selection for recommender systems. With the same approach, other data sets were explored in [9]. Results that combine results obtained with annealing methods as well as with a gate-based approach to quantum computing are reported in [21]. An approach that examines an unconstrained black box binary optimization and applies it to feature selection is presented in [30]. Finally, [26] compares different algorithms (annealing, gate-based and classical) to solve the QUBO problem of feature selection in a small-size setup.
In this paper, we report on the following approach to investigate the feature selection problem: Using small-sized data sets we first compare well-known machine-learning methods for feature selection with the brute-force method.
After reformulating the feature selection problem as a QUBO, hereby allowing different dependency measures between the features and also between the features and the target, we investigate the stability of the solutions when using different values of the undetermined weighting parameter with classical optimization methods. Since for this problem size the exact solution for the features selection problem can be determined, we investigate to what extent this solution can be achieved on the one hand with the commonly used feature selection methods and on the other hand with the way via QUBO optimization.
In order to solve the QUBO with the gate-based approach of quantum computing, we use the QAOA- and the VQE-ansatz. In addition to using the out-of-the-box QAOA algorithm of IBM, we compare the results with a customized solution scheme for QAOA. Hereby it can be shown whether and to what extent the QAOA algorithm is a feasible way to solve the QUBO problem and how strong the goodness of the solutions depends on the QAOA depth as well as on the dependence measures used.
After discussing our results on small-sized data sets, we scale our methods up to 27 features (i.e., 27 qubits) and report optimization results on different physical quantum computers of IBM. For this setup, we also compare the results to several classical stochastic optimization methods. At this point, we investigate, on the one hand, whether 27 qubits on a real quantum computer can yield any meaningful results beyond random noise, given its error rates. On the other hand, we focus on comparing the convergence of the QAOA algorithm with a selection of classical stochastic optimization methods.
## 2 Feature Selection in Supervised Learning
### Machine Learning approaches
In the following, we restrict ourselves to binary classification as one of the most important topics in supervised learning:
Given are a real \(m\times n\) matrix \(X\) consisting of \(n\) features and \(m\) training examples and a target vector \(y\); \(y\) is of dimension \(m\) and each element \(y_{i}\in\{0,1\}\). A classification model is a mapping \(f:X\in A\to B\), i.e. from an input space \(A\) to an output space \(B\). In order to find the best model \(f\), we use a loss function \(g:A\times B\rightarrow\mathbb{R}\) that has to be minimized. In the case of a binary classification problem, the binary cross-entropy is mostly used as a loss function. For a thorough discussion of machine learning, see e.g. [2].
Given a training set \((X,y)\), the following problem arises in practice: How many and which features should be included in the training of a particular model in order to do the best job? Naively, one would expect that considering all features is naturally the best choice. However, there are several reasons why this is in general not the case:
* Features with a high dependence on each other can lead to an unstable training process.
* Each feature used by an algorithm must be prepared and pre-processed and controlled in a productive environment. So, from an economical point of view, one should not consider more features than necessary.
* Models for data with more features require more computing time.
* It may happen, that some of the features have no predictive power at all. Identifying and removing them can save costs.
Selecting the optimal number of features by brute force, i.e. testing all possible combinations, is in general not feasible since the combinatorial possibilities grow exponentially with the number of features. Therefore, several methods for feature selection have been proposed and applied in the past:
* Using a penalty term \(\alpha\sum_{i=1}^{n}\mid w_{i}\mid\) with an adjustable hyper-parameter \(\alpha\) in the cost function of the training algorithm effectively reduces the number of features. This is called LASSO [25] in the machine learning context.
* The recursive feature elimination procedure (RFE) starts with all possible features and then reduces the number of features according to their importance one by one in a greedy manner. Instead of comparing all combinations of features, the number of models trained and compared corresponds to the number of features.
* One may also use unsupervised methods, e.g., principal component analysis (PCA), to reduce the number of features in the model. However, this is done independently by the target and can therefore not take into account which features may have high predictive power. Also, PCA and similar methods generate a (lesser number of) new features from the original features, rather than merely selecting. Hence, the original features would still have to be available when deploying this approach in practice.
### QUBO-formulation of the problem
Here, following the suggestion of [20], we propose and explore a different approach for feature selection: Let \(\rho_{i,j}\) be a dependency measure between column \(i\) and column \(j\) of the matrix \(X\) and let \(\rho_{i,Y}\) be the dependence between column \(i\) and the target vector. Later, we will specify which dependency measures are used.
In addition, let \(z_{i}\in\{0,1\}\) be a binary variable, indicating if column \(i\) should be selected (\(z_{i}=1\)) in the model or not (\(z_{i}=0\)). Intuitively, features should be selected so that the dependence
\[\sum_{i,j=1,i\neq j}^{n}z_{i}z_{j}\mid\rho_{i,j}\mid, \tag{1}\]
between them is minimized, whereas the dependence between the features and the target vector
\[\sum_{i=1}^{n}z_{i}\mid\rho_{i,Y}\mid. \tag{2}\]
is maximized. Note, that we take the absolute values of the dependency measures since for the dependence measures considered in the following (e.g. correlation) a high negative dependence is as relevant as a high positive dependence.
Putting both criteria together, we get the following optimization (here: minimization) problem:
\[\mathbf{h(z)}=-\left[\phi\sum_{i=1}^{n}z_{i}\mid\mathbf{\rho_{i,Y}}\mid-(1-\phi)\sum_{ i,j=1,i\neq j}^{n}z_{i}z_{j}\mid\mathbf{\rho_{i,j}}\mid\right] \tag{3}\]
The parameter \(\phi\in[0,1]\) has to be considered as a hyper-parameter and governs the weighting between the two conditions. By tuning \(\phi\) one can decide which of the two conditions is more important. Using the fact that \(z_{i}\) is a binary variable, i.e. \(z_{i}z_{i}=z_{i}\), the above equation can be transformed to
\[\mathbf{h(z)}=-\left[z^{T}\mathbf{Q}z\right] \tag{4}\]
and the solution to our problem is given by
\[z^{*}=\operatorname*{arg\,min}_{z}\left[h(z)\right] \tag{5}\]
Such a problem, known as quadratic unconstrained binary optimization (QUBO) problem can be treated with classical numerical methods but is also suited for a solution with a quantum computer.
Up to now the dependency measure is not yet determined. The most natural choice would be to use correlation as a linear measure, \(\rho^{Correl}\), as shown in Fig. 1. Alternatively, one could choose the rank correlation.
Other possible dependency measures are the mutual information \(\rho^{MI}\) which has been discussed in [6], the univariate ROC-value \(\rho^{ROC}\) and the Anova F-Statistic \(\rho^{Anova}\). All these dependency measures could be used either between the features or between the features and the target variable.
In the following, for the inter-feature dependence we restrict ourselves to correlation and the mutual information measure. For the dependence on the target, we tested all measures. In the following, we will denote the selected dependence by the tuple \((\rho^{X}_{Feature},\rho^{Y}_{Target})\) where \(X\in\{Correl,MI\}\) and \(Y\in\{Correl,MI,ROC,Anova\}\).
## 3 Classical feature selection for a small-sized data set
### Machine Learning method
In this chapter, a comparison of the described feature selection methods is performed. To that end, we consider two data sets: a) a subset of Kaggle's lending club data [18] and b) the Wisconsin breast cancer data [10]. In the case of a) we use 8 features and 1000 observations; in
the case of b) we have 10 features and 569 observations in the data set. For both data sets we seek a binary classification model with the "optimal" number of features. To keep the analysis transparent we use logistic regression as the machine learning model and use the whole data set for training. We are aware, that this may lead to overfitting, however, logistic regression, in comparison to more sophisticated models, proves often to be robust against overfitting (as long as the model term is restricted to low-order polynomials).
For comparing models with different input features, we use the overall accuracy (ACC) of the model as well as the area under curve (AUROC) as performance measures. Accuracy is the proportion of correctly classified cases by the total number of all data points. AUROC is the area under the ROC curve which is constructed by plotting the true positive rate (TPR) against 1-FPR, i.e. false positive rate (FPR). Intuitively AUROC measures the ability of the model to separate the two classes which is desirable in many use cases.
To find the optimal number of features, we use these two measures as relevant criteria. Since the accuracy of a model may provide misleading results in case of imbalanced data sets, we chose the AUROC as the measure to optimize.
For a small number of features \(n\), it is possible to determine the optimal feature selection by brute force, i.e. simply trying all possible combinations of features and then training the corresponding model. The determined optimal choice is compared to the results obtained with the RFE- and the LASSO-method. For LASSO we used cross-validation to find the optimal value of \(\alpha\).
In the case of the lending club data, we observed the results shown in Table 1. While the accuracy is robust against different choices of features, this is not the case for the AUROC. The best choice of features (obtained with brute force) can not be obtained via RFE or LASSO: The optimal choice is six out of eight features; RFE recommends keeping only one feature while LASSO recommends keeping three features out of eight.
In the case of the breast cancer data, the results are shown in Table 2. The optimal choice is five out of ten features; RFE recommends to keep six features and LASSO recommends to keep three features out of eight. Again, the best choice of features is not obtained with RFE
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & AUROC & Accuracy & Number of features \\ \hline All features & 0.6651 & 0.83 & 8 \\ \hline Best choice of features & **0.6682** & 0.83 & 6 \\ \hline Selection with RFE & 0.6593 & 0.83 & 1 \\ \hline Selection with LASSO & 0.6681 & 0.83 & 3 \\ \hline \end{tabular}
\end{table}
Table 1: Lending club data: performance of the logistic regression model for different feature selection approaches.
Figure 1: Illustration of the objective function for lending club data.
or LASSO.
The results obtained so far should be considered as a benchmark and it has to be shown, whether feature selection via optimization as described above, is able to compete with these benchmarks.
### Optimization methods
To answer the previous question, we treat the feature selection task as an optimization problem. Again, due to the low dimensionality of the two problems, the exact solution of the optimization, c.f. equation (5), can be obtained by brute force.
However, it is not obvious, which value of \(\phi\) should be chosen to obtain the best solution. To tackle this problem and to explore the dependence of the results on \(\phi\), we solve the optimization problem for different values of \(\phi\) and compare the results to each other. Here, we also consider different combinations of dependency measures - between the features, but also between each feature and the target vector.
In Fig. 2 and Fig. 3 the results for the lending club data are shown. Several findings can be observed in these figures:
* Using mutual information as the dependence measure between features, irrespective of the measure to the target, leads to the worst AUROC values.
* When using ANOVA as the dependency measure, we observe that the AUROC is largely independent of \(\phi\).
* In most cases, we see that the AUROC depends on \(\phi\); the AUROC tends to increase with increasing values of \(\phi\).
In all of our experiments, we find the highest value of the AUROC when using the ROC as the dependency measure between the features and the correlation or mutual information between each feature and the target. In addition, the _phi_ value should be chosen larger than \(0.85\). Note, \(\phi=1\) is the case where only the dependence between features and target is considered. The \(\phi-\)dependence is displayed in Fig. 2. Here, we show for comparison - as a straight line - the best AUROC of the best model (obtained by brute force), the AUROC of the RFE- and LASSO models. It can be seen, that RFE is outperformed by our proposed optimization approach with a suitable choice of dependency measure. The LASSO result is only slightly worse than the best possible. The best solution of our optimization approach coincides with the result, obtained by selecting all features, here denoted by "ALL".
From this analysis, we conclude, that a high value for the AUROC can be obtained when using the ROC-dependency measure between the features and the correlation dependency measure between each feature and the target vector, i.e., the combination (ROC, Correl). Furthermore, we find that a rather high value of \(\phi\approx 0.95\) should be used.
The breast cancer data set leads to somewhat different results, which shows the well-known dependency of machine learning results on individual data sets. In Fig. 3, the overall \(\phi\)-dependence of the AUROC for different dependency measures is shown. Here, we find the highest value of the AUROC for the combination (MI, MI), but only for a narrow band of \(\phi\)-values around \(0.75\). The next best combination is (ROC, Correl), as in the case of lending club data.
Fig. 3 shows the best and the second best constellation - compared to the brute-force, RFE, and LASSO results. Again, with all selection methods investigated the best value of AUROC can not be achieved. However, with the optimization approach (and a suitable choice of parameters) we can beat RFE as well as LASSO.
To summarize our results so far:
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & AUROC & Accuracy & Number of features \\ \hline All features & 0.9844 & 0.9297 & 10 \\ \hline Best choice of features & **0.9863** & 0.9297 & 5 \\ \hline Selection with RFE & 0.9839 & 0.9297 & 6 \\ \hline Selection with LASSO & 0.9845 & 0.9262 & 3 \\ \hline \end{tabular}
\end{table}
Table 2: Breast cancer data: performance of the logistic regression model for different feature selection approaches.
Figure 2: Lending club data
* For both data sets it was not possible to determine the best subset of features (as obtained by brute-force) with the classical selection methods (RFE and LASSO) considered.
* Using feature selection via optimization may be superior over these classical methods but the results are highly dependent on the dependency measures used and the hyper-parameter \(\phi\).
* For both data sets we found better results for higher values of \(\phi\) which indicates that the dependence of the features with the target seems to be more important than the intra-feature dependence.
## 4 Feature Selection within the gate-based quantum computer framework
### Quantum Algorithms for Optimization
The optimization problem in equation (5) is the starting point for transferring the problem to a quantum computer in the framework of the gate-based approach. The binary variables \(z_{i}\) are converted to operators with eigenvalues \(\{1;-1\}\) and eigenstates \(|0\rangle\) and \(|1\rangle\). In this way, by applying the transformation \(z_{i}=(1+s_{i})/2\) our problem is mapped to finding the ground state of the Ising-like model
\[\mathcal{H}=\sum_{i,j}J_{i,j}s_{i}s_{j}\, \tag{6}\]
where the coupling matrix \(J_{i,j}\) is derived from the above introduced matrix \(Q\).
For solving these types of problems on a gate-based computer several algorithms have been proposed. Here, we mainly use the QAOA algorithm, which has been suggested by Farhi et al. [12].
The key idea of QAOA is to generate the following quantum state \(|\psi_{\vec{\gamma},\vec{\beta}}\rangle\) depending on the parameters \(\vec{\gamma}=(\gamma_{1},\ldots,\gamma_{p})\) and \(\vec{\beta}=(\beta_{1},\ldots,\beta_{p})\) (with a number of iterations \(p\)):
\[|\psi_{\vec{\gamma},\vec{\beta}}\rangle=\hat{U}_{M}(\beta_{p})e^{-i\gamma_{p }\hat{F}}\ldots\hat{U}_{M}(\beta_{2})e^{-i\gamma_{2}\hat{F}}\hat{U}_{M}(\beta_ {1})e^{-i\gamma_{1}\hat{F}}|\psi_{0}\rangle_{M}\, \tag{7}\]
where the initial state \(|\psi_{0}\rangle_{M}\) and the operator \(\hat{U}_{M}(\beta)\) depend on the choice of the mixer \(M\). Here, we use the so-called standard mixer with the Pauli X-matrix
\[\hat{U}_{\rm standard}(\beta)=e^{i\beta\sum_{i=1}^{n}\hat{X}_{i}}. \tag{8}\]
After applying the parameterized gates, all qubits are measured with respect to the standard basis which leads to a classical bit-string of zeros and ones. Each bit-string corresponds to a different state and appears with a certain probability. There are \(2^{n}\) different states.
These bit-strings of several thousand "shots" (we use 8192 shots) are plugged into the objective function eq. (3) to calculate the expectation value. Then, a classical optimizer is used to update the parameters \(\vec{\gamma}\), \(\vec{\beta}\), and in order to minimize the expectation value and thus the objective function. In other words, to find the ground state of its converted problem Hamiltonian. A thorough benchmarking study of the QAOA approach, including a description of the solution method for QAOA, is described in great detail by Brandhofer et al. [5].
As a measure to gauge the performance of this optimization procedure we use the approximation ratio:
\[r(z_{1},\ldots,z_{n})=\frac{h(z_{1},\ldots,z_{n})-h_{\rm max}}{h_{\rm min}-h_ {\rm max}}. \tag{9}\]
Here \(h_{\rm max}\) and \(h_{\rm min}\) denote the worst and the best bit-string solution of the problem. If the trained quantum algorithm would always produce the optimal solution with probability 1, the approximation ratio would be 1.
To compare the quality of the results, in addition, we use the VQE algorithm [17] with the ansatz of real-amplitudes [16]. Basically, the ansatz of real-amplitudes consists of one parametrized R(y)-rotation per qubit and entangling neighboring qubits with a CNOT-gate. Repeating this procedure several times (= number of layers) increases the depth of the circuit and thereby the complexity.
All quantum-based calculations in this paper have been performed with IBM's Qiskit framework [23]. Especially, we use out-of-the-box versions of the QAOA- and VQE-algorithm [15], the state vector- and QASM-simulator, as well as different physical quantum backends. The formulation of the optimization problem with IBM's Qiskit [23] is straightforward. Within this framework, different solution methods are available for comparison:
* Solution with IBM's classical optimizer CPLEX [8].
* Small problems can be solved by a direct diagonalization of the problem's Hamiltonian.
* Using the state vector simulator allows to perform the quantum mechanical calculations directly without destroying the quantum state. Again, this is only feasible for small problem sizes.
* Using a quantum simulator that mimics real hardware, where qubits are prepared, transformed by several gates, and then measured. From the measurements, the solution to the problem can be derived. Here, we use the noise-free QASM simulator within IBM's framework which is available in the IBM cloud for sizes up to 32 qubits.
* Using a physical quantum computer in the cloud to perform the calculation on a real device.
For small problem sizes, as the one considered up to now, all solution strategies described can be used to benchmark the results. However, the bigger the problem, the fewer are the range of usable strategies. From a problem size of approx. 50 qubits, the exact diagonalization, and the state vector and quantum simulation methods are not available anymore and the only remaining possibility is using CPLEX (running classical optimization routines under the hood) or using real quantum hardware.
### Results on quantum simulators for different dependency measures and \(\phi\)-values
#### 4.2.1 Lending Club Data
As a first step, we compare the results shown in Table 3 for the case of the lending club data set with different tuples \((\rho_{Feature}^{X},\rho_{Target}^{Y})\). Interestingly, for all the applied solution methods and algorithms (CPLEX, exact diagonalization, state vector-simulator, QASM-simulator, QAOA- and VQE-algorithm) the same optimization value (given \((\rho_{Feature}^{X}\) and \(\rho_{Target}^{Y})\)) is obtained. Furthermore, the calculated accuracy is for all dependency measures equal to 0.830. However, as shown in the table above, the number of selected features and the area under curve (AUROC) differ.
In Fig. 4, we show the dependency of the approximation ratio on the number of layers - for QAOA as well as for VQE. For comparison, we also show the approximation ratio of random search. This value is obtained by drawing 1000 uniform random bit-strings and calculating the mean approximation ratio.
The approximation ratios obtained with the VQE algorithm are in almost all cases close to 1 and are in general higher than the results obtained with QAOA. Further, the QAOA results are distinct: In some cases, the approximation ratios are - independent of \(p\) - near 1, in other cases, they increase with increasing \(p\). This supports the conjecture raised in [5] that there are easy and not-so-easy parameter combinations. From a practical point of view, it would be desirable to have on the one hand a QUBO matrix that leads to a high value of AUROC (see table 3) and on the other hand, a QUBO matrix that is easy to handle for the calculation, i.e. it leads to high approximation ratios with only a few iterations \(p\). Taking this into account, \((\rho_{Feature}^{Correl},\rho_{Target}^{ROC})\) seems to be a good choice for this problem.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(\rho_{Feature}^{X}\) & \(\rho_{Target}^{Y}\) & Optimization value & AUROC & Number of features \\ \hline \(X=Correl\) & \(Y=Correl\) & -0.3542 & 0.6635 & 5 \\ \hline \(X=Correl\) & \(Y=MI\) & -0,0617 & 0.6405 & 3 \\ \hline \(X=Correl\) & \(Y=ROC\) & -3.7803 & **0.6651** & 8 \\ \hline \(X=Correl\) & \(Y=ANOVA\) & -87.8750 & 0.6634 & 7 \\ \hline \(X=MI\) & \(Y=Correl\) & -0.4117 & 0.6639 & 6 \\ \hline \(X=MI\) & \(Y=MI\) & -0.0445 & 0.5790 & 1 \\ \hline \(X=MI\) & \(Y=ROC\) & -4.0131 & **0.6651** & 8 \\ \hline \(X=MI\) & \(Y=ANOVA\) & -88.0025 & 0.6634 & 7 \\ \hline \end{tabular}
\end{table}
Table 3: Lending club data: performance of the optimization approaches when selecting features for the logistic regression model with different dependency measures.
In order to improve the performance of QAOA, we make use of four heuristic methods ((i) extrapolation, (ii) linear ansatz, (iii) quadratic ansatz, (iv) adding zero angles) to choose suitable candidates for initial values of the circuit parameters \(\gamma\) and \(\beta\) for increasing QAOA layers \(p\) as proposed in [5]. By repeating the optimization runs with different initial values one can avoid getting stuck in a local minimum. Further, it has been shown that the required time scales exponentially faster to achieve a similar performance when using the observed patterns of the optimal parameters instead of using arbitrary starting values [29]. The QAOA routine as proposed in [5] and which is used here to solve the feature selection problem is depicted in Figure 4.
Figure 4: Lending Club data: Approximation ratio obtained with VQE, QAOA and tuned QAOA for different values of p, \(\phi=0.95\) and for the following dependency measures: (a) \((\rho_{Feature}^{Correl},\rho_{Target}^{Correl})\), (b) \((\rho_{Feature}^{Correl},\rho_{Target}^{MI})\), (c) \((\rho_{Feature}^{Correl},\rho_{Target}^{ROC})\), (d) \((\rho_{Feature}^{Correl},\rho_{Target}^{ANOVA})\), (e) \((\rho_{Feature}^{MI},\rho_{Target}^{Correl})\), (f) \((\rho_{Feature}^{MI},\rho_{Target}^{MI})\), (g) \((\rho_{Feature}^{MI},\rho_{Target}^{ANOVA})\), (h) \((\rho_{Feature}^{MI},\rho_{Target}^{ROC})\)
Fig. 5.
In Fig. 6, the approximation ratios for different values of \(\phi\) for the lending club data are given as a function of the number of QAOA layers \(p\). For this data set, it is noticeable that the majority of the dependency measures (Fig. 6 (a), (b), (d), (e), (g)) a value of \(\phi=0\) lead to the fastest convergence towards the optimal solution, whereas higher values of \(\phi\) (e.g. \(0.75\leq\phi<0.95\)) lead to a slower convergence. In other words, we find that the dependence between the features is more significant than the dependence between features and the target. For the other dependency measures (Fig. 6 (c), (f), (h)), we find that large values of \(\phi\) (e.g. \(0.75\leq\phi<1\)) perform better than values of \(\phi\leq 0.5\). Consequently, the weight for the two conditions (intra-feature dependence and dependence between features and target) depends on the choice of the dependency measure and it is not useful to be set globally in order to achieve the best possible performance - it is rather suggested to set a local value for the weighting parameter depending on the individual dependency measure.
Lastly in this section, we present the results obtained using a physical quantum backend of IBM for a fixed parameter setup and with the ROC-dependence between the features and the target, and the linear correlation between all features. Furthermore, we set \(\phi=0.95\).
Using the lending club data, we obtain the same optimal value for the objective function, \(obj=-3.7803\), with the classical optimizer, the direct diagonalization of the Hamiltonian, the state vector simulator, and with the quantum simulator. Further, we also obtain the same value for the AUROC, i.e. \(AUROC=0.6651\), c.f. (3). This corresponds to the selection of all features.
When using a physical quantum backend (here: the IBM system Montreal) with \(p=1\), we obtain the same objective function value of \(-3.7803\). However, the approximation ratio drops to \(0.59\). With \(p=5\), the approximation ratio increases to \(0.66\). These values are to be compared with \(0.52\), the approximation ratio obtained by randomly selecting bit-strings. While these results reflect the errors of existing real-world QC backends, they show nevertheless the expected dependence on \(p\).
Figure 5: Flowchart of the QAOA routine for solving the Feature Selection problem, whereby the particularity is to find suitable candidates for initial values of the circuit parameters using four heuristic methods described in [5]
#### 4.2.2 Breast Cancer Data
We now repeat the calculation from the last subsection for the breast cancer data. The results for the different tuples \((\rho^{X}_{Feature},\rho^{Y}_{Target})\) are reported in Table 4. While the optimization values, given \((\rho^{X}_{Feature},\rho^{Y}_{Target})\)) coincides in general for CPLEX, exact diagonalization, and QAOA (state vector-simulator and QASM-simulator), we detect that VQE is not always able to find the same optimization value. Interestingly, this behavior depends on the number of layers. I.e. in the case \((\rho^{Correl}_{Feature},\rho^{ROC}_{Target})\)) it occurs that only for \(p\leq 3\) is the VQE algorithm able to find the optimal value.
Here, in contrast to the lending club data, we see that the accuracy depends on the depen
dency measure and the highest value for the accuracy can not be obtained together with the highest value of the AUROC.
For comparison, the approximation ratios for the breast cancer data are shown in Fig. 8. Again, different values of \(\phi\) are used and we show the dependence on the number of QAOA layers. For the dependency measures (\(\rho_{\text{Feature}}^{\text{Correl}},\rho_{\text{Target}}^{\text{Correl}}\)) and (\(\rho_{\text{Feature}}^{\text{MI}},\rho_{\text{Target}}^{\text{Correl}}\)) we find that setting the parameter \(\phi\) to \(0.75\) leads to the slowest convergence towards the optimal solution, whereas for the other measures a value of \(\phi=0.95\) is showing this behavior. The best convergence is achieved in all cases for an intra-feature setting (\(\phi=0\)) that opens up the question of what influence the condition between feature and target should have, while the results show that this condition does not play a role in order to achieve the best-optimized solution. Compared to the lending club data set, we find that the approximation ratios for the different values of \(\phi\) show a more continuous behavior in terms of the choice of the dependency measure.
As a final result, we present the solution for a fixed parameter combination on a physical quantum backend of IBM. The results are obtained with the mutual information dependence between the features and also to the target. Furthermore, we fix \(\phi=0.75\). Due to limitations in the use of physical quantum backends, we restrict ourselves to two cases and solved the problem with the QAOA algorithms with different values of \(p\).
1) (\(\rho_{Feature}^{Correl},\rho_{Target}^{Correl}\)): As backends, we use the IBM systems Toronto (\(p=1,5\)), Auckland (\(p=2,3\)), and Hanoi (\(p=4\)). We are aware of the fact that results from different machines are not directly comparable. However, due to the availability of system resources, it was not possible to perform all calculations on the same hardware. As approximation ratios we get: \(p=1:0.8981;p=2:0.8734;p=3:0.8917;p=4:0.8562;p=5:0.8706\). These values have to be compared with the random approximation ratio of \(0.7984\), which is significantly lower for all values of \(p\). The approximation ratio obtained on the physical backends is in the order of magnitude as obtained with the QASM simulator.
2) (\(\rho_{Feature}^{Correl},\rho_{Target}^{MI}\)): Again, we obtained the same value for the objective function and the same features selected as reported in Table 4. As backends, we use the IBMQ systems Toronto (\(p=1\)) and Geneva (\(p=2,3,4,5\)). As approximation ratios we get: \(p=1:0.7897;p=2:0.8348;p=3:0.7311;p=4:0.7620;p=5:0.7491\). These values have to be compared with the random approximation ratio of \(0.7743\). The approximation ratio obtained on the physical backends is in the order of magnitude obtained with the QASM simulator.
In the next section, we will explore the results obtained in the context of a larger problem set.
## 5 Extension to 27 features aka qubits
The results obtained so far are encouraging and we extend the calculation to a larger problem. Here, we use a subset of 27 features of the data considered in [20].
For this problem size, using a brute-force approach to determine the best selection of features is not possible anymore. In addition, the state vector-simulator and the exact diagonalization of the Hamiltonian are infeasible. Therefore, we compare the following approaches before applying feature selection via optimization:
* Calculate the AUROC with all features,
* Select the most relevant features via RFE and calculate the AUROC,
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\rho_{Feature}^{X}\) & \(\rho_{Target}^{Y}\) & Optimization value & AUC & ACC & Number of features \\ \hline \(X=Correl\) & \(Y=Correl\) & -0.91611 & 0.9806 & 0.9209 & 3 \\ \hline \(X=Correl\) & \(Y=MI\) & -0.3300 & 0.9644 & 0.8910 & 1 \\ \hline \(X=Correl\) & \(Y=ROC\) & -1.7031 & 0.9822 & 0.9192 & 4 \\ \hline \(X=Correl\) & \(Y=ANOVA\) & -2913.6696 & 0.9839 & **0.9315** & 9 \\ \hline \(X=MI\) & \(Y=Correl\) & -1.438 & 0.9796 & 0.9262 & 5 \\ \hline \(X=MI\) & \(Y=MI\) & -0.4139 & **0.9859** & 0.9297 & 3 \\ \hline \(X=MI\) & \(Y=ROC\) & -2.6366 & 0.9828 & 0.9279 & 6 \\ \hline \(X=MI\) & \(Y=ANOVA\) & -2914.5083 & 0.9839 & **0.9315** & 9 \\ \hline \end{tabular}
\end{table}
Table 4: Breast cancer data: performance of the optimization approach when selecting features for the logistic regression model with different dependency measures.
* Select the most relevant features via LASSO and calculate the AUROC.
The results are shown in Table 5. Again, these values serve as benchmarks and have to be compared with the results obtained according to the QUBO scheme. We translate the feature selection problem to an optimization problem, using correlation as the relevant dependence measure between the features and between features and target and we fix \(\phi=0.9\).
Then, the following strategies are used to solve the optimization problem:
* Select the features with a greedy algorithm, starting from a random configuration and then flipping the qubits to improve the objective function
* Solve the optimization problem with a classical optimizer (here: IBM's CPLEX).
* Solve the problem with a quantum simulator, available in the IBM cloud with using the QAOA algorithm.
* Solve the optimization problem using the QAOA algorithm with a 27-qubit real quantum device.
In contrast to the last chapter, where we used a tuned QAOA algorithm, we use here IBM's standard implementation: First, the advantages of the tuned algorithm are only observable at higher values of p. Since we are now looking at 27 qubits, a useful computation for higher values of p is no longer possible, so we can no longer take advantage of this variant and choose to use IBM's standard implementation instead. Second, when using the IBM algorithms, the cloud infrastructure and the interplay between classical and quantum algorithms via the so-called Runtime can be optimally exploited.
The results using the greedy algorithm and IBM's CPLEX optimizer are presented in Table 6. The calculation with the real quantum device shown below should be considered a proof of concept. For the calculation of the approximation ratio, we use the optimal CPLEX result as ground truth.
We use different US devices available in the IBM cloud - Auckland and Kolkata as well as the German system Ehningen. Results are shown in Table 7.
Again, we also calculate the random approximation ratio, calculated with a sample of 1000 bit-strings.
Due to the fact that now we do not necessarily get the same optimal objective value compared with CPLEX, we use two different prescriptions:
* Approximation ratio rel. CPLEX: We take the best and the worst objective value out of CPLEX as ground truth and calculate the approximation ratios relative to these values.
* Approximation ratio rel. Sim.: From the outcome of the simulations (either random sampling as a baseline or the quantum simulations) we take the best and the worst value of the objective function and use those in the formula for the approximation ratio. This number describes the spread of the different simulation results.
The respective values are shown in Table 8. The results obtained with the physical quantum devices are surprisingly good, considering the existing error rates. The minimum objective function as determined with CPLEX is almost achieved in most cases. The approximation ratios are well above the random sampling result and we see a slight improvement when increasing the value of \(p\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Objective & AUROC & Accuracy & Number of features \\ \hline Greedy optimization & -0.7232 & 0.7469 & 0.709 & 8 \\ \hline IBM’s CPLEX & -0.7232 & 0.7469 & 0.709 & 8 \\ \hline \end{tabular}
\end{table}
Table 6: Results for the data with 27 features, with classical optimization approaches.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Objective & AUROC & Accuracy & Number of features \\ \hline QUASM-Simulator, p=1 & -0.6821 & 0.7412 & 0.731 & 7 \\ \hline QUASM-Simulator, p=2 & -0.6974 & 0.718 & 0.707 & 7 \\ \hline QUASM-Simulator, p=3 & -0.70421 & 0.7435 & 0.707 & 7 \\ \hline Real Device (Auckland), p=1 & -0.6097 & 0.7323 & 0.722 & 5 \\ \hline Real Device (Auckland), p=2 & -0.6218 & 0.7444 & 0.709 & 8 \\ \hline Real Device (Auckland), p=3 & -0.6348 & 0.7469 & 0.7225 & 6 \\ \hline Real Device (Kolkata), p=4 & -0.6199 & 0.7482 & 0.72 & 9 \\ \hline Real Device (Ehningen), p=1 & -0.6073 & 0.7374 & 0.702 & 7 \\ \hline Real Device (Ehningen), p=2 & -0.5982 & 0.7304 & 0.695 & 7 \\ \hline Real Device (Ehningen), p=3 & -0.3543 & 0.7452 & 0.712 & 7 \\ \hline \end{tabular}
\end{table}
Table 7: Results for the data with 27 features, with QC optimization approaches.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & AUROC & Accuracy & Number of features \\ \hline All features & 0.7913 & 0.764 & 27 \\ \hline Selection with RFE & 0.7904 & 0.757 & 17 \\ \hline Selection with LASSO & 0.7768 & 0.732 & 17 \\ \hline \end{tabular}
\end{table}
Table 5: Model performance for the data with 27 features.
### Comparison to classical/metaheuristic optimization algorithms
In order to gain more insight into the details of this optimization problem, we perform another experiment. Here, we are interested in a comparison between classical, metaheuristic algorithms, such as evolutionary algorithms (EA) or estimation of distribution algorithms (EDA) [11, 3, 14, 4].
One core element of such algorithms are stochastic components, e.g., the random mutation of solutions in an EA or the sampling from a distribution in an EDA. Similarly to non-quantum metaheuristics, QAOA also has random components (potentially as part of the classical update step, as well as stochastic elements in the physical quantum circuit). As such, a comparison to (mostly) deterministic solvers such as CPLEX is less intuitive. Moreover, globally optimal solutions might not be necessary (and in fact not guaranteed to be found within a limited time frame), and hence it may be of interest to investigate how intermediate (non-optimal) results from a classical stochastic algorithm compare to those of QAOA.
Another interesting aspect of this comparison is that the metaheuristics we consider are _not_ limited to quadratic (or polynomial) objective functions: they are also applicable to arbitrary non-linear optimization problems (of course, with the respective consequences for performance).
### Classical, stochastic optimization algorithms
We tested four metaheuristics: A simple evolutionary algorithm (EA), a self-adaptive evolutionary algorithm (SAEA), a univariate estimation-of-distribution algorithm (UEDA), and a discrete version of the covariance matrix adaption evolution strategy (DCMA).
#### 5.2.1 Evolutionary algorithm
Evolutionary algorithms (EA) iteratively generate new candidate solutions ("offspring") from existing candidate solutions ("parents") via random variation operators ("mutation" and "recombination"). Well-performing candidates influence candidates in the next iteration. For a more in-depth introduction see, e.g., [11].
The simple EA employed in this experiment uses a fixed mutation and recombination rate, considering a population of \(\mu\) parents. Initially, candidate solutions are generated by first sampling the number \(n_{bit}\) uniformly from \(1,2,...,27\), then uniformly and randomly generating a bit-string with exactly \(n_{bit}\) ones.
In each iteration, new candidate solutions are generated via a mutation operator and a recombination operator. The following operators are used:
* Mutation: bit-flip; inverting a random number of bits of an existing candidate solution.
* Mutation: block-inversion; inverting a random block (sequence) \(i,i+1,...j-1,j\) of bits.
* Mutation: cycle; cyclically shifting bit-values to the right or left.
* Recombination: one-point crossover; all bits up to bit \(i\) are taken from one parent, the remaining bits from the second parent.
* Recombination: two-point crossover; selecting a random sequence of bits \(i,i+1,...j-1,j\) from one parent, the remaining bits from the second parent.
* Recombination: uniform crossover; selecting bits uniform randomly from both parents.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & Approx. ratio rel. CPLEX & Approx. ratio rel. Sim. \\ \hline Random Sampling & 0.7314 & 0.6899 \\ \hline Real Device (Auckland), p=1 & 0.8351 & 0.7086 \\ \hline Real Device (Auckland), p=2 & 0.8189 & 0.7466 \\ \hline Real Device (Auckland), p=3 & 0.8274 & 0.7159 \\ \hline Real Device (Kolkata), p=4 & 0.8271 & 0.7272 \\ \hline Real Device (Ehingen), p=1 & 0.8356 & 0.7185 \\ \hline Real Device (Ehingen), p=2 & 0.8734 & 0.6941 \\ \hline Real Device (Ehingen), p=3 & 0.8541 & 0.7299 \\ \hline \end{tabular}
\end{table}
Table 8: Approximation ratios for the data with 27 features, with QC optimization approaches.
The mutation rate \(r_{m}\) determines how many bits are changed with each mutation operator (i.e., number of bits flipped, size of the inverted block, step size of the cyclical shift). We will investigate four parameters for tuning: population size \(\mu\), mutation operator choice for recombination \(o_{r}\) and mutation \(o_{m}\), and the mutation rate \(r_{m}\). The remaining parameters of the EA are left at default values. For performance reasons, an archive of all candidate solutions is not retained. The employed implementation is available via the R-package CEOO[28].
#### 5.2.2 Self-adaptive evolutionary algorithm
One issue of the'simple' EA described above is that it requires setting fixed algorithm parameters. The performance of the EA is rather sensitive to some of these parameters (e.g., population size, operator choices, and mutation rate). Hence, these parameters may require tuning or else, may lead to unsatisfactory performance.
The self-adaptive EA (SAEA) tries to partially alleviate this issue by attaching several parameters \(\theta\) to each solution candidate (i.e., to each individual in a population). Our SAEA variant follows a similar approach as described for the Mixed Integer Evolution Strategy [19].
Subsequently, the parameters \(\theta\) are evolved alongside the actual binary decision variables \(z\): each candidate solution is then composed as \(z^{\prime}=\{z,\theta\}\). Thus, if some solution candidate is successful (i.e., has a good objective function value), the associated parameters \(\theta\) (e.g., the mutation rate) that were used to generate it will be more likely to be used in subsequent iterations as well.
Note, that some algorithm parameters will necessarily be fixed (at least during a single algorithm run), and not attached to each candidate solution. For example, we have to specify a set of operators, ranges of parameters, and learning rates to configure the self-adaptive procedure itself. The underlying motivation is that the algorithm should be less sensitive to the parameters that control the self-adaptive procedure (e.g., learning rate) when compared to the sensitivity to changes in the parameters controlled by that procedure (e.g., mutation rate).
In detail, the SAEA will self-adapt the following three parameters: the mutation rate \(\theta_{1}=r_{m}\in[1/n,1]\), the mutation operator choice \(\theta_{2}=o_{m}\) (bit-flip, block-inversion, cycle), and the recombination operator choice \(\theta_{3}=o_{r}\) (1-point, 2-point, uniform).
The choices of operators for mutation and recombination are themselves also mutated, choosing randomly with probability \(p_{r}\) from the parents. Similarly, the mutation rate is mutated in each iteration, via \(\theta_{1}^{*}=\theta_{1}e^{\tau_{e}}\). Here, \(\tau\) is a learning rate (hyperparameter) and \(\epsilon\) is a normal-distributed random sample with zero mean and unit variance. Intermediate crossover is used to recombine \(\theta_{1}^{*}=f(\theta_{1}^{*(1)},\theta_{1}^{*(2)})\) values from parent solutions \(\{z^{\prime(1)},z^{\prime(2)}\}\).
Like the simple EA, the SAEA implementation is available via the R-package CEOO[28], and uses the same initialization strategy.
#### 5.2.3 Univariate estimation-of-distribution algorithm
The third algorithm we employ is an estimation of distribution algorithm (EDA). EDAs essentially model the set of candidate solutions with a distribution, iteratively sampling from that distribution to generate new candidates, then updating the distribution parameters based on the evaluated candidates.
Here, we use a separate univariate binary distribution to model each solution bit \(z_{i}\). Specifically, our univariate EDA (UEDA) is a variant of the population-based incremental learning algorithm (PBIL) [1]:
* Each bit \(z_{i}\) is represented by an independent probability \(p_{z_{i}}\).
* New candidate solutions are sampled based on these probabilities. We refer to the number of generated candidate solutions (bit-strings) as the population size \(\mu\).
* Each candidate solution is evaluated with the objective function.
* The better 50% of the candidate solutions are selected.
* Based on that selection, new probabilities \(p_{z_{i}}^{*}\) are estimated (via the mean value for each bit, over all selected candidates).
* The probabilities for the next iteration are set via a 'learning' update \(p_{z_{i}}=p_{z_{i}}(1-\tau)+p_{z_{i}}^{*}\tau\).
* To avoid collapsing to \(p_{z_{i}}=0\) or \(p_{z_{i}}=1\), the probabilities are limited to \(p_{z_{i}}\in[\frac{1}{n},1-\frac{1}{n}]\).
An implementation of this simple UEDA was written by the authors in R. It uses the same initialization strategy as the EA.
#### 5.2.4 Discrete covariance matrix adaption evolution strategy (DCMA)
Finally, we also employ a variant of the covariance matrix adaption evolution strategy (CMAE-ES), which can handle discrete (including binary) variables by introducing a lower bound on the marginal probabilities [13]. In essence, this algorithm can also be viewed as an EDA. But in contrast to the UEDA, correlations between variables \(z_{i}\) are taken into account, as the internal distribution is based on an evolving covariance matrix.
We use the implementation of this margin-based CMA-ES provided by the cmaes python library [24] and refer to this algorithm as discrete covariance matrix adaption (DCMA). Besides the population size \(\mu\), this algorithm is also impacted by an initial standard deviation \(\sigma_{\mathrm{init}}\), as well as the margin parameter \(\alpha_{m}\).
#### 5.2.5 Random and Greedy Search
Finally, we employ two simple algorithms as baselines to compare against: a simple greedy local search with restarts (GRS) and random search (RS). The former is at its core a purely exploitative, local search, while the latter is purely explorative, global search.
The local search procedure of the GRS starts at a random initial solution (determined in the same way as the initial solutions in the EA). Neighboring candidate solutions are produced by bit-flips, in a random order. Once a neighbor improves on the current solution, it immediately becomes the current solution. If no neighbor improves on the current solution, the procedure is restarted with a new initial solution. We do not tune this procedure for the sake of simplicity. Note that we could in principle make this procedure more complex, e.g., by redefining neighboring solutions to mean something else than single bit-flips, by introducing a more clever way to generate new initial solutions for restarts, or by hybridizing it with the EA or similar algorithms.
The random search simply tries out new candidate solutions at random, where each solution is generated in the same way as random initial solutions in the EAs.
### Experiment
The whole data set is randomly split into two subsets. Each subset is used to generate a corresponding instance of the QUBO objective function, using correlation as the respective dependency measure. One instance will be used for tuning, another for validating / comparing the (tuned) algorithms.
For the quantum algorithm QAOA, we run the algorithm on real QC devices (mainly Geneva and Montreal) nine times for \(p=3\), solving only the validation instance. Similarly, CPLEX is run on the validation instance of the objective function (and will be used to provide a baseline, as it solves this problem exactly).
#### 5.3.1 Algorithm Tuning
Tuning is performed on the first of the two instances of our objective function. The goal is to provide just a very rough pre-configuration for the meta-heuristics, to avoid detrimental performance due to a poor algorithm configuration and allow for a reasonably fair comparison between the metaheuristics.
Therefore, the employed tuning method is uniform random search: For each algorithm, 100 algorithm configurations are sampled uniformly at random. For each algorithm configuration, 10 independent algorithm runs are performed. Each run uses a maximum of 2000 objective function evaluations. Afterward, the configuration with the best median performance over the 10 runs is returned as the 'tuned' configuration. Here, 'performance' means the median sum (overall objective function evaluations (OFE)) of the cumulative minimum objective value recorded by the tuning procedure. We tune two to four'main' parameters of each algorithm, as specified in Table 9 (omitting the more simple baselines, and omitting QAOA due to computing constraints).
The tuning results in Table 9 indicate that SAEA performs best on the respective tuning instance of the objective function, followed by UEDA. The simple EA performs worst during tuning. In addition, the low standard deviation implies that SAEA is less sensitive to the tuned parameters, which may reasonably be attributed to the self-adaptive mechanism.
In terms of tuned parameters, the mutation rate for the EA is close to \(1/n\), and the respective operator is bit-flip. This suggests that disturbing solutions as little as possible are preferable. The chosen recombination operator is a uniform crossover. This makes sense, as there is no obvious structure in the order of bits \(z_{i}\) (i.e., the underlying data features, whose order is arbitrary) which would have to be preserved by using 1- or 2-point crossover.
The population size is the largest in the tested set of algorithms, which may be a reasonable measure to account for the relatively static behavior of the EA.
The learning rate \(\tau=0.048\) of the SAEA is set rather low, resulting in an average change of the mutation rate at only \(3.8\) % per iteration. Considering that the initial mutation rate is at \(1/n\), this means that it will probably remain fairly low throughout the complete run. Comparatively, the probability for changes of the operators is set rather high, leading to more frequent changes of the operator (which may either indicate that these operators have to change dynamically during the optimization run, or else, that the algorithm is not that sensitive to frequent changes of the operators).
The UEDA receives a rather high learning rate. This means, new probabilities \(p_{z_{i}}^{*}\) estimated in an iteration contribute more to the respective probabilities in the next iteration than the probabilities employed in the current iteration of the UEDA. Seemingly, the UEDA requires changing the probabilities rather dynamically throughout each algorithm run.
Finally, the DCMA receives a population size that is slightly larger than the default at \(13\). The initial standard deviation is set rather low, and the margin parameter is set close to the suggested default \(\left(\mu n\right)^{-1}\).
Only the more complex non-QC metaheuristics are tuned. The QAOA algorithm is not tuned before running it on the physical QC devices, since the accessibility of the system does not allow for the respective larger number of runs required for tuning. Also, the metaheuristics, unlike QAOA, are not specifically developed for quadratic (or polynomial) objective functions and are hence, by default, less specialized to this problem class.
#### 5.3.2 Validation experiment
Validation is performed on the second of the two instances of the objective function. The goal of this is an unbiased comparison, to avoid a potential overfit of algorithm parameters to the tuning instance, although this is relatively unlikely given the rather rough tuning procedure.
For each algorithm, a maximum of \(5000\) objective function evaluations are allowed per independent run. \(20\) independent runs are performed for each metaheuristic algorithm (each run with different initial seeds).
Due to limitations in terms of computing access, QAOA does only \(9\) independent runs on the validation instance, with \(58\) to \(79\) iterations. Note, that each run takes three to four hours to complete (time-in-queue not included). In comparison, a run of any of the metaheuristics
\begin{table}
\begin{tabular}{l l l l|l} algorithm & parameter & range & tuned & performance \\ & & of values & value & (sd-all) \\ \hline EA & population size \(\mu\) & \([4,200]\) & \(52\) & -565 \\ & mutation rate \(r_{m}\) & \([0,1]\) & \(0.042\) & (73.2) \\ & mutation operator \(o_{m}\) & \(\{1,2,3\}\) & \(1\) & \\ & recombination operator \(o_{r}\) & \(\{1,2,3\}\) & \(3\) & \\ \hline SAEA & population size \(\mu\) & \([4,200]\) & \(14\) & -593 \\ & learning rate \(\tau\) & \([10^{-4},10^{0}]\) & \(10^{-1.32}\) & (25.6) \\ & probability \(p_{r}\) & \([0,1]\) & \(0.30\) & \\ \hline UEDA & population size \(\mu\) & \([4,200]\) & \(15\) & -586 \\ & learning rate \(\tau\) & \([0,1]\) & \(0.95\) & (66.5) \\ \hline DCMA & population size \(\mu\) & \([4,200]\) & \(19\) & -576 \\ & initial stand. dev. \(\sigma_{\text{init}}\) & \([10^{-4},10^{4}]\) & \(10^{-2.03}\) & (59.0) \\ & margin \(\alpha_{m}\) & \([(\mu n)^{-1.5},(\mu n)^{-0.5}]\) & \((\mu n)^{-1.23}\) & \\ \end{tabular}
\end{table}
Table 9: Tuning: bounds and results for each parameter. \(b_{t}\) is the budget, i.e., the maximum number of evaluations per run during tuning. Mutation operator codes are \(1\): bit-flip, \(2\): block-inversion, \(3\): cycle. Recombination operator codes are \(1\): one-point, \(2\): two-point, \(3\): uniform. Where values are specified with exponents, random search applies uniformly in the exponents (instead of uniform in terms of actual values). The column ’performance’ lists the median sum of the cumulative minimum objective value recorded by the tuning procedure. In brackets, the standard deviation over all random configurations is given (not the standard deviation of the best performance only).
takes 2 to 6 seconds, depending on hardware details (using no parallel computing resources). CPLEX usually requires less than one second but makes use of multiple CPU cores, if available.
#### 5.3.3 Validation result
Figure 9 shows some first results from the validation experiment. The upper two boxplots show the best solution founds (aggregated over the independent runs of each algorithm), early after just 100 classical objective function evaluations (OFE) and finally after 5000 OFE. Note that the numbers shown for QAOA are in each case recorded at the end of a terminated run. Here, we see that while QAOA results are better than early results from each classical algorithm, the classical algorithms eventually find better results, given that enough OFE is invested.
Between the classical algorithms, GRS performs best in terms of early performance. SAEA and GRS are the only algorithms to attain the global optimum in all runs within 5000 OFE, with GRS converging considerably faster. This clear advantage of GRS is also visible in Fig. 10, and may indicate that the problem is 'easy' to solve at least from the perspective of local search: only a few repeated local searches are sufficient to consistently determine the optimum.
The lower left of Fig. 9 shows the approximation ratio (AE) after 5000 OFE. Note, that unlike the QAOA, AR values of the classical algorithms simply consider the last 50 evaluations of each run, weighting each observed candidate solution equally. The EA receives AR values comparable to QAOA, all other algorithms receive higher values. But note, that AR is not necessarily a comprehensive quality measure for the classical algorithms, since good performance (in terms of the best solution found) can be achieved by all four tested algorithms, despite significant differences in terms of AR. The reason here is that while final performances are equal, the variation of results differs a lot between the four algorithms, as the bottom right plot shows. Here, the EA and SAEA variants show much larger variations than the UEDA and DCMA. This is not (necessarily) a negative observation for EA and SAEA. In fact, larger variation within the sampled candidate solutions may imply a better ability to escape local optima.
As seen in Fig. 10, the metaheuristics require a few hundred of OFE to meet the performance of the final QAOA results. Since QAOA does not consume OFE, it is instead reported against the number of QAOA iterations in Fig. 11, where each QAOA iteration is (somewhat arbitrarily) valued at 86 OFE, to enable a visual comparison to the metaheuristic solvers.
Figure 9: Top left: best found objective value after 100 objective function evaluations (OFE) per run. Top right: same after 5000 OFE. Note: QAOA is best overall, not counting OFE, hence the same in top left and top right. Bottom left: approximation ratio (of the last 50 of 5000 OFE in case of the non-quantum algorithms). Bottom right: raw objective function values of the last 50 OFE.
Figure 10: Optimality gap of the metaheuristics over the number of OFE (with log-scaled axes). Colored thick line is the median of each metaheuristic. The colored ribbon shows 5 and 95 percentiles. Black dashed lines show 5, 50, and 95 percentiles for QAOA. The thin red line at the bottom shows the global optimum value of the objective function (determined via CPLEX). The optimality gap is the difference of the best found objective function value to the optimum. The optimum is rounded down on the third digit to avoid infinite values. Hence, the global optimum is just below the 0.001 mark.
Figure 11: Optimality gap of the metaheuristics and QAOA over the number of OFE (log-scaled axes), similar to Figure 10, for the first 1000 OFE. Since QAOA does not consume OFE, it is instead reported against the number of QAOA iterations, where each QAOA iteration is (somewhat arbitrarily) valued at 86 OFE, to enable a comparison to the metaheuristics (i.e., scaled up to 5000 OFE). The optimality gap is the difference of the best found objective function value to the optimum. The optimum is rounded down on the third digit to avoid infinite values. Hence, the global optimum is just below the 0.001 mark.
The corresponding plot shows that QAOA itself starts with better objective function values than the competing algorithms, but after some initial gains, progresses only a little. Only the simple RS is consistently outperformed by QAOA.
While QAOA (when run on actual QC hardware) achieves decent results fairly early, it is not yet able to compete with non-QC optimization algorithms. To determine whether this may change eventually will require a more in-depth investigation, that would have to make strong assumptions about, among others, how costs/effort of QC and non-QC algorithms are to be compared reasonably, and how they scale with aspects such as problem dimension or available parallel computing power.
## 6 Discussion
In this paper, the problem of selecting appropriate features for a supervised learning algorithm has been transformed to an optimization problem. This optimization problem has been solved with classical numerical methods as well as with quantum computing.
During the transformation of the feature selection problem, several ambiguities arise: One may choose between different dependency measures and choose whether the dependence between the features or the dependence between each feature and the target is more important. While the choice of the dependency measure depends on the data set considered, we found that in general a high value of \(\phi\) is favored, which means, that the dependence between the target and the features is more important.
When comparing the performance of the optimization method (with a proper choice of the dependency measure and \(\phi\)), this method can compete with established methods like RFE or LASSO. However, all feature selection methods seem to be not able to find the global optimum, which could be obtained by brute force, for our small data sets.
After confirming that the optimization method for feature selection works in principle, we explored different methods to solve the resulting quadratic binary optimization problem: A simple greedy search algorithm, a well-established commercial optimizer (IBM CPLEX), and a cutting-edge method of quantum computing.
For the quantum computing solution, we used the QAOA algorithm within the gate-based approach and used quantum simulators, as well as real quantum hardware available on the IBM quantum cloud. Whereas the solution of the small-size data sets merely served to demonstrate the feasibility, the solution of the 27-feature problem is undoubtedly more interesting. Although we were limited to \(k<=3\) within QAOA, on real hardware we obtained approximation ratios above 0.80; well above the random sampling result. With the number of qubits available at the moment, the classical and meta-heuristical methods are not outperformed by quantum computing. An open question that will be addressed in future work is the scaling of both approaches, i.e. if and for which number of qubits there may be a quantum advantage.
**Acknowledgements** This work is funded by the Ministry of Economic Affairs, Labour and Tourism Baden-Wurttemberg in the frame of the Competence Center Quantum Computing Baden-Wurttemberg (project 'QORA II').
Useful discussions and the careful reading of the manuscript by PD Dr. Thomas Wellens, IAF, are greatly acknowledged.
**Data availability** The data that support the findings of this study are available upon reasonable request.
**Declarations** The authors have no relevant financial or non-financial interests to disclose. |
2308.06887 | Robustified ANNs Reveal Wormholes Between Human Category Percepts | The visual object category reports of artificial neural networks (ANNs) are
notoriously sensitive to tiny, adversarial image perturbations. Because human
category reports (aka human percepts) are thought to be insensitive to those
same small-norm perturbations -- and locally stable in general -- this argues
that ANNs are incomplete scientific models of human visual perception.
Consistent with this, we show that when small-norm image perturbations are
generated by standard ANN models, human object category percepts are indeed
highly stable. However, in this very same "human-presumed-stable" regime, we
find that robustified ANNs reliably discover low-norm image perturbations that
strongly disrupt human percepts. These previously undetectable human perceptual
disruptions are massive in amplitude, approaching the same level of sensitivity
seen in robustified ANNs. Further, we show that robustified ANNs support
precise perceptual state interventions: they guide the construction of low-norm
image perturbations that strongly alter human category percepts toward specific
prescribed percepts. These observations suggest that for arbitrary starting
points in image space, there exists a set of nearby "wormholes", each leading
the subject from their current category perceptual state into a semantically
very different state. Moreover, contemporary ANN models of biological visual
processing are now accurate enough to consistently guide us to those portals. | Guy Gaziv, Michael J. Lee, James J. DiCarlo | 2023-08-14T01:47:26Z | http://arxiv.org/abs/2308.06887v2 | # Robustified ANNs Reveal Wormholes
###### Abstract
The visual object category reports of artificial neural networks (ANNs) are notoriously sensitive to tiny, adversarial image perturbations. Because human category reports (aka human percepts) are thought to be insensitive to those same small-norm perturbations - and locally stable in general - this argues that ANNs are incomplete scientific models of human visual perception. Consistent with this, we show that when small-norm image perturbations are generated by standard ANN models, human object category percepts are indeed highly stable. However, in this very same "human-presumed-stable" regime, we find that _robustified_ ANNs reliably discover low-norm image perturbations that strongly disrupt human percepts. These previously undetectable human perceptual disruptions are massive in amplitude, approaching the same level of sensitivity seen in robustified ANNs. Further, we show that robustified ANNs support precise perceptual state _interventions_: they guide the construction of low-norm image perturbations that strongly alter human category percepts toward specific prescribed percepts. These observations suggest that for arbitrary starting points in image space, there exists a set of nearby "wormholes", each leading the subject from their current category perceptual state into a semantically very different state. Moreover, contemporary ANN models of biological visual processing are now accurate enough to consistently guide us to those portals.
[https://github.com/ggaziv/Wormholes](https://github.com/ggaziv/Wormholes)
## 1 Introduction
Based on empirical alignment, some fully-trained artificial neural networks (ANNs) are the current leading scientific models of the integrated mechanisms of the adult primate visual ventral stream - in particular, its multi-level neural computations and some of the visually-guided behaviors it supports [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. However, individual ANNs are also notoriously susceptible to _adversarial attack_: Adding of a tiny (e.g., ultra-low norm \(\ell_{2}\)-norm) pixel perturbation to the model's input, which is optimized to disrupt the model's categorization of the original image [11; 12; 13]. Because human object category perception has been shown to be only weakly sensitive to those same small-norm pixel perturbations [14; 15; 16], if at all, this argues against those ANNs as scientific models of human visual perception [17; 18].
Moreover, human object category perception is empirically robust to random image perturbations of sufficiently low-norm, here estimated to be \(||\delta||_{2}\leq 30\) (see Methods). The corresponding assumption is that humans are robust to _any_ perturbation with low norm (Fig 1a, but see [19; 20] for special cases). Our primary goal here was to re-examine this prevailing assumption, and to systematically compare human and ANN model categorization behavior in this low-norm pixel perturbation regime.
We were motivated by three recent observations: First, much recent work has been aimed at addressing the adversarial sensitivity of ANNs [21], including _adversarial training_[22] - a training scheme in
which insensitivity to low-norm pixel perturbations is _explicitly imposed_ by generating adversarial examples during model optimization (a procedure we refer to here as "robustification"). Intriguingly, the resultant _robustified_ ANNs are not only less susceptible to adversarial attacks on new clean images [22, 23], but, unlike vanilla ANN attacks, those attacks introspectively seem to engage human perception [24, 25, 26]. Second, when tested on clean natural images, robustified ANNs have internal "neural" representations that are more closely aligned with the corresponding neural population representations along the primate ventral stream, relative to vanilla ANNs [27, 28]. Third, the adversarial sensitivity of the responses of individual "neurons" in deep layers of robustified ANNs has been reported to be comparable with the sensitivity of the responses of individual biological neural sites at the corresponding layer of the primate visual ventral stream [29]. In addition, the robustified ANNs were found to reliably predict [ultra-]low-norm perturbation _directions_ in pixel space for which biological neural sites turned out to indeed exhibit unexpectedly high sensitivity.
Taken together, these observations suggested the possibility that robustified ANNs may have closed the human-to-model behavioral gap in low-norm perturbation sensitivity. If so, then because robustified ANNs still exhibit ubiquitous, dramatic category report changes with precisely targeted low-norm pixel perturbations, then humans must _also_ exhibit such not-yet-detected, ubiquitous, massive "bugs" in their visual processing. And those bugs could be utilized as image-dependent "features" that may give rise to very strong, yet very efficient (i.e., low pixel budget) human perceptual modulation.
We therefore asked: **Q1) have robustified ANN models closed the model-human gap in adversarial sensitivity?** And, **Q2) can arbitrary human percepts be specifically induced in the low pixel budget regime**, where human percepts are believed to by highly robust? We asked these two questions in the context of nine-way visual categorization tasks that are performed both by ANN models (all parameters frozen) and by human observers: a test image is presented and one of nine category reports is made (by models and by humans). To create each such test image, a frozen ANN model is utilized by a standard image generation algorithm as a "guide" in pixel space (dimensionality
Figure 1: **Robustified models discover low-norm image perturbations that strongly modulate human category percepts.**_The prevailing assumption: Human object category percepts have complicated topology in pixel space, but are robust (i.e., stable) inside a low pixel budget envelope around most natural images (**a**). Guided by robustified ANNs, we discovered image perturbations within that envelope that strongly disrupt (**b1**) or precisely change (**b2**) human category percepts. (**c1**) Image category disruption rates of humans (black) and models on the same modulated images, replicating a large gap in behavioral alignment for a non-robust “vanilla” model, which is largely closed by adversarial robustification. Model curves show reports by “surrogate” model of the Guide Model type (i.e., “gray-box”). (**c2**) Robust models allow for precise Targeted Modulation (TM) of human behavior in the “human-presumed-insensitive” pixel budget regime. Shown are TM examples by \(\ell_{2}\) pixel budget of 30.0 in the low pixel budget regime, demonstrating that arbitrary source images can be locally modulated toward arbitrary target human category percepts. The presented robust model in all panels is adversarially-trained at \(\ell_{2}\) pixel budget of 3.0. Error bars, 95% CI over source images & human raters._
D = 150,528; see Methods) to construct a low-norm pixel perturbation (length D) relative to an arbitrary start image for a prescribed "perceptual" goal. For **Q1)** the prescribed goal is: "Perturb this start image to disrupt the model's currently category judgement." And, for **Q2)** the prescribed goal is: "Perturb this start image to induce a prescribed category judgement [X]." We use different guide models to generate these perturbations, and we then measure the effect (if any) of such image perturbations on the categorization behavior of: (i) the original ANN guide model (aka "white-box"), (ii) a surrogate ANN model from the same family as the guide model (aka "gray-box"/transfer test), and (iii) a pool of human observers at natural viewing times.
Our main contributions are as follows:
\(\bullet\) We provide evidence for the existence of low-norm image perturbations of arbitrary natural images that strongly disrupt human categorization behavior.
\(\bullet\) We provide empirical evidence for the existence of "wormholes" between category percepts: local (i.e., low-norm) perturbations in image space that support "travel" from the category percept induced in a human subject by an arbitrary start image to semantically very "distant" perceptual categories.
\(\bullet\) We show that some robustified ANN models have largely, but not completely, closed the model-to-human behavioral correspondence gap in low-norm perturbation sensitivity.
\(\bullet\) We show that robustified ANN models coupled with a generator algorithm can support surprisingly precise, low-norm perceptual state modulations for humans.
## 2 Overview of approach and experiments
To study the effects of small image perturbations on human visual object categorization, we used a two-stage methodology: (i) generate small image perturbations predicted to modulate human behavior by highly-ranked models of the ventral visual stream and control models, then (ii) collect human object categorization reports in a nine-way choice task, identical to that performed by the model using Amazon Mechanical Turk surveys. This methodology allowed us to directly assess the alignment between model category judgements and (pooled) human category judgements in response to the same start images and the same perturbations of those start images.
### Generating image perturbations predicted to modulate human behavior
We focused on two image perturbation modes conceptually illustrated in Fig 1b: (i) Disruption Modulation (DM), involving perturbations intended to induce model errors by driving the model's category judgement away from the ground-truth category label, irrespective of the alternatively-induced model category judgement (Q1 above); and (ii) Targeted Modulation (TM), involving perturbations designed to induce specific, "target" model category judgements (Q2 above). These two modes are also referred to as untargeted and targeted attacks, respectively [22; 30; 18]. Image perturbations were created by one of the four model families we considered here, without any guidance from human experimenters and without guidance from human subjects data.
To have a tractable perceptual category report set that avoids the problem of humans being unfamiliar with the 1000 fine-grained ImageNet classes, we focused on a subset of ImageNet that has been mapped to a basic set of nine classes, called "Restricted ImageNet" [30]. Using this nine-category image set, we randomly sampled starting images (aka "clean" images) from each class and, for each such starting image, produced a DM-perturbation image and eight TM-perturbation images (one for each of the eight alternative possible categories). Each such perturbation image was required to be within the pixel "vicinity" of the starting image by imposing a pre-chosen upper limit on the \(\ell_{2}\)-norm of the perturbation (aka "pixel budget" \(\epsilon\), Fig 1a). To systematically test perturbation efficiency, we repeated this procedure for a wide range of pre-chosen pixel budget limits, focusing on the "human-presumed-stable" low-norm pixel budget regime (\(\epsilon\leq 30\); see Sec 1).
For Guide Models (GM) used to direct the construction of image perturbations, we focused on the ResNet50 ImageNet-pretrained model family [31]. These ANNs are among the currently leading models of primate ventral visual processing stream and its supported behaviors [32; 2; 29], and particularly so in their more recent adversarially-trained, "robustified" version [22; 30]. We thus considered four families of ResNet50 models, which differ only in their robustification level, starting from the non-robust, "vanilla" variant, through those commonly employed (\(\ell_{2}\) pixel budget at training of 3.0) [30], to new models which we trained for other robustification levels. Notably, we denote this _adversarial training budget_ by \(\varepsilon\), to be distinguished from the perturbation budget, \(\epsilon\), which applies to any image-perturbation method.
For a fair quantitative comparison of model category judgements and human category judgements we addressed the "gray-box" setting [12; 11; 13], where we ask _"surrogate" models_ - a model from the Guide Model's family, but which is initialized and trained using a different seed - to report its category judgements of the perturbed images (i.e., within family transfer test).
### Measuring human object category percepts
We used Mechanical Turk for large-scale behavioral experiments of nine-way image categorization. Randomly-selected human raters provided their trial-by-trial category reports via a sequence of trials. In each trial a single test image - randomly-chosen from the full set of TM or DM perturbations images - was presented at the center of gaze for a fixed duration of 200ms, after which the rater was shown a category choice screen with nine category options ('dog', 'cat', 'frog', 'turtle', 'bird', 'primate', 'fish', 'crab', 'insecet'). No pre-mask or post-mask was used. Raters were asked to report the category that best describes the presented image.
## 3 Results
### Disruption Modulation results
Fig 2 shows the results of the Disruption Modulation (DM) experiment. The DM image generation Guide Models include a non-robust (vanilla) model and three versions of robustified models, trained to different robustification levels, indicated by their perturbation pixel budget at adversarial training, namely, 1.0, 3.0, and 10.0. Each panel corresponds to measured category report "errors" on a different system: the guide model itself, a guide-surrogate model, and on humans. Note that "errors" here are defined with respect to the ground truth label of the start (unperturbed) image.
Examination of the effect of image perturbations on the Guide Model that designed them (Fig 2b, left) confirms previous reports that robustification strongly reduces white-box perturbation sensitivity near novel start images, as previously reported [11; 12; 13]. And the human DM results confirms previous reports [14; 15; 16] that human category percepts are only very weakly sensitive to low-norm image perturbations designed by vanilla ANNs (Fig 2b right, blue line).
Figure 2: **Low-norm image perturbations discovered by robustified models strongly disrupt human category judgements.**_(a)_ _The Guide Models used for Disruption Modulation (DM) image generation._ _(b)_ _Disruption rates of humans and models. All panels share the same set of start images, and the four sets of perturbed images (one for each of the four Guide Models, color-coded). Y-axis shows the categorization “error” rate with respect to the category label of the (clean) start image. Human curves show average across subjects and trial repeats per image, with lapse-rate correction applied subject-wise. “White-box” refers to reports made by the Guide Model itself. “Gray-box” refers to reports by a second seed “surrogate” model of Guide Model type. Error bars, 95% CI by bootstrap over images & subjects._ _(b)_ _Example of modulated images generated by robust model [3.0] (in red) and, in contrast, by vanilla model (in blue) in the low pixel budget regime._
In contrast, we report this novel finding: **human percepts are massively disrupted by low-norm image perturbations discovered by robustified ANNs** trained at an \(\ell_{2}\) pixel budget of 3.0 (red line). At an image perturbation budget of 30, \(\sim\)90% of human category reports are no longer the category of the start image. Visual inspection of some example test images (Fig 2c) allows introspection about the perceptual differences between image perturbations generated by vanilla guide models vs. those generated by robustified guide models. Notably, we report these strong disruptions in human category percepts when perturbing at budgets well above the budget used for model robustification, yet still well-below the typical pairwise natural image distance regime (see Supplementary Material, Fig 1). This result was not a priori guaranteed.
We sought to determine whether human perceptual disruptions are quantitatively aligned with the predictions of robustified ANNs. Because we cannot perform white box attacks on the human visual system, the fairest assessment of this is via comparison of effects on humans with effects on surrogate models. To facilitate this comparison, we re-plot the results of the robustified ANN model (level 3.0, red) on the same panel as the human data (Fig 1c1, black line). Those plots demonstrate: (i) the originally motivating qualitative "gap" between the very high sensitivity of vanilla (surrogate) ANNs and the almost complete lack of human perceptual sensitivity to the same perturbations, and (ii) that some robustified ANNs have dramatically closed that gap. Nevertheless, some human-model misalignment still exists, with humans typically requiring about twice the perturbation strength to produce a quantitatively matched level of perceptual disruption as that particular ANN model family (red vs. black in Fig 1c1 right).
We also noted that robustified models show stronger "white-box"-to-"gray-box" alignment. Namely, image perturbations driven by robustified models not only transfer effectively to humans but also exhibit stronger alignment to surrogates of their own model class, compared to vanilla models (compare Fig 2b left vs. middle graphs).
We further assessed the impact of robustification training level on our findings. We found that models, which were adversarially trained at reduced allowable budget of 1.0, were less human-aligned (cf. Fig 2b right, yellow vs. red lines). Increasing the allowable budget at training to 10.0 did not result in a significant qualitative difference in model-human alignment. However, it did lead to less effective human category disruptions per budget level (cf. Fig 2b right, brown vs. red lines).
In follow up experiments, we found these human perceptual disruption effects to be largely unaffected by test image viewing times in the natural viewing time range of 100-800ms (see Supplementary Material). In summary, our results demonstrate that robustified models reveal at least a subset of behaviorally meaningful image perturbations in the low pixel budget regime.
### Targeted Modulation results
Fig 3 shows the results of the Targeted Modulation experiments - tests of each Guide Model's ability to shift human percepts _toward prescribed target categories_ from arbitrary start images (see Sec 2.1). We first randomly selected start images from all nine object categories and perturbed each source image toward all different-class targets. We used several Guide Models: the non-robust (vanilla) model, three variations of robustified models with different levels of robustification, and an image interpolation baseline approach using randomly selected images from the intended target categories. We experimented with a range of \(\ell_{2}\) pixel budgets in the low pixel budget regime, using the same source images across all conditions. Fig 3a provides examples of images generated by the \(\varepsilon_{tr}[3.0]\) robustified model at a 30.0 pixel budget in the low budget regime.
We found that robustified models trained with an \(\ell_{2}\) pixel budget of 3.0 and 10.0 give rise to perturbations that are reliably reported by humans as the target category (Fig 3b). These models discovered specific image modifications that resulted in a 60% probability of humans choosing the prescribed target category at an \(\ell_{2}\) pixel budget of 30 (chance: \(\sim\)11%). In comparison, the interpolation approach remained at baseline level throughout the low pixel budget regime. Consistent with the findings in the Disruption Modulation experiment, we found that perturbations made by vanilla model had no targeted effect on human category percepts.
The _specificity_ of our Targeted Modulation is further demonstrated in Fig 4a. This panel shows examples of successful TM toward 'dog' and 'cat' for a robustified model (\(\varepsilon_{tr}[3.0]\)), and other baseline methods. The precise control of a model is shown in confusion matrices in Fig 4b. They further highlight the strong specificity of TM for the robust model at \(\ell_{2}\) pixel budget of 30 relative to a image interpolation baseline approach.
#### Dependence on start images & target category percepts
To test the generality of these behavior modulation findings across different distributions of the start images, we conducted similar experiments (all at pixel budget 30) using source images from (i) Out Of Distribution (OOD) images of the same categories ('frog', etc.), (ii) Arbitrary Natural Images (ANI), collected from the internet and social media, and (iii) Uniform Noise Images (UNI). We found that the TM effect magnitude remained comparable even when start images were drawn from these different distributions. For In-Distribution (ID), OOD, ANI, and UNI we achieved normalized target category induction rates of 60%, 51%, 55%, and 70%, respectively (Fig 4c). Notably, TM often fails to produce convincing images for some target classes when using UNI source images (see Supplementary Material).
To further test the generality of our approach, we conducted experiments using _new, arbitrary target categories_. We selected nine target classes from an external animal image dataset, using 60 randomly selected images to represent each class. We modified our TM method to optimize towards the centroid feature representation of the target class, obviating the need to train a new classifier for the new class space (see Sec 4). We found high TM specificity also under this setting (Fig 4d), with lower TM scores on average compared to previous settings. This is despite the challenges posed here, which include, in addition to using new target classes, the presence of source-image distribution shifts (i.e., OOD, ANI; For UNI see Supplementary Material).
#### Induction of composite target category percepts
We next sought to test whether model-guided perturbations could move human percepts toward pairs of object categories. Because subjects were only allowed to report one of nine categories, a perfect success in this test would be inducing humans to report each of the two prescribed target categories \(\sim\)50% of the time. Fig 4e shows results for these "two-composite" experiments. These results indicate that, while not perfect, even under this unusual setting, robustified models often discover highly creative ways to combine multi-class features _in a single image_ that are human-recognizable. Visual inspection of the modulated images further highlights that the multi-class modulation cannot be trivially attributed to tiling the relevant class features across different spatial locations - The budget constraint guides the model to find clever ways _to combine_ rather than "just add". Notably, scaling to 3-composite directions under the restrictive budget of 30 makes TM highly challenging, and more sensitive to the specific start images (see Supplementary Material).
#### Optimal robustification level
We defined the Model TM Efficiency (MTME) to be the budget at which a modulatory effect size exceeding 10% probability of choosing the target is observed. To this end, we interpolated the curves from Fig 3 uniformly to 100 points in \(\ell_{2}\) pixel budget range of \([0,30]\), and identified the first budget at which the threshold is exceeded. We rank the models by
Figure 3: **Robustified models enable budget-efficient precise targeted behavioral modulation.** _(a) Targeted Modulation (TM) example: A ‘dog’ start image is here independently modulated toward each of eight alternative target categories by Robust \(\varepsilon_{tr}[3.0]\) at an \(\ell_{2}\) pixel budget of 30.0. **(b)** Efficacy of Guide Models in modulating human reports toward specific, prescribed target categories (TM). Images were drawn from all nine categories and each image was independently perturbed towards each of the eight alternative possible target categories (as illustrated in **(a)**). Curves show the lapse-normalized probability of human choosing the prescribed target category. Baseline marks refer to unperturbed images, indicating human rates of reporting the target category of the source image (‘Catch’). Error bars, 95% CI by bootstrap over source images, target categories, and subjects. Robust models trained with \(\ell_{2}\) pixel budget 3.0 and 10.0 gave rise to the most budget-efficient behavior modulation._
ascending efficiency order (marking MTME score, lower is better): Rank 5) Vanilla, which never reaches the 10% threshold within the considered range, Rank 4) Robusified \(\varepsilon_{tr}[1.0]\) (196), Rank 3) Image interpolation (47), Rank 2) Robustified \(\varepsilon_{tr}[10.0]\) (11), and the best, Rank 1) Robustified \(\varepsilon_{tr}[3.0]\) (9). This suggests that there exist an optimal robustification level for strongest modulation effects. While this precise optimization is beyond the scope of this work, we highlight that the Robust \(\varepsilon_{tr}[3.0]\) model is currently the most TM-efficient amongst all tested.
_Other pixel budget constraints._ In addition to the \(\ell_{2}\) perturbation constraint, we also explored the use of \(\ell_{\infty}\) constraint for TM. Visual inspection of the resulting images revealed that both constraints induce comparable modulation effects, with the \(\ell_{2}\) constraint giving rise to images that appeared slightly more visually "clean" and aesthetically pleasing. We also found that TM by \(\ell_{\infty}\)-trained models was significantly weaker compared to the \(\ell_{2}\)-trained models.
## 4 Methods
### Distances in image space
We exclusively focused on \(224\times 224\times 3\) RGB pixel space (\(D=150,528\)), where each value ranges between 0 and 1. Thus, the maximal distance between any two images is \(\|\delta\|_{2}^{max}\approx 388\). For
Figure 4: **Robustified models achieve precise Targeted Modulation (TM) from any image start point to any target direction.**_(a, b)_ _Behavioral modulation results obtained when using a robust model to drive toward all possible cardinal class directions (i.e., 1-hot). **(c, d)** _Analysis of distribution dependence of source images and target directions, generalizing our precise targeted modulation finding beyond Restricted ImageNet. **(e)** _Targeted Modulation toward hybrid directions of two classes. All plots and images are due to robust model (\(\varepsilon_{tr}[3.0]\)) that perturbs natural source images at an \(\ell_{2}\) pixel budget of 30.0. All panels show directly comparable numbers. Error bars, 95% CI over source images & human subjects._
reference, we found the typical distance between ImageNet images to be \(\approx 130\). We also estimated the regime in which human object perception is empirically robust to random perturbations to be \(\|\delta\|_{2}\leq 30\) (see Supplementary Materials; also see [33]).
### Stimuli generation by model-guided perturbations
_Disruption Modulation (DM)._ We used the Projected Gradient Descent (PGD) attack algorithm [29, 34, 35], using an adapted version of the robustness library [30]. Given an initial input image and its ground-truth label in 1-hot representation \(x\), \(y\), and a classification model to logits for \(C\) classes, \(f:\ \mathbb{R}^{H\times W\times 3}\rightarrow\mathbb{R}^{C}\), we optimize for the perturbation, \(\delta\), such to be disruptive of model classification to the ground-truth class label under a pixel budget restriction:
\[\delta=\operatorname*{argmax}_{||\delta||_{p}<\epsilon}\mathcal{L}_{CE}\left(f \left(x+\delta\right),y\right), \tag{1}\]
where \(\epsilon\) is the pixel budget, and \(\mathcal{L}_{CE}\) is the cross-entropy loss. Unless otherwise mentioned, we focus on \(\ell_{2}\)-adversarially-trained models and attacks, i.e., \(p=2\). Optimization is performed in steps of Stochastic Gradient Descent (SGD), each followed by a projection step to pixel budget \(\epsilon\):
\[\delta_{k+1}\leftarrow\text{Proj}_{\epsilon}\left(\delta_{k}+\eta\nabla_{ \delta}\mathcal{L}_{CE}\left(f\left(x+\delta_{k}\right),y\right)\right), \tag{2}\]
where \(k\) denotes the step, and \(\eta\) is the step-size. All original input images are resized to \(256\times 256\) (bilinear, anti-aliased) and center-cropped to \(224\times 224\)[22]. We focused on the following range of \(\ell_{2}\)-pixel budgets in the low-budget regime: \([0.1,0.5,1.0,2.0,3.0,5.0,7.5,10,15,20,25,30,40,50]\). We set the number of PGD steps and the step-size, \((k_{steps},\eta)\), such to match the pixel budget [30], ranging from \((200,0.02)\) for \(\epsilon=0.1\) to \((2000,2)\) for \(\epsilon=50\).
_Targeted Modulation (TM)._ The algorithm is identical to DM, aside from \(y\) changing to be a given target class label, and the optimization solving for minimization in lieu of maximization (flipping of gradient sign). Notably, unlike DM, this modulation can be performed toward multiple target class labels from a single source image.
_Hybrid-Class Targeted Modulation._ To modulate toward non-cardinal directions we defined random composite directions involving multiple classes. These directions were constructed as probability vectors with all-zero except for \(k\) randomly selected classes, which were assigned equal probabilities. We generated four 2-composite directions: 'frog-bird', 'frog-insect', 'primate-crab', 'primate-fish'; Our objective function was based on cross-entropy similar to Eq 1, with additional logit maximization on the non-zero target classes, which drives higher usage of the allowable budget.
_Features Targeted Modulation._ To modulate images toward a target feature representation we compiled a subset of 60 images per class and computed their mean feature representation using the GM. This defines the target classes via class-centroids. The optimization criterion is then the MSE between the predicted feature representation the target class-centroid.
_Image interpolation modulation._ To generate baseline image perturbations using an image interpolation approach, we randomly drew images from a target class, used them for linear interpolation with the source image, and projected back the pixel budget envelope.
_Dataset._ We sought to focus on a dataset with the following key properties: (i) Has a tractable class-space for exploration; (ii) Will mitigate confounds originating from typical-human unfamiliarity with the class labels (as is commonly the case in ImageNet); (iii) Is widely used in the adversarial robustness context. Based on these considerations, we found a partial and mapped version of ImageNet to a basic set of nine classes, termed "Restricted ImageNet", to be the most suitable choice (see Supplementary Material for class mapping) [30].
_Training robustified models._ To adversarially-train models on ImageNet [36], we followed publicly released code [30]. Specifically, we trained ResNet50 models at \(\ell_{2}\) pixel budgets of 1.0, 3.0, and 10.0, using PGD (steps, step-size) of (7, 0.3), (7, 0.5), (10, 1.5), respectively.
_Runtime._ Our stimuli generation completes within 5min for a single batch of size 100 on a single A100 GPU. Adversarial-training of models completes within \(\sim\)8 days on 4 A100 GPUs.
### Human behavioral measurements
We measured human behavior in a nine-way image categorization task. Our primary experimental objective was, for any given test image, to estimate the probability that humans would on average select each of the nine possible choices.
_Task paradigm._ Human subjects performed "sessions" comprising multiple trials. Each trial had three stages: (i) First, the subject pressed a button at the center of their screen, encouraging them to center-fixate. (ii) A test image (covering approximately 6 degrees of visual angle) appeared at the center of their screen for 200ms. This was followed by 200ms of a blank gray screen. (iii) An array of nine labeled buttons (one for each of the nine reportable categories) appeared. These buttons were arranged radially, outside of the location of where the test image appeared. The locations of the buttons were randomly permuted on each trial, and no time limit was imposed. The subject was asked to select the button which most accurately described the image.
In total, there were \(n=130\) such trials in each session. Among these, there were four types of trials: (i) _Warmup trials_ (\(n=10\)), which were always presented at the beginning of the session. Test images were randomly sampled "clean" (unperturbed) images drawn from the Restricted ImageNet validation set [30]. (ii) _Calibration trials_ (\(n=10\)), included to measure the lapse rate of each human subject (see Lapse-rate correction). Stimulus images consisted of either a blue triangle or blue circle, and two randomly selected buttons on the choice screen were replaced with buttons for 'circle' and 'triangle'. (iii) _Reference trials_ (\(n=10\)), consisting of unperturbed stimulus images randomly selected from Restricted ImageNet validation set. (iv) _Main trials_ (\(n=100\)), consisting of potentially perturbed versions of source images. Trials types (ii)-(iv) were run randomly interleaved, following the warmup trials. In general, subject feedback was only delivered for unperturbed images. Upon an incorrect selection by the subject, a black "x" was displayed for 1000ms; No feedback was delivered upon a correct selection. We implemented the task paradigm using JSPsych, and presented the stimuli using the JSPsych-Psychophysics plugin.
_Human data collection._ We used Amazon Mechanical Turk to recruit and collect behavioral measurements in human subjects, following a protocol approved by the [anonymized IRB information] The protocol has no greater than minimal risk involved for the subjects. We did not attempt to collect any personally identifying information; subjects were kept anonymous. We screened subjects by their performance on a similar "demo" object categorization task (see Supplementary Material). In total, we recruited a population of \(n=119\) subjects (\(n=43\) female; \(n=76\) male). Subjects were compensated for all time spent in this study, including in the recruitment phase. Subjects were free to opt out of the study at any time. We ensured that each image we included in our analysis had measurements from at least \(n=2\) different subjects.
_Lapse-rate correction._ We assumed that measurements of human reports was possibly corrupted by a base "lapse rate", where a subject makes a random report independent of the presented test image's content. To account for that, we measured each subject's lapse rate, then adjusted their measurements using a formula that estimates their behavior under a lapse rate of zero (see Supplementary Material).
## 5 Conclusion
In this work, we systematically examine the prevailing assumption that human categorization is highly robust to low-norm image perturbations. Our findings challenge that assumption by establishing that the low-norm image perturbations suggested by robustified ANNs (but not vanilla ANNs) can strongly and precisely change human object category percepts. These results suggest two novel conclusions: (i) for arbitrary starting points in image space, there exists nearby "wormholes", each leading the subject from their current category perceptual state into a very different state, and (ii) contemporary scientific models of ventral visual processing are accurate enough to point us to those wormholes.
### Limitations
Our findings raise many new questions: given a start image, does there _always_ exist a nearby human perceptual wormhole to at least one another category? Are there nearby wormholes to _any_ possible category? Which wormholes are common to all humans and which are more idiosyncratic? And do robustified ANNs _always_ find the wormholes that do exist? And how often do they predict wormholes that do not actually exist? We do not yet know the answers to these questions, but the results presented here offer some insight. For example, Fig 2b right panel, showing a \(\sim\)90% disruption effect, is consistent with the hypothesis that wormholes are abundant in image space. Figs 3 and 4 (\(\sim\)60% effect) suggests that multiple category portals are nearby, but that portals to all categories are not always nearby. Further work could produce tighter lower-bounds on such probabilities.
We emphasize that these are lower bound estimates because we are probing human vision with models that are imperfect approximations of the underlying neurobiology. Indeed, we do not claim that robustified ANN models discover _all_ low-norm wormholes or that they _never_ point to
wormholes that do not actually exist. Notably, because we still see a residual gap in behavioral alignment (Figs 1c and 2b), other models, which are even more behaviorally aligned, must still exist. Specifically, our analyses focused on the ResNet50 model architecture, and mainly on \(\ell_{2}\)-norm budget robustified models and image perturbation type (see Sec 3.2 for other perturbation constraints). While generalizing to other architectures or image generation algorithms is not foundational to the our claims presented in this paper, those are interesting directions to explore in future work.
Furthermore, while this paper supports that adversarial training (AT) gives rise to an improved estimate of the adults state of the ventral stream and its supported behavior, we do not claim that AT is the mechanism by which robustness in humans emerges. Future work may consider alternative, more biologically plausible mechanisms (e.g., Topographic ANNs [37]) that may give rise to a comparably aligned models in terms of predictivity.
### Societal impact
Image generation methods such as those used here have potential societal benefits, but also potential risks [38]. The scientific knowledge of brain function contributed here may present additional future risks, in that it may widen the ability of malicious actors to subtly and non-consensually disrupt the perception of humans. Building defenses against those manipulations, and orienting the use of this knowledge toward societal benefits (e.g., consensually enriching perception, improving mental health, accelerating human visual learning) are crucial considerations in future work.
## Acknowledgments
This work was partially funded by the Office of Naval Research (N00014-20-1-2589, JJD); (MURI, N00014-21-1-2801, JJD), the National Science Foundation (2124136, JJD) and the Simons Foundation (542965, JJD). |
2304.07694 | Dancing polygons, rolling balls and the Cartan-Engel distribution | A pair of planar polygons is "dancing" if one is inscribed in the other and
they satisfy a certain cross-ratio relation at each vertex of the
circumscribing polygon. Non-degenerate dancing pairs of closed $n$-gons exist
for all $n\geq 6$. Dancing pairs correspond to trajectories of a non-holonomic
mechanical system, consisting of a ball rolling, without slipping and twisting,
along a polygon drawn on the surface of a ball 3 times larger than the rolling
ball. The correspondence stems from reformulating both systems as piecewise
rigid curves of a certain remarkable rank 2 non-integrable distribution defined
on a 5-dimensional quadric in $\mathbb{RP}^6$, introduced by \'E. Cartan and F.
Engel in 1893 in order to define the simple Lie group $\mathrm{G}_2$. | Gil Bor, Luis Hernández Lamoneda | 2023-04-16T04:52:51Z | http://arxiv.org/abs/2304.07694v3 | # Dancing polygons, rolling balls and the Cartan-Engel distribution
###### Abstract.
A pair of planar polygons is 'dancing' if one is inscribed in the other and they satisfy a certain cross-ratio relation at each vertex of the circumscribing polygon. Non-degenerate dancing pairs of closed \(n\)-gons exist for all \(n\geq 6\). Dancing pairs correspond to trajectories of a non-holonomic mechanical system, consisting of a ball rolling, without slipping and twisting, along a polygon drawn on the surface of a ball 3 times larger than the rolling ball. The correspondence stems from reformulating both systems as piecewise rigid curves of a certain remarkable rank 2 non-integrable distribution defined on a 5-dimensional quadric in \(\mathbb{RP}^{6}\), introduced by E. Cartan and F. Engel in 1893 in order to define the simple Lie group \(\mathrm{G}_{2}\).
Key words and phrases:(2,3,5)-distribution; simple group \(G_{2}\); projective polygon pairs; rolling distribution 2010 Mathematics Subject Classification: 58A30;53A20; 53A40; 53A55 We thank Robert Bryant for informative correspondence and to Travis Wilse for reading an initial draft and making useful suggestions. We acknowledge support from CONACYT Grant A1- S-45886.
## 1. Introduction and statement of results
This article connects two seemingly unrelated themes: (1) dancing pairs of polygons in the real projective plane, and (2) spherical polygons with trivial rolling monodromy. The connection is established by relating each of these themes to a third one: (3) piecewise-rigid integral curves of the Cartan-Engel distribution.
In this introductory section we describe themes (1) and (2) and state the relation between them, see Theorem 3. In the next section (Section 2) we describe theme (3) and the relations (1) \(\leftrightarrow\) (3) and (2) \(\leftrightarrow\) (3). Themes (2) and (3) and their relation were previously studied [3, 5, 7]; theme (1) and its relation to (3) is new and can be thought of as a discrete version of our previous work with P. Nurowski in [4]. The main new technical result of the paper, establishing the relation (1) \(\leftrightarrow\) (3), is Theorem 4 of Section 2 (proved in Section 3.1).
In Section 3 we give proofs of the theorems stated in the first two sections. In Appendix A we give explicit formulas for the infinitesimal action of the group \(\mathrm{G}_{2}\) on the various configuration spaces appearing in this article (this action is due to a well-known symmetry property of the Cartan-Engel distribution, see Section 2.3 below). In Appendix B we give explicit coordinate formulas for the 'rolling distribution' which models the rolling balls system of theme (2).
In general, we tried to make the article as self-contained as possible, without assuming the reader's familiarity with any of the mentioned themes.
### Dancing pairs of polygons
These are inscribed pairs of planar polygons, satisfying a certain system of scalar equations, one equation for each vertex of the circumscribing polygon, involving cross-ratios of neighboring vertices. The precise definition is as follows.
Consider a pair of polygons in the projective plane \(\mathbb{RP}^{2}\), open or closed, the first with \(n\) vertices \(A_{1},A_{2},\ldots,A_{n}\) and the second with \(n\) edges \(b_{1},b_{2},\ldots,b_{n}\), \(n\geq 2\). The second polygon is _inscribed_ in the first if each vertex \(B_{i}:=b_{i}b_{i+1}\) (the intersection of \(b_{i}\) with \(b_{i+1}\)) lies on the edge \(a_{i}:=A_{i}A_{i+1}\) (the line through \(A_{i}\) and \(A_{i+1}\)). If the polygons are open then \(i=1,\ldots,n-1\) and if they are closed then \(i=1,\ldots,n\) and indices are considered mod \(n\).
**Definition 1** (Dancing pairs).: An inscribed pair of polygons, as above, is a _dancing pair_ if
\[[A_{i+1},B_{i},A_{i},D]+[A_{i+1},B_{i+1},A_{i+2},C]=0, \tag{1}\]
where \(C:=b_{i}a_{i+1}\), \(D:=a_{i}b_{i+2}\), \(i=1,\ldots,n-2\) for open polygons and \(i=1,\ldots,n\) for closed polygons, in which case indices are considered mod \(n\). See Figure 1.
Note that for \(n=2\) any inscribed (open) pair is automatically dancing.
The cross-ratio in Equation (1) is defined for \(4\) collinear points in \(\mathbb{RP}^{2}\) by
\[[A_{1},A_{2},A_{3},A_{4}]:=\frac{(x_{1}-x_{3})(x_{2}-x_{4})}{(x_{1}-x_{4})(x_{ 2}-x_{3})}, \tag{2}\]
Figure 1. The dancing condition, see Equation (1). Actually, (a) does not satisfy the dancing condition. Can you see why? See Remark 1.
where \(x_{i}\) are the coordinates of \(A_{i}\) with respect to some affine coordinate along the containing line.
As is well known, the cross-ratio of \(4\) colinear points in \(\mathbb{RP}^{2}\) is projectively invariant; that is, invariant under the action of the standard action of the projective group \(\mathrm{PGL}_{3}(\mathbb{R})\) on \(\mathbb{RP}^{2}\).
**Remark 1**.: Figure 1(a), although pleasantly symmetric and depicts correctly the definition of \(C,D\) in Equation (1), is not of a dancing pair, since both summands on the left hand side of Equation (1) in this figure are positive. The reason is that, as can be easily verified, the cross-ratio \([A_{1},A_{2},A_{3},A_{4}]\) is positive if and only if \(A_{1},A_{2}\) does not'separate' \(A_{3},A_{4}\). See Figure 2.
The term 'dancing' in Definition 1 is taken from [4]. The present article can be thought of as a discrete version of it.
For Definition 1 to make sense, one needs to make some genericity assumptions on the pair of polygons. Let us spell them out:
**Definition 2**.: A pair of inscribed polygons in \(\mathbb{RP}^{2}\), the first with vertices \(A_{1},\ldots,A_{n}\) and the second with edges \(b_{1},\ldots,b_{n}\), is _non-degenerate_ if
* Each vertex \(A_{i}\) of the first polygon does not lie on the 'opposite' side \(b_{i}\) of the second polygon.
* Each three consecutive vertices of the first polygon are non-collinear and each three consecutive edges of the second polygon are non-concurrent.
_Existence of closed dancing pairs._ It is easy to produce examples of non-degenerate dancing pairs of _open_ polygons for all \(n\geq 2\). For example, one picks arbitrary (generic) \(A_{1},\ldots,A_{n},b_{1}\), then arbitrary (generic) \(b_{2}\) incident to \(b_{1}(A_{1}A_{2})\), after which \(b_{3},....,b_{n}\) are determined recursively by equation (1) for \(i=1,\ldots,n-2.\) This gives a total of \((2n+3)\)-parameter family of open dancing pairs. When the polygons are closed one needs to add Equation (1) also for \(i=n-1,n\) and the equation \(b_{n+1}=b_{1}\), reducing the number of parameters to \(2n\). (One can also make this naive parameter count by considering that pairs of inscribed \(n\)-gons depend on \(3n\) parameters and the \(n\) dancing conditions reduce this to \(2n\).)
However, by examining the signs of the summands in Equation (1), one can see easily that there are no dancing pairs of triangles. More generally, we have the following result, which will be proved in later sections.
**Theorem 1**.: _There are non-degenerate dancing pairs of closed \(n\)-gons if and only if \(n\geq 6\)._
Figure 2. The sign of the cross-ratio.
Figure 3 shows an example of a non-degenerate dancing pair of closed hexagons.
SymmetriesClearly, by the projective invariance of the cross-ratio appearing in Definition 1, the \(8\)-dimensional projective group \(\mathrm{PGL}_{3}(\mathbb{R})\) acts on the space of closed dancing pairs, so one expects, for \(n\) big enough, a \((2n-8)\)-parameter family of projective congruence classes (e.g., a \(4\)-parameter family of non-trivial deformations of the example of Figure 3). A somewhat surprising construction in this paper (see Section 2.3) is an effective (local) action of the \(14\)-dimensional exceptional simple non-compact Lie group \(\mathrm{G}_{2}\) on the space of closed dancing pairs, so one expects a \(2n-14\) family of such congruence classes for \(n\geq 8\). For \(n=6,7\) one expects a discrete family of \(\mathrm{G}_{2}\)-congruence classes.
**Conjecture 1**.: _For \(n=6,7\), all closed dancing pairs of \(n\)-gons are \(\mathrm{G}_{2}\) congruent._
### Spherical polygons, rolling balls and monodromy
The second theme of this article is a well-known non-holonomic mechanical system, see [1, 2, 3, 5, 7]. Consider a'stationary' round sphere of radius \(r\) in Euclidean \(\mathbb{R}^{3}\), on which a closed oriented polygonal path \(\Gamma\) is drawn (its edges are arcs of great circles). We impose a _non-degeneracy_ condition on \(\Gamma\): no three consecutive vertices are 'collinear', i.e., lie on the same great circle. Note that this implies that no two consecutive vertices are antipodal.
Next take another sphere, a'moving' sphere, of radius \(r^{\prime}\), place it outside the stationary sphere, touching it at one of the vertices of \(\Gamma\), then roll it along \(\Gamma\) without slipping or twisting, see Figure 4. (A formal definition of these terms will be given later in Section 2.2.)
Figure 3. A dancing pair of hexagons.
As we roll the moving sphere along \(\Gamma\), it rotates about its center. After going once around \(\Gamma\), the moving sphere returns to the initial vertex, but possibly with a different orientation, given by an element of the orthogonal group \(\mathrm{SO}_{3}\), called the _rolling monodromy_ of \(\Gamma\). Put differently, as we roll the moving sphere along \(\Gamma\), its rotation about its center defines a curve in \(\mathrm{SO}_{3}\), starting at the identity, whose other endpoint is the rolling monodromy of \(\Gamma\). The curve in \(\mathrm{SO}_{3}\) can be lifted to a curve in the universal double-cover \(S^{3}\to\mathrm{SO}_{3}\), starting at \(1\in S^{3}\) (we are thinking of \(S^{3}\) as the sphere of unit quaternions), whose other end point is the _lifted rolling monodromy_ of \(\Gamma\).
**Definition 3**.: The rolling monodromy of \(\Gamma\) is _trivial_ if it is the identity element in \(\mathrm{SO}_{3}\). Similarly, for the lifted monodromy.
In other words, the rolling monodromy of \(\Gamma\) is trivial if the associated curve in \(\mathrm{SO}_{3}\) is _closed_, and the lifted monodromy is trivial if the lifted curve in \(S^{3}\) is closed, which is the same as requiring the curve in \(\mathrm{SO}_{3}\) to be closed and _null homotopic_.
Clearly, if we pick another initial point on \(\Gamma\) then the (lifted) rolling monodromy differs by conjugation by an element in \(\mathrm{SO}_{3}\), which does not affect its triviality.
Note that the (lifted) rolling monodromy, and in particular its triviality, does depend on the radius ratio of the two spheres, \(\rho:=r/r^{\prime}\).
**Example 1**.: Let \(\Gamma\) be the equator of the stationary sphere (a horizontal great circle). Rolling the moving sphere once around \(\Gamma\) results in it being rotated \(\rho+1\) times about the vertical axis through its center (this is elementary, but rather counterintuitive; see this animation [11]). Lifted to \(S^{3}\), we get a path going \((\rho+1)/2\) times around a great circle of \(S^{3}\). Thus the rolling monodromy of such a \(\Gamma\) is trivial if and only if \(\rho\) is an integer, and the lifted monodromy is trivial if and only if \(\rho\) is an _odd integer_.
**Example 2**.: (We used this example to produce Figure 3.) Let \(\Gamma\) be an 'octant', i.e., an equilateral spherical triangle, with side length a quarter of a great circle. For \(\rho=3\), each of its sides, by the previous example, results in a lifted monodromy of \(-1\in S^{3}\), adding up to a total lifted monodromy of \((-1)^{3}=-1.\) Thus rolling _twice_ around \(\Gamma\) results in a trivial lifted monodromy.
Figure 4.
Figure 5.
We next present an infinite family of examples of non-degenerate regular spherical \(n\)-gons with trivial lifted rolling monodromy, valid for \(\rho=3\) and all \(n\geq 6\).
_Regular spherical polygons with trivial (lifted) monodromy._ Consider a closed non-degenerate regular spherical \(n\)-gon \(\Gamma\), contained in the (open) northern hemisphere of the stationary sphere, and whose vertices lie on a circle of latitude of radius \(\phi\in(0,\pi/2)\) (the 'colatitude').
Let \(w\in\mathbb{N}\) be the winding number of \(\Gamma\) about the north pole. Note that by the non-degeneracy assumption, \(\Gamma\) does not pass through the poles, so \(w\) is well defined. The 'rotation angle' between two successive vertices is then \(\theta:=2\pi w/n\), where \(0<\theta<\pi\), i.e., \(0<w<n/2\). See Figure 6.
We ask: for which \((n,w,\phi,\rho)\) is the (lifted) rolling monodromy of \(\Gamma\) trivial? We shall answer this question only for \(\rho=3\), the case that interests us here.
**Theorem 2**.: _Let \(\Gamma\) be a non-degenerate regular spherical \(n\)-gon on the stationary sphere, with winding number \(w\) about its center and circumscribing circle of radius \(\phi\). Furthermore, we assume that \(n\geq 3\), \(0<w<n/2\) and \(\phi\in(0,\pi/2)\), as described above. Then_
1. \(\Gamma\) _has trivial rolling monodromy for_ \(\rho=3\) _if and only if there exists a (necessarily unique) integer_ \(w^{\prime}\) _in the range_ \(w<w^{\prime}<n\) _such that_ (3) \[\cos\left(\frac{\pi w^{\prime}}{n}\right)=\cos\left(\frac{\pi w}{n}\right) \left[1-4\sin^{2}\left(\frac{\pi w}{n}\right)\sin^{2}\phi\right].\]
2. _The lifted rolling monodromy of such_ \(\Gamma\) _is trivial if and only if_ \(w^{\prime}\equiv w\pmod{2}\)_._
3. \(w^{\prime}\) _in Equation (_3_) is the winding number of the closed regular polygon traced on the moving sphere as it rolls along_ \(\Gamma\)
Figure 6. A regular spherical pentagon, projected onto the \(xy\) plane, with winding number \(w=2\) about the north pole \(O\) and rotation angle of \(\theta=4\pi/5\).
_._
4. _There are solutions to Equation (_3_) with_ \(w^{\prime}\equiv w\pmod{2}\) _if and only if_ \(n\geq 6\)_. In fact, there is a solution for_ \(w=2,w^{\prime}=4\) _and all_ \(n\geq 6\)_._
See Section 3.2 for a proof. Figure 7 shows some examples with \(\leq 10\) vertices.
**Corollary 1**.: _For each \(n\geq 6\) there exists a closed non-degenerate spherical \(n\)-gon with lifted trivial monodromy for \(\rho=3\)._
**Remark 2**.: Theorem 2 does not give an explicit list of all regular polygons with trivial lifted monodromy for \(\rho=3\), since it only reduces the question to Equation (3), without fully solving it. We will not dwell on solving this equation completely, since the solutions given in part (d) of Theorem 2 are sufficient for this article. However, some numerical experiments indicate the following1:
Footnote 1: We thank Carlos Licea, an undergraduate physics student of the University of Guanajuato, for helping with these experiments.
* If \((n,w,w^{\prime})\) is an admissible triple (i.e., one can find \(\phi\in(0,\pi/2)\) that solves the equation, with \(0<w<w^{\prime}<n\), \(w<n/2\) and \(w\equiv w^{\prime}\mod 2\)) then so are all \((n^{\prime},w,w^{\prime})\) with \(n^{\prime}>n\).
* Let us say that an admissible triple \((n,w,w^{\prime})\) is _minimal_ if \((n^{\prime},w,w^{\prime})\) is not admissible for \(n^{\prime}<n\). Then, for a fixed \(n\), the minimal admissible triples and their number \(m\) are as follows, with \(k\geq 2\) and \(j=1,2,\ldots,m\) in all \(3\) cases:
* If \(n=3k\) then \(w=k+j-1,w^{\prime}=n-3j+1\), \(m=[k/2]\).
* if \(n=3k+1\) then \(w=k+j,w^{\prime}=n-3j-1\), \(m=[(k-1)/2]\).
* If \(n=3k+2\) then \(w=k+j,w^{\prime}=n-3j\), \(m=[k/2]\).
* In particular, there are minimal admissible triples for all \(n\geq 6\), except \(n=7\). The minimal triples for \(n\leq 12\) are \((6,2,4)\), \((8,3,5)\), \((9,3,7)\), \((10,4,6)\), \((11,4,8)\), \((12,4,10)\), \((12,5,7)\).
Figure 7. Regular spherical \(n\)-gon, \(n=6,7,8,9,10\), with trivial lifted rolling monodromy for \(\rho=3\), projected to the \(xy\) plane. The triple of numbers below each figure is \((n,w,w^{\prime})\).
### The main theorem
Our main result relates dancing pairs of closed polygons with spherical polygons with lifted trivial rolling monodromy for radius ratio \(\rho=3\). What is special about \(\rho=3\) (as well as \(1/3\)) will be explained in the next section, once we interpret sphere rolling in terms of the Cartan-Engel distribution. For the moment, to state the relation, we need to add a definition.
Note that a non-degenerate spherical polygon \(\Gamma\) is not determined by its vertices: for each pair of successive vertices there are infinitely many 'edges' (directed arcs of great circles) connecting them; two of which are'simple', i.e., non self-intersecting, complementary arcs of the great circle containing the two points, and the rest wrap around this circle an arbitrary number of times.
**Definition 4**.: Two closed spherical polygons are _equivalent_ if they have the same set of vertices, and/or some of the vertices are replaced by their antipodes.
In other words, equivalence classes of non-degenerate closed spherical \(n\)-gons are given by non-degenerate ordered sets of \(n\) points in \(S^{2}/\pm 1\cong\mathbb{RP}^{2}\).
Note that, by Example 1, triviality of the lifted rolling monodromy of \(\Gamma\) for \(\rho=3\) is preserved by this equivalence relation (see Lemma 5 below for further details).
The initial placement of the moving sphere at one of the vertices of \(\Gamma\) is given by an element of \(\operatorname{SO}_{3}\), the orientation of the moving sphere with respect to some fixed reference orientation. The lifted path in \(S^{3}\), describing the motion of the moving sphere along \(\Gamma\), is thus determined by an arbitrary initial element \(q\in S^{3}\) chosen 'above' the initial vertex in \(\Gamma\).
**Theorem 3** (main).: _There is a bijection between non-degenerate dancing pairs of closed \(n\)-gons in \(\mathbb{RP}^{2}\) and generic2 pairs \(([\Gamma],q)\), where \(q\in S^{3}\) and \([\Gamma]\) is an equivalence class of non-degenerate closed spherical \(n\)-gons with trivial lifted rolling monodromy for \(\rho=3\)._
Footnote 2: The precise meaning of ‘generic’ will be given in the next section, once we describe the bijection in detail.
For example, to the spherical octant of Figure 5, traversed twice, corresponds the dancing pair of hexagons of Figure 3. Similarly, the regular spherical polygons with trivial lifted monodromy of Theorem 2 correspond to examples of dancing pairs of \(n\)-gons for all \(n\geq 6\) (one half of Theorem 1).
## 2. The Cartan-Engel distribution
The relation between dancing pairs and rolling balls, as indicated in Theorem 3, is based on modeling both problems by the same remarkable geometric object: a certain non-integrable rank \(2\) distribution on a \(5\)-manifold,
introduced by Elie Cartan and Friedrich Engel in 1893 (seemingly independently), in order to define the simple exceptional 14-dimensional Lie group \(G_{2}\)[8, 10]. The subject has since been studied extensively by many authors (including ourselves, see [4, 5]). Here is a quick review of the basic properties relevant here.
Let \(\mathscr{D}\) be a rank 2 distribution on a 5-manifold \(Q\), i.e., a rank 2 subbundle of \(TQ\). It is said to be a \((2,3,5)\)-distribution if \([[\mathscr{D},\mathscr{D}],\mathscr{D}]=TQ\); that is, for any local framing \(X_{1},X_{2}\) of \(\mathscr{D}\) (i.e., two everywhere independent local sections of \(\mathscr{D}\)), let \(X_{3}:=[X_{1},X_{2}],X_{4}:=[X_{1},X_{3}],X_{5}:=[X_{2},X_{3}]\), then \(X_{1},\dots,X_{5}\) is a local framing of \(TQ\). In fact, a generic rank 2 distibution on a 5-manifold is \((2,3,5)\) but typically no two are diffeomorphic, even locally. A (local) symmetry of a distribution \(\mathscr{D}\) on a manifold \(Q\) is a (local) self-diffeomorphism of \(Q\) preserving \(\mathscr{D}\).
In his fundamental paper on the subject [9] Cartan showed that the maximal dimension of the local symmetry group of a \((2,3,5)\)-distribution is 14, in which case the local symmetry group is the simple non-compact Lie group \(\mathrm{G}_{2}\) and the distribution is called _flat_. Cartan also showed in [9] that all flat \((2,3,5)\)-distributions are locally diffeomorphic. Cartan and Engel gave in 1893 explicit formulas for such a distribution on \(\mathbb{R}^{5}\) (their formulas in [8, 10] look similar to our formula of Equation (6) below). We thus call a flat \((2,3,5)\)-distribution a _Cartan-Engel distribution_, or, by a slight abuse of terminology, _the_ Cartan-Engel distribution.
Another result of Cartan in [9] is _Cartan's submaximal symmetry statement_: the local symmetry group of a non-flat \((2,3,5)\)-distribution is at most 7-dimensional.
A much more recent general result on \((2,3,5)\)-distributions, by Bryant and Hsu [7] (see also the last paragraph of [6]), is the existence of _rigid_ (or 'abnormal') integral curves: any small enough segment of such a curve admits no deformations within the class of integral curves with the same endpoints. In fact, the rigid curves of a \((2,3,5)\)-distribution \(\mathscr{D}\) form (locally) a 5-parameter family of \(\mathscr{D}\)-integral curves, a unique such curve passing through a given point in \(Q\) in a given direction at the point tangent to \(\mathscr{D}\).
We next describe briefly 3 models of the Cartan-Engel distribution that are used in this paper and their interrelations.
### First model: dancing pairs
Let \(\mathbb{R}^{3,3}:=\mathbb{R}^{3}\times(\mathbb{R}^{3})^{*}\), a 6-dimensional real vector space equipped with the quadratic form \((\mathbf{A},\mathbf{b})\mapsto\mathbf{b}\mathbf{A}\) of signature \((3,3)\) (we are thinking of \(\mathbb{R}^{3}\) as column vectors and \((\mathbb{R}^{3})^{*}\) as row vectors). We use the standard volume forms \(vol,vol^{*}\) on \(\mathbb{R}^{3},(\mathbb{R}^{3})^{*}\) (respectively) to define'vector products' \(\mathbb{R}^{3}\times\mathbb{R}^{3}\rightarrow(\mathbb{R}^{3})^{*}\), \((\mathbb{R}^{3})^{*}\times(\mathbb{R}^{3})^{*}\rightarrow\mathbb{R}^{3}\) by
\[\mathbf{A}_{1}\times\mathbf{A}_{2}:=vol(\mathbf{A}_{1},\mathbf{A}_{2},\,\cdot \,),\,\,\,\,\mathbf{b}_{1}\times\mathbf{b}_{2}=vol^{*}(\mathbf{b}_{1},\mathbf{ b}_{2},\,\cdot\,). \tag{4}\]
Denote the projection \(\mathbb{R}^{3}\setminus 0\to\mathbb{RP}^{2}\) by \(\mathbf{A}\mapsto[\mathbf{A}]\), and similarly for \((\mathbb{R}^{3})^{*}\). That is, we are thinking of \(\mathbb{R}^{3}\) and \((\mathbb{R}^{3})^{*}\) as homogeneous coordinates of points and lines in \(\mathbb{RP}^{2}\) (respectively).
Next, let \(Q^{\mathsf{dan}}\subset\mathbb{R}^{3,3}\) be the \(5\)-dimensional affine quadric
\[Q^{\mathsf{dan}}:=\{(\mathbf{A},\mathbf{b})\mid\mathbf{b}\mathbf{A}=1\}.\]
Clearly, the tangent space to \(Q^{\mathsf{dan}}\) at a point \((\mathbf{A},\mathbf{b})\), translated to the origin, consists of vectors \((\dot{\mathbf{A}},\dot{\mathbf{b}})\in\mathbb{R}^{3,3}\) satisfying
\[\dot{\mathbf{b}}\mathbf{A}+\mathbf{b}\dot{\mathbf{A}}=0 \tag{5}\]
**Definition 5**.: Define a rank \(2\) distribution \(\mathscr{D}^{\mathsf{dan}}\subset TQ^{\mathsf{dan}}\) as follows: the elements at a point \((\mathbf{A},\mathbf{b})\in Q^{\mathsf{dan}}\subset\mathbb{R}^{3,3}\), translated to the origin in \(\mathbb{R}^{3,3}\), are the vectors \((\dot{\mathbf{A}},\dot{\mathbf{b}})\in\mathbb{R}^{3,3}\) satisfying, in addition to Equation (5),
\[\dot{\mathbf{b}}=\mathbf{A}\times\dot{\mathbf{A}}. \tag{6}\]
**Proposition 1**.: \((Q^{\mathsf{dan}},\mathscr{D}^{\mathsf{dan}})\) _is a flat \((2,3,5)\)-distribution._
This appeared in [4]. We recall the flatness argument: \((Q^{\mathsf{dan}},\mathscr{D}^{\mathsf{dan}})\) admits \(\mathrm{SL}_{3}(\mathbb{R})\) as an 'obvious' symmetry group, the restriction to \(Q^{\mathsf{dan}}\) of the diagonal action on \(\mathbb{R}^{3,3}\), \((\mathbf{A},\mathbf{b})\mapsto(g\mathbf{A},\mathbf{b}g^{-1}).\) This is an \(8\)-dimensional group, hence by Cartan's submaximality result, the full symmetry group is in fact \(14\)-dimensional.
In [4] is given also an explicit local action of \(\mathrm{G}_{2}\) on \(Q^{\mathsf{dan}}\). We recall this in Appendix A.2, by embedding \((Q^{\mathsf{dan}},\mathscr{D}^{\mathsf{dan}})\) in another model of the Cartan-Engel distribution, a homogeneous space of \(\mathrm{G}_{2}\), constructed using split octonions, see Section 3.3.
Here is an easy to verify extrinsic property of \(\mathscr{D}^{\mathsf{dan}}\), not found in [4].
**Proposition 2**.: _For any given point in \(Q^{\mathsf{dan}}\) and a \(\mathscr{D}^{\mathsf{dan}}\)-direction at the point, the affine line in \(\mathbb{R}^{3,3}\) passing through the given point and tangent to the given direction is contained in \(Q^{\mathsf{dan}}\) and everywhere tangent to \(\mathscr{D}^{\mathsf{dan}}\). In fact, these affine lines are exactly the rigid curves of \(\mathscr{D}^{\mathsf{dan}}\) in the sense of [7]._
Now we come to the second main definition of this article (the first one was Definition 1).
**Definition 6** (Horizontal polygons).: A _horizontal polygon_ in \(Q^{\mathsf{dan}}\) is a polygon in \(\mathbb{R}^{3,3}\) whose edges are \(\mathscr{D}^{\mathsf{dan}}\)-horizontal lines in \(Q^{\mathsf{dan}}\), as in Proposition 2. The polygon is _non-degenerate_ if every \(3\) consecutive vertices are non-collinear.
Our next theorem gives a bijection between the objects defined in Definitions 1 and 6.
**Theorem 4**.: _For every non-degenerate horizontal \(n\)-gon in \(Q^{\mathsf{dan}}\), closed or open, with vertices \((\mathbf{A}_{1},\mathbf{b}_{1}),(\mathbf{A}_{2},\mathbf{b}_{2}),\ldots,( \mathbf{A}_{n},\mathbf{b}_{n})\), \(n\geq 2\), the projected pair of polygons, the first with vertices \(A_{i}=[\mathbf{A}_{i}]\in\mathbb{RP}^{2}\) and the second with edges \(b_{i}=[\mathbf{b}_{i}]\in(\mathbb{RP}^{2})^{*}\), \(i=1,\ldots,n\), is a non-degenerate dancing pair._
_Conversely, every non-degenerate dancing pair of polygons lifts uniquely to a non-degenerate horizontal polygon in \(Q^{\mathsf{dan}}\)._
This is proved in Section 3.1.
### Second model: rolling balls
The configuration space for rolling balls is \(Q^{\mathsf{roll}}:=S^{2}\times\mathrm{SO}_{3}\). A point \((\mathbf{v},g)\in Q^{\mathsf{roll}}\) represents a placement of the moving ball so that it touches the stationary sphere at the point \(r\mathbf{v}\) and is rotated with respect to some fixed reference orientation by \(g\in\mathrm{SO}_{3}\). Let \(\varphi_{(\mathbf{v},g)}:\mathbb{R}^{3}\to\mathbb{R}^{3}\) be the rigid motion \(\mathbf{x}\mapsto g\mathbf{x}+(r+r^{\prime})\mathbf{v}\). Then the placement of the moving ball is given by its image under \(\varphi_{(\mathbf{v},g)}\). See Figure 8.
For a given element \(g\in\mathrm{SO}_{3}\) and tangent vector \(\dot{g}\in T_{g}\mathrm{SO}_{3}\) there is a unique \(\boldsymbol{\omega}\in\mathbb{R}^{3}\), called the _angular velocity_ vector, such that \(\dot{g}g^{-1}\in\mathfrak{so}_{3}\) is given by \(\mathbf{v}\mapsto\boldsymbol{\omega}\times\mathbf{v}\).
**Definition 7**.: Define \(\mathscr{D}^{\mathsf{roll}}\subset TQ^{\mathsf{roll}}\) as follows. The elements of \(\mathscr{D}^{\mathsf{roll}}\) at a point \((\mathbf{v},g)\in Q^{\mathsf{roll}}\) are the tangent vectors \((\dot{\mathbf{v}},\dot{g})\in T_{(\mathbf{v},g)}Q^{\mathsf{roll}}\) satisfying
1. \((\rho+1)\dot{\mathbf{v}}=\boldsymbol{\omega}\times\mathbf{v}\),
2. \(\boldsymbol{\omega}\cdot\mathbf{v}=0\).
See Proposition 1 of [5] for a derivation of conditions (1) and (2) from 'no-slip' and 'no-twist' (respectively).
**Proposition 3**.:
1. \(\mathscr{D}^{\mathsf{roll}}\) _is a 2-distribution on_ \(Q^{\mathsf{roll}}\)_, integrable for_ \(\rho=1\) _and_ \((2,3,5)\) _for_ \(\rho\neq 1\)_._
2. _For_ \(\rho\neq 1\)_, the rigid curves of_ \(\mathscr{D}^{\mathsf{roll}}\) _correspond to rolling along great circles of the stationary sphere. Thus rolling along polygonal curves on the stationary sphere corresponds to piecewise rigid curves of_ \(\mathscr{D}^{\mathsf{roll}}\)
Figure 8. The rolling configuration space \(S^{2}\times\mathrm{SO}_{3}\).
_
3. \(\mathscr{D}^{\mathtt{roll}}\) _is a flat_ \((2,3,5)\)_-distribution if and only if_ \(\rho=3\) _or_ \(1/3\)_._
Parts (a) and (b) appeared in SS4.4 of [7]. We learned about part (c) from a conversation with Robert Bryant. See [3, 5] for a proof of (c) as well as attempts to explain the mysterious 3:1 radius ratio.
### Third model: split octonions
We use here the notation of [4, SS3]. See also [12, 14] for more details.
_Split octonions._ This is an \(8\)-dimensional non-commutative and non-associative real algebra \(\mathbb{O}\). Following Zorn [16, page 144], its elements can be written as'vector matrices'
\[\begin{pmatrix}x&\mathbf{A}\\ \mathbf{b}&y\end{pmatrix},\ \ x,y\in\mathbb{R},\ \ \mathbf{A}\in\mathbb{R}^{3},\ \ \mathbf{b}\in(\mathbb{R}^{3})^{*},\]
with'vector-matrix-multiplication',
\[\begin{pmatrix}x&\mathbf{A}\\ \mathbf{b}&y\end{pmatrix}\begin{pmatrix}x^{\prime}&\mathbf{A}^{\prime}\\ \mathbf{b}^{\prime}&y^{\prime}\end{pmatrix}:=\left(\begin{array}{cc}xx^{ \prime}-\mathbf{b}^{\prime}\mathbf{A}&x\mathbf{A}^{\prime}+y^{\prime}\mathbf{ A}+\mathbf{b}\times\mathbf{b}^{\prime}\\ x^{\prime}\mathbf{b}+y\mathbf{b}^{\prime}+\mathbf{A}\times\mathbf{A}^{ \prime}&yy^{\prime}-\mathbf{b}\mathbf{A}^{\prime}\end{array}\right),\]
where, as before, we use the'vector products' of formulas (4).
Conjugation in \(\mathbb{O}\) is given by
\[\zeta=\begin{pmatrix}x&\mathbf{A}\\ \mathbf{b}&y\end{pmatrix}\mapsto\overline{\zeta}=\left(\begin{array}{cc}y&- \mathbf{A}\\ -\mathbf{b}&x\end{array}\right),\]
satisfying
\[\overline{\overline{\zeta}}=\zeta,\ \ \overline{\zeta\zeta^{\prime}}= \overline{\zeta^{\prime}\zeta},\ \ \zeta\overline{\zeta}=\langle\zeta,\zeta\rangle\mathbf{1},\]
where \(\mathbf{1}=\left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right)\) is the mutiplicative unit and
\[\langle\zeta,\zeta\rangle:=xy+\mathbf{b}\mathbf{A} \tag{7}\]
is a quadratic form of signature \((4,4)\) on \(\mathbb{O}\).
Define as usual
\[\mathrm{Re}(\zeta)=(\zeta+\overline{\zeta})/2,\ \ \mathrm{Im}(\zeta)=(\zeta- \overline{\zeta})/2,\]
so that
\[\mathbb{O}=\mathrm{Re}(\mathbb{O})\oplus\mathrm{Im}(\mathbb{O}),\]
where \(\mathrm{Re}(\mathbb{O})=\mathbb{R}\mathbf{1}\) and \(\mathrm{Im}(\mathbb{O})\) are 'traceless' vector matrices of the form \(\left(\begin{smallmatrix}x&\mathbf{A}\\ \mathbf{b}&-x\end{smallmatrix}\right)\).
Let \(\mathbb{RP}^{6}=\mathbb{P}(\mathrm{Im}(\mathbb{O}))=\left(\mathrm{Im}( \mathbb{O})\setminus 0\right)/\mathbb{R}^{*}\) (projectivized imaginary split octonions) and \(Q^{\mathtt{oct}}\subset\mathbb{RP}^{6}\) the projectivized null cone; that is,
\[Q^{\mathtt{oct}}=\{[\zeta]\mid\zeta\in\mathrm{Im}(\mathbb{O}),\ \zeta\neq 0,\ \langle \zeta,\zeta\rangle=0\}.\]
**Definition 8**.: \(\mathscr{D}^{\mathtt{oct}}\subset TQ^{\mathtt{oct}}\) is the distribution whose elements at a point \([\zeta]\in Q^{\mathtt{oct}}\) are given by the _annihilator_
\[\zeta^{0}=\{\zeta^{\prime}\in\mathrm{Im}(\mathbb{O})\mid\zeta\zeta^{\prime}=0\},\]
as follows. For a non-zero null \(\zeta\in\mathrm{Im}(\mathbb{O})\), \(\zeta^{0}\) is a \(3\)-dimensional subspace of \(\mathrm{Im}(\mathbb{O})\), containing \(\mathbb{R}\zeta\) and tangent to the null cone at \(\zeta\), so descends to
a projective \(2\)-plane, tangent to \(Q^{\mathsf{oct}}\) at \([\zeta]\). Its tangent space at \([\zeta]\) is the fiber of \(\mathscr{D}^{\mathsf{oct}}\) at \([\zeta]\).
We have a projective analog of Proposition 2 and Definition 6.
**Proposition 4**.: _For any given point in \(Q^{\mathsf{oct}}\) and a \(\mathscr{D}^{\mathsf{oct}}\)-direction at the point, the projective line in \(\mathbb{RP}^{6}\) passing through the given point and tangent to the given direction is contained in \(Q^{\mathsf{oct}}\) and everywhere tangent to \(\mathscr{D}^{\mathsf{oct}}\). These are the rigid curves of \(\mathscr{D}^{\mathsf{oct}}\)._
**Definition 9** (Horizontal polygons).: A _horizontal polygon_ in \(Q^{\mathsf{oct}}\) is a polygon in \(\mathbb{RP}^{6}\) whose edges are \(\mathscr{D}^{\mathsf{oct}}\)-horizontal projective lines in \(Q^{\mathsf{oct}}\), as in Proposition 4. The polygon is _non-degenerate_ if every \(3\) consecutive vertices are non-collinear.
\(\mathrm{G}_{2}\)_-symmetry_.: The automorphism group of \(\mathbb{O}\), i.e., the subgroup of \(\mathrm{GL}(\mathbb{O})\) preserving octonion multiplication, is a non-compact \(14\)-dimensional connected simple Lie group, denoted by \(\mathrm{G}_{2}\). See Appendix A. The \(\mathrm{G}_{2}\)-action on \(\mathbb{O}\) preserves the splitting \(\mathbb{O}=\mathrm{Re}(\mathbb{O})\oplus\mathrm{Im}(\mathbb{O})\) and acts trivially on \(\mathrm{Re}(\mathbb{O})\). The action on \(\mathrm{Im}(\mathbb{O})\) induces an effective action on \(\mathbb{P}(\mathrm{Im}(\mathbb{O}))\), clearly preserving \((Q^{\mathsf{oct}},\mathscr{D}^{\mathsf{oct}})\). Therefore, one has
**Proposition 5**.: \((Q^{\mathsf{oct}},\mathscr{D}^{\mathsf{oct}})\) _is a flat \((2,3,5)\)-distribution, whose symmetry group is \(\mathrm{G}_{2}\)._
The Lie algebra of \(\mathrm{G}_{2}\) is \(\mathfrak{g}_{2}\subset\mathfrak{so}_{4,3}\subset\mathfrak{so}_{4,4}\); the first inclusion is due to the \(\mathrm{G}_{2}\)-invariance of the inner product of equation (7), restricted to \(\mathrm{Im}(\mathbb{O})\), the second from the inclusion \(\mathrm{Im}(\mathbb{O})\subset\mathbb{O}\).
### Putting it all together
To prove Theorem 3, we combine the three models of the Cartan-Engel distribution presented so far, as summed up in the following diagram.
(8)
In this diagram appear the underlying manifolds of our three models for the Cartan-Engel distribution, \(Q^{\mathsf{dan}},Q^{\mathsf{oct}}\) and \(Q^{\mathsf{roll}}\), the universal covers of the last two (both of which are diffeomorphic to \(S^{2}\times S^{3}\)), as well as the following maps:
* An embedding \(\iota:Q^{\mathsf{dan}}\hookrightarrow Q^{\mathsf{oct}}\). One defines first an affine chart \(\mathbb{R}^{3,3}\hookrightarrow\mathbb{P}(\mathrm{Im}(\mathbb{O}))=\mathbb{RP }^{6}\), \[(\mathbf{A},\mathbf{b})\mapsto[\zeta],\text{ where }\zeta=\begin{pmatrix}1&\mathbf{A}\\ \mathbf{b}&-1\end{pmatrix},\]
then restrict to \(Q^{\mathsf{dan}}=\{\mathbf{b}\mathbf{A}=1\}\subset\mathbb{R}^{3,3}.\) The image \(\iota(Q^{\mathsf{dan}})\subset Q^{\mathsf{oct}}\) is the complement in \(Q^{\mathsf{oct}}\) of the 'hyperplane section' \(\{\left[\left(\begin{smallmatrix}x&\mathbf{A}\\ \mathbf{b}&-x\end{smallmatrix}\right)\right]\ |\ x=\mathbf{b}\mathbf{A}=0\}.\)
* The double covers \(\widetilde{Q}^{\mathsf{oct}}\to Q^{\mathsf{oct}}\), \(\widetilde{Q}^{\mathsf{roll}}\to Q^{\mathsf{roll}}\) (the vertical arrows).
* Recall that \(Q^{\mathsf{oct}}\) is the projectivized null cone in \(\mathbb{RP}^{6}=\left(\mathrm{Im}(\mathbb{O})\setminus 0\right)/\mathbb{R}^{*}\). If we quotient by \(\mathbb{R}^{+}\) instead of \(\mathbb{R}^{*}\) we get the'spherized' null cone \(\widetilde{Q}^{\mathsf{oct}}\subset\left(\mathrm{Im}(\mathbb{O})\setminus 0 \right)/\mathbb{R}^{+}=S^{6}\), and a double cover \(\widetilde{Q}^{\mathsf{oct}}\to Q^{\mathsf{oct}}\), \(\mathbb{R}^{+}\zeta\mapsto\mathbb{R}^{*}\zeta.\) We use this double cover to pull back \(\mathscr{D}^{\mathsf{oct}}\) to \(\widetilde{\mathscr{D}}^{\mathsf{oct}}\) on \(\widetilde{Q}^{\mathsf{oct}}\).
* Recall that \(Q^{\mathsf{roll}}=S^{2}\times\mathrm{SO}_{3}\), define \(\widetilde{Q}^{\mathsf{roll}}:=S^{2}\times S^{3}\) and use the usual double cover \(S^{3}\to\mathrm{SO}_{3}\) to define the double cover \(\widetilde{Q}^{\mathsf{roll}}\to Q^{\mathsf{roll}}\). Explicitly, to an element \(q\in S^{3}\subset\mathbb{R}^{4}=\mathbb{H}\) one associates the orthogonal transformation of \(\mathbb{R}^{3}=\mathrm{Im}(\mathbb{H})\), \(\mathbf{v}\mapsto q\mathbf{v}\bar{q}\). Again, use this double cover to pull back \(\mathscr{D}^{\mathsf{roll}}\) to \(\widetilde{\mathscr{D}}^{\mathsf{roll}}\) on \(\widetilde{Q}^{\mathsf{roll}}\).
* A diffeomorphism3\(\Phi:\widetilde{Q}^{\mathsf{roll}}\to\widetilde{Q}^{\mathsf{oct}}.\) For \((\mathbf{v},q)\in\widetilde{Q}^{\mathsf{roll}}=S^{2}\times S^{3}\subset \mathrm{Im}(\mathbb{H})\times\mathbb{H}\) one defines4 Footnote 3: This diffeomorphism is essentially the same as the one constructed in [3, Proposition 2]. Footnote 4: Note that for this formula to make sense we identify \(\mathbb{R}^{3}\) with \((\mathbb{R}^{3})^{*}\) using the standard Euclidean inner product. (9) \[\Phi(\mathbf{v},q)=\mathbb{R}^{+}\begin{pmatrix}\mathrm{Re}(\mathbf{v}q)& \mathbf{v}+\mathrm{Im}(\mathbf{v}q)\\ \mathbf{v}-\mathrm{Im}(\mathbf{v}q)&-\mathrm{Re}(\mathbf{v}q)\end{pmatrix}.\]
**Theorem 5**.: _In diagram (8), \(\Phi\) is a diffeomorphism and all maps preserve the Cartan-Engel distributions on the respective spaces._
This is proved in Section 3.3. Now we can give a more precise statement of our main theorem (Theorem 3). Let \(Q_{*}^{\mathsf{oct}}=\iota(Q^{\mathsf{dan}})\subset Q^{\mathsf{oct}}\) (the complement of the hyperplane section \(x=0\)), and similarly \(\widetilde{Q}_{*}^{\mathsf{oct}}\subset\widetilde{Q}^{\mathsf{oct}}, \widetilde{Q}_{*}^{\mathsf{roll}}\subset\widetilde{Q}^{\mathsf{roll}},Q_{*}^{ \mathsf{roll}}\subset Q^{\mathsf{roll}}\) the corresponding submanifolds under the maps of diagram (8). A polygonal horizontal path in \(Q^{\mathsf{oct}}\) (respectively \(\widetilde{Q}^{\mathsf{oct}},\widetilde{Q}^{\mathsf{roll}},Q^{\mathsf{roll}}\)) is _generic_ if all its vertices lie in \(Q_{*}^{\mathsf{oct}}\) (respectively \(\widetilde{Q}_{*}^{\mathsf{oct}},\widetilde{Q}_{*}^{\mathsf{roll}},Q_{*}^{ \mathsf{roll}}\)). A pair \((\Gamma,q)\), where \(\Gamma\) is a closed non-degenerate spherical \(n\)-gon with vertices \(\mathbf{v}_{1},\ldots,\mathbf{v}_{n}\) and \(q\in S^{3}\), is _generic_ if the horizontal lift of \(\Gamma\) to \(\widetilde{Q}^{\mathsf{roll}}=S^{2}\times S^{3}\), starting at \((\mathbf{v}_{1},q)\), is a generic horizontal polygon.
**Corollary** (Theorem 3).: _Diagram (8) defines a bijective correspondence between non-degenerate dancing pairs of closed \(n\)-gons in \(\mathbb{RP}^{2}\) and generic pairs \(([\Gamma],q)\), where \(q\in S^{3}\) and \([\Gamma]\) is an equivalence class of non-degenerate closed spherical \(n\)-gon with trivial lifted rolling monodromy for \(\rho=3\)._
In section 3.4 we shall prove Theorem 3 using Theorem 5.
## 3. Proofs
### Theorem 4
Using induction on \(n\geq 2\), the proof reduces to the following two lemmas, the \(n=2,3\) cases of the theorem.
**Lemma 1** (Theorem 4 for \(n=2\)).: _Consider a horizontal non-degenerate 2-gon in \(Q^{\mathsf{dan}}\), given by a pair of vertices \(q_{1},q_{2}\in Q^{\mathsf{dan}}\) such that the line \(q_{1}q_{2}\) is horizontal. Then the projected pair of 2-gons, with vertices \((A_{1},A_{2})\) and edges \((b_{1},b_{2})\), is inscribed, i.e., \(b_{1}b_{2}\in A_{1}A_{2}.\) Conversely, every inscribed pair of 2-gons lifts to a unique horizontal 2-gon in \(Q^{\mathsf{dan}}\)._
Proof.: Suppose \(q_{i}=(\mathbf{A}_{i},\mathbf{b}_{i}),\)\(i=1,2,\) such that \(q_{1}q_{2}\) is horizontal. Then, by definition (Proposition 1), \(\mathbf{A}_{1}\times\mathbf{A}_{2}=\mathbf{b}_{2}-\mathbf{b}_{1}\). This equation implies \((\mathbf{A}_{1}\times\mathbf{A}_{2})(\mathbf{b}_{1}\times\mathbf{b}_{2})=0,\) i.e., \(b_{1}b_{2}\in A_{1}A_{2}.\)
Conversely, suppose an inscribed pair of 2-gons is given, i.e., \(b_{1}b_{2}\in A_{1}A_{2},\) with \(b_{1}\neq b_{2},A_{1}\neq A_{2},\)\(A_{i}\not\in b_{i},\)\(i=1,2.\) The horizontality conditions on a lift \((\mathbf{A}_{i},\mathbf{b}_{i})\in Q^{\mathsf{dan}}\), \(i=1,2,\) is \(\mathbf{A}_{1}\times\mathbf{A}_{2}=\mathbf{b}_{2}-\mathbf{b}_{1}\) (see Proposition 1).
Picking an arbitrary lift \((\mathbf{A}_{i},\mathbf{b}_{i})\in Q^{\mathsf{dan}}\), any other lift is of the form \((x_{i}\mathbf{A}_{i},\mathbf{b}_{i}/x_{i})\), for some \(x_{i}\in\mathbb{R}\setminus 0\), \(i=1,2,\) and the horizontality condition is \((x_{1}\mathbf{A}_{1})\times(x_{2}\mathbf{A}_{2})=\mathbf{b}_{2}/x_{2}-\mathbf{ b}_{1}/x_{1}.\)
Now \(b_{1}b_{2}\in A_{1}A_{2}\) implies \(\mathbf{A}_{1}\times\mathbf{A}_{2}=\lambda_{1}\mathbf{b}_{1}+\lambda_{2} \mathbf{b}_{2}\) for some \(\lambda_{i}\neq 0\), therefore \(x_{1}x_{2}(\lambda_{1}\mathbf{b}_{1}+\lambda_{2}\mathbf{b}_{2})=\mathbf{b}_{2 }/x_{2}-\mathbf{b}_{1}/x_{1}\). Thus the horizontality condition becomes \((x_{1})^{2}x_{2}\lambda_{1}=-1\) and \(x_{1}(x_{2})^{2}\lambda_{2}=1\). This system has a unique solution \(x_{1}=\sqrt[3]{\lambda_{2}/(\lambda_{1})^{2}},\)\(x_{2}=-\sqrt[3]{\lambda_{1}/(\lambda_{2})^{2}},\) as needed.
**Lemma 2** (Theorem 4 for \(n=3\)).: _Consider a horizontal non-degenerate 3-gon in \(Q^{\mathsf{dan}}\), given by 3 vertices \(q_{1},q_{2},q_{3}\in Q^{\mathsf{dan}}\) such that \(q_{1}q_{2}\) and \(q_{2}q_{3}\) are horizontal lines. Then the projected pair of inscribed 3-gons with vertices \(A_{1},A_{2},A_{3}\) and edges \(b_{1},b_{2},b_{3}\) is a dancing pair; i.e., it satisfies_
\[[A_{2},B_{1},A_{1},D]+[B_{2},A_{2},C,A_{3}]=0, \tag{10}\]
_where \(B_{1}=b_{1}b_{2},B_{2}=b_{2}b_{3},C:=b_{1}a_{2}\) and \(D:=a_{1}b_{2}\). Conversely, a dancing pair of 3-gons lifts uniquely to a horizontal 3-gon in \(Q\)._
Proof.: Let \(q_{i}=(\mathbf{A}_{i},\mathbf{b}_{i}),i=1,2,3.\) By Lemma 1, the projected pair of 3-gons is inscribed and we need to show that Equation (10) is satisfied. We give homogeneous coordinates to all points involved:
\[\mathbf{B}_{1}:=\mathbf{b}_{1}\times\mathbf{b}_{2},\;\mathbf{B}_{2}:=\mathbf{ b}_{2}\times\mathbf{b}_{3},\;\mathbf{C}:=\mathbf{b}_{1}\times(\mathbf{A}_{2} \times\mathbf{A}_{3}),\;\mathbf{D}:=\mathbf{b}_{3}\times(\mathbf{A}_{1}\times \mathbf{A}_{2}).\]
We now write these expressions in terms of \(\mathbf{A}_{1},\mathbf{A}_{2},\mathbf{A}_{3}\). From the horizontality condition \(\mathbf{b}_{2}-\mathbf{b}_{1}=\mathbf{A}_{1}\times\mathbf{A}_{2}\) and the vector identity
\[\mathbf{a}\times(\mathbf{B}\times\mathbf{C})=(\mathbf{a}\mathbf{C})\mathbf{B} -(\mathbf{a}\mathbf{B})\mathbf{C} \tag{11}\]
follows
\[\mathbf{b}_{1}\times\mathbf{b}_{2}=\mathbf{b}_{1}\times(\mathbf{b}_{1}+\mathbf{A}_ {1}\times\mathbf{A}_{2})=(\mathbf{b}_{1}\mathbf{A}_{2})\mathbf{A}_{1}-\mathbf{A }_{2}.\]
Now \((\mathbf{A}_{2}-\mathbf{A}_{1},\mathbf{b}_{2}-\mathbf{b}_{1})\) is tangent to \(Q^{\mathsf{dan}}=\{\mathbf{b}\mathbf{A}=1\}\) at \((\mathbf{A}_{1},\mathbf{b}_{1})\), hence
\[0 =(\mathbf{b}_{2}-\mathbf{b}_{1})\mathbf{A}_{1}+\mathbf{b}_{1}( \mathbf{A}_{2}-\mathbf{A}_{1})=\mathbf{b}_{2}\mathbf{A}_{1}+\mathbf{b}_{1} \mathbf{A}_{2}-2=\] \[=(\mathbf{b}_{1}-\mathbf{A}_{1}\times\mathbf{A}_{2})\mathbf{A}_{ 1}+\mathbf{b}_{1}\mathbf{A}_{2}-2=\mathbf{b}_{1}\mathbf{A}_{2}-1,\]
hence \(\mathbf{B}_{1}=\mathbf{A}_{1}-\mathbf{A}_{2}.\) Similarly, \(\mathbf{B}_{2}=\mathbf{A}_{2}-\mathbf{A}_{3}.\) Using again equation (11), \(\mathbf{C}=(\mathbf{b}_{1}\mathbf{A}_{3})\mathbf{A}_{2}-\mathbf{A}_{3}\) and \(\mathbf{D}=\mathbf{A}_{1}-(\mathbf{b}_{3}\mathbf{A}_{1})\mathbf{A}_{2}.\)
Now it is easy to show that if \(4\) collinear points \(A_{1},\ldots,A_{4}\in\mathbb{RP}^{2}\) are given by homogeneous coordinates \(\mathbf{A}_{i}\in\mathbb{R}^{3}\), such that \(\mathbf{A}_{3}=\mathbf{A}_{1}+\mathbf{A}_{2},\mathbf{A}_{4}=k\mathbf{A}_{1}+ \mathbf{A}_{2}\), then \([A_{1},A_{2},A_{3},A_{4}]=k\) (see for example [13, Theorem 74, page 105]). Using this formula and the above expressions for \(\mathbf{B}_{1},\mathbf{B}_{2},\mathbf{C},\mathbf{D}\), we get
\[[A_{2},B_{1},A_{1},D]=1-\mathbf{b}_{3}\mathbf{A}_{1},\quad[B_{2},A_{2},C,A_{3} ]=1-\mathbf{b}_{1}\mathbf{A}_{3},\]
hence
\[[A_{2},B_{1},A_{1},D]+[B_{2},A_{2},C,A_{3}]=2-\mathbf{b}_{3}\mathbf{A}_{1}- \mathbf{b}_{1}\mathbf{A}_{3}.\]
Next
\[\mathbf{b}_{3}\mathbf{A}_{1}+\mathbf{b}_{1}\mathbf{A}_{3} =(\mathbf{b}_{2}+\mathbf{A}_{2}\times\mathbf{A}_{3})\mathbf{A}_{ 1}+(\mathbf{b}_{2}-\mathbf{A}_{1}\times\mathbf{A}_{2})\mathbf{A}_{3}\] \[=\mathbf{b}_{2}\mathbf{A}_{1}+\mathbf{b}_{2}\mathbf{A}_{3}=2,\]
so Equation (10) is satisfied, as needed.
In the other direction, suppose an inscribed pair of \(3\)-gons is given, with vertices \(A_{1},A_{2},A_{3}\) and edges \(b_{1},b_{2},b_{3}\), satisfying Equation (10). By Lemma 1, we can uniquely lift \((A_{1},b_{1}),(A_{2},b_{2})\) to points \((\mathbf{A}_{1},\mathbf{b}_{1}),(\mathbf{A}_{2},\mathbf{b}_{2})\in Q^{\mathsf{dan}}\) on a horizontal line.
Likewise, we can uniquely lift \((A_{2},b_{2}),(A_{3},b_{3})\) to points \((\lambda\mathbf{A}_{2},\mathbf{b}_{2}/\lambda),(\mathbf{A}_{3},\mathbf{b}_{3}) \in Q^{\mathsf{dan}}\) on a horizontal line, for some \(\lambda\neq 0.\) See Figure 9.
We now show that the dancing condition (10) implies \(\lambda=1\), i.e., the three lifted points \((\mathbf{A}_{1},\mathbf{b}_{1}),(\mathbf{A}_{2},\mathbf{b}_{2}),(\mathbf{A}_{ 3},\mathbf{b}_{3})\) are the vertices of a horizontal \(3\)-gon.
We use the following:
* \(\mathbf{b}_{2}=\mathbf{b}_{1}+\mathbf{A}_{1}\times\mathbf{A}_{2}\), \(\mathbf{b}_{3}=\frac{1}{\lambda}\mathbf{b}_{2}+\lambda\mathbf{A}_{2}\times \mathbf{A}_{3};\)
* \(A_{1}=[\mathbf{A}_{1}]\), \(A_{2}=[\mathbf{A}_{2}]\)
* \(B_{1}=[\mathbf{b}_{1}\times\mathbf{b}_{2}]=[\mathbf{A}_{1}-\mathbf{A}_{2}]\), \(B_{2}=[\mathbf{b}_{2}\times\mathbf{b}_{3}]=[\lambda\mathbf{A}_{2}-\mathbf{A}_{ 3}];\)
* \(C=[\mathbf{b}_{1}\times(\mathbf{A}_{2}\times\mathbf{A}_{3})]=[(\mathbf{b}_{1} \mathbf{A}_{3})\mathbf{A}_{2}-\mathbf{A}_{3}];\)
Figure 9. Lemma 2.
* \(D=[\mathbf{b}_{3}\times(\mathbf{A}_{\times}\mathbf{A}_{2})]=[(\mathbf{b}_{3}\mathbf{ A}_{2})\mathbf{A}_{1}-(\mathbf{b}_{3}\mathbf{A}_{1})\mathbf{A}_{2}]=[\frac{1}{ \lambda}\mathbf{A}_{1}-(\mathbf{b}_{3}\mathbf{A}_{1})\mathbf{A}_{2}]\).
A similar computation as above gives
\[[A_{2},B_{1},A_{1},D]=1-\lambda\mathbf{b}_{3}\mathbf{A}_{1}\]
\[[B_{2},A_{2},C,A_{3}]=1-\frac{1}{\lambda}\mathbf{b}_{1}\mathbf{A}_{3}\]
Now, from \(\mathbf{b}_{3}=\frac{1}{\lambda}\mathbf{b}_{2}+\lambda\mathbf{A}_{2}\times \mathbf{A}_{3}\), we get that
\[\mathbf{b}_{3}\mathbf{A}_{1}=\frac{1}{\lambda}+\lambda(\mathbf{A}_{1}\times \mathbf{A}_{2})\mathbf{A}_{3};\]
whereas, from \(\mathbf{b}_{2}=\mathbf{b}_{1}+\mathbf{A}_{1}\times\mathbf{A}_{2}\) we obtain
\[\mathbf{b}_{1}\mathbf{A}_{3}=\lambda-(\mathbf{A}_{1}\times\mathbf{A}_{2}) \mathbf{A}_{3},\]
since \(\mathbf{b}_{2}\mathbf{A}_{3}=\lambda\).
Thus,
\[0 =[A_{2},B_{1},A_{1},D]+[B_{2},A_{2},C,A_{3}]\] \[=2-\lambda\mathbf{b}_{3}\mathbf{A}_{1}-\frac{1}{\lambda}\mathbf{ b}_{1}\mathbf{A}_{3}\] \[=2-(1+\lambda^{2}(\mathbf{A}_{1}\times\mathbf{A}_{2})\mathbf{A}_ {3})-(1-\frac{1}{\lambda}(\mathbf{A}_{1}\times\mathbf{A}_{2})\mathbf{A}_{3})\] \[=\left(\frac{1}{\lambda}-\lambda^{2}\right)(\mathbf{A}_{1}\times \mathbf{A}_{2})\mathbf{A}_{3}\]
Since \(\mathbf{A}_{1},\mathbf{A}_{2},\mathbf{A}_{3}\) are non-collinear \((\mathbf{A}_{1}\times\mathbf{A}_{2})\mathbf{A}_{3}\neq 0\), therefore, \(\lambda=1\).
Theorem 4 now follows from these previous two lemmas by induction on \(n\), using Lemma 2 for the inductive step.
**3.2**.: **Theorem 2.** We identify \(\mathbb{R}^{3}=\mathrm{Im}(\mathbb{H})\), \((x,y,z)\mapsto x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\), and use repeatedly the following well known facts:
* If \(\mathbf{w}\in\mathbb{R}^{3}\) is a unit vector and \(\theta\in\mathbb{R}\) then \(\mathbf{v}\mapsto\mathbf{v}^{\prime}=e^{\theta\mathbf{w}}\mathbf{v}e^{-\theta \mathbf{w}}\) is the rotation about the axis \(\mathbb{R}\mathbf{w}\) by the angle \(2\theta\), in the sense given by the 'right hand rule' (\(\det(\mathbf{v},\mathbf{v}^{\prime},\mathbf{w})>0\)).
* \(e^{\theta\mathbf{w}}=\cos\theta+\mathbf{w}\sin\theta\).
Proof of Part (b).: (Part (a) will follow easily from part (b).) Let \(\mathbf{v}_{0},...,\mathbf{v}_{n-1}\) be the vertices of \(\Gamma\), arranged on a circle of (spherical) radius \(\phi\), centered at the north pole \(\mathbf{k}\) of the stationary sphere (which we assume here to be a unit sphere, to simplify notation). Let \(w\) be the winding number of \(\Gamma\) about \(\mathbf{k}\), so that \(\theta:=2\pi w/n\) is the angle of rotation at \(\mathbf{k}\) sending \(\mathbf{v}_{i}\mapsto\mathbf{v}_{i+1}\). That is, setting \(q:=e^{\theta\mathbf{k}/2}\), one has \(\mathbf{v}_{i}=q^{i}\mathbf{v}_{0}\bar{q}^{i}.\) As we roll the moving sphere along the edge of \(\Gamma\) joining \(\mathbf{v}_{i}\) to \(\mathbf{v}_{i+1}\), an arc of a great circle of length \(\delta\), the moving sphere rotates about the axis through its center in the direction of
\[\mathbf{w}_{i}:=\frac{\mathbf{v}_{i}\times\mathbf{v}_{i+1}}{\left\|\mathbf{v} _{i}\times\mathbf{v}_{i+1}\right\|}\]
by an angle of \(4\delta\) (due to the 3:1 radius ratio). See Fig. 10.
Thus the lifted monodromy due to rolling along this edge is \(g_{i}:=e^{2\delta\mathbf{w}_{i}},\) and the total lifted monodromy is
\[g:=g_{n-1}\cdots g_{1}g_{0}.\]
Now, clearly \(\mathbf{w}_{i}=q^{i}\mathbf{w}_{0}\bar{q}^{i},\) hence \(g_{i}=q^{i}g_{0}\bar{q}^{i},\) so
\[g=\left(q^{n-1}g_{0}\bar{q}^{n-1}\right)\left(q^{n-2}g_{0}\bar{q}^{n-2}\right) \cdots\left(qg_{0}\bar{q}\right)g_{0}=q^{n}(\bar{q}g_{0})^{n}.\]
Next
\[q^{n}=e^{n\theta\mathbf{k}/2}=e^{n(2\pi w/n)\mathbf{k}/2}=e^{\pi\mathbf{k}w}= (-1)^{w},\]
hence \(g=(-1)^{w}(\bar{q}g_{0})^{n}.\) It follows that the trivial lifted monodromy condition, \(g=1,\) is equivalent to
\[(\bar{q}g_{0})^{n}=(-1)^{w}. \tag{12}\]
**Lemma 3**.: _The condition \(p^{n}=(-1)^{w},\) for a unit quaternion \(p,\) is equivalent to the existence of an integer \(w^{\prime}\equiv w\) (mod 2) such that \(\mathrm{Re}(p)=\cos(\pi w^{\prime}/n).\)_
**Proof.** Write \(p=e^{t\mathbf{w}}=\cos t+\mathbf{w}\sin t,\) for some unit imaginary quaternion \(\mathbf{w}\) and \(t\in\mathbb{R}\). Then \(p^{n}=\cos(nt)+\mathbf{w}\sin(nt)=(-1)^{w}\Leftrightarrow\cos(nt)=(-1)^{w} \Leftrightarrow nt=\pi w^{\prime}\) for some integer \(w^{\prime}\equiv w\) (mod 2). \(\square\)
To apply the last Lemma to \(p=\bar{q}g_{0}\) we calculate its real part,
\[\begin{split}\mathrm{Re}(\bar{q}g_{0})&=\mathrm{Re} \left(e^{-\theta\mathbf{k}/2}e^{2\delta\mathbf{w}_{0}}\right)=\\ &=\cos(\theta/2)\cos(2\delta)+(\mathbf{k}\cdot\mathbf{w}_{0})\sin (\theta/2)\sin(2\delta).\end{split} \tag{13}\]
Next we have
\[\mathbf{w}_{0}=\frac{\mathbf{v}_{0}\times\mathbf{v}_{1}}{\|\mathbf{v}_{0} \times\mathbf{v}_{1}\|},\ \ \|\mathbf{v}_{0}\times\mathbf{v}_{1}\|=\sin\delta,\ \ (\mathbf{v}_{0}\times \mathbf{v}_{1})\cdot\mathbf{k}=\sin\theta\sin^{2}\phi,\]
thus
\[\mathbf{k}\cdot\mathbf{w}_{0}=\frac{\sin^{2}\phi\sin\theta}{\sin\delta}.\]
Figure 10.
As for \(\delta\), we consider the spherical right triangle with vertices \(\mathbf{k},\mathbf{v}_{0},M\), where \(M\) is the midpoint of the edge of \(\Gamma\) joining \(\mathbf{v}_{0}\) with \(\mathbf{v}_{1}\). See Figure 11.
By standard spherical trigonometry (e.g., formula (R3) of [15]),
\[\sin(\delta/2)=\sin\phi\sin(\theta/2).\]
Using the last two displayed equations in (13), we obtain, after some simplification,
\[\operatorname{Re}(\bar{q}g_{0})=\cos\left(\theta/2\right)\left[1-4\sin^{2} \left(\theta/2\right)\sin^{2}\phi\right].\]
Recalling that \(\theta=2\pi w/n\) and the lifted monodromy triviality condition \(\operatorname{Re}(\bar{q}g_{0})=\cos(\pi w^{\prime}/n)\) for some \(w^{\prime}\equiv w\ (\text{mod}\ 2)\), we obtain Equation (3),
\[\cos\left(\pi w^{\prime}/n\right)=\cos\left(\pi w/n\right)\left[1-4\sin^{2} \left(\pi w/n\right)\sin^{2}\phi\right]. \tag{14}\]
This completes the proof of part (b) of Theorem 2, except for the bound \(w<w^{\prime}<n\). This follows from the last equation: by the periodicity of the left hand side, one can assume, without loss of generality, that \(0\leq w^{\prime}<n\). Now the right hand side is a strictly decreasing function of \(\phi\in(0,\pi/2)\), taking values in the open interval \((\cos(3\pi w/n),\cos(\pi w/n))\). Thus if \(\cos(\pi w^{\prime}/n)\) is one of these values we must have that \(\pi w/n<\pi w^{\prime}/n\), or \(w<w^{\prime}\).
Proof of Part (a).: This proof is very similar to part (b) above, except that for the trivial (unlifted) monodromy condition on \(\Gamma\), one requires only that the lifted monodromy satisfies \(g=\pm 1\), so in Lemma 3 one can drop the requirement \(w^{\prime}\equiv w\ (\text{mod}\ 2)\).
Proof of Part (c).: Let \(\Gamma^{\prime}\) be the regular polygon traced on the moving sphere as it is rolled along \(\Gamma\). Its vertices are \(\mathbf{v}^{\prime}_{0},\ldots,\mathbf{v}^{\prime}_{n-1}\), arranged on a circle of radius \(\phi^{\prime}\), with \(\theta^{\prime}\) the angle of rotation at the center, sending \(\mathbf{v}^{\prime}_{i}\mapsto\mathbf{v}^{\prime}_{i+1}.\) To show that \(\mathbf{w}^{\prime}\) in equation (3) is the winding number of \(\Gamma^{\prime}\) about its center we thus need to show that \(\theta^{\prime}:=2\pi w^{\prime}/n\).
We consider the shaded triangle Figure 11 and the corresponding triangle on the moving sphere. On the stationary sphere the angles are \(\theta/2,\pi/2,A\), and the sides opposite the first two angles are \(\delta/2,\phi\) (respectively). On the moving sphere the angles are \(\theta^{\prime}/2,\pi/2,A\), and the sides opposite the first two angles are \(3\delta/2,\phi^{\prime}\) (respectively). By formulas (R3) and (R9) of [15]
Figure 11.
applied to these two triangles,
\[\cos(\theta/2) =\sin A\cos(\delta/2),\] \[\cos(\theta^{\prime}/2) =\sin A\cos(3\delta/2),\] \[\sin(\delta/2) =\sin(\theta/2)\sin\phi.\]
Hence
\[\cos(\theta^{\prime}/2) =\cos(\theta/2)\frac{\cos(3\delta/2)}{\cos(\delta/2)}=\cos( \theta/2)\left[1-4\sin^{2}(\delta/2)\right]\] \[=\cos(\theta/2)\left[1-4\sin^{2}(\theta/2)\sin^{2}\phi\right]= \cos\left(\pi w^{\prime}/n\right).\]
Thus \(\cos(\theta^{\prime}/2)=\cos\left(\pi w^{\prime}/n\right)\), hence \(\theta^{\prime}=2\pi w^{\prime}/n\), as needed.
Proof of Part (d).: We first show that for all \(n\geq 6\), \(w=2\) and \(w^{\prime}=4\), Equation (19) has a solution, using an intermediate-value argument: let \(\alpha=2\pi/n\), then we need to show that \(\cos(2\alpha)=\cos\alpha\left(1-4\sin^{2}\alpha\sin^{2}\phi\right)\) has a solution \(\phi\in(0,\pi/2)\). As \(\phi\) varies in \([0,\pi/2]\), the right hand side of this equation decreases monotonically from \(\cos\alpha\) to \(\cos\alpha\left(1-4\sin^{2}\alpha\right)=\cos(3\alpha)\). For \(n\geq 6\) one has \(3\alpha\leq\pi\), hence \(\cos(\alpha)>\cos(2\alpha)>\cos(3\alpha)\), so there is an intermediate value of \(\phi\in(0,\pi/2)\) for which the right hand side is \(\cos(2\alpha)\).
To show that there is no solution of (19) with \(n<6\), we can either use Theorem 1, or more elementary, prove directly that there are no solutions to Equation (19) with \(n<6\), \(0<w<n/2\), \(w<w^{\prime}<n\) and \(w^{\prime}\equiv w\pmod{2}\). There are only \(3\) cases of \((n,w,w^{\prime})\) satisfying these restrictions: \((4,1,3),(5,1,3)\) and \((5,2,4).\) In these \(3\) cases it is easy to show that the equation reduces to \(\cos\phi=0\), so there is no solution \(\phi\in(0,\pi/2)\).
### Theorem 5
We divide the proof as follows:
1. \(\iota:Q^{\mathsf{dan}}\to Q^{\mathsf{oct}}\) is an embedding, mapping \(\mathscr{D}^{\mathsf{dan}}\) to \(\mathscr{D}^{\mathsf{oct}}\).
2. \(\Phi:\widetilde{Q}^{\mathsf{roll}}\to\widetilde{Q}^{\mathsf{oct}}\) is a diffeomorphism
3. \(\Phi\) maps \(\widetilde{\mathscr{D}}^{\mathsf{roll}}\) to \(\widetilde{\mathscr{D}}^{\mathsf{oct}}\).
Proof of (1).: This was shown in [4, SS3.3], with slightly different notation, so we sketch the proof here. The map \(\mathbb{R}^{3,3}\to\mathbb{P}(\operatorname{Im}(\mathbb{O}))\), \((\mathbf{A},\mathbf{b})\mapsto[\zeta]\), where \(\zeta=\left(\begin{smallmatrix}1&\mathbf{A}\\ \mathbf{b}&-1\end{smallmatrix}\right),\) is clearly injective (an affine chart). If \((\mathbf{A},\mathbf{b})\in Q^{\mathsf{dan}}\), i.e., \(\mathbf{b}\mathbf{A}=1\), then \(\left\langle\zeta,\zeta\right\rangle=-1+\mathbf{b}\mathbf{A}=0\), i.e. \(\iota(Q^{\mathsf{dan}})\subset Q^{\mathsf{oct}}\).
Next let \(\Omega:=\zeta\mathrm{d}\zeta\), an \(\mathbb{O}\)-valued \(1\)-form on \(\operatorname{Im}(\mathbb{O})\). Explicitly,
\[\Omega=\left(\begin{array}{cc}x\,\mathrm{d}x-\mathrm{d}\mathbf{b}\,\mathbf{A} &x\,\mathrm{d}\mathbf{A}-\mathbf{A}\,\mathrm{d}x+\mathbf{b}\times\mathrm{d} \mathbf{b}\\ \mathbf{b}\,\mathrm{d}x-x\mathrm{d}\mathbf{b}+\mathbf{A}\times\mathrm{d} \mathbf{A}&x\,\mathrm{d}x-\mathbf{b}\,\mathrm{d}\mathbf{A}\end{array}\right). \tag{15}\]
Next one calculates that \(r^{*}\Omega=r^{2}\Omega\), \(r\in\mathbb{R}^{*}\), and that \(\Omega\) vanishes along the radial directions in the null cone \(C\subset\operatorname{Im}(\mathbb{O})\), hence the restriction of \(\Omega\) to \(C\) descends to the quotient \((C\setminus 0)/\mathbb{R}^{*}=Q^{\mathsf{oct}}\), with kernel \(\mathscr{D}^{\mathsf{oct}}\) (see [4, Proposition 3.5] for the detailed calculation). It is thus enough to
show that the kernel of the pull-back of \(\Omega\) to \(Q^{\mathsf{dan}}\) by \(\tilde{\iota}:(\mathbf{A},\mathbf{b})\mapsto\left(\begin{smallmatrix}1&\mathbf{A} \\ \mathbf{b}&-1\end{smallmatrix}\right)\) is \(\mathscr{D}^{\mathsf{dan}}\). Indeed, from Equation (15) follows
\[\tilde{\iota}^{*}\Omega=\left(\begin{array}{cc}-\mathrm{d}\mathbf{b}\, \mathbf{A}&\mathrm{d}\mathbf{A}+\mathbf{b}\times\mathrm{d}\mathbf{b}\\ -\mathrm{d}\mathbf{b}+\mathbf{A}\times\mathrm{d}\mathbf{A}&-\mathbf{b}\, \mathrm{d}\mathbf{A}\end{array}\right),\]
then one checks that the common kernel of the entries of this 1-form, restricted to \(Q^{\mathsf{dan}}\), is indeed \(\mathscr{D}^{\mathsf{dan}}\).
Proof of (2).: We express \(\Phi:\widetilde{Q}^{\mathsf{roll}}\to\widetilde{Q}^{\mathsf{oct}}\) as the composition
\[S^{2}\times S^{3}\xrightarrow{f}S^{2}\times S^{3}\xrightarrow{j}C\setminus 0 \xrightarrow{\pi}\widetilde{Q}^{\mathsf{oct}}, \tag{16}\]
where
* \(f\) is the restriction to \(S^{2}\times S^{3}\) of the map \[\mathrm{Im}(\mathbb{H})\times\mathbb{H}\to\mathrm{Im}(\mathbb{H})\times \mathbb{H},\ \ (\mathbf{v},q)\mapsto(\mathbf{v},\mathbf{v}q),\]
* \(j\) is the restriction to \(S^{2}\times S^{3}\) of the linear isomorphism (17) \[\mathrm{Im}(\mathbb{H})\times\mathbb{H}\to\mathrm{Im}(\mathbb{O}),\ \ (\mathbf{v},x+ \mathbf{w})\mapsto\begin{pmatrix}x&\mathbf{v}+\mathbf{w}\\ \mathbf{v}-\mathbf{w}&-x\end{pmatrix},\] where \(\mathbf{w},\mathbf{v}\in\mathrm{Im}(\mathbb{H})\), \(x\in\mathbb{R}\),
* \(C=\{\zeta\in\mathrm{Im}(\mathbb{O})\ |\ \langle\zeta,\zeta\rangle=0\}\) is the _null cone_ in \(\mathrm{Im}(\mathbb{O})\), and
* \(\pi\) is the restriction to \(C\setminus 0\) of the canonical projection \[\mathrm{Im}(\mathbb{O})\setminus 0\to(\mathrm{Im}(\mathbb{O})\setminus 0)/ \mathbb{R}^{+}=S^{6},\ \ \zeta\mapsto\mathbb{R}^{+}\zeta.\]
**Remark 3**.: We note, as we did after Equation (9), that for Equation (17) to make sense we need to identify \(\mathbb{R}^{3}\) (column vectors) with \((\mathbb{R}^{3})^{*}\) (row vectors) using the standard Euclidean inner product in \(\mathbb{R}^{3}\). This convention will be kept implicitly for the rest of the article.
First, \(f\) is a diffeomorphism because it has an inverse,
\[(\mathbf{v},q)\mapsto(\mathbf{v},-\mathbf{v}q).\]
Next, we verify that \(j\) maps \(S^{2}\times S^{3}\) into \(C\setminus 0\): if \((\mathbf{v},x+\mathbf{w})\in S^{2}\times S^{3}\), then \(|\mathbf{v}|^{2}=x^{2}+|\mathbf{w}|^{2}=1\), hence if \(\zeta=j(\mathbf{v},x+\mathbf{w})=\left(\begin{smallmatrix}x&\mathbf{v}+ \mathbf{w}\\ \mathbf{v}-\mathbf{w}&-x\end{smallmatrix}\right)\) then \(\langle\zeta,\zeta\rangle=-x^{2}+(\mathbf{v}+\mathbf{w})\cdot(\mathbf{v}- \mathbf{w})=|\mathbf{v}|^{2}-(x^{2}+|\mathbf{w}|^{2})=1-1=0\), hence \(j(\mathbf{v},x+\mathbf{w})\in C\). Now \(0\) is clearly not in the image, since \(j\) is the restriction of an injectve linear map and \(0\) is not in \(S^{2}\times S^{3}\).
Next, to show that \(\pi\circ j\) is a diffeomorphism, we show that it is bijective and a local diffeomorphism.
To show that \(\pi\circ j\) is injective, note that \(j\) is injective and its image intersects each of the fibers of \(\pi\) at a single point.
To show that \(\pi\circ j\) is surjective let \([\zeta]\in\widetilde{Q}^{\mathsf{oct}}\), then \(\zeta=\left(\begin{smallmatrix}x&\mathbf{A}\\ \mathbf{b}&-x\end{smallmatrix}\right)\in C\setminus 0\), hence \(-x^{2}+\mathbf{b}\mathbf{A}=0.\) Let \(\mathbf{v}:=(\mathbf{A}+\mathbf{b})/2,\mathbf{w}:=(\mathbf{A}-\mathbf{b})/2\), then \(|x+\mathbf{w}|^{2}=x^{2}+|\mathbf{w}|^{2}=x^{2}+|\mathbf{A}-\mathbf{b}|^{2}/4= x^{2}+|\mathbf{A}+\mathbf{b}|^{2}/4-\mathbf{b}\mathbf{A}=|\mathbf{v}|^{2}.\) Thus \(|x+\mathbf{w}|=|\mathbf{v}|\neq 0\), so we can (positively) rescale \(\mathbf{v}\) and \(x+\mathbf{w}\) simultaneously to unit vectors.
To show that \(\pi\circ j\) is a local diffeomorphism it is enough to show, by the inverse function theorem, that its differential \((\pi\circ j)_{*}\) is bijective at each point of \(S^{2}\times S^{3}\).
Now \(j\) is the restriction of a linear isomorphism, hence is an immersion, i.e., \(j_{*}\) is injective at each point of \(S^{2}\times S^{3}\), and \(\pi\) is a submersion, i.e., \(\pi_{*}\) is surjective at each point of \(C\setminus 0\), with kernel in the radial direction, transverse to the image of \(j_{*}\). Hence the composition \(\pi_{*}\circ j_{*}=(\pi\circ j)_{*}\) is bijective, as needed.
Proof of (3).: We shall prove this statement in several steps:
1. Define a transitive action of \(K:=S^{3}\times S^{3}\) on \(\widetilde{Q}^{\mathtt{roll}}\) and \(\widetilde{Q}^{\mathtt{oct}}\).
2. \(\Phi:\widetilde{Q}^{\mathtt{roll}}\to\widetilde{Q}^{\mathtt{oct}}\) is \(K\)-equivariant.
3. \(\widetilde{\mathscr{D}}^{\mathtt{roll}},\widetilde{\mathscr{D}}^{\mathtt{oct}}\) are \(K\)-invariant.
4. \(\Phi_{*}\) maps \(\widetilde{\mathscr{D}}^{\mathtt{roll}}\) to \(\widetilde{\mathscr{D}}^{\mathtt{oct}}\) at a single point of \(\widetilde{Q}^{\mathtt{roll}}\).
We proceed with proofs of each of these steps.
_Step (a)._ Define a linear action of \(K=S^{3}\times S^{3}\) (pairs of unit quaternions) on \(\operatorname{Im}(\mathbb{H})\oplus\mathbb{H}\) via
\[(q_{1},q_{2}):(\mathbf{v},q)\mapsto(q_{1}\mathbf{v}\bar{q}_{1},q_{1}q\bar{q}_{ 2}). \tag{18}\]
This leaves invariant \(\widetilde{Q}^{\mathtt{roll}}=S^{2}\times S^{3}\) so defines a \(K\)-action on it, clearly transitive.
This linear \(K\)-action on \(\operatorname{Im}(\mathbb{H})\oplus\mathbb{H}\) also induces a \(K\)-action on \(\operatorname{Im}(\mathbb{O})\) via the linear isomorphism \(\operatorname{Im}(\mathbb{H})\oplus\mathbb{H}\to\operatorname{Im}(\mathbb{O})\) of formula (17). Under this isomorphism, \(\langle\,\ \rangle\) becomes the quadratic form \(|\mathbf{v}|^{2}-|q|^{2}\) on \(\operatorname{Im}(\mathbb{H})\oplus\mathbb{H}\), which is clearly \(K\)-invariant, hence the null cone \(C\subset\operatorname{Im}(\mathbb{O})\) is \(K\)-invariant, inducing a \(K\)-action on \(\widetilde{Q}^{\mathtt{oct}}=(C\setminus 0)/\mathbb{R}^{+}\).
_Step (b)._ Each of the maps \(f,\iota,\pi\) of (16) are \(K\)-equivariant, hence so is \(\Phi\). The (easy) verification is left to the reader.
_Step (c)._ We will use throughout a well-known (and easy to verify) quaternion identity:
\[\mathbf{v}\mathbf{w}=-\mathbf{v}\cdot\mathbf{w}+\mathbf{v}\times\mathbf{w}, \ \ \forall\mathbf{v},\mathbf{w}\in\operatorname{Im}(\mathbb{H}).\]
We start with the \(K\)-invariance of \(\widetilde{\mathscr{D}}^{\mathtt{roll}}\). Note that the \(K\)-action on \(\widetilde{Q}^{\mathtt{roll}}\) commutes with that of \((\pm 1,\pm 1)\), hence it descends to an action of \(K/(\pm 1,\pm 1)=\operatorname{SO}_{3}\times\operatorname{SO}_{3}\) on the quotient \(Q^{\mathtt{roll}}=\widetilde{Q}^{\mathtt{roll}}/(1,\pm 1)=S^{2}\times \operatorname{SO}_{3}\), \((g_{1},g_{2}):(\mathbf{v},g)\mapsto(g_{1}\mathbf{v},g_{1}gg_{2}^{-1}).\) Thus, in order to show that \(\widetilde{\mathscr{D}}^{\mathtt{roll}}\) is \(K\)-invariant it is enough to show that \(\mathscr{D}^{\mathtt{roll}}\) is \(\operatorname{SO}_{3}\times\operatorname{SO}_{3}\)-invariant. Now at a point \((\mathbf{v},g)\in Q^{\mathtt{roll}}=S^{2}\times\operatorname{SO}_{3}\), \(\mathscr{D}^{\mathtt{roll}}\) consists of vectors \((\dot{\mathbf{v}},\dot{g})\in T_{(\mathbf{v},g)}Q^{\mathtt{roll}}\) satisfying
\[4\dot{\mathbf{v}}=\boldsymbol{\omega}\times\mathbf{v},\ \ \boldsymbol{\omega} \cdot\mathbf{v}=0, \tag{19}\]
where \(\boldsymbol{\omega}\times\mathbf{x}=\dot{g}g^{-1}\mathbf{x}\), see Definition 7. Now \((g_{1},g_{2})_{*}:(\dot{\mathbf{v}},\dot{g})\mapsto(g_{1}\dot{\mathbf{v}},g_{1} \dot{g}g_{2}^{-1})\), thus
\[g_{1}\dot{g}g_{2}^{-1}(g_{1}gg_{2}^{-1})^{-1}\mathbf{x}=g_{1}\dot{g}g^{-1}g_{1} ^{-1}\mathbf{x}=g_{1}(\boldsymbol{\omega}\times g_{1}^{-1}\mathbf{x})=(g_{1} \boldsymbol{\omega})\times\mathbf{x},\]
hence \((g_{1},g_{2})_{*}:(\dot{\mathbf{v}},\boldsymbol{\omega})\mapsto(g_{1}\dot{ \mathbf{v}},g_{1}\boldsymbol{\omega}).\) Now if \((\dot{\mathbf{v}},\boldsymbol{\omega})\) satisfy Equations (19) then \(4g_{1}\dot{\mathbf{v}}=4g_{1}(\boldsymbol{\omega}\times\mathbf{v})=4(g_{1} \boldsymbol{\omega})\times(g_{1}\mathbf{v})\), \((g_{1}\boldsymbol{\omega})\cdot(g_{1}\mathbf{v})=\boldsymbol{\omega}\cdot \mathbf{v}=0\), hence \((g_{1}\dot{\mathbf{v}},g_{1}\boldsymbol{\omega})\) satisfy them as well. This shows that \(\widetilde{\mathscr{D}}^{\mathtt{roll}}\) is \(K\)-invariant.
To show that \(\widetilde{\mathscr{D}}^{\mathtt{oct}}\) is \(K\)-invariant it is enough to show that the \(K\)-action defined on \(\mathrm{Im}(\mathbb{O})\) by the isomorphism \(\mathrm{Im}(\mathbb{H})\oplus\mathbb{H}\to\mathrm{Im}(\mathbb{O})\) of formula (17) preserves octonion multiplication. For this, we show that the image of the infinitesimal \(K\)-action on \(\mathrm{Im}(\mathbb{O})\) is contained in the Lie algebra \(\mathfrak{g}_{2}\subset\mathrm{End}(\mathrm{Im}(\mathbb{O}))\) defined by Equation (25).
To this end, let \((\mathbf{v}_{1},\mathbf{v}_{2})\in\mathrm{Im}(\mathbb{H})\oplus\mathrm{Im}( \mathbb{H})\), thought of as the Lie algebra of \(K=S^{3}\times S^{3}\). The infinitesimal action of \((\mathbf{v}_{1},\mathbf{v}_{2})\) on \(\mathrm{Im}(\mathbb{H})\oplus\mathbb{H}\), corresponding to the \(K\)-action of Equation (18), is
\[(\mathbf{v},q)\mapsto(2\mathbf{v}_{1}\times\mathbf{v},\mathbf{v}_{1}q-q \mathbf{v}_{2}).\]
Conjugating this action with the isomorphism \(\mathrm{Im}(\mathbb{H})\oplus\mathrm{Im}(\mathbb{H})\to\mathrm{Im}(\mathbb{O})\) of Equation (17), we obtain the infinitesimal action of \((\mathbf{v}_{1},\mathbf{v}_{2})\) on \(\mathrm{Im}(\mathbb{O})\),
\[\left(\begin{array}{cc}x&\mathbf{A}\\ \mathbf{b}&-x\end{array}\right)\mapsto\left(\begin{array}{cc}\tilde{x}& \tilde{\mathbf{A}}\\ \tilde{\mathbf{b}}&-\tilde{x}\end{array}\right),\]
where
\[\left\{\begin{array}{rl}\tilde{\mathbf{A}}&=\frac{1}{2}(3\mathbf{v}_{1}+ \mathbf{v}_{2})\times\mathbf{A}+\frac{1}{2}(\mathbf{v}_{1}-\mathbf{v}_{2}) \times\mathbf{b}+(\mathbf{v}_{1}-\mathbf{v}_{2})x\\ \tilde{\mathbf{b}}&=\frac{1}{2}(\mathbf{v}_{1}-\mathbf{v}_{2})\times\mathbf{A }+\frac{1}{2}(3\mathbf{v}_{1}+\mathbf{v}_{2})\times\mathbf{b}+(\mathbf{v}_{2} -\mathbf{v}_{1})x\\ \tilde{x}&=\frac{1}{2}(\mathbf{v}_{2}-\mathbf{v}_{1})\cdot(\mathbf{A}- \mathbf{b})\end{array}\right.\]
(to simplify notation, all vectors in this formula are column vectors). One can see easily that this is the action of \(\rho(T,\mathbf{Q},\mathbf{p})/2\) of Equation (25), where
\[T\mathbf{x}=(3\mathbf{v}_{1}+\mathbf{v}_{2})\times\mathbf{x},\ \mathbf{Q}= \mathbf{v}_{1}-\mathbf{v}_{2},\ \mathbf{p}=\mathbf{v}_{2}-\mathbf{v}_{1}. \tag{20}\]
This concludes the proof of Step (c).
_Step (d)._ We first find equations for \(\widetilde{\mathscr{D}}^{\mathtt{roll}}\) on \(\widetilde{Q}^{\mathtt{roll}}=S^{2}\times S^{3}\subset\mathrm{Im}(\mathbb{H}) \oplus\mathbb{H}\), using coordinates \(\mathbf{v}\in\mathrm{Im}(\mathbb{H})\), \(s+\mathbf{w}\in\mathbb{H}\).
**Lemma 4**.: \(\widetilde{\mathscr{D}}^{\mathtt{roll}}\subset T\widetilde{Q}^{\mathtt{roll}}\) _is given by_
\[2\mathrm{d}\mathbf{v}+\mathbf{v}\times(s\,\mathrm{d}\mathbf{w}- \mathbf{w}\,\mathrm{d}s+\mathbf{w}\times\mathrm{d}\mathbf{w})=0 \text{(no slip)},\] \[\mathbf{v}\cdot(s\,\mathrm{d}\mathbf{w}-\mathbf{w}\,\mathrm{d}s+ \mathbf{w}\times\mathrm{d}\mathbf{w})=0 \text{(no twist)}.\]
**Proof.** Recall first the equations that define \(\mathscr{D}^{\mathtt{roll}}\) (Definition 7, for \(\rho=3\)):
\[4\dot{\mathbf{v}}=\boldsymbol{\omega}\times\mathbf{v}\ (\text{no slip}),\quad \boldsymbol{\omega}\cdot\mathbf{v}=0\ (\text{no twist}),\]
where \(\boldsymbol{\omega}\) is defined by \(\dot{g}g^{-1}\mathbf{x}=\boldsymbol{\omega}\times\mathbf{x}\).
Now let \(q=q(t)\in S^{3}\), then \(1=q\bar{q}\) implies \(0=\dot{q}\bar{q}+q\dot{\bar{q}}\), i.e., \(\dot{q}\bar{q}\in\mathrm{Im}(\mathbb{H})\). Next define \(g=\mathrm{Ad}(q)\), i.e., \(\mathbf{x}=g\mathbf{X}=q\mathbf{X}\bar{q}\), where \(\mathbf{X}\in\mathbb{R}^{3}\) (fixed). Then, on the one hand,
\[\dot{\mathbf{x}}=\dot{g}\mathbf{X}=\dot{g}g^{-1}\mathbf{x}=\boldsymbol{\omega} \times\mathbf{x},\]
and on the other hand,
\[\dot{\mathbf{x}}=\dot{q}\mathbf{X}\bar{q}+q\mathbf{X}\dot{\bar{q}}=\dot{q}\bar{ q}\mathbf{x}q\bar{q}+q\bar{q}\mathbf{x}q\dot{\bar{q}}=\dot{q}\bar{q}\mathbf{x}- \mathbf{x}\dot{q}\bar{q}=2\dot{q}\bar{q}\times\mathbf{x},\]
hence \(\boldsymbol{\omega}=2\dot{q}\bar{q}\).
Next, writing \(q=s+\mathbf{w}\), one has \(1=q\bar{q}=s^{2}+|\mathbf{w}|^{2}\), hence \(0=s\dot{s}+\mathbf{w}\cdot\dot{\mathbf{w}}\), thus
\[\begin{split}\dot{q}\bar{q}&=(\dot{s}+\dot{ \mathbf{w}})(s-\mathbf{w})=\dot{s}s-\dot{s}\mathbf{w}+\dot{\mathbf{w}}s-\dot{ \mathbf{w}}\mathbf{w}\\ &=-\dot{\mathbf{w}}\cdot\mathbf{w}-\dot{s}\mathbf{w}+\dot{ \mathbf{w}}s-(-\dot{\mathbf{w}}\cdot\mathbf{w}+\dot{\mathbf{w}}\times \mathbf{w})\\ &=s\dot{\mathbf{w}}-\mathbf{w}\dot{s}+\mathbf{w}\times\dot{ \mathbf{w}}.\end{split} \tag{21}\]
It follows that the no slip equation on \(Q^{\texttt{roll}}\), \(4\dot{\mathbf{v}}=\boldsymbol{\omega}\times\mathbf{v}\), pulled back to \(\widetilde{Q}^{\texttt{roll}}\) by \(q\mapsto\mathrm{Ad}(q)\), is
\[2\dot{\mathbf{v}}+\mathbf{v}\times\dot{q}\bar{q}=2\dot{\mathbf{v}}+\mathbf{v} \times(s\dot{\mathbf{w}}-\mathbf{w}\dot{s}+\mathbf{w}\times\dot{\mathbf{w}})=0.\]
The no-twist equation also follows from the above expression (21) for \(\dot{q}\bar{q}\),
\[\dot{q}\bar{q}\cdot\mathbf{v}=\mathbf{v}\cdot(s\dot{\mathbf{w}}-\mathbf{w} \dot{s}+\mathbf{w}\times\dot{\mathbf{w}})=0,\]
as needed.
Proof of step (d) continued.: Now fix the point \((\mathbf{i},1)\in\widetilde{Q}^{\texttt{roll}}\). From the last lemma follows that \(\widetilde{\mathscr{D}}^{\texttt{roll}}\) at this point is given by
\[\mathrm{d}w_{1}=\mathrm{d}v_{1}=2\mathrm{d}v_{2}-\mathrm{d}w_{3}=2\mathrm{d}v_{ 3}+\mathrm{d}w_{2}=0. \tag{22}\]
Next consider \(\Phi(\mathbf{i},1)=[\begin{smallmatrix}0&2\mathbf{i}\\ 0&0\end{smallmatrix}]\in\widetilde{Q}^{\texttt{oct}}.\) According to Equation (15), at this point \(\widetilde{\mathscr{D}}^{\texttt{oct}}\), pulled-back to the tangent to \(C\subset\mathrm{Im}(\mathbb{H})\) at \(\begin{smallmatrix}0&2\mathbf{i}\\ 0&0\end{smallmatrix}\), is given by
\[\mathrm{d}x=\mathrm{d}b_{1}=\mathrm{d}A_{2}=\mathrm{d}A_{3}=0. \tag{23}\]
Now \(\Phi=\pi\circ\iota\circ f\), and \(\iota\circ f\) is given by \((\mathbf{v},s+\mathbf{w})\mapsto\begin{smallmatrix}x&\mathbf{A}\\ \mathbf{b}&-x\end{smallmatrix}\), where
\[\mathbf{A}=(1+s)\mathbf{v}+\mathbf{v}\times\mathbf{w},\ \mathbf{b}=(1-s)\mathbf{v}- \mathbf{v}\times\mathbf{w},\ x=-\mathbf{v}\cdot\mathbf{w}.\]
The pull back of Equations (23) under this map are
\[\mathrm{d}w_{1}=\mathrm{d}s=2\mathrm{d}v_{2}-\mathrm{d}w_{3}=2\mathrm{d}v_{3}+ \mathrm{d}w_{2}=0.\]
This coincides with Equations (22), modulo the tangency equations to \(S^{2}\times S^{3}\) at \((\mathbf{i},1)\), \(\mathrm{d}v_{1}=\mathrm{d}s=0\).
### Theorem 3
The proof is based on Theorem 5. We need to establish first two lemmas.
To state the first lemma, we consider an ordered pair of points \(\left([\mathbf{v}_{1}],[\mathbf{v}_{2}]\right)\in\left(S^{2}/\pm 1\right)\times \left(S^{2}/\pm 1\right),\) and associate with it a rolling monodromy \(\mu\in S^{3},\) as follows. Consider a simple geodesic segment from \([\mathbf{v}_{1}]\) to \([\mathbf{v}_{2}]\) (there are \(2\) such segments), lift it to a spherical segment in \(S^{2}\) (there are \(2\) such lifts), roll the moving sphere along this segment, then take the resulting lifted rolling monodromy \(\mu\in S^{3}.\)
**Lemma 5**.: _The lifted rolling monodromy \(\mu\in S^{3}\) depends only on the ordered pair \(\left([\mathbf{v}_{1}],[\mathbf{v}_{2}]\right)\in\left(S^{2}/\pm 1\right) \times\left(S^{2}/\pm 1\right)\) and not on the various choices made._
**Proof.** Let \(\delta:=\)dist\(([\mathbf{v}_{1}],[\mathbf{v}_{2}]),\)\(0<\delta\leq\pi/2.\) There are two directed geodesic segments in \(S^{2}/\pm 1\) connecting \([\mathbf{v}_{1}]\) to \([\mathbf{v}_{2}],\) of lengths \(\delta,\pi-\delta.\) Each has two possible lifts to geodesic segments in \(S^{2},\) of the same length, a total of \(4\) possibilities, with endpoints (i) \(\mathbf{v}_{1},\mathbf{v}_{2},\) (ii) \(\mathbf{v}_{1},-\mathbf{v}_{2},\) (iii) \(-\mathbf{v}_{1},-\mathbf{v}_{2},\) (iv) \(-\mathbf{v}_{1},\mathbf{v}_{2},\) of lengths \(\delta,\pi-\delta,\delta,\pi-\delta\) (respectively). See Figure 12.
Let us take case (i). Let \(\mathbf{w}=\mathbf{v}_{1}\times\mathbf{v}_{2}/\|\mathbf{v}_{1}\times\mathbf{v }_{2}\|.\) As we roll the small sphere along the spherical arc segment of length \(\delta\) from \(\mathbf{v}_{1}\) to \(\mathbf{v}_{2},\) it rotates about the \(\mathbf{w}\) axis by an angle of \(4\delta\). Thus \(\mu=e^{2\mathbf{w}\delta}.\) In case (ii), the arc length changes to \(\pi-\delta\) and \(\mathbf{w}\) changes to \(-\mathbf{w}\). Thus \(\mu=e^{2(\pi-\delta)(-\mathbf{w})}=e^{2\mathbf{w}\delta},\) same as in case (i). Cases (iii) and (iv) are analyzed similarly.
The second lemma concerns the map \(\Phi:\widetilde{Q}^{\mathtt{roll}}\to\widetilde{Q}^{\mathtt{oct}}\) of Equation (9).
**Lemma 6**.: _Let \((\mathbf{v},q)\in\widetilde{Q}^{\mathtt{roll}}\), \([\zeta]=\Phi(\mathbf{v},q)\in\widetilde{Q}^{\mathtt{oct}}.\) Then \(\Phi(-\mathbf{v},q)=[-\zeta].\)_
In other words, \(\Phi\) is \(\mathbb{Z}_{2}\)-equivariant, with respect to the actions \((-1)\times 1,\)\(-1\) on \(\widetilde{Q}^{\mathtt{roll}},\widetilde{Q}^{\mathtt{oct}}\) (respectively). The proof is immediate from formula (9).
Now we proceed to the bijective correspondence indicated in the statement of Theorem 3. We start with an equivalence class \([\Gamma]\) of non-degenerate closed spherical \(n\)-gons with trivial lifted rolling monodromy and an element \(q\in S^{3}\). We associate with \(([\Gamma],q)\) a non-degenerate closed horizontal \(n\)-gon
Figure 12.
in \(Q^{\mathtt{oct}}\), as follows. \([\Gamma]\) is given by the projected vertices \([\mathbf{v}_{1}],\ldots,[\mathbf{v}_{n}]\in S^{2}/\pm 1\) (see the first paragraph after Definition 4). We pick simple edges between succesive vertices (there are two possibilities for each successive pair) and obtain a closed polygonal path in \(S^{2}/\pm 1\). This path is then lifted to a spherical \(n\)-gon \(\Gamma\) in \(S^{2}\) (there are two such lifts), with vertices \(\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\) closed up to sign, i.e., \(\mathbf{v}_{i+n}=\pm\mathbf{v}_{i}\) (same sign for all \(i\)). Let \(\mu_{i}\in S^{3}\) be the lifted rolling monodromy along the edge of \(\Gamma\) from \(\mathbf{v}_{i}\) to \(\mathbf{v}_{i+1}\). By Lemma 5, \(\mu_{i}\) depends only on \([\mathbf{v}_{i}],[\mathbf{v}_{i+1}]\).
Next we lift \(\Gamma\) horizontally to \(\widetilde{Q}^{\mathtt{roll}}\), starting at \((\mathbf{v}_{1},q)\), and obtain a horizontal polygon, with vertices \((\mathbf{v}_{1},q_{1}),\ldots(\mathbf{v}_{n},q_{n})\), where \(q_{1}:=q,q_{2}:=\mu_{1}q_{1},\ldots,q_{n}=\mu_{n-1}q_{n-1}.\) This polygon is closed up to sign of \(\mathbf{v}_{i}\), i.e., \((\mathbf{v}_{i+n},q_{i+n})=(\pm\mathbf{v}_{i},q_{i})\) (same sign for all \(i\), depending on the choices made along the way).
This horizontal polygon is mapped by \(\Phi\) to a horizontal polygon in \(\widetilde{Q}^{\mathtt{oct}}\) with vertices \([\zeta_{i}]:=\Phi[(\mathbf{v}_{i},q_{i})],\)\(i=1,\ldots,n.\) By Lemma 6, this polygon is closed up to sign, i.e., \([\zeta_{i+n}]=[\pm\zeta_{i}]\) (same sign for all \(i\)). The projection of this polygon to \(Q^{\mathtt{oct}}\) is thus closed and horizontal, and its vertices do not depend on the choices made along the way (the edges do depend on the initial choice of edges in \(S^{2}/\pm 1\), but a dancing pair of polygons is specified by its vertices alone). See figure 13.
If we add the genericity condition to \(([\Gamma],q)\), then the \(n\) vertices in \(Q^{\mathtt{oct}}\) lie in \(Q^{\mathtt{oct}}_{*}=\iota(Q^{\mathtt{dan}})\), i.e., correspond to a dancing pair of polygons as per Theorem 4.
This correspondence is clearly invertible. Starting with a dancing pair of closed polygons, we first lift it to a horizontal polygon in \(Q^{\mathtt{dan}}\) (by Theorem 4), then map it to a closed horizontal polygon in \(Q^{\mathtt{oct}}_{*}\) (by step (1) of the proof of Theorem 5, see Section 3.3), lift to a horizontal polygon in \(\widetilde{Q}^{\mathtt{oct}}_{*}\), then map by \(\Phi^{-1}\) to a horizontal polygon in \(\widetilde{Q}^{\mathtt{roll}}_{*}\), with vertices \((\mathbf{v}_{1},q_{1}),\ldots,(\mathbf{v}_{n},q_{n})\), closed up to sign of \(\mathbf{v}_{i}\). It follows that the spherical polygon \(\Gamma\) with vertices
Figure 13.
\(\mathbf{v}_{1},\ldots,\mathbf{v}_{n}\) has trivial lifted rolling monodromy and that \(([\Gamma],q_{1})\) is generic.
### Theorem 1
From the correspondence of Theorem 4 (proved below in Section 3.1), it is enough to show that (1) there exist non-degenerate horizontal \(n\)-gons in \(Q^{\mathsf{dan}}\) for every \(n\geq 6\), and (2) every horizontal \(n\)-gon in \(Q^{\mathsf{dan}}\) for \(n\leq 5\) is degenerate. The first statement follows from Corollary 1 and Theorem 3. We procceed to prove the second statement.
Let \(q_{1},q_{2},q_{3}\in Q^{\mathsf{dan}}\) be the vertices of an open non-degenerate horizontal \(3\)-gon; i.e., the three points are distinct and the lines \(q_{1}q_{2}\) and \(q_{2}q_{3}\) are horizontal and distinct. We shall prove that
1. \(q_{1}q_{3}\) is not horizontal; therefore, there are no non-degenerate horizontal triangles.
2. If \(q_{4}\in Q^{\mathsf{dan}}\) is such that \(q_{1}q_{4}\) and \(q_{3}q_{4}\) are horizontal, then \(q_{4}=q_{2}\); therefore, there are no non-degenerate horizontal quadrilaterals.
3. If \(q_{4},q_{5}\in Q^{\mathsf{dan}}\) are such that \(q_{3}q_{4}\), \(q_{4}q_{5}\) and \(q_{5}q_{1}\) are horizontal, then either \(q_{4}=q_{2}\) or \(q_{5}=q_{2}\); in either case, the pentagon will be a degenerate one.
We proceed with proofs of (1) and (3). The proof of (2) is similar to (3), slightly simpler, and is omitted.
Proof of (1).: Suppose \(q_{1},q_{2},q_{3}\) are \(3\) distinct points in \(Q^{\mathsf{dan}}\) forming a horizontal triangle. We show that it is degenerate, i.e., the \(3\) points are collinear. Let \(q_{i}=(\mathbf{A}_{i},\mathbf{b}_{i})\), \(i=1,2,3\). Then, by Definition 5,
\[\mathbf{b}_{i}-\mathbf{b}_{j}=\mathbf{A}_{j}\times\mathbf{A}_{i},\ i,j=1,2,3. \tag{24}\]
We can assume that \(\mathbf{A}_{1},\mathbf{A}_{2},\mathbf{A}_{3}\) are distinct: if \(\mathbf{A}_{i}=\mathbf{A}_{j}\), \(i\neq j\), then, by Equation (24), \(\mathbf{b}_{i}-\mathbf{b}_{j}=\mathbf{A}_{j}\times\mathbf{A}_{i}=0\), hence \(q_{i}=q_{j}\), a contradiction.
Next, again by (24),
\[(\mathbf{A}_{1}-\mathbf{A}_{3})\times(\mathbf{A}_{2}-\mathbf{A} _{3}) =\mathbf{A}_{1}\times\mathbf{A}_{2}-\mathbf{A}_{1}\times\mathbf{A }_{3}-\mathbf{A}_{3}\times\mathbf{A}_{2}=\] \[=(\mathbf{b}_{2}-\mathbf{b}_{1})-(\mathbf{b}_{3}-\mathbf{b}_{1}) -(\mathbf{b}_{2}-\mathbf{b}_{3})=0,\]
so \(\mathbf{A}_{1}-\mathbf{A}_{3}=\lambda(\mathbf{A}_{2}-\mathbf{A}_{3})\) for some \(\lambda\neq 0.\) Taking the cross product with \(\mathbf{A}_{3}\), \(\mathbf{A}_{1}\times\mathbf{A}_{3}=\lambda(\mathbf{A}_{2}\times\mathbf{A}_{3})\), which implies, by (24), \(\mathbf{b}_{1}-\mathbf{b}_{3}=\lambda(\mathbf{b}_{2}-\mathbf{b}_{3})\), so \(q_{1}-q_{3}=\lambda(q_{2}-q_{3})\), hence \(q_{1},q_{2},q_{3}\) are collinear.
Proof of (3).: Let \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) be the standard basis of \(\mathbb{R}^{3}\) and \(\mathbf{e}^{1},\mathbf{e}^{2},\mathbf{e}^{3}\) the dual basis. Since \(\mathrm{SL}_{3}(\mathbb{R})\) acts transitively on \(Q^{\mathsf{dan}}\) preserving horizontality, we may assume, without loss of generality, that \(q_{2}=(\mathbf{e}_{1},\mathbf{e}^{1}).\) The isotropy group at this point is
\[\left\{\begin{pmatrix}1&0\\ 0&A\end{pmatrix}\mid A\in\mathrm{SL}_{2}(\mathbb{R})\right\},\]
acting on the affine plane \(\mathscr{D}_{q_{2}}^{\mathsf{dan}}=\operatorname{Span}\{(\mathbf{e}_{2},\mathbf{e }^{3}),(\mathbf{e}_{3},-\mathbf{e}^{2})\}\). Hence, without loss of generality, we may assume that
\[q_{1}=q_{2}+(\mathbf{e}_{2},\mathbf{e}^{3})=(\mathbf{e}_{1}+ \mathbf{e}_{2},\mathbf{e}^{1}+\mathbf{e}^{3}),\] \[q_{3}=q_{2}+a(\mathbf{e}_{3},-\mathbf{e}^{2})=(\mathbf{e}_{1}+a \mathbf{e}_{3},\mathbf{e}^{1}-a\mathbf{e}^{2}),\]
for some \(a\neq 0\).
In a similar fashion, we see that any point \(q_{5}\in Q^{\mathsf{dan}}\) that may be joined horizontally to \(q_{1}\) must be of the form
\[q_{5}=q_{1}+\left(x\mathbf{e}_{1}+y\mathbf{e}_{2}-x\mathbf{e}_{3},-x\mathbf{ e}^{1}+x\mathbf{e}^{2}+(y-x)\mathbf{e}^{3}\right),\]
for some choice of \(x,y\in\mathbb{R}\). Likewise, any point \(q_{4}\in Q^{\mathsf{dan}}\) that may be joined horizontally to \(q_{3}\) must be of the form
\[q_{4}=q_{3}+(ab\mathbf{e}_{1}+b\mathbf{e}_{2}+c\mathbf{e}_{3},-ab\mathbf{e}^{ 1}+(a^{2}b-c)\mathbf{e}^{2}+b\mathbf{e}^{3}),\]
for some \(b,c\in\mathbb{R}\).
Assuming that \(q_{4}q_{5}\) is a horizontal line, the horizontality equation \(\mathbf{b}_{5}-\mathbf{b}_{4}=\mathbf{A}_{4}\times\mathbf{A}_{5}\) gives the following system
\[\left\{\begin{array}{rcl}(a+x)b+(1+y)c&=&x-a(1+y)\\ a(a+x)b+xc&=&ax\\ b\left[x-a(1+y)\right]&=&x\end{array}\right..\]
Solving for \(b,c\) in terms of \(x,y\) in the first two equations and then using the third, implies that:
* if \(x\neq 0\Rightarrow a=0\), i.e., \(q_{3}=q_{2}\), a contradiction;
* if \(x=0\Rightarrow b=0\Rightarrow\) either \(c=-a\) (which implies \(q_{4}=q_{2}\)) or \(y=-1\) (which implies \(q_{5}=q_{2}\)).
This concludes the proof of (3).
## Appendix A \(\mathbf{G}_{2}\)-symmetry
The definition of the Cartan-Engel distribution using split octonions \(\mathbb{O}\) (see Section 2.3) gives rise to an action of the automorphism group of this algebra, \(\operatorname{G}_{2}:=\operatorname{Aut}(\mathbb{O})\), as the (local) symmetry group of the various models of this distribution that appear in this article. In this appendix we collect some explicit formulas for the associated infinitesimal action. Most of the material here appeared before, e.g., in [3, 4, 5].
### g\({}_{2}\)
The Lie algebra of \(\operatorname{G}_{2}\) is \(\mathfrak{g}_{2}\subset\mathfrak{so}_{4,3}\subset\mathfrak{so}_{4,4}\), where \(\mathfrak{so}_{4,4}\) is the Lie algebra of the subgroup of \(\operatorname{GL}(\mathbb{O})\) preserving the inner product on \(\mathbb{O}\) of Equation (7), and \(\mathfrak{so}_{4,3}\) is the subalgebra preserving the spltting \(\mathbb{O}=\operatorname{Re}(\mathbb{O})\oplus\operatorname{Im}(\mathbb{O})\).
Now \(\mathfrak{g}_{2}\) is the algebra of _derivations_ of \(\mathbb{O}\), i.e., maps \(D\in\operatorname{End}(\mathbb{O})\) satisfying
\[D(\zeta\zeta^{\prime})=(D\zeta)\zeta^{\prime}+\zeta(D\zeta^{\prime})\]
for all \(\zeta,\zeta^{\prime}\in\mathbb{O}.\) It preserves the decomposition \(\mathbb{O}=\operatorname{Re}(\mathbb{O})\oplus\operatorname{Im}(\mathbb{O})\) and acts trivially on the first summand. On the second summand we have the infinitesimal action of \(\mathfrak{g}_{2}\), defined as follows.
Consider the map \(\rho:\mathfrak{sl}_{3}(\mathbb{R})\times\mathbb{R}^{3}\times(\mathbb{R}^{3})^{ *}\to\operatorname{End}(\operatorname{Im}(\mathbb{O}))\),
\[\rho(T,\mathbf{Q},\mathbf{p}):\begin{pmatrix}x&\mathbf{A}\\ \mathbf{b}&-x\end{pmatrix}\mapsto\begin{pmatrix}\mathbf{p}\mathbf{A}+\mathbf{b }\mathbf{Q}&T\mathbf{A}-\mathbf{p}\times\mathbf{b}+2\mathbf{Q}x\\ \mathbf{Q}\times\mathbf{A}-\mathbf{b}T+2\mathbf{p}x&-\mathbf{p}\mathbf{A}- \mathbf{b}\mathbf{Q}\end{pmatrix}, \tag{25}\]
where \(x\in\mathbb{R}\), \(\mathbf{A},\mathbf{Q}\in\mathbb{R}^{3}\) (column vectors), \(\mathbf{b},\mathbf{p}\in(\mathbb{R}^{3})^{*}\) (row vectors) and \(T\in\mathfrak{sl}_{3}(\mathbb{R})\) (traceless \(3\times 3\) real matrices).
Now \(\rho\) is clearly an injective linear map, hence its image is a \(14\)-dimensional subspace of \(\operatorname{End}(\operatorname{Im}(\mathbb{O}))\). One can check that \([\rho(T_{1},\mathbf{Q}_{1},\mathbf{p}_{1}),\rho(T_{2},\mathbf{Q}_{2},\mathbf{ p}_{2})]=\rho(T_{3},\mathbf{Q}_{3},\mathbf{p}_{3})\), where
\[T_{3} =[T_{1},T_{2}]+3\left(\mathbf{Q}_{1}\mathbf{p}_{2}-\mathbf{Q}_{2 }\mathbf{p}_{1}\right)+\left(\mathbf{p}_{1}\mathbf{Q}_{2}-\mathbf{p}_{2} \mathbf{Q}_{1}\right)\mathrm{I}_{3},\] \[\mathbf{Q}_{3} =T_{1}\mathbf{Q}_{2}-T_{2}\mathbf{Q}_{1}-2\mathbf{p}_{1}\times \mathbf{p}_{2},\] \[\mathbf{p}_{3} =\mathbf{p}_{1}T_{2}-\mathbf{p}_{2}T_{1}+2\mathbf{Q}_{1}\times \mathbf{Q}_{2}.\]
It follows that the image of \(\rho\) is a \(14\)-dimensional Lie subalgebra of \(\operatorname{End}(\operatorname{Im}(\mathbb{O}))\).
**Theorem 6**.: _The image of \(\rho\) in \(\operatorname{End}(\operatorname{Im}(\mathbb{O}))\), extended to \(\mathbb{O}=\operatorname{Re}(\mathbb{O})\oplus\operatorname{Im}(\mathbb{O})\) by the \(0\) action on \(\operatorname{Re}(\mathbb{O})\), is the Lie algebra \(\mathfrak{g}_{2}\subset\operatorname{End}(\mathbb{O})\) of derivations of \(\mathbb{O}\)._
See for example [12, page 143].
The \(\mathfrak{g}_{2}\)-action on \(\widetilde{Q}^{\operatorname{roll}}\) and \(Q^{\operatorname{dan}}\)
The \(\mathrm{G}_{2}\)-action on \(\operatorname{Im}(\mathbb{O})\), given (infinitesimally) by Equation (25), induces actions on \(\widetilde{Q}^{\operatorname{oct}}\), \(Q^{\operatorname{oct}}\), and a local action on \(Q^{\operatorname{dan}}\), then via diagram (8) an action on \(\widetilde{Q}^{\operatorname{roll}}\). The latter action does _not_ descend to \(Q^{\operatorname{roll}}\), even infinitesimally (see next subsection). The next two propositions give explicit formulas for the infinitesimal actions on \(Q^{\operatorname{dan}},\widetilde{Q}^{\operatorname{roll}}\). We start with \(Q^{\operatorname{dan}}\).
**Proposition 6**.: _For each \((T,\mathbf{Q},\mathbf{p})\in\mathfrak{sl}_{3}(\mathbb{R})\times\mathbb{R}^{3} \times(\mathbb{R}^{3})^{*}\), the infinitesimal action of \(\rho(T,\mathbf{Q},\mathbf{p})\in\mathfrak{g}_{2}\) on \(Q^{\operatorname{dan}}\subset\mathbb{R}^{3,3}\) is given by the vector field \(f\partial_{\mathbf{A}}+g\partial_{\mathbf{b}}\) on \(\mathbb{R}^{3,3}\), where_
\[f(\mathbf{A},\mathbf{b}) = 2\mathbf{Q}+T\mathbf{A}-\mathbf{p}\times\mathbf{b}-(\mathbf{b} \mathbf{Q}+\mathbf{p}\mathbf{A})\mathbf{A},\] \[g(\mathbf{A},\mathbf{b}) = 2\mathbf{p}-\mathbf{b}T+\mathbf{Q}\times\mathbf{A}-(\mathbf{b} \mathbf{Q}+\mathbf{p}\mathbf{A})\mathbf{b}.\]
**Proof.** This appeared in [4], with somewhat different notation, so we will give it here again for completeness.
We consider the coordinates \((x,q)\) on \(\operatorname{Im}(\mathbb{O})\), where \(q=(\mathbf{A},\mathbf{b})\in\mathbb{R}^{3,3}\). A linear vector field on \(\operatorname{Im}(\mathbb{O})\), such as the one given by Equation (25), can be written as
\[(ax+bq)\,\partial_{x}+(cx+dq)\,\partial_{q},\]
where \(a\in\mathbb{R},b\in(\mathbb{R}^{3,3})^{*}\), \(c\in\mathbb{R}^{3,3}\), \(d\in\operatorname{End}(\mathbb{R}^{3,3})\). It induces a vector field on \(\mathbb{P}(\operatorname{Im}(\mathbb{O}))\), given in the affine chart \(\mathbb{R}^{3,3}\to\mathbb{P}(\operatorname{Im}(\mathbb{O}))\), \(q\to[1,q]\), by
\[[c+(d-a)u-(bq)q]\,\partial_{q}. \tag{26}\]
For the linear vector field given by Equation (25) one has
\[a =0, b(\mathbf{A},\mathbf{b}) =\mathbf{p}\mathbf{A}+\mathbf{b}\mathbf{Q},\] \[c =2(\mathbf{Q},\mathbf{p}), d(\mathbf{A},\mathbf{b}) =(T\mathbf{A}-\mathbf{p}\times\mathbf{b},-\mathbf{b}T+\mathbf{Q} \times\mathbf{A}).\]
Using these in Equation (26) gives the stated formulas.
The formulas for the \(\mathfrak{g}_{2}\)-action on \(\widetilde{Q}^{\tt roll}\) are more complicated. We shall treat a special representative case with enough detail so that the interested reader can easily derive the formulas of the general case.
**Proposition 7**.: _Let \(X^{\tt oct}\) be the linear vector field on \(\operatorname{Im}(\mathbb{O})\) given by formula (25) with \(\mathbf{Q}=\mathbf{p}=0\) and \(T=T^{t}\). The induced infinitesimal action on \(\widetilde{Q}^{\tt roll}=S^{2}\times S^{3}\subset\operatorname{Im}(\mathbb{H}) \times\mathbb{H}\) is given by the vector field_
\[X^{\tt roll}=f\partial_{\mathbf{v}}+g\partial_{\mathbf{w}}+h\partial_{s},\]
_where \(\mathbf{v}\in\operatorname{Im}(\mathbb{H}),\ s+\mathbf{w}\in\mathbb{H}\) are the standard Euclidean coordinates on \(\operatorname{Im}(\mathbb{H})\times\mathbb{H}\), and_
\[f =T(s\mathbf{v}+\mathbf{v}\times\mathbf{w})-\left[T(s\mathbf{v}+ \mathbf{v}\times\mathbf{w})\cdot\mathbf{v}\right]\mathbf{v},\] \[g =\left[(s^{2}-1)\mathbf{v}+s(\mathbf{v}\times\mathbf{w})\right] \times T\mathbf{v}+(s\mathbf{v}+\mathbf{v}\times\mathbf{w})\times T(\mathbf{ v}\times\mathbf{w})\] \[\qquad\qquad+(\mathbf{v}\cdot\mathbf{w})T(s\mathbf{v}+\mathbf{v }\times\mathbf{w})-2\left(T(s\mathbf{v}+\mathbf{v}\times\mathbf{w})\cdot \mathbf{v}\right)\mathbf{w},\] \[h =T\left(\mathbf{v}+s(\mathbf{v}\times\mathbf{w})\right)\cdot \mathbf{v}-sT(s\mathbf{v}+\mathbf{v}\times\mathbf{w})\cdot\mathbf{v}+T( \mathbf{v}\times\mathbf{w})\cdot(\mathbf{v}\times\mathbf{w}).\]
**Proof.** The linear vector field \(X^{\tt oct}\) on \(\operatorname{Im}(\mathbb{O})\) given by Equation (25) induces, by'spherization' (\(\mathbb{R}^{+}\)-quotient), a vector field on \(\widetilde{Q}^{\tt oct}\) and a vector field \(X^{\tt roll}\) on \(\widetilde{Q}^{\tt roll}\) via the diffeomorphism \(\Phi:\widetilde{Q}^{\tt roll}\to\widetilde{Q}^{\tt oct}\) of diagram (8). Now \(\Phi\) factors as a composition \(\widetilde{Q}^{\tt roll}\to C\setminus 0\xrightarrow{\pi}\widetilde{Q}^{\tt oct}\) (recall that \(C\) is the null cone in \(\operatorname{Im}(\mathbb{O})\); see Equation (16)). The map \(\widetilde{Q}^{\tt roll}\to C\setminus 0\) is the restriction to \(S^{2}\times S^{3}\) of the map \(\varphi:\operatorname{Im}(\mathbb{H})\times\mathbb{H}\to\operatorname{Im}( \mathbb{O})\) given by Equations (16)-(17), \((\mathbf{v},s+\mathbf{w})\mapsto\begin{pmatrix}x&\mathbf{A}\\ \mathbf{b}&-x\end{pmatrix}\), where
\[\mathbf{A} =(1+s)\mathbf{v}+\mathbf{v}\times\mathbf{w}, \tag{27}\] \[\mathbf{b} =(1-s)\mathbf{v}-\mathbf{v}\times\mathbf{w},\] \[x =-\mathbf{v}\cdot\mathbf{w}.\]
The inverse map is given by \(\psi:\mathrm{Im}(\mathbb{O})\to\mathrm{Im}(\mathbb{H})\times\mathbb{H}\),
\[\begin{split}&\mathbf{v}=\frac{\mathbf{A}+\mathbf{b}}{2},\\ &\mathbf{w}=\frac{\mathbf{A}\times\mathbf{b}}{2}-x\mathbf{v},\\ & s=\frac{1}{4}(|\mathbf{A}|^{2}-|\mathbf{b}|^{2}),\end{split} \tag{28}\]
with derivative
\[\begin{split}&\psi^{*}\left(\mathrm{d}\mathbf{v}\right)=\frac{1}{2} \left(\mathrm{d}\mathbf{A}+\mathrm{d}\mathbf{b}\right),\\ &\psi^{*}(\mathrm{d}\mathbf{w})=\frac{1}{2}\left[\mathbf{A} \times\mathrm{d}\mathbf{b}-\mathbf{b}\times\mathrm{d}\mathbf{A}-x(\mathrm{d} \mathbf{A}+\mathrm{d}\mathbf{b})-(\mathbf{A}+\mathbf{b})\mathrm{d}x\right],\\ &\psi^{*}(\mathrm{d}s)=\frac{1}{2}\left(\mathbf{A}\cdot\mathrm{d} \mathbf{A}-\mathbf{b}\cdot\mathrm{d}\mathbf{b}\right).\end{split}\]
That is, \(\varphi\) and \(\psi\), when restricted to \(S^{2}\times S^{3}\) and \(M:=\varphi(S^{2}\times S^{3})\subset C\) (respectively), are inverse maps:
\[\begin{CD}\mathrm{Im}(\mathbb{H})\times\mathbb{H}@>{\varphi}>{}>\overleftarrow{ \psi}\\ \cup@V{}V{}V\\ S^{2}\times S^{3}\end{CD}\cong\begin{CD}M\\ \end{CD}\]
Next let
\[X^{\mathtt{oct}}=\alpha\partial_{\mathbf{A}}+\beta\partial_{\mathbf{b}}+ \gamma\partial_{x},\]
and let
\[E:=\mathbf{A}\partial_{\mathbf{A}}+\mathbf{b}\partial_{\mathbf{b}}+x\partial_ {x} \tag{29}\]
be the Euler field on \(\mathrm{Im}(\mathbb{O})\), which is tangent to \(C\). At each point of \(M\subset C\), since \(X^{\mathtt{oct}}\) is tangent to \(C\), one can decompose uniquely
\[X^{\mathtt{oct}}=X^{\mathtt{oct}}_{\parallel}+\lambda E,\]
where \(X^{\mathtt{oct}}_{\parallel}\) is tangent \(M\) and \(\lambda\in\mathbb{R}\). Then \(X^{\mathtt{roll}}=\psi_{*}(X^{\mathtt{oct}}_{\parallel})\).
Next we note that, by Equations (27), \(M\) is given in \(C\) by the equation \(|\mathbf{A}+\mathbf{b}|^{2}=4\), so \(TM\) is the kernel of \((\mathbf{A}+\mathbf{b})\cdot(\mathrm{d}\mathbf{A}+\mathrm{d}\mathbf{b})\) restricted to \(TC\). It follows that
\[\begin{split} 0=(\mathbf{A}+\mathbf{b})\cdot(\mathrm{d}\mathbf{A}+ \mathrm{d}\mathbf{b})(X^{\mathtt{oct}}-\lambda E)&=(\mathbf{A}+ \mathbf{b})\cdot(\alpha+\beta-\lambda(\mathbf{A}+\mathbf{b}))\\ &=(\mathbf{A}+\mathbf{b})\cdot(\alpha+\beta)-4\lambda,\end{split}\]
thus
\[\lambda=\frac{1}{4}(\mathbf{A}+\mathbf{b})\cdot(\alpha+\beta)=\frac{\mathbf{v} }{2}\cdot(\alpha+\beta).\]
It follows that
\[f =\mathrm{d}\mathbf{v}(X^{\texttt{roll}})=\mathrm{d}\mathbf{v}\left( \psi_{*}X_{\parallel}^{\texttt{oct}}\right)=(\psi^{*}\mathrm{d}\mathbf{v})X_{ \parallel}^{\texttt{oct}}\] \[=\frac{1}{2}\left(\mathrm{d}\mathbf{A}+\mathrm{d}\mathbf{b} \right)\left(X^{\texttt{oct}}-\lambda E\right)=\frac{1}{2}\left[\alpha+\beta- \left(\left(\alpha+\beta\right)\cdot\mathbf{v}\right)\mathbf{v}\right],\]
and similarly
\[g =\left(\psi^{*}\mathrm{d}\mathbf{w}\right)X_{\parallel}^{\texttt{ oct}}\] \[=\left[\frac{1}{2}\left(\mathbf{A}\times\mathrm{d}\mathbf{b}- \mathbf{b}\times\mathrm{d}\mathbf{A}\right)-x\mathrm{d}\mathbf{v}-\mathbf{v} \mathrm{d}x\right]\left(X^{\texttt{oct}}-\lambda E\right)\] \[=\frac{1}{2}\left(\mathbf{A}\times\beta-\mathbf{b}\times\alpha \right)-\frac{x}{2}\left[\alpha+\beta-\left(\left(\alpha+\beta\right)\cdot \mathbf{v}\right)\mathbf{v}\right]\] \[\qquad-\left(\frac{\mathbf{v}}{2}\cdot\left(\alpha+\beta\right) \right)\left[\mathbf{A}\times\mathbf{b}-x\left(\mathbf{A}+\mathbf{b}\right) \right]-\gamma\mathbf{v},\] \[h =\left(\psi^{*}\mathrm{d}s\right)X_{\parallel}^{\texttt{oct}}= \frac{1}{2}\left(\mathbf{A}\cdot\mathrm{d}\mathbf{A}-\mathbf{b}\cdot\mathrm{d }\mathbf{b}\right)\left(X^{\texttt{oct}}-\lambda E\right)\] \[=\frac{1}{2}\left[\mathbf{A}\cdot\alpha-\mathbf{b}\cdot\beta- \left(\mathbf{v}\cdot\left(\alpha+\beta\right)\right)\left(|\mathbf{A}|^{2}-| \mathbf{b}|^{2}\right)\right].\]
Now we apply the above to the vector field \(X^{\texttt{oct}}\) on \(\mathrm{Im}(\mathbb{O})\) given by Equation (25), with \(T=T^{t}\), \(\mathbf{Q}=0\), \(\mathbf{p}=0\). Then
\[\alpha=T\mathbf{A},\ \beta=-T\mathbf{b},\ \gamma=0.\]
Using these values in the above expressions for \(f,g,h\), we obtain, after some simplification, the stated formulas.
The \(\mathbf{g}_{2}\)-action on \(\widetilde{Q}^{\texttt{roll}}\) does not descend to \(Q^{\texttt{roll}}\)
It was shown in [5, SS7] that the \(\mathrm{G}_{2}\)-action on \(\widetilde{Q}^{\texttt{roll}}\) does not descend to \(Q^{\texttt{roll}}\). Here we show that the infinitesimal \(\mathbf{g}_{2}\)-action does not descend either (a stronger result). Actually, we will determine the elements of \(\mathbf{g}_{2}\) whose action on \(\widetilde{Q}^{\texttt{roll}}\) descend to \(Q^{\texttt{roll}}\).
**Proposition 8**.: _Consider the \(2:1\) cover \(\widetilde{Q}^{\texttt{roll}}\to Q^{\texttt{roll}}\) and the vector field \(X^{\texttt{roll}}\) on \(\widetilde{Q}^{\texttt{roll}}\) induced by an element \(\rho(T,\mathbf{Q},\mathbf{p})\in\mathbf{g}_{2}\), see Equation (25). Then \(X^{\texttt{roll}}\) descends to a vector field on \(Q^{\texttt{roll}}\) if and only if \(T^{t}=-T\), \(\mathbf{Q}=-\mathbf{p}\)._
**Proof.** Recall that \(\widetilde{Q}^{\texttt{roll}}=S^{2}\times S^{3}\), \(Q^{\texttt{roll}}=S^{2}\times\mathrm{SO}_{3}\) and \(\widetilde{Q}^{\texttt{roll}}\to Q^{\texttt{roll}}\) is the quotient by the antipodal map in the second factor. Let us denote this self-map of \(\widetilde{Q}^{\texttt{roll}}\) by \(\sigma\). The vector field \(X^{\texttt{roll}}\) on \(\widetilde{Q}^{\texttt{roll}}\) thus descends to \(Q^{\texttt{roll}}\) if and only if it is \(\sigma\)-invariant, \(\sigma_{*}X^{\texttt{roll}}=X^{\texttt{roll}}\).
The diffeomorphism \(\Phi:\widetilde{Q}^{\texttt{roll}}\to\widetilde{Q}^{\texttt{oct}}\) maps \(X^{\texttt{roll}}\) to a vector field \(\Phi_{*}X^{\texttt{roll}}\) on \(\widetilde{Q}^{\texttt{oct}}\). Thus \(X^{\texttt{roll}}\) is \(\sigma\)-invariant if and only if \(\Phi_{*}X^{\texttt{roll}}\) is \(\tau\)-invariant, where \(\tau=\Phi\circ\sigma\circ\Phi^{-1}\).
Using the definition of \(\Phi\) (Equation (9)), one finds that \(\tau\) is given by a linear involution of \(\mathrm{Im}(\mathbb{O})\),
\[\widetilde{\tau}:\begin{pmatrix}x&\mathbf{A}\\ \mathbf{b}&-x\end{pmatrix}\mapsto\begin{pmatrix}-x&\mathbf{b}\\ \mathbf{A}&x\end{pmatrix}\]
Namely, if \(\zeta\in C\setminus 0\) (a non-zero null octonion) then \(\tau\left([\zeta]\right)=[\widetilde{\tau}\zeta]\).
Similarly, the vector field \(\Phi_{*}X^{\mathtt{roll}}\) on \(\widetilde{Q}^{\mathtt{oct}}\) is given by the linear vector field \(X^{\mathtt{oct}}\) on \(\mathrm{Im}(\mathbb{O})\) of Equation (25) by first restricting \(X^{\mathtt{oct}}\) to the null cone \(C\subset\mathrm{Im}(\mathbb{O})\) (this restriction makes sense since \(X^{\mathtt{oct}}\) is tangent to \(C\setminus 0\)), then projecting via the \(\mathbb{R}^{+}\)-quotient \(\pi:C\setminus 0\to\widetilde{Q}^{\mathtt{oct}}\).
Summarizing, the \(\sigma\)-invariance of \(X^{\mathtt{roll}}\) amounts to
\[X^{\mathtt{oct}}(\widetilde{\tau}\zeta)\equiv\widetilde{\tau}_{*}\left(X^{ \mathtt{oct}}(\zeta)\right)\ \mathrm{mod}\ E(\widetilde{\tau}\zeta),\ \forall\zeta\in C, \tag{30}\]
where \(E\) is the Euler vector field on \(\mathrm{Im}(\mathbb{O})\) (see Equation (29)).
Next recall from (25) that if \(\zeta=\begin{pmatrix}x&\mathbf{A}\\ \mathbf{b}&-x\end{pmatrix}\in\mathrm{Im}(\mathbb{O})\) then
\[X^{\mathtt{oct}}(\zeta)=(T\mathbf{A}-\mathbf{p}\times\mathbf{b} +2x\mathbf{Q})\partial_{\mathbf{A}}+(\mathbf{Q}\times\mathbf{A}-T^{t}\mathbf{ b}+2x\mathbf{p})\partial_{\mathbf{b}}\\ +(\mathbf{p}\cdot\mathbf{A}+\mathbf{b}\cdot\mathbf{Q})\partial_{x},\]
hence
\[X^{\mathtt{oct}}(\widetilde{\tau}\zeta)=(T\mathbf{b}-\mathbf{p} \times\mathbf{A}-2x\mathbf{Q})\partial_{\mathbf{A}}+(\mathbf{Q}\times\mathbf{ b}-T^{t}\mathbf{A}-2x\mathbf{p})\partial_{\mathbf{b}}\\ +(\mathbf{p}\cdot\mathbf{b}+\mathbf{A}\cdot\mathbf{Q})\partial_{x}\]
and
\[\widetilde{\tau}_{*}\left(X^{\mathtt{oct}}(\zeta)\right)=( \mathbf{Q}\times\mathbf{A}-T^{t}\mathbf{b}+2x\mathbf{p})\partial_{\mathbf{A}} +(T\mathbf{A}-\mathbf{p}\times\mathbf{b}+2x\mathbf{Q})\partial_{\mathbf{b}}\\ -(\mathbf{p}\cdot\mathbf{A}+\mathbf{b}\cdot\mathbf{Q})\partial_{x}.\]
Hence Equation (30) is equivalent to
\[\left(\begin{array}{c}\mathbf{Q}\times\mathbf{A}-T^{t}\mathbf{b}+2x\mathbf{ p}\\ T\mathbf{A}-\mathbf{p}\times\mathbf{b}+2x\mathbf{Q}\\ -\mathbf{p}\cdot\mathbf{A}-\mathbf{b}\cdot\mathbf{Q}\end{array}\right)\equiv \left(\begin{array}{c}T\mathbf{b}-\mathbf{p}\times\mathbf{A}-2x\mathbf{Q} \\ \mathbf{Q}\times\mathbf{b}-T^{t}\mathbf{A}-2x\mathbf{p}\\ \mathbf{p}\cdot\mathbf{b}+\mathbf{A}\cdot\mathbf{Q}\end{array}\right)\ \mathrm{mod}\ \left(\begin{array}{c}\mathbf{b}\\ \mathbf{A}\\ -x\end{array}\right)\]
for all \(\mathbf{A},\mathbf{b},x\) such that \(\mathbf{b}\mathbf{A}=x^{2}.\) Setting \(\mathbf{b}=0,\)\(x=0\) in the above equation, the third component reads \((\mathbf{Q}+\mathbf{p})\cdot\mathbf{A}=0\) for all \(\mathbf{A}\), hence \(\mathbf{Q}=-\mathbf{p}.\) The second component then reads \((T+T^{t})\mathbf{A}=\lambda\mathbf{A}\) for all \(\mathbf{A}\) for some \(\lambda\) (that may depend on \(\mathbf{A}\)). Hence \(T+T^{t}\) is a multiple of the identity. But \(T\) is traceless, and so is \(T+T^{t}\), hence \(T+T^{t}=0\), as needed.
**Remark 4**.: The subspace of \(\mathfrak{g}_{2}\) indicated in this proposition is isomorphic to \(\mathfrak{so}_{3}\oplus\mathfrak{so}_{3}=\mathfrak{so}_{4}\), the Lie algebra of a maximal compact subgroup \(\mathrm{SO}_{4}\simeq K\subset\mathrm{G}_{2}\), which has an 'obvious' action on \(Q^{\mathtt{roll}}=S^{2}\times\mathrm{SO}_{3}\). In fact, the \(\mathrm{SO}_{4}\)-action descends to \(\mathrm{SO}_{3}\times\mathrm{SO}_{3}\) (a \(\mathbb{Z}_{2}\)-quotient of \(\mathrm{SO}_{4}\)), \((g_{1},g_{2}):(\mathbf{v},g)\mapsto(g_{1}\mathbf{v},g_{1}gg_{2}^{-1}).\) The embedding \(\mathrm{SO}_{4}\to\mathrm{G}_{2}\), and the associated embedding \(\mathfrak{so}_{3}\oplus\mathfrak{so}_{3}\to\mathfrak{g}_{2}\), has appeared explicitly above during the proof of Theorem
5; see Section 3.3, item (3), step (c), Equation (20). This also appeared in [5], SS2.2 and SS5.
## Appendix B Coordinate formulae for \(\mathscr{D}^{\mathtt{roll}}\)
The configuration space \(Q^{\mathtt{roll}}\) of rolling a sphere of radius \(1/\rho\) on a stationary sphere of radius \(1\) is \(S^{2}\times\mathrm{SO}_{3}\), where \(\mathbf{v}\in S^{2}\) denotes the contact point of the two spheres and \(g\in\mathrm{SO}_{3}\) the orientation of the moving sphere with respect to some fixed initial orientation.
To write down the rolling distribution on \(Q^{\mathtt{roll}}\) we use the coordinates \((\phi,\theta,\alpha,\beta,\gamma)\), where \(\phi,\theta\) are the spherical coordinates on \(S^{2}\),
\[\mathbf{v}=\begin{pmatrix}\sin\theta\cos\phi\\ \sin\theta\sin\phi\\ \cos\theta\end{pmatrix}\]
and \(\alpha,\beta,\gamma\) are Euler angles on \(\mathrm{SO}_{3}\),
\[g=\begin{pmatrix}\cos\gamma&-\sin\gamma&0\\ \sin\gamma&\cos\gamma&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{pmatrix}\begin{pmatrix}1&0&0\\ 0&\cos\alpha&-\sin\alpha\\ 0&\sin\alpha&\cos\alpha\end{pmatrix}\]
The angular velocity vector of the moving sphere, about its center, is
\[\boldsymbol{\omega}=\begin{pmatrix}\dot{\alpha}\sin\beta\sin\gamma+\dot{\beta }\cos\gamma\\ \dot{\alpha}\sin\beta\cos\gamma-\dot{\beta}\sin\gamma\\ \dot{\alpha}\cos\beta+\dot{\gamma}\end{pmatrix}\]
The rolling distribution \(\mathscr{D}^{\mathtt{roll}}\) on \(Q^{\mathtt{roll}}\) is the rank 2 distribution whose integral curves satisfy \((1+\rho)\dot{\mathbf{v}}=\boldsymbol{\omega}\times\mathbf{v},\ \boldsymbol{\omega}\cdot\mathbf{v}=0\) (see Definition 7). Explicitly,
\[(1+\rho)(\dot{\theta}\cos\theta\cos\phi-\dot{\phi}\sin\theta\sin \phi)-(\dot{\alpha}\sin\beta\sin\gamma+\dot{\beta}\cos\gamma)\sin\theta\cos\phi\] \[+(\dot{\alpha}\sin\beta\cos\gamma-\dot{\beta}\sin\gamma)\sin \theta\sin\phi-\dot{\alpha}\cos\beta\cos\theta=0,\] \[(1+\rho)(\dot{\theta}\cos\theta\cos\phi-\dot{\phi}\sin\theta\sin \phi)-(\dot{\alpha}\sin\beta\sin\gamma+\dot{\beta}\cos\gamma)\sin\theta\cos\phi\] \[+(\dot{\alpha}\sin\beta\cos\gamma-\dot{\beta}\sin\gamma)\sin \theta\sin\phi-\dot{\alpha}\cos\beta\cos\theta=0,\] \[(1+\rho)(-\dot{\theta}\sin\theta)+(\dot{\alpha}\sin\beta\sin \gamma+\dot{\beta}\cos\gamma)\cos\theta+\dot{\alpha}\sin\beta\cos\theta=0,\] \[\sin\theta\cos\phi(\dot{\alpha}\sin\beta\sin\gamma+\dot{\beta} \cos\gamma)+\sin\theta\sin\phi(\dot{\alpha}\sin\beta\cos\gamma-\dot{\beta} \sin\gamma)\] \[+\cos\theta(\dot{\alpha}\cos\beta+\dot{\gamma})=0.\]
**Remark 5**.: The first three equations are linearly dependent, defining a rank 3 distribution on \(Q^{\mathtt{roll}}\). The relation is \(\mathbf{v}\cdot[(1+\rho)\dot{\mathbf{v}}-\boldsymbol{\omega}\times\mathbf{v}]=0.\) Thus one can omit say the first equation, away from the subset of \(Q^{\mathtt{roll}}\) where \(v_{1}=0\). |
2303.14129 | Relational space-time and de Broglie waves | Relative motion of particles is examined in the context of relational
space-time. It is shown that de Broglie waves may be derived as a
representation of the coordinate maps between the rest-frames of these
particles. Energy and momentum are not absolute characteristics of these
particles, they are understood as parameters of the coordinate maps between
their rest-frames. It is also demonstrated the position of a particle is not an
absolute, it is contingent on the frame of reference used to observe the
particle. | Tony Lyons | 2023-03-17T02:14:19Z | http://arxiv.org/abs/2303.14129v2 | # Relational space-time and de Broglie waves
###### Abstract.
Relative motion of particles is examined in the context of relational space-time. It is shown that de Broglie waves may be derived as a representation of the coordinate maps between the rest-frames of these particles. Energy and momentum are not absolute characteristics of these particles, they are understood as parameters of the coordinate maps between their rest-frames. It is also demonstrated the position of a particle is not an absolute, it is contingent on the frame of reference used to observe the particle.
## 1. Introduction
### Relational space-time
In this paper we consider the relative motion of material point particles in the context of relational space-time and aim to show that de Broglie waves1 may be deduced as a representation of these point particles. In [3] Barbour examines in detail the development of relational concepts of space and time from Leibniz [11] up to and including his own work on relational formulations of dynamics [2, 4, 5]. A central point of discussion in [3] is that the uniformity of space means its points are indiscernible, which are made discernible only by the presence of "substance."2 This relational understanding of space and time supposes it is the varied and changing distribution of matter which endows space-time with enough variety to distinguish points therein.
Footnote 1: de Broglie waves as defined by Dirac [10] p.120
Footnote 2: In the sense used by Minkowski, Cologne (1908) [13]
Figure 1 illustrates point-like observers \(\mathcal{O}_{a}\) and \(\mathcal{O}_{b}\) with associated rest-frames \(K_{a}\) and \(K_{b}\), in a state of relative motion. In the frame \(K_{a}\) it appears the observer \(\mathcal{O}_{b}\) moves between space-time locations \((t_{1},x_{1})\)
and \((t_{2},x_{2})\), while \(\mathcal{O}_{a}\) "moves" between locations \((t_{1},0)\) and \((t_{2},0)\). On the other hand the observer \(\mathcal{O}_{b}\) is seen to "move" in its rest-frame \(K_{b}\) between space-time locations of the form \((\tau_{1b},0)\) and \((\tau_{2b},0)\) while \(\mathcal{O}_{a}\) moves between \((\tau_{1a},\xi_{1})\) and \((\tau_{2a},\xi_{2})\). The spatial separation between the points \((t_{1},x_{1})\) and \((t_{2},x_{2})\) is simply not recognised in the rest frame of \(\mathcal{O}_{b}\) in the relational framework. On the contrary, the locations \(x=x_{1}\) and \(x=x_{2}\) are made discernible only because the material point \(\mathcal{O}_{b}\) is observed to move between these locations.
Furthermore the instants \(t=t_{1}\) and \(t=t_{2}\) are made discernible only by the changing location of \(\mathcal{O}_{b}\) with respect to \(\mathcal{O}_{a}\). Indeed it is such material re-configurations which allow for the measurement of time intervals in practice. For instance, the motion of a sprinter between two fixed positions on a race-track is compared to the number of periodic vibrations of a quartz crystal, typically oscillating at \(2^{15}\) Hz in modern watches. The relational viewpoint suggests that the instants \(t=t_{1}\) and \(t=t_{2}\) have no intrinsic separation (or indeed meaning) without reference to the observed motion of \(\mathcal{O}_{b}\) between the locations \(x=x_{1}\) and \(x=x_{2}\).
Figure 1: The relative motion of \(\mathcal{O}_{a}\) and \(\mathcal{O}_{b}\) and the coordinate displacements this defines in the reference frames \(K_{a}\) and \(K_{b}\).
The distinction between instants \(t_{1}\) and \(t_{2}\) and the spatial locations \(x_{1}\) and \(x_{2}\) is made discernible only because the observer \(\mathcal{O}_{b}\) has been observed to move between these space-time locations. Likewise, the distinction between the locations \((\tau_{1b},0)\) and \((\tau_{2b},0)\) in \(K_{b}\) is made physical only because \(\mathcal{O}_{a}\) is observed to move between locations \((\tau_{1a},\xi_{1})\) and \((\tau_{2a},\xi_{2})\), which are themselves made discernible in \(K_{b}\) only because of the observed motion of \(\mathcal{O}_{a}\). In particular, it is clear that space-time locations in the frames \(K_{a}\) and \(K_{b}\) only become physically manifest by the reconfiguration of material observers \(\mathcal{O}_{a}\) and \(\mathcal{O}_{b}\). This in turn implies that each location in \((t,x)\in K_{a}\) becomes physically manifest only if it has a counterpart \((\tau,\xi)\in K_{b}\), and vice-versa.
On the other hand, it is understood that the _coordinate differences_ in each frame of reference serve to characterise the relative motion, for instance it is the coordinate difference \((t_{2}-t_{1},x_{2}-x_{1})\) which serve to define the velocity and related energy-momentum of \(\mathcal{O}_{b}\) with reference to \(K_{a}\). It is these coordinate differences and their transformation between reference frames which contains all physical information about the system of observers \(\mathcal{O}_{a}\) and \(\mathcal{O}_{b}\). In other words, the space-time locations labelled by \(K_{a}\) and \(K_{b}\) are not in themselves fundamental, however, the _transformation of coordinate differences_ from one reference frame to another is fundamental.
### Relativity and de Broglie waves
It is assumed the observer \(\mathcal{O}_{b}\) moves with reference to \(K_{a}\) at constant velocity \(v=\beta c\), where \(\beta\in(-1,1)\) and \(c\) is the speed of light. The coordinate map \(\boldsymbol{\Xi}:K_{a}\to K_{b}\) takes the form
\[\tau=\gamma\left(t-\frac{\beta}{c}x\right)\quad\xi=\gamma\left(x-c\beta t \right);\quad\gamma=\frac{1}{\sqrt{1-\beta^{2}}}. \tag{1}\]
The point emphasised by de Broglie [7, 8] is \(\mathcal{O}_{b}\) has an associated angular frequency
\[\omega_{0}=\frac{E_{0}}{\hbar}, \tag{2}\]
which may be obtained from the Planck and Einstein relations \(E=\hbar\omega\) and \(E_{0}=mc^{2}\), where \(m\) is the rest mass of \(\mathcal{O}_{b}\).
Given this angular frequency, de Broglie postulated that the waveform \(\psi(\tau,\xi)=e^{i\omega_{0}\tau}\) is naturally associated with the observer \(\mathcal{O}_{b}\). Meanwhile (1) ensures this wave-form with respect to \(K_{a}\) is of the form
\[\psi(t,x)=e^{i\omega_{0}\gamma\left(t-\frac{\beta}{c}x\right)}=e^{i(\omega t- kx)}, \tag{3}\]
where \(\omega=\gamma\omega_{0}\) and \(k=\frac{\omega_{0}\beta\gamma}{c}=\frac{\beta}{c}\omega\). The relativistic energy and momentum of \(\mathcal{O}_{b}\) with reference to \(K_{a}\) are given by \(E=mc^{2}\gamma\) and \(p=mc\beta\gamma\), and as such the wave-form \(\psi(t,x)\) may be also written as
\[\psi(t,x)=e^{i(\omega t-kx)}:=e^{\frac{i}{\hbar}(Et-px)}. \tag{4}\]
Thus the relativistic energy-momentum \((E,p)\) of the observer \(\mathcal{O}_{b}\) are related to the angular frequency \(\omega\) and wave-number \(k\) of the associated wave-form \(\psi\).
A point of importance for de Broglie was that the wave form \(\psi(t,x)\) is always in phase with a clock of period \(T_{0}=\frac{2\pi}{\omega_{0}}=\frac{mc^{2}}{\hbar}\) at rest in the frame \(K_{b}\). This clock is shown in Figure 2 as an oscillator moving along the \(y\)-axis of the frame \(K_{b}\) with angular frequency \(\omega_{0}\).
The period and angular frequency of this clock relative to \(K_{a}\) are
\[T=\gamma T_{0}\quad\Omega=\frac{2\pi}{T}=\frac{\omega_{0}}{\gamma}. \tag{5}\]
The angular frequency \(\Omega\) is not to be confused with the angular frequency of \(\psi(t,x)\) which is \(\omega=\gamma\omega_{0}\) and for reference Figure 2 also shows a similar clock at rest in \(K_{a}\) with angular frequency \(\omega\).
The clock co-moving with \(\mathcal{O}_{b}\) moving between \((t,x)\) and \((t+\mathrm{d}t,x+\beta c\mathrm{d}t)\) in \(K_{a}\) will undergo a phase-shift \(\mathrm{d}\Phi=\Omega\mathrm{d}t=\frac{\omega_{0}}{\gamma}\mathrm{d}t\). Meanwhile, the phase difference of the wave \(\psi(t,x)\), between \((t,x)\) and \((t+\mathrm{d}t,x+\beta cdt)\) is
\[\omega_{0}\gamma\left(dt-\frac{\beta}{c}\beta cdt\right)=\frac{\omega_{0}}{ \gamma}\mathrm{d}t=\mathrm{d}\Phi, \tag{6}\]
so the moving clock and wave-form \(\psi(t,x)\) are in phase, see Figure 2. It is clear then that de Broglie waves are closely connected with the Lorentz transformation between local inertial reference frames \(K_{a}\) and \(K_{b}\), in particular with the coordinate map \(\tau(t,x)\). The aim now is derive the existence of such a wave-form as a representation of this coordinate map between the rest-frames of the observers \(\mathcal{O}_{a}\) and \(\mathcal{O}_{b}\).
## 2. Coordinate maps and their governing equations
### Motion and coordinate maps
At any instant of its motion through \(K_{a}\), the observer \(\mathcal{O}_{b}\) is following a trajectory with tangent vector \((\mathrm{d}t\,,\mathrm{d}x)\), while the corresponding trajectory with reference to \(K_{b}\) is of the form \((\mathrm{d}\tau\,,0)\). Correspondingly, the observer \(\mathcal{O}_{a}\) must be travelling along a trajectory in \(K_{b}\) whose tangent vector is of the form \((\mathrm{d}\tau\,,\mathrm{d}\xi)\), while this tangent vector has counterpart \((\mathrm{d}t\,,0)\) with reference to \(K_{a}\), cf. Figure 1.
In general, coordinate differences \((\mathrm{d}\tau\,,\mathrm{d}\xi)\) with reference to \(K_{b}\) are related to their counterparts \((\mathrm{d}t\,,\mathrm{d}x)\) with reference to \(K_{a}\) according to
\[\begin{bmatrix}\mathrm{d}\tau\\ \mathrm{d}\xi\end{bmatrix}=\begin{bmatrix}\tau_{t}&\tau_{x}\\ \xi_{t}&\xi_{x}\end{bmatrix}\begin{bmatrix}\mathrm{d}t\\ \mathrm{d}x\end{bmatrix}\qquad\begin{bmatrix}\mathrm{d}t\\ \mathrm{d}x\end{bmatrix}=\begin{bmatrix}t_{\tau}&t_{\xi}\\ x_{\tau}&x_{\xi}\end{bmatrix}\begin{bmatrix}\mathrm{d}\tau\\ \mathrm{d}\xi\end{bmatrix},\]
where sub-scripts denote differentiation with respect to the relevant variable. To ensure consistency with the special theory of relativity, it is required that tangent vectors of the form \((\mathrm{d}t\,,\beta c\,\mathrm{d}t)\), \((\mathrm{d}t\,,0)\) and
\((\mathrm{d}t\,,c\,\mathrm{d}t)\) have counterparts \((\mathrm{d}\tau\,,0)\), \((\mathrm{d}\tau\,,-\beta c\,\mathrm{d}\tau)\) and \((\mathrm{d}\tau\,,c\,\mathrm{d}\tau)\) respectively. This requires the Jacobian matrices of the coordinate maps to be of the form
\[\begin{bmatrix}\mathrm{d}\tau\\ \mathrm{d}\xi\end{bmatrix}=\begin{bmatrix}\tau_{t}&\tau_{x}\\ c^{2}\tau_{x}&\tau_{t}\end{bmatrix}\begin{bmatrix}\mathrm{d}t\\ \mathrm{d}x\end{bmatrix}\iff\begin{bmatrix}\mathrm{d}t\\ \mathrm{d}x\end{bmatrix}=\begin{bmatrix}t_{\tau}&\frac{1}{c^{2}}x_{\tau}\\ x_{\tau}&t_{\tau}\end{bmatrix}\begin{bmatrix}\mathrm{d}\tau\\ \mathrm{d}\xi\end{bmatrix}, \tag{7}\]
In addition it is required that the Jacobian of each coordinate map should satisfy
\[J=\tau_{t}^{2}-c^{2}\tau_{x}^{2}=t_{\tau}^{2}-\frac{1}{c^{2}}x_{\tau}^{2}=1 \tag{8}\]
### The Hamilton-Jacobi Equations
The action for the coordinate map \(\mathbf{X}:K_{b}\to K_{a}\), associated with the motion \((t_{1},x_{1})\to(t,x)\) induced by the motion of \(\mathcal{O}_{b}\) along the corresponding trajectory \((\tau_{1},0)\to(\tau,0)\) is given by
\[S[\underline{x}]=\frac{E_{0}}{2c^{2}}\int_{\tau_{1}}^{\tau}\underline{x}_{\tau }.\underline{x}_{\tau}\,\mathrm{d}\tau=\int_{\tau_{1}}^{\tau}L[\underline{x}, \underline{x}_{\tau}]\,\mathrm{d}\tau\,. \tag{9}\]
The notation means \(\underline{x}(\tau)\equiv(ct(\tau,0),x(\tau,0))\in K_{a}\) which is the image of the map \(\mathbf{X}:K_{b}\to K_{a}\) applied to the trajectory \(\underline{\xi}(\tau)\equiv(\tau,0)\in K_{b}\). The inner-product is given by
\[\underline{x}_{\tau}.\underline{x}_{\tau}=c^{2}t_{\tau}^{2}-x_{\tau}^{2}=c^{2}J\]
where \(J\) is the Jacobian of the coordinate map \(\mathbf{X}:K_{b}\to K_{a}\) (cf. equation (8)). The constraint \(J=1\) is interpreted as a weak equation, to be applied _after_ variational derivatives are calculated, in line with the terminology of Dirac (cf. [9]).
Under a variation of the form \(\underline{x}(\tau)\to\underline{x}(\tau)+\epsilon\underline{u}(\tau)\), Hamilton's principle is simply the requirement \(\left.\frac{d}{d\epsilon}S[\underline{x}+\epsilon\underline{u}]\right|_{ \epsilon=0}=0\), and can be written for a general Lagrangian \(L[\underline{x},\underline{x}_{\tau}]\) according to
\[\int_{\tau_{1}}^{\tau}\left[\frac{\partial L}{\partial\underline{x}}-\frac{ \mathrm{d}}{\mathrm{d}\tau}\frac{\partial L}{\partial\underline{x}_{\tau}} \right].\underline{u}\,\mathrm{d}\tau+\int_{\tau_{1}}^{\tau}\frac{\mathrm{d}}{ \mathrm{d}\tau}\left(\frac{\partial L}{\partial\underline{x}_{\tau}}. \underline{u}\right)\mathrm{d}\tau=0 \tag{10}\]
after integration by parts. Imposing the boundary conditions \(\underline{u}(\tau_{1})=\underline{u}(\tau_{b})=\underline{0}\) to an otherwise arbitrary variation \(\underline{u}(\tau)\), yields the Euler-Lagrange equations
\[\frac{\partial L}{\partial\underline{x}}-\frac{d}{d\tau}\frac{\partial L}{ \partial\underline{x}_{\tau}}=\underline{0}. \tag{11}\]
When \(L=\frac{E_{0}}{2c^{2}}\left(c^{2}t_{\tau}^{2}-x_{\tau}^{2}\right)\) specifically, the Euler-Lagrange equations for the coordinate map \(\mathbf{X}:K_{b}\to K_{a}\) satisfies \(\frac{\mathrm{d}^{2}}{\mathrm{d}\tau^{2}}\mathbf{X}(\tau,0)=0\).
The Hamilton-Jacobi equation follow from the condition \(\underline{x}(\tau)\) is a physical path (i.e. satisfying (11)), while the variation is now required to satisfy \(\underline{u}(\tau_{1})=\underline{0}\) only, while \(\underline{u}(\tau)\) may be arbitrarily chosen. The variation of the action under this perturbation is obtained from (10)
\[\lim_{\epsilon\to 0}\frac{S[\underline{x}+\epsilon\underline{u}]-S[ \underline{x}]}{\epsilon\underline{u}}=\frac{\partial S}{\partial\underline{x }}=\frac{\partial L}{\partial\underline{x}_{\tau}}. \tag{12}\]
The canonical energy-momentum associated with the trajectory of \(\mathcal{O}_{b}\), with reference to the frame \(K_{a}\), is given by
\[\begin{cases}\frac{\partial S}{\partial t}&=E_{p}=E_{0}t_{\tau}\implies t_{ \tau}=\frac{E_{p}}{E_{0}}\\ \frac{\partial S}{\partial x}&=-p=\frac{E_{0}}{c^{2}}x_{\tau}\implies x_{\tau} =\frac{c^{2}p}{E_{0}}\end{cases} \tag{13}\]
The Hamiltonian associated with coordinate map \(\mathbf{X}:K_{b}\to K_{a}\) along \((\tau,0)\) is
\[H=\underline{p}.\underline{x}_{\tau}-L=\frac{E_{p}^{2}-c^{2}p^{2}}{2E_{0}},\]
which of course is conserved.
Upon imposing the constraint \(J=1\), it follows that
\[\left(\frac{\partial S}{\partial t}\right)^{2}-c^{2}\left(\frac{\partial S}{ \partial x}\right)^{2}=E_{0}^{2}. \tag{14}\]
Conservation of energy-momentum in the form \(\frac{1}{c^{2}}\partial_{t}E_{p}+\partial_{x}p=0\) or equivalently
\[\frac{\partial^{2}S}{\partial t^{2}}-c^{2}\frac{\partial S}{\partial x}=0, \tag{15}\]
is consistent with this constraint, since \(\partial_{t}\frac{\partial S}{\partial t}=\partial_{t}\frac{\partial L}{ \partial t_{\tau}}=\frac{\partial^{2}L}{\partial t\partial t_{\tau}}=0\) and likewise for \(\frac{\partial^{2}S}{\partial x^{2}}\).
Upon using the relations (13) and the constraint (14), we also find
\[\frac{dS}{d\tau}=\frac{\partial S}{\partial t}t_{\tau}+\frac{\partial S}{\partial x }\cdot x_{\tau}=E_{0}, \tag{16}\]
and so integrating with respect to \(\tau\) yields \(S[\underline{x}]=E_{0}\tau(\underline{x})\) up to an additive constant. Given \(S[\underline{x}]=E_{0}\tau(\underline{x})\), it follows the system (14)-(15) governing the action \(S[t,x]\) also governs the component \(\tau(t,x)\) of the coordinate map \(\boldsymbol{\Xi}:K_{a}\to K_{b}\), which similarly satisfies
\[\partial_{t}^{2}\tau-c^{2}\partial_{x}^{2}\tau =0 \tag{17b}\] \[(\partial_{t}\tau)^{2}-c^{2}(\partial_{x}\tau)^{2} =1. \tag{17a}\]
Solutions of the system (17a)-(17b) will form representations of the coordinate map \(\tau(t,x)\).
## 3. Coordinate maps and their representations
### Linearity of the coordinate maps
The main result of this section is that the system (14)-(15) only admits solutions \(S[t,x]\) which are linear in \(t\) and \(x\). However, it will also be shown that \(S\) as a solution of (17a)-(17b) may be represented as an exponential function of \(t\) and \(x\) (cf. [14]).
Without imposing assumptions or restrictions, we consider a general solution of the form
\[S(t,x)=E_{0}\Theta(\psi(t,x)), \tag{18}\]
where \(\Theta(\psi(t,x))=\tau(t,x)\) with \(\psi(t,x)\) being a representation of \(\tau(t,x)\). Substituting (18) into the governing equations (14)-(15) yields
\[\left[\partial_{t}^{2}\psi-c^{2}\partial_{x}^{2}\psi\right]\Theta ^{\prime}(\psi)+\left[\left(\partial_{t}\psi\right)^{2}-c^{2}\left(\partial_{ x}\psi\right)^{2}\right]\Theta^{\prime\prime}(\psi)=0 \tag{19b}\] \[\left[\left(\partial_{t}\psi\right)^{2}-c^{2}\left(\partial_{x} \psi\right)^{2}\right]\Theta^{\prime}(\psi)^{2}=1, \tag{19a}\]
where \(\Theta^{\prime}(\psi)=\frac{\mathrm{d}\Theta}{\mathrm{d}\psi}\).
Equation (19b) applied to equation (19a) now yields
\[\partial_{t}^{2}\psi-c^{2}\partial_{x}^{2}\psi+\frac{\Theta^{\prime\prime}( \psi)}{\Theta^{\prime}(\psi)^{3}}=0. \tag{20}\]
Multiplying by \(\partial_{t}\psi\), it now follows that
\[\frac{1}{2}\partial_{t}\left[(\partial_{t}\psi)^{2}-\frac{1}{\Theta^{\prime}( \psi)^{2}}\right]-c^{2}\partial_{x}^{2}\psi\partial_{t}\psi=0, \tag{21}\]
while substituting from equation (19b) we deduce
\[\partial_{x}\psi\partial_{x}\partial_{t}\psi-\partial_{t}\psi\partial_{x}^{2} \psi=0\]
from which it follows \(\partial_{x}\left(\frac{\partial_{x}\psi}{\partial_{t}\psi}\right)=0\). Multiplying equation (20) by \(\partial_{x}\psi\) we also deduce \(\partial_{t}\left(\frac{\partial_{x}\psi}{\partial_{t}\psi}\right)=0\), and as such \(\frac{\partial_{x}\psi}{\partial_{t}\psi}\) is constant.
This means the functions \(\partial_{t}\psi\) and \(\partial_{x}\psi\) are linearly dependent. It follows that \(\psi\) may be written according to
\[\psi(t,x)=\phi(\omega t-kx)\implies\frac{\partial_{x}\psi}{\partial_{t}\psi}= -\frac{k}{\omega},\]
where \(\phi(\cdot)\) is yet to be determined while \(\omega\) and \(k\) are constants. The constraint (17b) or equivalently (19b) now requires
\[\left(\omega_{0}\frac{\mathrm{d}\phi}{\mathrm{d}s}\frac{\mathrm{d}\Theta}{ \mathrm{d}\phi}\right)^{2}=1,\quad\omega_{0}^{2}=\omega^{2}-c^{2}k^{2}>0, \tag{22}\]
where we introduce \(s=\omega t-kx\). Taking the square-root of (22) we now have \(\pm\omega_{0}\frac{\mathrm{d}\phi}{\mathrm{d}s}\frac{\mathrm{d}\Theta}{ \mathrm{d}\phi}=1\) and so integrating it follows that \(\Theta(\phi(s))=\pm\frac{s}{\omega_{0}}\), or equivalently
\[\tau(t,x)=\Theta(\psi(t,x))=\pm\frac{\omega t-kx}{\omega_{0}}. \tag{23}\]
Formally, we have applied the inverse function theorem to equation (22) which ensures \(\pm\omega_{0}\Theta(\cdot)=\phi^{-1}(\cdot)\) (see [18] for instance). It also follows from (13) and (23) with \(S=E_{0}\tau\) that
\[\begin{cases}\frac{\partial S}{\partial t}=E\implies\frac{\omega}{\omega_{0}} =\frac{E}{E_{0}}\\ \frac{\partial S}{\partial x}=-p\implies\frac{k}{\omega_{0}}=\frac{p}{E_{0}}.\end{cases} \tag{24}\]
### Representations of the coordinate map
As a functional equation for \(\Theta(\phi)\), we note that under the re-scaling \(\phi\to r\phi\) for a non-zero constant \(r\), equation (22) also requires
\[r^{2}\Theta^{\prime}(r\phi)^{2}\dot{\phi}(s)^{2}=\Theta^{\prime}(\phi)^{2} \dot{\phi}(s)^{2}=1. \tag{25}\]
It follows \(r^{2}\Theta^{\prime}(r\phi)^{2}\) is independent of \(r\) and so \(\Theta^{\prime}(r\phi)\propto\frac{1}{r\phi}\) from which it follows
\[E_{0}\Theta(\psi(t,x))=\alpha\ln\psi=\pm E_{0}\frac{\omega t-kx}{\omega_{0}}, \tag{26}\]
where \(\alpha\) is a constant action parameter. The representation \(\psi(t,x)\) of the coordinate map \(\tau(t,x)\) is now explicitly:
\[\psi(t,x)=e^{\pm\frac{1}{\alpha}(Et-px)}, \tag{27}\]
having used equation (24) to re-write the ratios \(\frac{E_{0}\omega}{\omega_{0}}=E\) and \(\frac{E_{0}k}{\omega_{0}}=p\).
The other possible solution of (22) is simply
\[\left.\begin{array}{l}\phi(s)=\kappa s\\ \omega_{0}\Theta(\phi)=\pm\frac{\phi}{\kappa}\end{array}\right\}\implies\Theta (\phi(s))=\pm\frac{s}{\omega_{0}} \tag{28}\]
where \(\kappa\) is constant, thereby ensuring \(\frac{\mathrm{d}^{2}\phi}{\mathrm{d}s^{2}}=0\) and \(\frac{\mathrm{d}^{2}\Theta}{\mathrm{d}\phi^{2}}=0\). This in turn ensures (19a) is satisfied while (19b) is satisfied by definition of \(\omega_{0}\) and \(s\).
### Momentum measurement & de Broglie waves
In SSSS3.1-3.2 it has been shown that the coordinate map \(S=E_{0}\tau(t,x)\) governed by (17b)-(17a), is necessarily linear \(E_{0}\tau(t,x)=\pm(Et-px)\) and has a representation of the form \(E_{0}\tau(t,x)=\alpha\ln\psi(t,x)\). Combining these observations then it is necessary that the representation \(\psi(t,x)\) is of the form
\[\psi(t,x)=\exp\biggl{\{}\pm\frac{1}{\alpha}(Et-kx)\biggr{\}}.\]
It is already clear \(\alpha\) must have the units of action, so the choice \(\hbar\) is obvious. To ensure the representation \(\psi(t,x)\) corresponds to a de Broglie wave of the form (4), it is also necessary to show \(\alpha\) is imaginary, which is the aim of the current section.
Figure 3 shows a very simple apparatus consisting of two massive plates \(\mathcal{P}_{l}\) and \(\mathcal{P}_{r}\), both initially static at \(x_{l}=0\) and \(x_{r}=\lambda\) with reference to the frame \(K\), with rest energy \(\mathscr{E}_{0}\) each. It is supposed the point-like observer \(\mathcal{O}_{b}\) is located at some \(x\in(x_{l},x_{r})\), and interacts with either plate only by collision. Upon collision \(\mathcal{O}_{b}\) undergoes a change
of momentum, thereby imparting momentum to one of these plates. Measurement of momentum means \(\mathcal{O}_{b}\) impacts one of the plates and
sets it in motion relative to the other. Immediately after impact the plates are again inertial observers, since there is no further interaction to impart momentum to either plate.
If \(K_{l}\ni(t^{\prime},x^{\prime})\) denotes the rest-frame of \(\mathcal{P}_{l}\), then its coordinates with reference to this frame will always be of the form \((t^{\prime},0)\); those of \(\mathcal{P}_{r}\) will be of the form \((t^{\prime},\lambda)\) prior to collision. Similarly, \(K_{r}\ni(t^{*},x^{*})\) is the rest-frame of \(\mathcal{P}_{r}\) whose coordinates are always of the form \((t^{*},0)\); those of \(\mathcal{P}_{l}\) are of the form \((t^{*},-\lambda)\) initially. Prior to collision it makes sense to identify coordinates \((t,x)\in K\), \((t^{\prime},x^{\prime})\in K_{l}\) and \((t^{*},x^{*})\in K_{r}\) since all three frames see the observers \(\mathcal{P}_{l}\) and \(\mathcal{P}_{r}\) at rest, and so all are equivalent up to constant translations.
At the moment of measurement as observed from the frame \(K_{l}\), it appears the observer \(\mathcal{P}_{r}\) changes energy-momentum according to \((\mathscr{E}_{0},0)\to(\mathscr{E},\mathscr{P})\) where \(\mathscr{E}^{2}=\mathscr{P}^{2}c^{2}+\mathscr{E}_{0}^{2}\) and \(\mathscr{P}>0\) is assumed. Meanwhile the momentum of \(\mathcal{O}_{b}\) changes according to \((E,p)\to(E_{1},p-\mathscr{P})\) (cf. Figure 3). Naturally, the energy-momentum of \(\mathcal{P}_{l}\) is _always_\((\mathscr{E}_{0},0)\) in the frame \(K_{l}\) while the observer \(\mathcal{O}_{b}\) is interpreted to occupy the location \(x^{\prime}=\lambda\) upon collision. Conversely, in the frame \(K_{r}\) the observer \(\mathcal{P}_{l}\) changes its energy-momentum according to \((\mathscr{E}_{0},0)\to(\mathscr{E},-\mathscr{P})\) and the energy-momentum of \(\mathcal{O}_{b}\) changes according to \((E,-p)\to(E_{1},-p+\mathscr{P})\).
Figure 3. The measurement of \(\mathcal{O}_{b}\)’s momentum by collision with massive plates of equal rest-energy \(\mathscr{E}_{0}\).
In this frame of reference the observer \(\mathcal{O}_{b}\) is interpreted to appear at \(x^{*}=-\lambda\) upon impact, and by definition the energy-momentum of \(\mathcal{P}_{r}\) is _always_\((\mathscr{E}_{0},0)\).
Given that \(\mathcal{P}_{l}\) and \(\mathcal{P}_{r}\) are in uniform relative motion before and after collision with \(\mathcal{O}_{b}\), it follows from SS3.2 the component \(t^{*}(t^{\prime},x^{\prime})\) of the coordinate map \(\mathbf{X}^{*}:K_{l}\to K_{r}\) has representation
\[\psi(t^{\prime},x^{\prime})=\begin{cases}e^{\frac{1}{\alpha}\mathscr{E}_{0}(t^ {\prime}-t^{\prime}_{0})},\quad t^{\prime}<t^{\prime}_{0}\\ e^{\frac{1}{\alpha}\left(\mathscr{E}(t^{\prime}-t^{\prime}_{0})-\mathscr{P}x^ {\prime}\right)}\quad t^{\prime}\geq t^{\prime}_{0}\end{cases}\]
where the impact occurs at time \(t^{\prime}_{0}\) with reference to \(K_{l}\). Upon impact the proper-time \(t^{*}\) of the observer \(\mathcal{P}_{r}\) changes according to
\[\frac{\alpha}{\mathscr{E}_{0}}\ln e^{\frac{1}{\alpha}\mathscr{E}_{0}(t^{ \prime}-t^{\prime}_{0})}\to\frac{\alpha}{\mathscr{E}_{0}^{\text{ }}}\ln e^{\frac{1}{\alpha}\left(\mathscr{E}(t^{\prime}-t^{\prime}_{0})- \mathscr{P}x^{\prime}\right)},\]
from the perspective of the observer \(\mathcal{P}_{l}\). However, according to the observer \(\mathcal{P}_{r}\) its own time coordinate is continuous, while it is the time coordinate of \(\mathcal{P}_{l}\) which undergoes a corresponding change during collision with \(\mathcal{O}_{b}\). Continuity of the \(t^{*}\)-coordinate now requires
\[\lim_{t^{\prime}\to t^{\prime}_{0}}e^{\frac{1}{\alpha}\mathscr{E}_{0}(t^{ \prime}-t^{\prime}_{0})}=\lim_{t^{\prime}\to t^{\prime}_{0}}e^{\frac{1}{ \alpha}\left(\mathscr{E}(t^{\prime}-t^{\prime}_{0})-\mathscr{P}\lambda\right)} \iff e^{-\frac{\mathscr{P}\lambda}{\alpha}}=1. \tag{29}\]
Since \(\lambda\neq 0\) and \(\mathscr{P}>0\) by assumption, continuity of \(\psi(t,x)\) at \(t^{\prime}_{0}\) is satisfied only when the argument of the exponential is of the form \(2\pi ni\) for \(n\in\mathbb{Z}\). Hence, we deduce
\[\alpha=-i\hbar,\quad\mathscr{P}=\frac{2\pi n\hbar}{\lambda},\]
and so the action parameter \(\alpha\) is imaginary as anticipated.
With \(\alpha=-i\hbar\) it is now clear that the coordinate transformation between the rest frames of inertial observers may be represented by wave-forms
\[\psi(t,x)=e^{\frac{i}{\hbar}(E_{p}t-px)}, \tag{30}\]
whose eigenvalues may be defined as
\[E_{p}=\bar{\psi}(-i\hbar\partial_{t})\psi\qquad p=\bar{\psi}(i\hbar\partial_{x })\psi, \tag{31}\]
where \(\bar{\psi}\) denotes the complex conjugate of \(\psi\). Both representations \(\psi\) and \(\bar{\psi}\) satisfy the Klein-Gordon equation
\[\frac{1}{c^{2}}\frac{\partial^{2}\psi}{\partial t^{2}}-\frac{\partial^{2}\psi}{ \partial x^{2}}+\frac{m^{2}c^{2}}{\hbar^{2}}\psi=0. \tag{32}\]
Thus, de Broglie waves as per Dirac's terminology (see [10], p. 120) emerge as a representation of the \(\tau\)-component of the coordinate map \(\boldsymbol{\Xi}:K_{a}\to K_{b}\), and so represents to trajectory of \(\mathcal{O}_{b}\) (i.e. \((\tau,0)\in K_{b}\) with reference to \(K_{a}\).
The existence of de Broglie waves was confirmed almost immediately after de Broglie's first prediction [7], with the interference experiments of Davisson & Germer [6] and the contemporaneous experiments of Thomson & Reid [21]. In the years since, the experimental evidence supporting de Broglie's conjecture has accumulated steadily (see [1, 19, 20] among others).
### Energy-momentum eigenfunctions
The \(\tau\)-representation given in equation (30) is an eigenfunction of the linear operators \(-i\hbar\partial_{t}\) and \(i\hbar\partial_{x}\), whose corresponding eigenvalues are simply the energy-momentum of the observer \(\mathcal{O}_{b}\) with reference to the frame \(K_{a}\). The nonlinear constraint (17b) has a particularly elegant geometric interpretation in the relational context, since one may reformulate the coordinate map (7) according to
\[\begin{bmatrix}d\tau\\ d\xi\end{bmatrix}=\begin{bmatrix}\tau_{t}&\tau_{x}\\ \xi_{t}&\xi_{x}\end{bmatrix}\begin{bmatrix}dt\\ dx\end{bmatrix}=\begin{bmatrix}\tau_{t}&\tau_{x}\\ c^{2}\tau_{x}&\tau_{t}\end{bmatrix}\begin{bmatrix}dt\\ dx\end{bmatrix} \tag{33}\]
in which case \(\tau_{t}^{2}-c^{2}\tau_{x}^{2}=1\) is equivalent to \(\det\begin{bmatrix}\tau_{t}&\tau_{x}\\ \xi_{t}&\xi_{x}\end{bmatrix}=1\).
Hence, the Jacobian of the coordinate transformation \(\boldsymbol{\Xi}:K_{a}\to K_{b}\) is required to be one, thus ensuring this map is invertible. Specifically, it means that a trajectory \((\mathrm{d}t\,,\mathrm{d}x)\) in \(K_{a}\) has as counterpart \((\mathrm{d}\tau\,,\mathrm{d}\xi)\) with reference to \(K_{b}\) and vice-versa. In particular it means that a trajectory of \(\mathcal{O}_{b}\) in \(K_{b}\) given by \((\mathrm{d}\tau\,,0)\) has a counterpart \((\mathrm{d}t\,,\mathrm{d}x)\) in \(K_{a}\), while simultaneously the trajectory of \(\mathcal{O}_{a}\) in \(K_{a}\) given by \((\mathrm{d}t\,,0)\)
has a counterpart \((\mathrm{d}\tau\,,\mathrm{d}\xi)\) in \(K_{b}\), cf. Figure 1. As such, these observers appear as point-like bodies moving with reference to the rest-frame of their counterpart (cf. Figure 1). This is only possible since the conditions (17a)-(17b) are both satisfied for the coordinate map \(E_{0}\tau(t,x)=-i\hbar\ln\psi(t,x)\) when \(\psi(t,x)\) is an energy-momentum eigenfunction.
Contrarily, given the linearity of (32) it is clear that superpositions of the form \(\varphi(t,x)=\iint\delta(E^{2}-E_{p}^{2})a(E,p)e^{\frac{t}{\hbar}(Et-px)}dEdp\) are also valid solutions of this wave equation. Such a superposition cannot represent a physically realisable coordinate map from \(K_{a}\) to \(K_{b}\) since the non-linear constraint (17b) is not satisfied for \(-i\hbar\ln\varphi\). This is not to say \(\mathcal{O}_{b}\) becomes somehow de-localised, it always has a precise location \((\tau,0)\in K_{b}\). Rather, it is the case there is no longer a precise correspondence of the form (7) between the frames \(K_{a}\) and \(K_{b}\), and so the trajectory \((\mathrm{d}\tau\,,0)\) in \(K_{b}\) no longer has a precise counterpart with reference to \(K_{a}\) satisfying all the required axioms of special relativity.
## 4. Discussion
A central point of the argument in SS3.3 is that the observers \(\mathcal{P}_{l}\) and \(\mathcal{P}_{r}\) are both always inertial in their own rest-frames, and the acceleration of the pair upon impact with \(\mathcal{O}_{b}\) is only defined in relative terms. This is apparently consistent only in the relational space-time framework. Moreover, the derivation presented here appears to be consistent with Rovelli's Relational Quantum Mechanics (RQM) [15, 17], whereby the properties of a system are not absolutes. In particular, the perceived location and momentum of \(\mathcal{O}_{b}\) upon impact with the apparatus depends on the frame of reference adopted for the measurement.
Indeed the physical properties of a system, in this case the energy-momentum of \(\mathcal{P}_{l}\) and \(\mathcal{P}_{r}\), is a characteristic of interaction between the observers, specifically it is a property of the coordinate maps between their respective rest-frames (cf. [12]). It is also clear the observer \(\mathcal{O}_{b}\) does not have an _absolute location_ in this experiment, its apparent |
2309.01219 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large
Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | 2023-09-03T16:56:48Z | http://arxiv.org/abs/2309.01219v2 | # _Siren's Song in the AI Ocean_:
###### Abstract
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit _hallucinations_: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
## 1 Introduction
Large language models (LLMs), particularly characterized by their substantial number of parameters, have arisen as a promising cornerstone for the development of natural language processing (NLP) and artificial intelligence (Zhao et al., 2023c). With proper alignment techniques, such as supervised finetuning (SFT; Zhang et al., 2023b) and reinforcement learning from human feedback (RLHF; Ouyang et al., 2022; Fernandes et al., 2023), recent LLMs (OpenAI, 2023a; Touvron et al., 2023b; OpenAI, 2023b, _inter alia_) have exhibited strong capabilities in solving various downstream tasks.
Nonetheless, as exemplified in Figure 1, LLMs, despite their remarkable success, occasionally produce outputs that, while seemingly plausible, deviate from user input (Adlakha et al., 2023), previously generated context (Liu et al., 2022), or factual knowledge (Min et al., 2023; Muhlgay et al., 2023; Li et al., 2023a)--this phenomenon is commonly referred to as hallucination, which significantly undermines the reliability of LLMs in real-world scenarios (Kaddour et al., 2023). For instance, LLMs can potentially fabricate erroneous medical diagnoses or treatment plans that lead to tangible real-life risks (Umapathi et al., 2023).
While hallucination in conventional natural language generation (NLG) settings has been widely studied (Ji et al., 2023), understanding and addressing the hallucination problem within the realm of LLMs encounters unique challenges introduced by
1. **Massive training data**: in contrast to carefully curating data for a specific task, LLM pre
Figure 1: Three types of hallucinations occurred in LLM responses (best viewed in color).
training uses trillions of tokens obtained from the web, making it difficult to eliminate fabricated, outdated or biased information;
2. **Versatility of LLMs**: general-purpose LLMs are expected to excel in cross-task, cross-lingual, and cross-domain settings, posing challenges for comprehensive evaluation and mitigation of hallucination.
3. **Imperceptibility of errors**: as a byproduct of their strong abilities, LLMs may generate false information that initially seems highly plausible, making it challenging for models or even humans to detect hallucination.
In addition, the RLHF process (Ouyang et al., 2022), the vague knowledge boundary (Ren et al., 2023) and the black-box property of LLMs (Sun et al., 2022) also complicate the detection, explanation, and mitigation of hallucination in LLMs. There has been a notable upsurge in cutting-edge research dedicated to addressing the aforementioned challenges, which strongly motivates us to compile this survey.
We organize this paper as follows, as also depicted in Figure 2. We first introduce the background of LLMs and offer our definition of hallucination in LLMs (SS2). Next, we introduce relevant benchmarks and metrics (SS3). Subsequently, we discuss potential sources of LLM hallucinations (SS4), and provide an in-depth review of recent work towards addressing the problem (SS5). Finally, we present forward-looking perspectives (SS6). We will consistently update the related open-source materials, which can be accessed at [https://github.com/HillZhang1999/llm-hallucination-survey](https://github.com/HillZhang1999/llm-hallucination-survey).
## 2 Hallucination in the Era of LLM
We begin this section by overviewing the history of LLMs (SS2.1). Next, we present our definition of LLM hallucination, by breaking it down
Figure 2: The overview structure of this paper: We initially categorize LLM hallucinations into three distinct types and then introduce corresponding evaluation benchmarks. Subsequently, we explore the source of hallucinations and discuss mitigation strategies throughout the life cycle of LLMs (pre-training\(\rightarrow\)SFT\(\rightarrow\)RLHF\(\rightarrow\)inference).
into three sub-categories (SS2.2). In addition, we discuss the unique challenges of hallucination in LLMs (SS2.3), and compare hallucination with other prevalent problems that are frequently encountered in the realm of LLMs (SS2.4).
### Large Language Models
An important category of LLMs is autoregressive language models (Radford et al., 2019; Chowdhery et al., 2022; Touvron et al., 2023a, _inter alia_). These models take Transformers (Vaswani et al., 2017) as the backbone, and predict the next token based on previous tokens.1 Prior to the widespread adoption of Transformers, autoregressive language models were built on the backbones of n-grams (Bickel et al., 2005; Pauls and Klein, 2011) and recurrent neural networks (Mikolov et al., 2010), and have been applied to various NLG tasks such as summarization (Nallapati et al., 2017) and dialogue generation (Chen et al., 2017).
Footnote 1: Another variant of language models predicts masked tokens in a corrupted sequence (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2019, _inter alia_).
Transformer-based LLMs have demonstrated exceptional performance across tasks, and have therefore shifted NLP from a paradigm centered on task-specific solutions to general-purpose pre-training (Devlin et al., 2019; Radford et al., 2019). The pretrained models are optimized on various self-supervision objectives (Devlin et al., 2019; Raffel et al., 2020; Lewis et al., 2020, _inter alia_), using large-scale unlabeled corpora. Subsequently, the models are fine-tuned with labeled data on target downstream tasks. Representations from the pretrained models can typically reduce the demand for annotated data and achieve significant performance improvement across downstream tasks (Qiu et al., 2020; Min et al., 2021; Li et al., 2022, _inter alia_).
In addition to performance improvement on downstream tasks, recent work has found that scaling up pretrained language models--both in terms of model parameter count and the volume of pre-training data--enables some remarkable abilities, including in-context learning (Brown et al., 2020), reasoning (Wei et al., 2022), and instruction following (Ouyang et al., 2022). The community has, to some extent, popularized the term _large language models_ (LLMs) to differentiate them from their smaller counterparts. Notably, LLMs exhibit the potential to accurately comprehend human instructions and efficiently tackle a variety of complex tasks with only minimal or even no supervision (OpenAI, 2023a,b; Touvron et al., 2023b).
### _What_ is LLM Hallucination
While LLMs have demonstrated remarkable performances, they still inevitably encounter different problems in practical applications, where hallucination is one of the most significant issues among them. The term _hallucination_ has already been widely adopted in the NLP community before the emergence of LLM, typically referring to generating nonsensical or unfaithful to the provided source content (Ji et al., 2023).
We argue that the definition appears to have considerably expanded due to the versatility of LLMs. To this end, we categorize hallucination within the context of LLMs as follows:
* Input-conflicting hallucination, where LLMs generate content that deviates from the source input provided by users;
* Context-conflicting hallucination, where LLMs generate content that conflicts with previously generated information by itself;
* Fact-conflicting hallucination, where LLMs generate content that is not faithful to established world knowledge.
We present examples for each type of hallucinations in Table 1, and discuss them in detail below.
Input-conflicting hallucination.This type of hallucination arises when the content generated by LLMs deviates from user input. Typically, user input for LLMs comprises two components: task instruction (e.g., user prompt for summarization) and task input (e.g., document to be summarized). The contradiction between LLM response and task instructions typically reflects a misunderstanding of user intents. In contrast, when the contradiction arises between the generated content and task input, the hallucination is in line with the conventional definition in specific NLG tasks, such as machine translation (Lee et al., 2019) and summarization (Maynez et al., 2020; Pu et al., 2023). For instance, the first example in Table 1 appears to highlight a contradiction between the generated content and task input: when users request the LLM to generate a summary, the LLM incorrectly replaces the person's name in its response (_Hill\(\rightarrow\)Lucas_), even though the general form can indeed be perceived as a suitable summary.
Context-conflicting hallucination.LLMs may exhibit self-contradictions when generating lengthy or multi-turn responses. This type of hallucination arises when LLMs lose track of the context or fail to maintain consistency throughout the conversation, potentially due to their limitations in maintaining long-term memory Liu et al. (2023) or identifying relevant context Shi et al. (2023). The second example in Table 1 demonstrates how a user request to introduce the NBA Commissioner leads to a context-conflicting hallucination. Specifically, the LLM initially introduces _Silver_ (the current NBA commissioner), but later refers to _Stern_ (the former NBA commissioner), demonstrating a lack of consistency in the generation.
Fact-conflicting hallucination.This type of hallucination occurs when LLMs generate information or text that contradicts established world knowledge. The source of fact-conflicting hallucinations can be multifarious and introduced at different stages of the LLM life cycle, as shown in Figure 2. We present an illustration in Table 1 (third example): in this case, the user asks the LLM about the mother of Afonso II. The LLM gave a wrong answer (_Queen Urraca of Castle_ instead of _Dulce Berenguer of Barcelone_), which can easily mislead less knowledgeable users.
pre-training corpora are automatically collected from the web and often contain a significant amount of fabricated, outdated, or biased information (Penedo et al., 2023). Such inadequate data may lead LLMs to generate hallucinated content. The large data scale may also increase the difficulty of applying data-centric approaches to mitigate the hallucination in LLMs.
Versatility of LLMs.Conventional NLG models are typically designed for a single task, and thus, hallucination studies on them are usually task-specific (Maynez et al., 2020; Wang and Sennrich, 2020; Xiao and Wang, 2021); however, current LLMs are expected to excel in multi-task, multi-lingual, and multi-domain settings (Bang et al., 2023; Chang et al., 2023). This expectation poses thorny challenges for both the evaluation and mitigation of LLM hallucinations. In terms of evaluation, LLMs are more commonly used for free-form text generation, and the lack of deterministic references in this setting complicates the automatic detection of hallucinations. Therefore, it is crucial to establish a comprehensive, reliable, and automatic evaluation benchmark. Regarding mitigation, the proposed methods should be robustly effective, maintaining decent performance when being applied to various scenarios.
Invisibility of errors.Compared to traditional NLG models, LLMs possess a significantly enhanced writing capability and store a larger volume of knowledge. Consequently, the false information hallucinated by LLMs often appears highly plausible, to the extent that even humans may feel hard to detect. This amplifies the difficulty in detecting and reducing input- and context-conflicting hallucination, as we can no longer resort to simple superficial patterns. Regarding fact-conflicting hallucinations, we also need to consider leveraging more knowledge sources for verification. These factors collectively introduce substantial new challenges.
### Other Problems in LLMs
Besides hallucination, LLMs also present other problems. We outline some common issues below and present examples in Table 2 to help readers distinguish between them and hallucination.
Ambiguity.This type of issue arises when the LLM response is ambiguous, lending itself to multiple interpretations. The response may not necessarily be incorrect, but it falls short of providing a useful answer to the user question (Tamkin et al., 2022). The first example in Table 2 exemplifies this issue. The desired answer is 'Paris', yet the LLM provides an ambiguous response.
Incompleteness.The incompleteness issue occurs when the generated response is incomplete or fragmented. As demonstrated in the second example in Table 2, the LLM only informs users of the first two steps in a four-step process for replacing a tire, resulting in an incomplete explanation.
Bias.Bias in LLMs pertains to the manifestation of unfair or prejudiced attitudes within the generated text. These biases may originate from training data, which frequently encompasses historical texts, literature, social media content, and other sources. Such sources may inherently mirror so
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Type** & **Turn** & **Example** \\ \hline \multirow{2}{*}{**Ambiguity**} & _User_ & What is the capital of France? \\ \cline{2-3} & _Model_ & The capital of a European country. \\ \hline \multirow{2}{*}{**Incompleteness**} & _User_ & How can I replace a flat tire on my car? \\ \cline{2-3} & _Model_ & To replace a flat tire, there are four steps. First, you’ll need to use a jack to lift the car. Second, please remove the lap nuts and flat tire. \\ \hline \multirow{2}{*}{**Bias**} & _User_ & Tell me about a typical elementary school teacher. \\ \cline{2-3} & _Model_ & A typical elementary school teacher is a woman who is patient, nurturing, and skilled at managing a classroom full of young children. \\ \hline \multirow{2}{*}{**Under-informativeness**} & _User_ & Introduce the most popular task in NLP in 2022. \\ \cline{2-3} & _Model_ & Sorry, I don’t know. Maybe you can check external search engines. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples of various problems that LLMs may expose, in addition to hallucinations.
cietal biases, gender bias, stereotypes, or discriminatory beliefs (Navigli et al., 2023). As shown in the third example in Table 2, the LLM portrays the teacher as a woman, which is a gender bias.
Under-informativeness.This kind of issue refers to the propensity of LLMs to evade answering certain questions or providing specific information, even when they should be capable of doing so. For instance, due to imperfections in the reward model, RLHF may lead to over-optimization of LLMs, potentially leading to a state of under-informativeness (Gao et al., 2022). An example of this is presented in Table 2, where the LLM declines to respond to the user query.
## 3 Evaluation of LLM Hallucination
Previous research has primarily concentrated on evaluating hallucination in specific natural language generation tasks, such as machine translation (Guerreiro et al., 2023; Dale et al., 2023), dialogue generation (Dziri et al., 2021), question answering (Durmus et al., 2020) and text summarization (Kryscinski et al., 2020; Maynez et al., 2020; Zhong et al., 2021). These works mainly focus on the **input-conflicting hallucination** facet, which is relatively easy for human users to identify given the source text, as shown in Table 1. Recently, studying this kind of hallucination in traditional NLG tasks has seen significant advancements. However, evaluating them in the setting of LLMs becomes more challenging due to the free-form and often long-form nature of LLM generation. Regarding **context-conflicting hallucination**, Cui et al. (2021) and Liu et al. (2022) evaluate models' ability to identify context conflicts introduced when BERT (Devlin et al., 2019) performs blankfilling. Most benchmarks today evaluate the **fact-conflicting hallucination** of LLMs (Lin et al., 2021; Lee et al., 2022; Min et al., 2023; Yu et al., 2023; Li et al., 2023; Muhlgay et al., 2023), which refers to their tendency to generate factual errors. This is considered a critical issue in LLMs because it is challenging for users to identify and poses real-life risks.
In the upcoming sections, we will review existing benchmark datasets and commonly used evaluation metrics in SS3.1 and SS3.2, respectively.
### Evaluation Benchmarks
Various benchmarks have been proposed for evaluating hallucination in LLMs. We present representative ones in Table 3 and discuss them based on their evaluation formats, task formats, and construction methods below.
Evaluation format.Existing benchmarks mainly evaluate hallucinations based on two different abilities of LLMs: the ability to _generate_ factual statements or to _discriminate_ them from non-factual ones. We present an example in Table 4 to showcase the difference between the two evaluation formats. _Generation_ benchmarks (Lin et al., 2021; Lee et al., 2022; Min et al., 2023; Yu et al., 2023) consider hallucination as a generation characteristic, similar to _fluency_(Napoles et al., 2017) and _coherence_(Du et al., 2022), and evaluate the generated texts from LLMs. For
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Benchmark** & **Evaluation** & **Size** & **Task Format** & **Metrics** \\ \hline TruthfulQA & Gen\&Dis & 817 & Question Answering & Truthfulness \\ FactualityPrompt & Gen & 16,000 & Text Completion & Ensemble \\ FActScore & Gen & 500 & Task Instructions & FActScore \\ KoLA-KC & Gen & 190 & Task Instructions & Self-contrast \\ HaluEval & Dis & 35,000 & Question Answering\& Task Instructions & Accuracy \\ FACTOR & Dis & 4,030 & Text Completion & Accuracy \\ \hline \hline \end{tabular}
\end{table}
Table 3: Representative benchmarks that can be used for evaluating LLM hallucination including TruthfulQA (Lin et al., 2021), FactualityPrompt (Lee et al., 2022), FActScore (Min et al., 2023), KoLA-KC (Yu et al., 2023), HaluEval (Li et al., 2023) and FACTOR (Muhlgay et al., 2023). Note that KoLA (Yu et al., 2023) is designed for benchmarking world knowledge of LLMs, where the Knowledge Creating (KC) task can be used to assess hallucination. These benchmarks all focus on the factuality aspect, but diverge in the following aspects: “**Evaluation**” denotes how these benchmarks evaluate hallucination, either by regarding hallucination as a generation quality metric for LLM generations (Generation, referred to as Gen) or assessing whether the LLM can discriminate between factual and non-factual statements (Discrimination, referred to as Dis); “**Task Format**” reflects different methods of prompting language models, e.g., knowledge-intensive question answering (QA), task instructions (TI) and context prefixes for text completion (TC).
instance, TruthfulQA (Lin et al., 2021) evaluates the truthfulness of LLMs' responses to questions, while FActScore (Min et al., 2023) scrutinizes the factual accuracy of biographies generated by LLMs for specific individuals. In contrast, _discrimination_ benchmarks (Li et al., 2023; Muhlgay et al., 2023) consider LLMs' ability to discriminate truthful statements from hallucinated ones. Specifically, HaluEval (Li et al., 2023) requires the model to determine whether a statement contains hallucinated information, while FACTOR (Muhlgay et al., 2023) investigates whether the LLM assigns a higher likelihood to the factual statement compared to non-factual ones. Note that TruthfulQA (Lin et al., 2021) also supports discrimination format by offering a multiple-choice alternative to test a model's ability to identify truthful statements.
Task format.Existing benchmarks evaluate LLM hallucinations across various application tasks. Firstly, certain benchmarks (Lin et al., 2021; Li et al., 2023) explore the issue of hallucination in the context of question-answering, evaluating the ability of LLMs to provide truthful answers to knowledge-intensive questions. Secondly, FActScore (Min et al., 2023) and HaluEval (Li et al., 2023) employ task instructions, such as biography introduction instructions and 52K instructions from the Alpaca project (Taori et al., 2023), to prompt LLMs to generate responses. The factuality of these responses is then evaluated. Thirdly, a line of work (Lee et al., 2022; Muhlgay et al., 2023) directly prompts LLMs to complete text given a prefix, and diagnoses potential hallucination during the generation of informative and factual statements. For instance, FACTOR (Muhlgay et al., 2023) considers context prefixes in Wikipedia documents, while FactualityPrompt (Lee et al., 2022) designs prefixes specifically for factual or non-factual statements to elicit hallucinations. Table 5 provides samples under different task formats.
Construction methods.Most aforementioned benchmarks involve human annotators for dataset creation or quality assurance. TruthfulQA (Lin et al., 2021) carefully designs the questions to elicit imitative falsehoods, i.e., false statements with a high likelihood on the training distribution. They then hire human annotators to further validate the agreement of golden answers. FActScore (Min et al., 2023) conducts a manual annotation pipeline to transform a long-form model generation into pieces of atomic statements. HaluEval (Li et al., 2023) employs two construction methods. For the automatic generation track, they design prompts to query ChatGPT to sample diverse hallucinations and automatically filter high-quality ones. For the human-annotation track, they hire human annotators to annotate the existence of hallucination in the model responses and list the corresponding spans. FACTOR (Muhlgay et al., 2023) first uses external LLMs to generate non-factual completion. Then, they manually validate whether the automatically created datasets meet the predefined requirements, i.e., they should be non-factual, fluent, and similar to the factual completion. To construct knowledge creation task, Yu et al. (2023) build an annota
\begin{table}
\begin{tabular}{l l} \hline \hline
**Evaluation** & **Sample** \\ \hline \hline \multirow{4}{*}{Generation} & **Question:** What happens to you if you eat watermelon seeds? \\ & **Reference:** Nothing happens; You eat watermelon seeds; The watermelon seeds pass through your digestive system; You will not digest the watermelon seeds; The watermelon seeds will be excreted; There is an old wives’ tale that watermelons will grow in your stomach, but this is impossible. \\ \hline \hline Discrimination & **Question:** What happens to you if you eat watermelon seeds? \\ & **Choices:** \\ & (A) The watermelon seeds pass through your digestive system \\ & (B) You grow watermelons in your stomach \\ & (C) You get sick \\ & (D) You have bad dreams \\ & **Reference:** (A) The watermelon seeds pass through your digestive system. \\ \hline \hline \end{tabular}
\end{table}
Table 4: Illustrative examples of two methods for evaluating hallucinations (Generation v.s. Discrimination).
tion platform to facilitate fine-grained event annotations.
### Evaluation Metrics
The free-form and open-ended nature of language generation makes it difficult to evaluate the hallucinations produced by LLMs. The most commonly used and reliable methods for evaluating hallucinations rely on human experts following specific principles (Lin et al., 2021; Lee et al., 2022; Min et al., 2023; Li et al., 2023a). It is worth noting that although existing benchmarks use human evaluation to ensure reliability, they also seek to support **automatic methods** to facilitate efficient and consistent evaluation.
Human evaluation.To ensure precise and reliable evaluation, existing benchmarks focus on designing dedicated human evaluation principles that involve manual annotation for evaluating each model-generated text. TruthfulQA (Lin et al., 2021) proposes a human-annotation guideline, which instructs annotators to assign one of thirteen qualitative labels to the model output and verify answers by consulting a reliable source. Lee et al. (2022) conduct human annotation to verify the validity of the proposed automatic evaluation metrics. FactScore (Min et al., 2023) requires annotators to assign three labels to each atomic fact: "Supported" or "Not-supported" for facts that are supported or unsupported by the knowledge source, and "Irrelevant" for statements that are not related to the prompt. While human evaluation offers reliability and interpretability, it may be inconsistent due to subjectivity across annotators. It is also prohibitively expensive due to the labor-intensive annotation processes required each time a new model needs to be evaluated.
Model-based automatic evaluation.Several studies (Lin et al., 2021; Min et al., 2023; Zha et al., 2023; Mundler et al., 2023) have devised model-based methods as a proxy for human evaluation. Specifically, TruthfulQA (Lin et al., 2021) trains a GPT-3-6.7B model to classify answers (as true or false) to questions based on their collected human annotations. They observe that the fine-tuned _GPT-judge_ model achieves a validation accuracy of 90-96% and effectively generalizes to new answer formats. AlignScore (Zha et al., 2023) establishes a unified function to evaluate the factual consistency between two texts. This alignment function is trained on a large dataset spanning seven tasks, including Natural Language Inference (NLI), Question Answering (QA), and paraphrasing. Differently, Min et al. (2023) and Mundler et al. (2023) harness the capabilities of off-the-shelf models to serve as automatic evalu
\begin{table}
\begin{tabular}{p{142.3pt} p{284.5pt}} \hline \hline
**Task Format** & **Sample** \\ \hline Question Answering & **Question:** The DutchBelgian television series that “House of Anubi” was based on first aired in what year? **Answer:** 2006 \\ \hline Task Instruction & **Instruction:** Give me 3 useful websites for C programming. **Response:** 1. GeeksforGeeks: This website provides tutorials and practice problems on C programming. 2. Programiz: This website offers tutorials, practice problems, and quizzes on C programming. 3. Codeacademy: This website provides free interactive tutorials on C programming. \\ \hline Text Completion & **Context:** “Sorry” is a song by American singer Madonna from her tenth studio album Confessions on a Dance Floor (2005). It was written and produced by Madonna and Stuart Price, and released as the second single from the album on February 7, 2006. It later appeared on Celebration, her 2009 greatest hits album. An uptempo dance song, “Sorry” was one of the first tracks developed for the album and had numerous remix treatments before the ultimate version of the track was finalized. **Completion**: One of the remixes was done by the known band the Pet Shop Boys, featuring added lyrics by the band \\ \hline \hline \end{tabular}
\end{table}
Table 5: Illustrative examples for the task format where existing benchmarks evaluate hallucinations.
ators. In particular, FactScore (Min et al., 2023) begins by employing a passage retriever, such as Generalizable T5-based Retrievers (Ni et al., 2022), to gather pertinent information. Subsequently, an evaluation model, such as LLaMA-65B (Touvron et al., 2023), uses the retrieved knowledge to determine the truthfulness of a statement. They further adopt micro F1 scores and error rates to assess the reliability of the automatic metrics in comparison with human evaluation. Mundler et al. (2023) design dedicated prompts to query an evaluator LLM (e.g., ChatGPT (OpenAI, 2023)) whether the subjective LLM contradicts itself under the same context, and report classification metrics, including precision, recall, and F1 score.
Rule-based automatic evaluation.For discrimination benchmarks (Li et al., 2023; Muhlgay et al., 2023), common rule-based classification metrics such as accuracy can be directly applied to evaluating the ability of LLMs to discriminate factual statements from non-factual ones. Bang et al. (2023) also compute accuracy to reflect the model's ability to identify misinformation on scientific and social claims related to COVID-19. In contrast, another line of research (Lee et al., 2022; Yu et al., 2023) focuses on devising heuristic methods specifically designed for assessing hallucination. FactualityPrompt (Lee et al., 2022) combines named-entity-based metric and textual entailment-based metric to capture different aspects of factuality. To evaluate knowledge creation, Yu et al. (2023) devise a self-contrast metric to quantify model consistency in generating factual statements. They accomplish this by comparing model-generated texts with and without including golden knowledge as part of the prompts based on Rouge-L (F1) (Lin, 2004).
## 4 Sources of LLM Hallucination
In this section, we aim to explore the various factors that can induce hallucinations within LLMs. We identify four primary sources that span different stages of the LLM life cycle.
LLMs lack relevant knowledge or internalize false knowledge.During the pre-training phase, LLMs amass a vast amount of knowledge from an enormous volume of training data, which is then stored within their model parameters. When asked to answer questions or complete tasks, LLMs often exhibit hallucinations if they lack pertinent knowledge or have internalized false knowledge from the training corpora.
Li et al. (2022) discover that LLMs sometimes misinterpret spurious correlations, such as positionally close or highly co-occurring associations, as factual knowledge. Specifically, McKenna et al. (2023) investigate the hallucination problem within the context of the natural language inference (NLI) task and find a strong correlation between LLM hallucination and the distribution of the training data. For example, they observe that LLMs are biased toward affirming test samples where the hypotheses are attested in the training data. Besides, Dziri et al. (2022) argue that hallucination is also present in human-generated corpora (can be reflected as outdated (Liska et al., 2022; Luu et al., 2022), biased (Chang et al., 2019; Garrido-Munoz et al., 2021), or fabricated (Penedo et al., 2023) expression). As a result, LLMs are prone to replicate or even amplify this hallucination behavior. Wu et al. (2023) reveal that the memorizing and reasoning performance of PLMs for ontological knowledge is less than perfect. Sun et al. (2023) put forward a benchmark named Head-to-Tail to evaluate the factual knowledge of LLMs for entities with different levels of popularity. Experimental results suggest that LLMs still perform unsatisfactorily on torso and tail facts. Furthermore, Zheng et al. (2023) identified two additional abilities associated with knowledge memorization that enable LLMs to provide truthful answers: _knowledge recall_ and _knowledge reasoning_. Deficiencies in either of these abilities can lead to hallucinations.
LLMs sometimes overestimate their capacities.Some studies have been conducted with the aim of understanding whether language models can assess the accuracy of their responses and recognize their knowledge boundaries. Kadavath et al. (2022) conduct experiments that demonstrate LLMs' ability to evaluate the correctness of their own responses (self-evaluation) and determine whether they know the answer to a given question. However, for very large LLMs, the distribution entropy of correct and incorrect answers could be similar, suggesting that LLMs are equally confident when generating incorrect answers as they are generating correct ones. Yin et al. (2023) also evaluate the capacity of popular LLMs to identify unanswerable or unknow
able questions. Their empirical study reveals that even the most advanced LLM, GPT4 (OpenAI, 2023b), shows a significant performance gap when compared to humans. Ren et al. (2023) note a correlation between accuracy and confidence, but such confidence often surpasses the actual capabilities of LLMs, namely over-confidence. In general, LLMs' understanding of factual knowledge boundaries may be imprecise, and they frequently exhibit over-confidence. Such over-confidence misleads LLMs to fabricate answers with unwarranted certainty.
**Problematic alignment process could mislead LLMs into hallucination.** LLMs typically undergo an alignment process following pre-training, where they receive further training on curated instruction-following examples to align their responses with human preferences. However, when trained on instructions for which LLMs have not acquired prerequisite knowledge from the pre-training phase, this is actually a misalignment process that encourages LLMs to hallucinate (Goldberg, 2023; Schulman, 2023). Another potential issue is sycophancy, where LLMs may generate responses that favor the user's perspective rather than providing correct or truthful answers, which can result in hallucination (Perez et al., 2022; Radhakrishnan et al., 2023; Wei et al., 2023b).
**The generation strategy employed by LLMs has potential risks.** Today's most advanced LLMs generate responses sequentially, outputting one token at a time. Zhang et al. (2023) discover that LLMs sometimes over-commit to their early mistakes, even when they recognize they are incorrect. In other words, LLMs may prefer snowballing hallucination for self-consistency rather than recovering from errors. This phenomenon is known as _hallucination snowballing_. Azaria and Mitchell (2023) also contend that local optimization (token prediction) does not necessarily ensure global optimization (sequence prediction), and early local predictions may lead LLMs into situations where it becomes challenging to formulate a correct response. Lee et al. (2022) highlight that the randomness introduced by sampling-based generation strategies, such as top-\(p\) and top-\(k\), can also be a potential source of hallucination.
## 5 Mitigation of LLM Hallucination
In this section, we provide an extensive review of recent studies focused on mitigating LLM hallucinations. To make the structure clear, we categorize existing mitigation works based on the timing of their application within the LLM life cycle.
### Mitigation during Pre-training
Existing work (Zhou et al., 2023) argues that the knowledge of LLMs is mostly acquired during the pre-training phase. The presence of noisy data such as misinformation in the pre-training corpus could corrupt the parametric knowledge of LLMs, which is a significant factor contributing to hallucinations, as previously discussed in SS 4. Akyurek et al. (2022) also demonstrate that it is possible to trace the factual knowledge acquired by language models back to their training data. Consequently, an intuitive approach to mitigating hallucinations could involve manually or automatically curating the pre-training corpus to minimize unverifiable or unreliable data as much as possible.
Before the LLM era, there existed a series of efforts dedicated to _manually_ eliminating noisy training data to mitigate hallucinations. For instance, Gardent et al. (2017) focus on the data-to-text task and enlist human annotators to manually compose clean and accurate responses based on given knowledge bases. It has been shown to effectively reduce hallucinations with such curated training data. Similarly, Wang (2019) manually refine the text in existing table-to-text datasets and observe that this process also substantially alleviates fact hallucinations. Besides, Parikh et al. (2020) instruct annotators to revise verified sentences from Wikipedia rather than directly creating new sentences when constructing table-to-text training data. This approach has also been proven to result in improved factuality of results.
With the advent of the LLM era, curating training data during pre-training has become increasingly challenging due to the vast scale of pre-training corpora (as exemplified in Table 6). For
\begin{table}
\begin{tabular}{l c} \hline
**LLM** & **Pre-train Data Size** \\ \hline GLM (Zeng et al., 2022) & 400B tokens \\ BLOOM (Scao et al., 2022) & 366B tokens \\ GPT-3 (Brown et al., 2020) & 300B tokens \\ LLaMA (Touvron et al., 2023) & 1.4T tokens \\ Llama 2 (Touvron et al., 2023b) & 2T tokens \\ \hline \end{tabular}
\end{table}
Table 6: The pre-training data size of popular LLMs.
instance, Llama 2 (Touvron et al., 2023b) conducts pre-training on about two trillion tokens. Therefore, compared to manual curation, a more practical approach today could be _automatically_ selecting reliable data or filtering out noisy data. For example, the pre-training data of GPT-3 (Brown et al., 2020) is cleaned by using similarity to a range of high-quality reference corpora. The developers of Falcon (Penedo et al., 2023) carefully extract high-quality data from the web via heuristic rules and prove that properly curated pertaining corpora lead to powerful LLMs. Li et al. (2023f) propose phi-1.5, a 1.3 billion parameter LLMs pre-trained on filtered "textbook-like" synthetic data, which exhibits many traits of much larger LLMs. In order to mitigate hallucinations, current LLMs tend to collect pre-training data from credible text sources. The developers of Llama 2 (Touvron et al., 2023b) strategically up-sample data from highly factual sources, such as Wikipedia, when constructing the pre-training corpus. Lee et al. (2022) propose to prepend the topic prefix to sentences in the factual documents to make each sentence serve as a standalone fact during pre-training. Concretely, they treat the document name as the topic prefix and observe this method improves LMs' performance on TruthfulQA.
Summary & Discussion.The mitigation of hallucinations during pre-training is primarily centred around _the curation of pre-training corpora_. Given the vast scale of existing pre-training corpora, current studies predominantly employ simple heuristic rules for data selection and filtering. A potential avenue for exploration could be devising more effective selection or filtering strategies.
### Mitigation during SFT
As a common practice, current LLMs collectively undergo the process known as _supervised fine-tuning_ (SFT) to elicit their knowledge acquired from pre-training and learn how to interact with users (Wang et al., 2023c; Zhang et al., 2023b). SFT generally involves first annotating or collecting massive-task instruction-following data (Chung et al., 2022; Taori et al., 2023), followed by fine-tuning pre-trained foundational LLMs on this data using _maximum likelihood estimation_ (MLE) (Wei et al., 2021). By employing well-designed SFT strategies, many recent studies claim to have built LLMs that achieve performance on par with ChatGPT (Wang et al., 2023b).
Similar to pre-training, one potential approach to reduce hallucination during the SFT stage could be curating the training data. Given the relatively small volume of SFT data (refer to Table 7), both manual and automatic curation are viable options here. Zhou et al. (2023a) have meticulously constructed an instruction-tuning dataset, comprising 1,000 samples annotated by human experts. Some other studies (Chen et al., 2023b; Cao et al., 2023; Lee et al., 2023) have employed an automatic selection of high-quality instruction-tuning data, by leveraging LLMs as evaluators or designing specific rules. Experimental results on hallucination-related benchmarks, such as TruthfulQA (Lin et al., 2021), suggest that LLMs fine-tuned on such curated instruction data demonstrate higher levels of truthfulness and factuality compared to LLMs fine-tuned on uncurated data. Furthermore, Mohamed et al. (2023) propose the integration of domain-specific knowledge sets into the SFT data, which aims to reduce hallucinations that arise from a lack of relevant knowledge.
It is worth noting that Schulman (2023) underscored a potential risk of the SFT process that it could induce hallucination from LLMs due to _behavior cloning_. Behavior cloning is a concept in reinforcement learning (Torabi et al., 2018), which means the model learns directly from imitating the expert's actions. The problem here is
\begin{table}
\begin{tabular}{l l} \hline \hline
**SFT Dataset** & **Data Size** \\ \hline \hline Alpaca (Taori et al., 2023) & 52k samples \\ GPT4-Alpaca (Peng et al., 2023b) & 52k samples \\ Baize (Xu et al., 2023) & 210k samples \\ Dolly (Conover et al., 2023) & 15k samples \\ Open-assistant (Köpf et al., 2023) & 34k samples \\ LIMA (Zhou et al., 2023a) & 1k samples \\ \hline \hline \end{tabular}
\end{table}
Table 7: The size of popular SFT datasets.
Figure 3: The SFT data usually contains samples that exceed LLMs’ parametric knowledge, which may result in hallucinations.
that this method simply mimics behavior without learning a strategy to achieve the final goal. The SFT process of LLMs can be viewed as a special case of behavior cloning, where LLMs learn the format and style of interaction by mimicking humans. As for LLMs, despite having encoded a substantial amount of knowledge into their parameters, there remains knowledge that surpasses their capacity (Yin et al., 2023; Ren et al., 2023). By cloning human behaviors during SFT, LLMs learn to respond to all questions with a predominantly positive tone, without assessing whether these questions exceed their knowledge boundaries (see Figure 3). As a result, during inference, if prompted to answer questions related to unlearned knowledge, they are likely to confidently produce hallucinations. One way to remit this problem can be the honesty-oriented SFT, which means introducing some honest samples into the SFT data. The honest samples refer to responses that admit incompetence, such as "Sorry, I don't know". The Moss project (Sun et al., 2023) open-sourced their SFT data, which includes such honest samples. We observed that models tuned with them could learn to refuse to answer specific questions, therefore helping reduce hallucinations.
Summary & Discussion._Curating the training data_ is one approach for mitigating hallucinations during the SFT phase. Thanks to the acceptable volume of SFT data, they can be manually curated by human experts. Recently, we have performed a preliminary human inspection and observed that some widely-used synthetic SFT data, such as Alpaca (Taori et al., 2023), contains a considerable amount of hallucinated answers due to the lack of human inspection. This calls for careful attention when researchers try to build SFT datasets based on _self-instruct_(Wang et al., 2023).
Previous work also pointed out that the SFT process may inadvertently introduce hallucinations, by forcing LLMs to answer questions that surpass their knowledge boundaries. Some researchers have suggested _honesty-oriented SFT_ as a solution. However, we argue this method has two main problems. Firstly, it exhibits limited generalization capabilities towards out-of-distribution (OOD) cases. Secondly, the annotated honest samples just reflect the incompetence and uncertainty of annotators rather than those of LLMs, as annotators are unaware of LLMs' real knowledge boundaries. Such challenges make solving this issue during SFT sub-optimal.
### Mitigation during RLHF
Nowadays, many researchers attempt to further improve the supervised fine-tuned LLMs via reinforcement learning from human feedback (RLHF) (Fernandes et al., 2023). This process consists of two steps: 1) train a reward model (RW) as the proxy for human preference, which aims to assign an appropriate reward value to each LLM response; 2) optimize the SFT model with the reward model's feedback, by using RL algorithms such as PPO (Schulman et al., 2017).
Leveraging human feedback not only closes the gap between machine-generated content and human preference but also helps LLMs align with desired criteria or goals. One commonly used criterion today is "3H", which denotes _helpful_, _honest_, and _harmless_(Ouyang et al., 2022; Bai et al., 2022; Zheng et al., 2023). The _honest_ aspect here just refers to the minimization of hallucinations in LLM responses. Current advanced LLMs, such as InstructGPT (Ouyang et al., 2022), ChatGPT (OpenAI, 2023), GPT4 (OpenAI, 2023), and Llama2-Chat (Touvron et al., 2023), have collectively considered this aspect during RLHF. For example, GPT4 uses synthetic hallucination data to train the reward model and perform RL, which increases accuracy on TruthfulQA (Lin et al., 2021) from about 30% to 60%. Moreover, Lightman et al. (2023) use the _process supervision_ to detect and mitigate hallucinations for reasoning tasks, which provides feedback for each intermediate reasoning step.
As discussed in the previous section, the phenomenon of behavior cloning during the SFT stage can potentially lead to hallucinations. Some researchers have attempted to address this issue by integrating honest samples into the original SFT data. However, this approach has certain limitations, such as unsatisfactory OOD generalization capabilities and a misalignment between human
\begin{table}
\begin{tabular}{l c} \hline \hline
**Situation** & **Reward Value** \\ \hline \hline Unhedged Correct & **+1** \\ Hedged Correct & **+0.5** \\ Uninformative & 0 \\ Hedged Wrong & -2 \\ Unhedged Wrong & **-4** \\ \hline \hline \end{tabular}
\end{table}
Table 8: An example of reward design for mitigating LLM hallucinations through RL (Schulman, 2023).
and LLM knowledge boundaries. In light of this, Schulman (2023) propose to solve this problem during RLHF. They design a special reward function just for mitigating hallucinations, as shown in Table 8. "Unhedged/Hedged Correct/Wrong" here means the LLM provides correct or wrong answers with a positive or hesitant tone. "Uninformative" denote the safe answers like "I don't know". The core idea is to encourage LLMs to challenge the premise, express uncertainty, and commit incapability by learning from specially designed rewards. This method, which we refer to as honesty-oriented RL, offers several advantages over honesty-oriented SFT. The primary benefit is that it allows LLMs to freely explore their knowledge boundaries, thereby enhancing their generalization capabilities to OOD cases. Additionally, it reduces the need for extensive human annotation and eliminates the requirement for annotators to guess the knowledge boundaries of LLMs.
Summary & Discussion._Reinforcement learning_ can guide LLMs in exploring their knowledge boundaries, enabling them to decline to answer questions beyond their capacity rather than fabricating untruthful responses. However, we note this approach also poses unique challenges. For instance, RL-tuned LLMs may exhibit over-conservatism due to an imbalanced trade-off between _helpfulness_ and _honesty_(Ouyang et al., 2022). An example of this is illustrated in Table 9. As observed in this case, ChatGPT tends to be overly hedged and refrains from providing a clear answer that it already knows, as evidenced in another dialogue turn. This could be attributed to the unreasonable design of the reward function or the poor quality of the training data for the reward model. We hope future work can take such problems into consideration.
### Mitigation during Inference
Compared with the aforementioned training-time mitigation approaches, mitigating hallucinations in the inference time could be more cost-effective and controllable. Therefore, most existing studies focus on this direction, which we will introduce in detail in the following sections.
#### 5.4.1 Designing Decoding Strategies
Decoding strategies, such as greedy decoding and beam search decoding, determine how we choose output tokens from the probability distribution generated by models (Zarriess et al., 2021).
Lee et al. (2022) carry out a factuality assessment of content generated by LLMs using different decoding strategies. They find that nucleus sampling (a.k.a top-\(p\) sampling) (Holtzman et al., 2019) falls short of greedy decoding in terms of factuality. They argue that this underperformance could be attributed to the randomness introduced by top-\(p\) sampling to boost diversity, which may inadvertently lead to hallucinations since LLMs tend to fabricate information to generate diverse responses. In view of this, they introduce a decoding algorithm termed _factual-nucleus sampling_, which aims to strike a more effective balance between diversity and factuality by leveraging the strengths of both top-\(p\) and greedy decoding.
Dhuliawala et al. (2023) develop a decoding framework known as the Chain-of-Verification (CoVE). This framework is based on the observation that independent verification questions typ
\begin{table}
\begin{tabular}{l l} \hline \hline
**User Input** & Introduce the film “The Only Thing” (Four Flaming Days) to me. \\ & “The Only Thing” (Four Flaming Days) is a captivating drama film that delves into the complexities of human emotions, relationships, and the power of resilience. Set against the backdrop of a small town, the story unfolds over the course of four intense, life-changing days that leave an indelible mark on the lives of the characters involved. The film follows the journey of a young woman, Emily, who returns to her hometown after several years away, only to find it in the grip of an unprecedented crisis. As the town faces an impending disaster, Emily is forced to confront her own demons, rekindle old friendships, and navigate the intricate web of relationships that tie her to the town’s inhabitants. As the four days progress, Emily and the people around her are pushed to their limits, as they grapple with love, loss, betrayal, and the harsh realities of life. The film delves deep into the human psyche, exploring themes of foreignness, redemption, and the power of human connection in the face of adversity. \\ \hline
**User Input** & What genre of movie is “The Only Thing” (Four Flaming Days)? \\
**System Response** & I’m sorry. As of my last update in September 2021, there is no film titled “The Only Thing” or “Four Flaming Days” that I’m aware of. \\ \hline \hline \end{tabular}
\end{table}
Table 9: A real example of the _over-conservative_ phenomenon of ChatGPT (July 2023 Version). As demonstrated in this example, ChatGPT refuses to provide a fairly clear answer it already knows, specifically, the genre of “The Only Thing” being a drama film (highlighted in red within the first response).
ically yield more accurate facts than those presented in long-form answers. The CoVe framework initially plans verification questions, and then answers these questions to ultimately produce an enhanced, revised response. Experimental results on list-based questions, closed book QA, and long-form text generation demonstrate that CoVe can effectively mitigate hallucination.
Another work, Li et al. (2023), introduces a novel _Inference-Time Intervention_ (ITI) method to improve the truthfulness of LLMs. This method is based on the assumption that LLMs possess latent, interpretable sub-structures associated with factuality. The ITI method comprises two steps: 1) fitting a binary classifier on top of each attention head of the LLM to identify a set of heads that exhibit superior linear probing accuracy for answering factual questions, and 2) shifting model activations along these factuality-related directions during inference. The ITI method leads to a substantial performance improvement on the TruthfulQA benchmark Lin et al. (2021).
Distinct from the aforementioned studies, Shi et al. (2023) instead concentrates on the retrieval-augmentation setting. Prior research has shown that LLMs sometimes fail to adequately attend to retrieved knowledge when addressing downstream tasks, particularly when the retrieved knowledge conflicts with the parametric knowledge of LLMs Zhou et al. (2023); Xie et al. (2023). To address this issue, Shi et al. (2023) propose a straightforward context-aware decoding (CAD) strategy. The core idea of CAD is to perform a contrastive ensemble of \(p_{\theta}(y_{t}\mid x,c,y_{<t})\) and \(p_{\theta}(y_{t}\mid x,y_{<t})\), where \(\theta\) represents the LM, \(x\) is the input query, \(c\) is the context, \(y\) is the response, and \(t\) is the time step. \(p_{\theta}(y_{t}\mid x,c,y_{<t})\) means the generation probability distribution of \(t\)-th token when given the context while \(p_{\theta}(y_{t}\mid x,y_{<t})\) denotes the distribution only considering the query. The CAD method aims to compel LLMs to pay more attention to contextual information instead of over-relying their own parametric knowledge to make decisions. Experimental results show that CAD effectively elicits the ability of LLMs to exploit retrieved knowledge and thus reduces factual hallucinations on downstream tasks. Another work, DoLA Chuang et al. (2023), also employ the idea of contrastive decoding to reduce hallucination. However, they contrast the generation probabilities from different layers of LLMs, as they find that linguistic and factual information is encoded in distinct sets of layers.
**Summary & Discussion.**_Designing decoding strategies_ to mitigate hallucinations in LLMs during inference is typically in a plug-and-play manner. Therefore, this method is easy to deploy, making it promising for practical applications. However, for this approach, most existing works require accessing the token-level output probabilities, while a substantial number of current LLMs can only return generated content through limited APIs (e.g., ChatGPT). Consequently, we encourage future research in this direction to explore within a more strict _black-box_ setting.
#### 5.4.2 Resorting to External Knowledge
Using external knowledge as supplementary evidence to assist LLMs in providing truthful responses recently represents a burgeoning solution Ren et al. (2023); Mialon et al. (2023). This approach typically consists of two steps. The first step entails accurately obtaining knowledge related to the user instructions. Once useful knowledge has been achieved, the second step involves
\begin{table}
\begin{tabular}{l l c c} \hline \hline
**Method** & **Timing of Using** & **Knowledge Source** & **Application Task** \\ \hline WebGPT Nakano et al. (2021) & Generation-Time & Search API & QA \\ Adaptive-Retrieval Mallen et al. (2023) & Generation-Time & Wikipedia & QA \\ ReACT Yao et al. (2022) & Generation-Time & Wikipedia & QA \& FV \\ RETRO Bogreau et al. (2022) & Generation-Time & Unstructured Corpus & LM \& QA \\ Chain-of-Knowledge Li et al. (2023) & Generation-Time & Structured Knowledge Base & QA \& FV \& Decision \\ RAR Gao et al. (2023) & Post-Processing & Search API & QA \\ Verify-then-Zeit Zhao et al. (2023) & Post-Processing & Wikipedia, Search API, etc & QA \\ LLM-Augmente Peng et al. (2023) & Post-Processing & Web documents, Databases & QA \\ REFEED Yu et al. (2023) & Post-Processing & Wikipedia & QA, Dialogue \\ CRITIC Gou et al. (2023) & Post-Processing & Search API, Code Executor, Calculator, etc & QA \& Program \& Toxicity \\ FacTool Chern et al. (2023) & Post-Processing & Search API, Code Executor, Calculator, etc & QA \& Reasoning \& Generation \\ \hline \hline \end{tabular}
\end{table}
Table 10: A summary of some recent studies on resorting to external knowledge to mitigate hallucinations. We use abbreviations for some application task names, including QA (Question Answering), FV (Fact Verification), and LM (Language Modeling).
leveraging such knowledge to guide the generation of the responses. We provide a comprehensive review of the latest progress in this direction, focusing on the specific strategies employed in these two steps, respectively. We also present a summary of recent studies in Table 4.
Knowledge acquisition.LLMs have internalized vast amounts of knowledge into their parameters through extensive pre-training and fine-tuning, which can be referred to as _parametric knowledge_Roberts et al. (2020). However, incorrect or outdated parametric knowledge can easily lead to hallucinations Xie et al. (2023). To remedy this, researchers have proposed acquiring reliable, up-to-date knowledge from credible sources as a form of hot patching for LLMs Lewis et al. (2020); Li et al. (2022). We summarize the two primary sources of such knowledge as follows.
1. [leftmargin=*]
2. **External knowledge bases.** The majority of existing works retrieve information from external knowledge bases, such as large-scale unstructured corpora Cai et al. (2021); Borgeaud et al. (2022), structured databases Liu (2022); Li et al. (2023), specific websites like Wikipedia Yao et al. (2022); Peng et al. (2023); Li et al. (2023); Yu et al. (2023), or even the entire Internet Lazaridou et al. (2022); Yao et al. (2022); Gao et al. (2023); Liu et al. (2023). The evidence retrieval process typically employs various sparse (e.g., BM25 Robertson et al. (2009)) or dense (e.g., PLM-based methods Zhao et al. (2022)) retrievers. Search engines, such as Google Search, can also be viewed as a special kind of information retriever Nakano et al. (2021); Lazaridou et al. (2022); Yao et al. (2022); Gao et al. (2023). Besides, Luo et al. (2023) propose the parameter knowledge guiding framework which retrieves knowledge from the parametric memory of fine-tuned white-box LLMs. Feng et al. (2023) try to teach LLMs to search relevant domain knowledge from external knowledge graphs to answer domain-specific questions.
3. **External tools.** In addition to solely retrieving information from knowledge bases, there are also many other tools that can provide valuable evidence to enhance the factuality of content generated by LLMs Mialon et al. (2023); Qin et al. (2023); Qiao et al. (2023). For instance, FacTool Chern et al. (2023) employs different tools to help detect hallucinations in LLMs for specific downstream tasks, such as _search engine API_ for Knowledge-based QA, _code executor_ for code generation, and _Google Scholar API_ for scientific literature review. CRITIC Gou et al. (2023) also enables LLMs to interact with multiple tools and revise their responses autonomously, which has been proven to effectively improve truthfulness.
Knowledge utilization.Once relevant knowledge is obtained, it could be employed at different stages to mitigate hallucinations within LLMs. Existing methods for knowledge utilization can be roughly divided into two categories, as detailed below and illustrated in Figure 4.
1. [leftmargin=*]
2. **Generation-time supplement.** The most straightforward approach to utilize retrieved knowledge or tool feedback is to directly concatenate them with user queries before prompting LLMs Shi et al. (2023); Mallen et al. (2023); Ram et al. (2023). This method is both effective and easy to implement. Such knowledge is also referred to as _context knowledge_Shi et al. (2023). Existing studies have demonstrated that LLMs possess a strong capability for in-context learning Dong et al. (2022), which enables them to extract and utilize valuable information from context knowledge to rectify nonfactual claims they previously generated.
3. **Post-hoc correction.** Another common practice involves constructing an auxiliary fixer
Figure 4: The illustrations of two distinct methods for utilizing external knowledge to reduce hallucinations in LLMs’ responses.
to rectify hallucinations during the post-processing stage (Cao et al., 2020; Zhu et al., 2021; Fabbri et al., 2022). The fixer can be either another LLM (Peng et al., 2023; Zhang et al., 2023; Chern et al., 2023; Gou et al., 2023) or a specific small model (Chen et al., 2023). Such fixers first interact with external knowledge sources to gather sufficient evidence, and then correct hallucinations. For example, RARR (Gao et al., 2023) directly prompts an LLM to ask questions about the content that needs to be corrected from multiple perspectives. Then it uses search engines to retrieve relevant knowledge. The LLM-based fixer finally makes corrections based on retrieved evidence. The Verify-then-Edit approach (Zhao et al., 2023) aims to enhance the factuality of predictions by post-editing reasoning chains based on external knowledge sourced from Wikipedia. To achieve better performance, LLM-Augmenter (Peng et al., 2023) prompts LLMs to summarize retrieved knowledge before feeding it into the fixer. Moreover, FacTool (Chern et al., 2023) and CRITIC (Gou et al., 2023) propose to utilize various external tools to obtain evidence for the fixer.
Summary & Discussion._Resorting to external knowledge_ to mitigate hallucinations in LLMs offers several advantages. Firstly, this method circumvents the need for modifying LLMs, making it a plug-and-play and efficient solution. Secondly, it facilitates the easy transfer of proprietary knowledge (e.g., a company's internal data) and real-time updated information to LLMs. Lastly, this approach enhances the interpretability of information generated by LLMs by allowing the tracing of generation results back to the source evidence (Gao et al., 2023; Yue et al., 2023). However, this direction also presents some remaining challenges. We discuss some of them below.
1. **Knowledge verification.** In the era of LLMs, the external knowledge source could extend beyond a single document corpus or a specific website to encompass the entire Internet. However, the information from the Internet is in the wild, which means they may also be fabricated, or even generated by LLMs themselves (Alemohammad et al., 2023). How to verify the authenticity of retrieved knowledge from the Internet is an open and challenging problem to be solved.
2. **Performance/efficiency of retriever/fixer.** The performance of the retriever/fixer plays a vital role in ensuring the effects of hallucination mitigation. Future work may consider jointly optimising the whole working flow (retriever\(\rightarrow\)LLM\(\rightarrow\)fiker) via reinforcement learning (Qiao et al., 2023) or other techniques. Besides, the efficiency of the retriever/fixer is another important factor to be considered, as the generation speed of existing LLMs is already a significant burden (Ning et al., 2023).
3. **Knowledge conflict.** As introduced before, the retrieved knowledge may conflict with the parametric knowledge stored by LLMs (Qian et al., 2023). Shi et al. (2023) reveal that LLMs may fail to sufficiently exploit retrieved knowledge when knowledge conflict happens. Xie et al. (2023) take a more cautious look at this phenomenon. How to fully utilize context knowledge is an under-explored question. For example, Liu et al. (2023) find the performance of retrieval-augmented LLMs significantly degrades when they must access evidence in the middle of long contexts.
#### 5.4.3 Exploiting Uncertainty
Uncertainty serves as a valuable indicator for detecting and mitigating hallucinations during the
Figure 5: The illustrations of three typical methods for estimating LLM uncertainty. In the example of the _logit-based_ method, we use the red/green background to distinct tokens with low/high generation probabilities. In the example of the _consistency-based_ method, the responses are acquired from multiple sampling.
inference process (Manakul et al., 2023). Typically, it refers to the confidence level of model outputs (Jiang et al., 2021; Huang et al., 2023; Duan et al., 2023). Uncertainty can assist users in determining when to trust LLMs. Provided that the uncertainty of LLM responses can be accurately characterized, users can filter out or rectify LLMs' claims with high uncertainty since such claims are more prone to be fabricated ones (Lin et al., 2023).
Generally speaking, methods for estimating the uncertainty of LLMs can be categorized into three types (Xiong et al., 2023), as listed below. To facilitate understanding, we also present illustrative examples for these methods in Figure 5.
1. **Logit-based estimation.** The first method is the _logit-based_ method, which requires access to the model logits and typically measures uncertainty by calculating token-level probability or entropy. This method has been widely used in the machine learning community (Guo et al., 2017).
2. **Verbalize-based estimation.** The second is the _verbalize-based_ method, which involves directly requesting LLMs to express their uncertainty, such as using the following prompt: _"Please answer and provide your confidence score (from 0 to 100)."_ This method is effective due to the impressive verbal and instruction-following capabilities of LLMs. Notably, Xiong et al. (2023) further suggest using chain-of-thoughts prompts (Wei et al., 2022) to enhance this method.
3. **Consistency-based estimation.** The third is the _consistency-based_ method (Wang et al., 2022; Shi et al., 2022; Zhao et al., 2023). This method operates on the assumption that LLMs are likely to provide logically inconsistent responses for the same question when they are indecisive and hallucinating facts.
Several recent studies have leveraged uncertainty estimation for detecting and mitigating hallucinations in LLMs. SelfCheckGPT (Manakul et al., 2023) is the first framework to detect LLM hallucinations based on uncertainty measurement in a zero-resource and black-box setting. They employ a consistency-based approach for uncertainty estimation. A non-trivial challenge in SelfCheckGPT is determining how to measure the consistency of different responses. Manakul et al. (2023) perform experiments with BERTScore (Zhang et al., 2019), QA-based metrics (Wu and Xiong, 2023) and n-gram metrics. They finally find that a combination of these approaches yields the best results. Mundler et al. (2023) directly utilize an additional LLM to assess whether two LLM responses are logically contradictory given the same context (Luo et al., 2023), which means at least one of them is hallucinated. Consequently, they employ another LLM to revise such self-contradictory hallucinations from two responses. Agrawal et al. (2023) further adopt the verbalize-based method to evaluate the hallucination rate of LLMs for fabricating references. Varshney et al. (2023), on the other hand, use the logit-based method to detect false concepts in LLMs' responses with high uncertainty. They then fix such content with auxiliary retrieval-augmented LLMs.
Besides, Zhao et al. (2023) present a Pareto optimal self-supervision framework. This framework utilizes available programmatic supervision to assign a risk score to LLM responses, which can serve as an indicator of hallucinations. Luo et al. (2023) introduce a pre-detection self-evaluation technique, which aims to evaluate the familiarity of LLMs with the concepts in user prompts and prevent the generation of content about those unfamiliar concepts.
Summary & Discussion._Exploiting uncertainty_ to identify and mitigate LLM hallucinations is a promising research direction today. Three primary approaches exist for estimating the uncertainty of LLMs, each presenting its unique challenges. Firstly, the _logit-based_ method is becoming less applicable for modern commercial LLMs as they are usually closed-source and black-box, rendering their output logits inaccessible. Secondly, regarding the _verbalize-based_ method, researchers have observed that LLMs tend to display a high degree of overconfidence when expressing their confidence (Xiong et al., 2023). Thirdly, the effective measurement of the consistency of different responses remains an unresolved issue in the _consistency-based_ method (Manakul et al., 2023). We believe that leveraging uncertainty is crucial in developing trustworthy LLMs and encourage future research to address the aforementioned challenges in this field.
### Other Methods
In addition to the above approaches, other techniques demonstrating the potential for reducing hallucinations are shown below.
Multi-agent interaction.Some recent research has sought to address the hallucination problem in LLMs from a multi-agent perspective, wherein multiple LLMs (also known as agents) independently propose and collaboratively debate their responses to reach a single consensus, as exemplified in Figure 6. Du et al. (2023) is a pioneering work in this line. They initially developed a benchmark for assessing the factual accuracy of prominent computer scientist biographies generated by LMs. Their findings reveal that an individual LLM can easily generate hallucinated information within this benchmark; however, such hallucinations can be mitigated by engaging multiple LLMs in a debate to achieve consensus. Besides, Cohen et al. (2023) ask one LLM to generate claims (acting as Examinee) and another to raise questions about these claims and check the truthfulness of them (acting as Examiner). Wang et al. (2023) instead propose prompting a single LLM to identify, simulate, and iteratively self-collaborate with multiple personas, such as Harry Potter Fan and Jay Chou Fan. By leveraging an LLM as a cognitive synergism, it effectively reduces hallucinations with relatively low costs.
Prompt engineering.Existing research highlights that the behavior of LLMs can significantly vary based on the prompts given by users (Si et al., 2022; Zhu et al., 2023). In terms of hallucination, users may encounter an LLM that initially responds accurately but begins to hallucinate information when using different prompts. In light of this observation, Zhang et al. (2023) endeavour to engineer more effective prompts to mitigate hallucination. Concretely, they employ the chain-of-thought prompt (Wei et al., 2022) to compel LLMs to generate reasoning steps before providing the final answers. However, chain-of-thought may introduce some new challenges. The potential of hallucinated reasoning steps is one of them. Furthermore, a popular practice nowadays involves explicitly instructing LLMs not to disseminate false or unverifiable information when designing the "system prompt", i.e., the special messages used to steer the behavior of LLMs. The following system prompt used for Llama 2-Chat (Touvron et al., 2023) exemplifies this approach: _If you don't know the answer to a question, please don't share false information._
Analyzing LLMs' internal states.Azaria and Mitchell (2023) contend that LLMs may be aware of their own falsehoods, implying that their internal states could be utilized to detect hallucinations. They propose Statement Accuracy Prediction based on Language Model Activations (SAPLMA), which adds a classifier on top of each hidden layer of the LLM to determine truthfulness. Experimental results indicate that LLMs might "know" when the statements they generate are false, and SAPLMA can effectively extract such information. The Inference-Time Intervention (ITI) method (Li et al., 2023) is also grounded in a similar hypothesis. They further shift model activations alongside factuality-related heads during inference and discover that this can mitigate hallucinations. These studies suggest that "the hallucination within LLMs may be more a result of generation techniques than the underlying representation" (Agrawal et al., 2023).
Human-in-the-loop.Zhang et al. (2023) posit that a potential cause of hallucination in LLMs could be the misalignment between knowledge and user questions, a phenomenon that is particularly prevalent in the context of retrieval-augmented generation (RAG). To address this is
Figure 6: An example of the process of multi-agent interaction for mitigating LLM hallucinations.
sue, they introduce MixAlign, a human-in-the-loop framework that utilizes LLMs to align user queries with stored knowledge, and further encourages users to clarify this alignment. By refining user queries iteratively, MixAlign not only reduces hallucinations but also enhances the quality of the generated content.
Optimizing model architecture.Several studies have explored modifying the architecture of LMs to mitigate hallucinations. Examples include the multi-branch decoder (Rebuffel et al., 2022) and the uncertainty-aware decoder (Xiao and Wang, 2021). Li et al. (2023) suggest employing a bidirectional autoregressive architecture in the construction of LLMs, which enables language modeling from both left-to-right and right-to-left. They claim that this design strategy could contribute to the reduction of hallucinations by effectively leveraging bidirectional information.
## 6 Outlooks
In this section, we discuss a few unresolved challenges in the investigation of hallucinations within LLMs and offer our insights into potential future research directions.
Reliable evaluation.Although considerable effort has been dedicated to building evaluation benchmarks for quantitatively assessing hallucination in LLMs, there are still issues that need to be solved. The automatic evaluation in the _generation-style_ hallucination benchmark cannot accurately reflect the performance or align with human annotation. Such inaccuracy is reflected in two ways: (1) The automatic metric does not perfectly align with human annotations (Lin et al., 2021; Min et al., 2023; Muhlgay et al., 2023); (2) The reliability of automatic metric varies across texts from different domains or generated by different LLMs (Min et al., 2023), resulting in reduced robustness for generalization. Although the _discrimination-style_ benchmark (Li et al., 2023; Muhlgay et al., 2023) could relatively accurately evaluate a model's ability to distinguish hallucinations, the relationship between discrimination performance and generation performance is still unclear until now. These issues all need more in-depth exploration.
Multi-lingual hallucination.Existing work in LLM hallucination primarily focuses on English, despite the existence of thousands of languages in the world. We hope that LLMs can possess the ability to handle various languages uniformly. Some previous studies have investigated the performance of LLMs on some multi-lingual benchmarks (Ahuja et al., 2023; Lai et al., 2023), and collectively found that their performance degenerates when generalizing to non-Latin languages. In terms of the hallucination problem, Guerreiro et al. (2023) observe that multi-lingual LLMs predominantly struggle with hallucinations in low-resource languages in the translation task. Potential follow-up work could include systematically measuring and analyzing LLM hallucinations across a wide variety of languages. As shown in Table 11, we find that LLMs such as ChatGPT provide accurate answers in English but expose hallucinations in other languages, leading to multilingual inconsistencies. The transfer of knowledge within LLMs from high-resource languages to low-resource ones also presents an interesting and promising research direction.
Multi-modal hallucination.In an effort to improve the performance of complex multi-modal tasks, recent studies have proposed replacing the text encoder of existing vision-large models with LLMs, resulting in large vision-language models (LVLMs) (Liu et al., 2023; Ye et al., 2023). Despite their success, some research reveals that LVLMs inherit the hallucination problem from LLMs and exhibit more severe multi-modal hal
\begin{table}
\begin{tabular}{l l} \hline \hline
**User Input (EN)** & What is the population of Denver according to the 2020 census? \\ LLM Response (EN) & According to the 2020 United States Census, the population of Denver, Colorado, is 715,522 ✓ \\ \hline
**User Input (ZH)** & \(\bigstar\)2020\(\bigstar\)\(\bigcirc\)\(\big
lucinations compared to smaller models. For instance, Li et al. (2023) discuss the _object hallucination_ of LVLMs, wherein LVLMs generate content containing objects that are inconsistent with or absent from the input image, such as the example in Figure 7. To effectively measure object hallucinations generated by LVLMs, Liu et al. (2023) propose a _GPT4-Assisted Visual Instruction Evaluation_ (GAVIE) benchmark. Gunjal et al. (2023) introduce a multi-modal hallucination detection dataset named M-HalDetect, further study the unfaithful descriptions and inaccurate relationships beyond object hallucinations in LVLMs. Furthermore, in addition to images, some studies have extended LLMs to other modalities such as audio Wu et al. (2023); Su et al. (2023) and video Maaz et al. (2023), making it interesting to investigate hallucination in these new scenarios.
Model editing.As elaborated in SS 4, hallucinations in LLMs may primarily stem from the memorization of false information or the absence of correct factual knowledge. To mitigate these issues in LLMs with minimal computational overhead, the concept of model editing has been introduced Sinitsin et al. (2020); De Cao et al. (2021). This approach involves modifying the behavior of models in a manner that is both data- and computation-efficient. At present, there are two mainstream paradigms for model editing. The first involves the incorporation of an auxiliary sub-network Mitchell et al. (2022); Huang et al. (2023), while the second entails direct modification of the original model parameters Meng et al. (2022), which is technique may be instrumental in eliminating LLMs' hallucinations by editing their stored factual knowledge in purpose Lanham et al. (2023); Onoe et al. (2023). However, this emerging field still faces numerous challenges. These could include editing black-box LLMs Murty et al. (2022), in-context model editing Zheng et al. (2023), and multi-hop model editing Zhong et al. (2023), etc.
Attack/defense for inducing hallucination.As previously discussed, significant efforts have been undertaken by both researchers and companies to guarantee that LLMs produce truthful responses, ultimately improving the overall user experience. Cutting-edge commercial LLMs, such as GPT4 OpenAI (2023), appear to have acquired a decent ability to generate proper responses to factuality-related queries. However, they are not invincible. Several studies show that LLMs can be manipulated using techniques like meticulously crafted jailbreak prompts to elicit arbitrary desired responses Wei et al. (2023); Zou et al. (2023), including hallucinations. Consequently, the attacking and defending strategies for inducing hallucinations could also be a promising research direction. This is particularly important as the generation of fabricated information could potentially breach relevant laws, leading to the forced shutdown of LLM applications. This direction is also intimately tied to the robustness of existing hallucination mitigation methods.
Others.Given that the current research on hallucinations in LLMs is still in its early stages, there are also many other intriguing and promising avenues for further investigation. For instance, researchers have begun to treat LLMs as agents for open-world planning in the pursuit of AGI Park et al. (2023); Wang et al. (2023). Addressing the hallucination problem within the context of LLMs-as-agents presents brand-new challenges and holds considerable practical value. Besides, analyzing and tracing LLM hallucinations from the linguistic aspect is another interesting research topic. Rawte et al. (2023) show that the occurrence of LLM hallucination is closely related to linguistic nuances of the user prompts, such as readability, formality, and concreteness. We believe all these directions merit thorough explo
Figure 7: An example of object hallucination in LVLMs. We highlight the hallucination in red, as there is no person under the tree in this picture.
ration in future research.
## 7 Conclusion
With their strong understanding and generation capabilities in the open domain, LLMs have garnered significant attention from both academic and industrial communities. However, hallucination remains a critical challenge that impedes the practical application of LLMs. In this survey, we offer a comprehensive review of the most recent advances, primarily post the release of ChatGPT, that aim to evaluate, trace, and eliminate hallucinations within LLMs. We also delve into the existing challenges and discuss potential future directions. We aspire for this survey to serve as a valuable resource for researchers intrigued by the mystery of LLM hallucinations, thereby fostering the practical application of LLMs.
## Acknowledgments
We would like to thank Yu Wu and Yang Liu for their valuable suggestions.
|
2310.04202 | Optimal model-based beamforming and independent steering for spherical
loudspeaker arrays | Spherical loudspeaker arrays have been recently studied for directional sound
radiation, where the compact arrangement of the loudspeaker units around a
sphere facilitated the control of sound radiation in three-dimensional space.
Directivity of sound radiation, or beamforming, was achieved by driving each
loudspeaker unit independently, where the design of beamforming weights was
typically achieved by numerical optimization with reference to a given desired
beam pattern. This is in contrast to the methods already developed for
microphone arrays in general and spherical microphone arrays in particular,
where beamformer weights are designed to satisfy a wider range of objectives,
related to directivity, robustness, and side-lobe level, for example. This
paper presents the development of a physical-model-based, optimal beamforming
framework for spherical loudspeaker arrays, similar to the framework already
developed for spherical microphone arrays, facilitating efficient beamforming
in the spherical harmonics domain, with independent steering. In particular, it
is shown that from a beamforming perspective, the spherical loudspeaker array
is similar to the spherical microphone array with microphones arranged around a
rigid sphere. Experimental investigation validates the theoretical framework of
beamformer design. | Boaz Rafaely, Dima Khaykin | 2023-10-06T12:40:02Z | http://arxiv.org/abs/2310.04202v1 | # Optimal model-based beamforming and independent steering for spherical loudspeaker arrays
###### Abstract
Spherical loudspeaker arrays have been recently studied for directional sound radiation, where the compact arrangement of the loudspeaker units around a sphere facilitated the control of sound radiation in three-dimensional space. Directivity of sound radiation, or beamforming, was achieved by driving each loudspeaker unit independently, where the design of beamforming weights was typically achieved by numerical optimization with reference to a given desired beam pattern. This is in contrast to the methods already developed for microphone arrays in general and spherical microphone arrays in particular, where beamformer weights are designed to satisfy a wider range of objectives, related to directivity, robustness, and side-lobe level, for example. This paper presents the development of a physical-model-based, optimal beamforming framework for spherical loudspeaker arrays, similar to the framework already developed for spherical microphone arrays, facilitating efficient beamforming in the spherical harmonics domain, with independent steering. In particular, it is shown that from a beamforming perspective, the spherical loudspeaker array is similar to the spherical microphone array with microphones arranged around a rigid sphere. Experimental investigation validates the theoretical framework of beamformer design.
1
Footnote 1: Copyright (c) 2010 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected].
## I Introduction
Spherical loudspeaker arrays, composed of a set of loudspeaker units mounted on the surface of a sphere, operating as a multiple-channel sound source, have been recently studied for applications such as electro-acoustic music performance, synthesizing the radiation pattern of musical instruments [1, 2], and active control of sound [3]. A physical model of the loudspeaker array has been developed [3, 4], as a rigid sphere with vibrating caps mounted on its surface, employing spherical harmonics to describe caps vibration and sound radiation [5]. At low frequencies, array directivity can be represented as a linear combination of spherical harmonics basis functions [6], and by additionally including models of the loudspeaker units [7], array weights can be designed to achieve
a desired directivity function. A comprehensive review of previous work concerning spherical loudspeaker arrays has been presented recently [6].
Although useful in generating spherical harmonics based beam patterns, the methods presented in previous work posses the following shortcomings:
1. Typically, beam-pattern matching was the sole design objective, and so robustness against noise and uncertainty was not introduced, which can degrade performance in practical systems.
2. No simple steering of the beam pattern was presented, and so re-calculation of the beam pattern is typically required to realize steering.
3. Alternative design approaches to the one previously presented, i.e. numerical fitting to a desired directivity function, may be of interest. These include multiple-objective designs; optimal designs; and analytical designs that produce closed-form expressions for the beamforming weights. However, a framework to apply these designs to spherical loudspeaker array was not presented.
This paper presents a beamforming design framework for spherical loudspeaker arrays that overcomes the shortcoming presented above. The design framework is based on a physical model of the spherical loudspeaker array, presented in section II. In this model the spherical loudspeaker array is represented by a rigid sphere with a set of caps mounted on its surface, representing the vibration of the diaphragm of the loudspeaker units, which is then further simplified to a spherical source with radial velocity represented in the spherical harmonics domain. This model is then used to develop the fundamental beamforming equations in section III, both in the space domain by weighting caps velocities, and more generally in the spherical harmonics domain. Section IV presents the beamforming formulation for far-field, axis-symmetric radiation, which is central to this paper. It is shown that the resulting beamforming problem is almost identical to the beamforming problem of a spherical microphone array with microphones arranged around a rigid sphere. The latter has been recently introduced [8], has been studied extensively since, with well investigated analysis of performance [9], and with a range of beamforming methods developed [10]. The novel result of the similarity between the two arrays leads directly to the development of a beamforming design method for spherical loudspeaker arrays that is based on the framework developed for spherical microphone arrays. A formulation of measures for array directivity index and robustness are presented in section V, after which optimal beamformers with simple steering for the spherical loudspeaker array are developed in section VI, including maximum directivity, maximum robustness, and Dolph-Chebyshev, as examples. Experimental investigation of beamforming with a real array having 12 loudspeaker units, measured in an anechoic chamber, concludes the paper.
## II Sound radiation from spherical sources
Sound radiation from spherical sources is reviewed in this section. A spherical source is modeled as a rigid sphere of radius \(r_{0}\) with \(L\) spherical caps, representing loudspeaker units, positioned on its surface at locations \((\theta_{l},\phi_{l})\), each imposing a constant radial surface velocity of \(v_{l},\ l=1,...,L\), at the surface segment they cover [3, 11]. Here \(\theta_{l}\) represent elevation angle, measured down from the z-axis, and \(\phi_{l}\) represent azimuth angle, measured on the x-y plane away from the x-axis towards the y-axis, defining a spherical coordinate system [12]. The radial
velocity of the sphere surface at wave number \(k\), \(u(k,r_{0},\theta,\phi)\), is composed of contributions from all \(L\) caps. The spherical Fourier transform of the radial velocity, \(u_{nm}(k,r_{0})\), is defined as [12]:
\[u_{nm}(k,r_{0})=\int_{0}^{2\pi}\int_{0}^{\pi}u(k,r_{0},\theta,\phi)[Y_{n}^{m}( \theta,\phi)]^{*}\sin\theta d\theta d\phi \tag{1}\]
with \(Y_{n}^{m}(\cdot,\cdot)\) the spherical harmonics of order \(n\) and degree \(m\). After deriving the spherical Fourier transform of the radial velocity due to a single cap and adding the contributions from all \(L\) caps, Eq. (1) reduces to [3]:
\[u_{nm}(k,r_{0})=g_{n}\sum_{l=1}^{L}v_{l}(k)[Y_{n}^{m}(\theta_{l},\phi_{l})]^{*} \tag{2}\]
with
\[g_{n}\equiv\frac{4\pi^{2}}{2n+1}\left[P_{n-1}(\cos\alpha)-P_{n+1}(\cos\alpha) \right], \tag{3}\]
and with \(P_{n}(\cdot)\) the Legendre polynomial, and \(\alpha\) the aperture angle of each spherical cap.
Given the radial velocity over the sphere surface, the sound pressure \(p(k,r,\theta,\phi)\) away from the source, is computed by [5]:
\[p(k,r,\theta,\phi)=i\rho_{0}c\sum_{n=0}^{\infty}\sum_{m=-n}^{n}\frac{h_{n}(kr) }{h_{n}^{\prime}(kr_{0})}u_{nm}(k,r_{0})Y_{n}^{m}(\theta,\phi), \tag{4}\]
with \(c\) the speed of sound, \(\rho_{0}\) air density, \(i=\sqrt{-1}\), and \(h_{n}(\cdot)\) and \(h_{n}^{\prime}(\cdot)\) are the spherical Hankel function of the first kind of order \(n\), and it's derivative, respectively [12]. Now, the spherical Fourier transform of the sound pressure, \(p_{nm}(k,r)\), can be written as:
\[p_{nm}(k,r)=i\rho_{0}c\frac{h_{n}(kr)}{h_{n}^{\prime}(kr_{0})}u_{nm}(k,r_{0}) \tag{5}\]
Equations (4), (2) and (3) can now be used to represent the sound pressure radiated by the spherical source, given the velocity of each spherical cap, or loudspeaker unit. It is worth noting that with \(L\) spherical caps constructing the source, only \(L\) spherical harmonics in \(u_{nm}\) and \(p_{nm}\) can be independently controlled, typically taking the first \((N+1)^{2}\) harmonics, such that
\[(N+1)^{2}\leq L \tag{6}\]
with \(n\leq N\), and \(-n\leq m\leq n\). Also note that by controlling the radial velocity of \(L\) caps, and assuming control over spherical harmonics of orders \(n\leq N\), the higher order harmonics \(N+1\) and above cannot be controlled. However, sufficiently away from the source, at distances that satisfy \(kr>>N\), the higher order harmonics are significantly attenuated by the term \(h_{n}(kr)/h_{n}^{\prime}(kr_{0})\), and can be neglected [3]. This means that although in practice source control is achieved through control over caps velocity, one can also assume a direct control over \(u_{nm}\) at orders \(n\leq N\), with good accuracy.
## III beamforming with a spherical source
Beamforming with spherical sources is employed with the aim of controlling the directivity pattern of the sound radiated from the source. This is achieved by weighting the source signal \(s(k)\) with weights \(w_{l}(k)\) before driving the caps velocity, or loudspeaker units in practice, such that,
\[v_{l}(k)=w_{l}(k)s(k),\,l=1,...,L \tag{7}\]
Now, repeating the derivation in Eqs. (1), (2) and (3), but this time with \(w_{l}(k)\), \(w_{nm}(k)\) and \(w(k,\theta,\phi)\) replacing \(v_{l}(k)\), \(u_{nm}(k,r_{0})\) and \(u(k,r_{0},\theta,\phi)\), respectively, and using Eq. (7), the following holds:
\[w_{nm}(k)=g_{n}\sum_{l=1}^{L}w_{l}(k)[Y_{n}^{m}(\theta_{l},\phi_{l})]^{*} \tag{8}\]
and
\[u_{nm}(k,r_{0})=s(k)w_{nm}(k) \tag{9}\]
with \(w(k,\theta,\phi)\) representing the beamforming weight function as a continuous function over the sphere surface. Following the same argument as presented in section II, one can assume control over \(w_{nm}(k)\), although in practice beamforming is achieved through a direct control over \(w_{l}(k)\).
Now, the pressure away from the source can be written in terms of the beamforming weights by substituting Eq. (9) in Eq. (4),
\[p(k,r,\theta,\phi)=i\rho_{0}cs(k)\sum_{n=0}^{\infty}\sum_{m=-n}^{n}\frac{h_{n} (kr)}{h_{n}^{{}^{\prime}}(kr_{0})}w_{nm}(k,r_{0})Y_{n}^{m}(\theta,\phi). \tag{10}\]
Following Eqs. (10) and (8), beamforming design requires the computation of weights, \(w_{nm}(k)\) or \(w_{l}(k)\), such that the radiated sound pressure maintains some given design criterion. These equations, or measured versions of them, have been previously employed in a numerical design framework, for computing beamforming weights for spherical sources. The next section presents some further derivations, that will facilitate an analytical, rather than numerical design of beamformers for spherical sources, in a manner similar to spherical microphone arrays.
## IV Axis-symmetric far-field beamforming
An efficient formulation for far-field beamforming is derived in this section by constraining the radiated far-field sound pressure to be rotationally symmetric around the look direction, similar to the approach taken for spherical microphone array beamforming [8]. We first assume that a far-field beam pattern is required, which is the case in most applications involving sound radiated into large rooms, such as music halls and video conferencing rooms. Far-field in the context of this work means that \(kr>>N\), where \(N\) is the highest order controlled by the source. In this case the following large-argument approximation can be employed [5]:
\[h_{n}(kr)\approx(-i)^{n+1}\frac{e^{ikr}}{kr} \tag{11}\]
Also, we introduce the Wronskian relation [5]:
\[j_{n}(kr)h^{\prime}_{n}(kr)-j^{\prime}_{n}(kr)h_{n}(kr)=\frac{i}{(kr)^{2}} \tag{12}\]
which is rearranged as follows:
\[\frac{1}{h^{\prime}_{n}(kr)}=-i(kr)^{2}\left[j_{n}(kr)-\frac{j^{\prime}_{n}(kr) }{h^{\prime}_{n}(kr)}h_{n}(kr)\right] \tag{13}\]
We further denote for notation simplicity:
\[b_{n}(kr)\equiv i\rho_{0}ckr^{2}(-i)^{n}\left[j_{n}(kr)-\frac{j^{\prime}_{n}(kr )}{h^{\prime}_{n}(kr)}h_{n}(kr)\right] \tag{14}\]
Substituting Eqs. (11), (13) and (14) into Eq. (10), the far-field sound pressure can be written as:
\[p(k,r,\theta,\phi)=\frac{e^{ikr}}{r}s(k)\sum_{n=0}^{\infty}\sum_{m=-n}^{n}b_{n }(kr_{0})w_{nm}(k,r_{0})Y^{m}_{n}(\theta,\phi). \tag{15}\]
In the next step of this derivation, we remove the dependance on \(r\) by considering the directivity function, or beam pattern \(B\), computed by normalizing the far-field sound pressure with a factor of \(re^{-ikr}\)[5], and assuming a unit input signal \(s(k)=1\),
\[B(k,\theta,\phi)=\sum_{n=0}^{\infty}\sum_{m=-n}^{n}b_{n}(kr_{0})w_{nm}(k,r_{0} )Y^{m}_{n}(\theta,\phi). \tag{16}\]
We now make a further simplification by considering axis-symmetric beam patterns, in a way similar to spherical microphone array beamforming [9], by selecting weights as follows:
\[w_{nm}(k)=\frac{d_{n}(k)}{b_{n}(kr_{0})}[Y^{m}_{n}(\theta_{0},\phi_{0})]^{*} \tag{17}\]
where \(d_{n}(k)\) is the one-dimensional axis-symmetric beamforming weighting function, and \((\theta_{0},\phi_{0})\) is the look direction, forming the axis of symmetry. By substituting Eq. (17) in Eq. (16), and using the spherical harmonics addition theorem [12], the far-field directivity function can be rewritten as:
\[B(k,\Theta)=\sum_{n=0}^{N}d_{n}(k)\frac{2n+1}{4\pi}P_{n}(\cos\Theta) \tag{18}\]
where \(\Theta\) is the angle between the look direction \((\theta_{0},\phi_{0})\) and the direction of radiated sound, \((\theta,\phi)\), defined as:
\[\cos\Theta=\cos\theta_{0}\cos\theta+\cos(\phi_{0}-\phi)\sin\theta_{0}\sin\theta. \tag{19}\]
Several interesting observations can be made regarding the derivation in this section:
* Eq. (18) representing the beam pattern for the spherical source is exactly the same as the beam pattern equation for spherical microphone arrays [9]. A wide range of analytical beam pattern design methods have been developed for the latter, and will be proposed in this paper for beamforming with the spherical source.
* The term \(b_{n}\) in Eq. (14) is very similar to the same term derived for spherical microphone arrays designed around a rigid sphere. In both cases, \(b_{n}\) represents the dynamics of sound propagation around a rigid sphere, such that a division by \(b_{n}\) turns the beam pattern independent on the spherical array configuration. It is therefore expected that beamforming with a spherical loudspeaker array as formulated in this paper will poses a similar behavior to beamforming with a spherical microphone array with a rigid sphere configuration.
* The weights \(w_{l}\) are assumed to control caps velocity, \(v_{l}\). In practice, the weights will control the signal driving the loudspeaker units, i.e. voltage input rather than velocity input. For typical moving-coil loudspeakers, operating above the mechanical cut-off frequency and below the radiation cut-off, the voltage is proportional to the frequency times the velocity [13], such that in practice \(ks(k)\) can be considered as directly proportional to the voltage signal if \(s(k)\) is the velocity signal, and so the dependance on \(k\) in Eqs. (14), (15) and (16) is removed, making the system models for the spherical loudspeaker and microphone arrays even more similar.
## V Directivity index and robustness
Beamformer design typically involves achieving a desired directivity, while maintaining necessary robustness constraints [14]. A common measure for array performance is the directivity factor, calculated as the directivity at the look direction, relative to the directional average of the directivity function [14]:
\[Q=\frac{|B(k,\theta_{0},\phi_{0})|^{2}}{\frac{1}{4\pi}\int_{0}^{2\pi}\int_{0} ^{\pi}\left|B(k,\theta,\phi)\right|^{2}\sin\theta d\theta d\phi} \tag{20}\]
The directivity factor for the spherical source can be derived by substituting Eq. (18) into Eq. (20). Note that Eq. (18) is identical to the directivity function of the spherical microphone array [15]. The directivity factor, as derived in [15], is therefore:
\[Q=\frac{\left|\sum_{n=0}^{N}d_{n}(k)(2n+1)\right|^{2}}{\sum_{n=0}^{N}\left|d _{n}(k)\right|^{2}(2n+1)} \tag{21}\]
The Directivity Index (DI) is now defined as \(10\log_{10}Q\).
Another important measure is array robustness, which is a measure of the system sensitivity to noise, errors, uncertainties and perturbations. A common measure of robustness relates to the inverse of the 2-norm of the array weights, assuming the directivity function in the look direction is unity. The latter constraint is referred to as distortionless response. This measure is exactly the white-noise gain for sensor arrays, but is also considered as a general measure for robustness [14]. We adopt the same measure for the spherical source. We use the term white-noise gain (WNG) although in the context of this work it refers to general robustness. The WNG can be calculated by normalizing the 2-norm of the coefficients \(w_{nm}\) to satisfy the distortionless response constraint with reference to Eq. (16):
\[\mathrm{WNG}=\frac{\left|\sum_{n=0}^{\infty}\sum_{m=-n}^{n}b_{n}(kr_{0})w_{nm} (k,r_{0})Y_{n}^{m}(\theta_{0},\phi_{0})\right|^{2}}{\sum_{n=0}^{N}\sum_{m=-n}^ {n}|w_{nm}(k)|^{2}} \tag{22}\]
Substituting Eq. (17), and using the spherical harmonics addition theorem, we get:
\[\mathrm{WNG}=\frac{\left|\sum_{n=0}^{\infty}d_{n}(k)(2n+1)\right|^{2}}{\sum_{n=0 }^{N}\frac{|d_{n}(k)|^{2}}{|b_{n}(kr_{0})|^{2}}(2n+1)} \tag{23}\]
This result is equivalent to the WNG calculated for spherical microphone arrays [15], although the function involved, e.g. \(b_{n}(kr_{0})\) are only equivalent up to some frequency-dependant constant, as discussed above.
## VI Optimal beamforming
Having developed expressions for the spherical source concerning directivity and WNG, in this section some optimal beamforming methods are proposed, which have analytical, or closed-form solutions, as opposed to most current methods for spherical source beamforming that use numerical optimization.
### _Maximum Directivity_
This beamforming method aims to find the beamforming weights \(d_{n}\) that maximize the directivity factor of a given spherical loudspeaker array, or spherical source. First, the problem is formulated in a matrix form and then the weights are derived that maximize the directivity factor.
The beamforming weights vector at wave number \(k\) is defined as:
\[\mathbf{d}=\left[d_{0}(k),d_{1}(k),...,d_{N}(k)\right]^{T} \tag{24}\]
The following \((N+1)\times 1\) vector of coefficients, with the \(n\)-th element given by \(2n+1\), is also defined:
\[\mathbf{a}=\left[1,3,...,2N+1\right]^{T} \tag{25}\]
such that the directivity factor, Eq. (21) can be written in a matrix form as:
\[Q=\frac{\mathbf{d}^{H}[\mathbf{a}^{T}\mathbf{a}]\mathbf{d}}{\mathbf{d}^{H}[ \mathrm{diag}(\mathbf{a})]\mathbf{d}} \tag{26}\]
Equation (26) represents a generalized Rayleigh quotient, with a maximum value in this case evaluated to be simply [10]:
\[\mathbf{d}=\left[1,1,...,1\right]^{T} \tag{27}\]
In the spherical microphone array literature, this is referred to as regular beam pattern, or plane-wave decomposition, representing directivity functions having a closed-form expression [10]:
\[B(\Theta)=\frac{N+1}{4\pi(\cos\Theta-1)}[P_{N+1}(\cos\Theta)-P_{N}(\cos\Theta)] \tag{28}\]
also referred to as hyper-cardioid beam pattern, with the maximal directivity factor of \((N+1)^{2}\).
### _Maximum WNG_
In a similar manner, the weights \(d_{n}\) that maximize the WNG can also be computed. Equation (23) can be written in a matrix form as:
\[WNG=\frac{\mathbf{d}^{H}[\mathbf{a}^{T}\mathbf{a}]\mathbf{d}}{\mathbf{d}^{H}[ \mathrm{diag}(\mathbf{c})]\mathbf{d}} \tag{29}\]
with
\[\mathbf{c}=\left[1/b_{0}(kr_{0}),3/b_{1}(kr_{0}),...,(2N+1)/b_{N}(kr_{0})\right] ^{T} \tag{30}\]
and with a maximum value in this case achieved with weights given by [10]:
\[d_{n}(k)=\frac{4\pi|b_{n}(kr_{0})|^{2}}{\sum_{n=0}^{N}|b_{n}(kr_{0})|^{2}(2n+1)} \tag{31}\]
Again, this is similar to the result obtained for the spherical microphone array.
### _Other beam pattern designs_
Due to the similarity in the directivity function, directivity factor, and WNG, between the spherical loudspeaker array presented above and the spherical microphone array developed elsewhere, a range of beam pattern design methods can be applied to the spherical loudspeaker array, see, for example [10]. These include, among other, the Dolph-Chebyshev beam pattern, providing optimal trade-off between main-lobe width and side-lobe level, and other optimal design methods.
## VII Experimental study
The aim of this section is to provide an experimental examination of the beamforming design methods presented in this paper. The examination is based upon comparison of measured beam patterns and simulated beam patterns. The simulated beam patterns are generated by using some of the analytical design method presented in this paper to compute beamforming weights and apply them to a computer model of a spherical loudspeaker array, as presented in this paper. The model represents an experimental spherical loudspeaker array system, and so the same weights are applied to the experimental system to produce beam patterns evaluated by microphones measuring the sound pressure away from the spherical loudspeaker array. The measured and simulated beam patterns are then compared.
The experimental system includes a spherical loudspeaker array of radius \(r_{0}=0.15\,\)m, with 12 individual loudspeaker units mounted on it's surface, in a dodecahedron arrangement. The loudspeaker array is designed and produced by the Institute of Technical Acoustics, Aachen university, and includes power amplifiers to drive each loudspeaker unit individually. The power amplifiers are connected to a two-channel sound card via a switching circuit, such that each loudspeaker unit can be separately driven by the sound card.
A microphone attached to a rotating system is used to spatially sample the sound pressure radiated by the loudspeaker array. The microphone positions followed the Gaussian sampling scheme [9], with a total of \(242\)
samples, positioned at a radius of \(r=0.57\) m, achieving a spherical harmonic order of \(N=10\) at the analysis sphere.
Once the microphone is positioned in place, the impulse response between each loudspeaker unit and the microphone is measured using a linearly swept-sine signal of a duration of 4 seconds, in the range of \(0-1500\) Hz. A sampling frequency \(3000\) Hz was employed by the measurement system. A sound card connected to a computer running MATLAB was used to play and record the signals. An entire session includes measuring and saving the impulse response data for each microphone position and each loudspeaker unit, giving a total of \(242\times 12=2904\) impulse response measurements during a complete session. The entire experiment was performed at the anechoic chamber, acoustics laboratory, Ben-Gurion University of the Negev, having inner dimensions of \(2\) m, certified as anechoic from \(300\) Hz.
At a frequency of \(400\) Hz, the value of \(kr_{0}\) is about \(1.1\), and the value of \(kr\) is about \(4.2\), and so both the spherical loudspeaker array and the measuring spherical microphone array satisfy \(kr_{0}<2\) and \(kr<10\), therefore providing spatial over-sampling in both systems. At a frequency of \(1000\) Hz, \(kr_{0}\approx 2.75\) and \(kr\approx 10.45\), above which spatial aliasing is expected to be significant in both systems.
The design framework presented in this paper was used for the computation of \(d_{n}\), from which \(w_{nm}\) was computed using Eq. (17). Then, \(w_{l}\), the weight assigned to each loudspeaker unit was computed from \(w_{nm}\), as detailed below, and applied directly to the measured data to compute the measured beam pattern. With the aim of calculating \(w_{l}\), Eq. (8) can be written in a matrix form as:
\[\mathbf{w_{nm}}=\mathbf{GYw}, \tag{32}\]
where the \((N+1)^{2}\times 1\) vector of beamforming weights at wave number \(k\) is defined by:
\[\mathbf{w_{nm}}=[w_{00}(k),w_{1(-1)}(k),w_{10},w_{11},...,w_{NN}(k)]^{T}. \tag{33}\]
The spherical harmonics matrix \(\mathbf{Y}\) of size \((N+1)^{2}\times L\) has element at row \(q\) and column \(l\) given by:
\[Y_{ql}=[Y_{n}^{m}(\theta_{l},\phi_{l})]^{*},\quad q=n^{2}+n+m,\ n=0,...,N,\ m=-n,...,n,\ l=1,...,L. \tag{34}\]
Matrix \(\mathbf{G}\) of size \((N+1)^{2}\times(N+1)^{2}\) is given by:
\[\mathbf{G}=\mathrm{diag}(g_{0},g_{1},g_{1},g_{1},...,g_{N}) \tag{35}\]
and the \(L\times 1\) vector \(\mathbf{w}\) is given by
\[\mathbf{w}=[w_{1}(k),w_{2}(k),...,w_{L}(k)]^{T} \tag{36}\]
Having designed \(d_{n}\) to achieve a desired beam pattern, Eq. (17) is used to derive \(w_{nm}\) given the look direction \((\theta_{0},\phi_{0})\), from which the weights assigned to each loudspeaker unit, \(w_{l}\), is computed by:
\[\mathbf{w}=\mathbf{Y}^{\dagger}\mathbf{G}^{-1}\mathbf{w_{nm}} \tag{37}\]
where \(\mathbf{Y}^{\dagger}\) is the pseudo-inverse of \(\mathbf{Y}\).
Beamforming weights \(d_{n}\) were designed as described above, based on the maximum WNG method, as in Eq. (31), and the maximum directivity method, as in Eq. (27). It should be noted that these are only example designs, and other methods as discussed in this paper could also be used. Figure 1 shows balloon plots of the simulated and measured beam patterns for a design frequency of \(400\,\)Hz, using the maximum WNG method. Figure 2 shows a cross-section along the azimuth angle \(\theta\) for elevation angle \(\phi=\pi/2\). Note that because in this paper the design methods produce axis-symmetric beam pattern, any cross-section intersecting the look direction can be presented. The figures show a reasonable similarity between simulated and measured beam patterns, validating the proposed design framework. Improving the agreement between simulated and measured beam patterns may require a more accurate matching between transducers. Also, because the distance of \(r=0.57\,\)m of the microphones cannot be considered far-field, the designed weights were modified to account for this near-field effect according to Eq. (10).
Figures 3 and 4 show similar results for \(1000\,\)Hz, using the maximum directivity design method, with a different beam pattern, which has a narrower main lobe. The measured beam pattern is similar to the simulated one, once again validating the design framework.
## VIII Conclusion
This paper presented an efficient beamforming framework for spherical loudspeaker arrays, facilitating optimal, closed-form beam pattern design, with independent steering. The paper derives beamforming equations for the spherical loudspeaker array, showing similarity to spherical microphone arrays configured around a rigid sphere. This similarity facilitates the use of a wide range of beamforming methods already developed for spherical microphone arrays. The design framework is then employed for beamforming with an experimental spherical loudspeaker array system, validating the theoretical results. The proposed framework can be used to produce directional radiation patterns with the spherical loudspeaker array in a wide range of applications.
## IX Acknowledgement
This work was supported in part by the Ministry of Industry and Trade, grant no. 40161.
|
2303.13646 | A construction of algebraizable formal models | Let $X$ be a variety over a complete nontrivially valued field $K$. We
construct an algebraizable formal model for the analytification of $X$ in the
case $X$ admits a closed embedding into a toric variety. By algebraizable we
mean that the formal model is given by the completion along the special fiber
of a locally finite type flat scheme over the valuation ring $K^\circ$. We
construct the formal model via the combinatorial theory of $\mathbb{T}$-toric
varieties over $K^{\circ}$. | Desmond Coles, Netanel Friedenberg | 2023-03-23T20:09:22Z | http://arxiv.org/abs/2303.13646v1 | # A construction of algebraizable formal models
###### Abstract.
Let \(X\) be a variety over a complete nontrivially valued field \(K\). We construct an algebraizable formal model for the analytification of \(X\) in the case \(X\) admits a closed embedding into a toric variety. By algebraizable we mean that the formal model is given by the completion along the special fiber of a locally finite type flat scheme over the valuation ring \(K^{\circ}\). We construct the formal model via the combinatorial theory of \(\mathbb{T}\)-toric varieties over \(K^{\circ}\).
## 1. Introduction
Let \(K\) be a complete nontrivially valued field with valuation ring \(K^{\circ}\) and value group \(\Gamma\). Let \(X\) be a separated finite type \(K\)-scheme. The Berkovich analytification \(X^{\mathrm{an}}\) admits a formal model in the sense of Raynaud [1, SS8.4, Proposition 7]. The purpose of this article is to construct an _algebraizable_ formal model for \(X\), i.e., a zonal model for \(X\) given by completing an a separated, locally finite type, flat \(K^{\circ}\)-scheme, \(\mathcal{X}\) along its special fiber.
**Theorem 1.1**.: _If \(X\) admits a closed embedding in a normal toric variety over \(K\), then \(X^{\mathrm{an}}\) has an algebraizable formal model._
The hypothesis that \(X\) admits a closed embedding in a normal toric variety is satisfied for any quasiprojective \(K\)-scheme; see, for example, the proof of [10, Lemma 4.3]. If \(K\) is algebraically closed, then Wlodarczyk's embedding theorem [11, Theorem A] tells us that for any normal variety \(X\) the hypothesis is satisfied if and only if any two points of \(X\) have a common open affine neighborhood. From an analytic perspective this hypothesis is also natural to consider because if \(X\) embeds into a toric variety then \(X^{\mathrm{an}}\) can be realized as inverse limit of tropicalizaitons [13].
For suitable \(K^{\circ}\)-models \(\mathcal{X}\) of \(X\), there is a natural identification of the generic fiber \(\mathfrak{X}_{\eta}\) of the formal completion \(\mathfrak{X}\) of \(\mathcal{X}\) along its special fiber with an analytic domain in \(X^{\mathrm{an}}\). We prove Theorem 1.1 by building such a \(K^{\circ}\)-model for the ambient toric variety such that the analytic domain is the whole analytification. This is done by extending the ambient toric variety \(Y\) to the trivial model over \(K^{\circ}\), \(\mathcal{Y}\), and then we prove the following theorem which allows us to modify the special fiber to achieve our desired result. In the following theorem \(\mathbb{T}\) is a split torus over \(K^{\circ}\) with cocharacter lattice \(N\), and a \(\mathbb{T}\)-toric variety is a normal, finite type, \(\mathbb{T}\)-equivvarant, \(K^{\circ}\)-model of a toric variety over \(K\).
**Theorem 1.2**.: _Let \(\mathcal{Y}\) be a normal \(\mathbb{T}\)-toric variety with generic fiber \(Y\). There is a finite separable totally ramified extension \(L/K\) of valued fields, a normal \(\mathbb{T}_{L^{\circ}}\)-toric scheme \(\overline{\mathcal{Y}}\) which is locally of finite type over \(L^{\circ}\), and a \(\mathbb{T}_{L^{\circ}}\)-equivariant open immersion \(\mathcal{Y}_{L^{\circ}}\hookrightarrow\overline{\mathcal{Y}}\) such that_
_the induced map \(Y_{L}=\mathcal{Y}_{L}\to\overline{\mathcal{Y}}_{L}\) on generic fibers is an isomorphism of \(L\)-varieties and_
* _the natural map \(\overline{\mathfrak{Y}}_{\eta}\to(\overline{\mathcal{Y}}_{L})^{\mathrm{an}} \cong(Y_{L})^{\mathrm{an}}\) is an isomorphism of analytic spaces_
_where \(\overline{\mathfrak{Y}}\) denotes the formal completion of \(\overline{\mathcal{Y}}\) along its special fiber. If \(\Gamma\) is discrete or divisible, or, more generally, if \(\mathcal{Y}\) admits a normal equivariant completion, then we may take \(L=K\)._
Note that by Theorem 1.1 of [10] any normal \(\mathbb{T}\)-toric variety admits an equivariant completion when \(\Gamma\) is discrete or divisible.
Our approach to proving Theorem 1.2 is combinatorial. In [11], Gubler and Soto classified \(\mathbb{T}\)-toric varieties in terms of certain fans, called \(\Gamma\)-admissible fans, in the half-space \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\); here \(N_{\mathbb{R}}:=N\otimes_{\mathbb{Z}}\mathbb{R}\). The data of a \(\Gamma\)-admissible fan in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) is equivalent to that of a rational fan \(\Sigma\) in \(N_{\mathbb{R}}\) and a \(\Gamma\)-rational polyhedral complex \(\Phi\) such that the recession cone \(\operatorname{rec}P\) of any \(P\in\Phi\) is in \(\Sigma\). This allows us to reduce Theorem 1.2 to Theorem 1.1 of [10].
The paper proceeds as follows. In SS2 we recall the classification of \(\mathbb{T}\)-toric varieties by \(\Gamma\)-admissible fans introduced in [11]; we also prove some technical results extending known theorems from the finite type case to the locally finite type case. In SS3 we recall some generalities on constructing generic fibers of formal schemes and gluing \(K\)-analytic spaces. Finally, in SS4 we discuss how to compute generic fibers of completions of \(\mathbb{T}\)-toric varieties. We then prove Theorems 1.1 and 1.2.
## 2. Background on \(\mathbb{T}\)-toric varieties
In this section we review the combinatorial classification of \(\mathbb{T}\)-toric varieties, following [11]. We also discuss a slight extension of this theory from finite type schemes to locally finite types, which we will use for Theorem 1.2.
### \(\mathbb{T}\)-toric schemes
Let \(\mathbb{T}\) be a split torus over \(K^{\circ}\) with character lattice \(M\) and cocharacter lattice \(N\). A _\(\mathbb{T}\)-toric scheme_ is an integral, separated scheme \(\mathcal{Y}\) flat over \(K^{\circ}\) together with an open embedding \(\mathbb{T}_{K}\hookrightarrow\mathcal{Y}_{K}\) of the generic fiber of \(\mathbb{T}\) into the generic fiber of \(\mathcal{Y}\) such that the action of \(\mathbb{T}_{K}\) on itself by translation extends to an action of \(\mathbb{T}\) on \(\mathcal{Y}\). A _\(\mathbb{T}\)-toric variety_ is a \(\mathbb{T}\)-toric scheme which is of finite type over \(K^{\circ}\). Every normal \(\mathbb{T}\)-toric variety arises via a combinatorial construction, which we now review. Throughout this paper we follow the notational conventions of [10]. Let \(A\) be any additive subgroup of \(\mathbb{R}\) (possibly all of \(\mathbb{R}\)); set \(N_{A}:=N\otimes_{\mathbb{Z}}A\) and consider this as a subgroup of \(N_{\mathbb{R}}\).
Let \(\sigma\) be a cone in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\), i.e., \(\sigma\) is a cone in the vector space \(N_{\mathbb{R}}\times\mathbb{R}\) such that \(\sigma\subseteq N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\). The cone \(\sigma\) is _\(\Gamma\)-admissible_ if it is pointed and can be written in the form
\[\sigma=\{(w,t)\in N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\mid\langle u_{i},w \rangle+\gamma_{i}t\geq 0\text{ for }i=1,\dots,m\}\]
for some \(u_{1},\dots,u_{m}\in M\) and \(\gamma_{1},\dots,\gamma_{m}\in\Gamma\). By pointed we mean that \(\sigma\) contains no lines. Because \(\Gamma\neq\{0\}\) every face of a \(\Gamma\)-admissible cone is \(\Gamma\)-admissible. We say that a fan, \(\Delta\), in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) is _\(\Gamma\)-admissible_ if all of its cones are \(\Gamma\)-admissible.
We can characterize \(\Gamma\)-admissibility by looking at the intersections \(\sigma\cap(N_{\mathbb{R}}\times\{0\})\) and \(\sigma\cap(N_{\mathbb{R}}\times\{1\})\). Let \(\pi\colon N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\to N_{\mathbb{R}}\) be the projection onto the first factor. If \(\sigma\) meets \(N_{\mathbb{R}}\times\{1\}\) then \(\sigma\) is \(\Gamma\)-admissible if and only if \(\pi(\sigma\cap(N_{\mathbb{R}}\times\{1\}))\) is a
pointed, \(\Gamma\)-rational polyhedron in \(N_{\mathbb{R}}\). Recall that a polyhedron, \(P\), is _\(\Gamma\)-rational_ if it can be written:
\[P=\{w\in N_{\mathbb{R}}\mid\langle u_{i},w\rangle\geq\gamma_{i}\text{ for }i=1,\ldots,m\}\]
for some \(u_{1},\ldots,u_{m}\in M\) and \(\gamma_{1},\ldots,\gamma_{m}\in\Gamma\). By pointed we mean that \(P\) contains no lines. Conversely, if \(P\) is a pointed, \(\Gamma\)-rational polyhedron, then the closed cone
\[c(P):=\overline{\{(tw,t)\mid t\in\mathbb{R}_{\geq 0},w\in P\}}\subseteq N_{ \mathbb{R}}\times\mathbb{R}_{\geq 0}\]
over \(P\) is \(\Gamma\)-admissible. Note that \(c(P)\cap(N_{\mathbb{R}}\times\{1\})=P\times\{1\}\); this gives a correspondence between \(\Gamma\)-admissible cones that meet \(N_{\mathbb{R}}\times\{1\}\) and \(\Gamma\)-rational polyhedra in \(N_{\mathbb{R}}\). If \(\sigma\) does not meet \(N_{\mathbb{R}}\times\{1\}\) then \(\sigma\subseteq N_{\mathbb{R}}\times\{0\}\) and \(\sigma\) is \(\Gamma\)-admissible if and only if \(\pi(\sigma)\) is a pointed rational cone in \(N_{\mathbb{R}}\) (recall that rational means generated by elements of \(N\)). Let \(\Delta\) be a fan in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\). For \(i=0,1\) let \(\Delta|_{N_{\mathbb{R}}\times\{i\}}\) denote the set \(\{\pi(\sigma\cap(N_{\mathbb{R}}\times\{i\}))\mid\sigma\in\Delta\}\). From this discussion we see that a fan \(\Delta\) is \(\Gamma\)-admissible if and only if \(\Delta|_{N_{\mathbb{R}}\times\{1\}}\) is a \(\Gamma\)-rational polyhedral complex and \(\Delta|_{N_{\mathbb{R}}\times\{0\}}\) is a rational fan. In [1] the authors study when one can define a \(\Gamma\)-admissible fan given a polyhedral complex \(N_{\mathbb{R}}\times\{1\}\).
Given a \(\Gamma\)-admissible cone \(\sigma\) in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\), the affine normal \(\mathbb{T}\)-toric scheme corresponding to \(\sigma\) is \(\mathcal{U}(\sigma):=\operatorname{Spec}K[M]^{\sigma}\) where
\[K[M]^{\sigma}:=\left\{\sum_{u\in M}\alpha_{u}\chi^{u}\in K[M]\;\middle|\; \text{for all }(w,t)\in\sigma\text{ and }u\in M,\,\langle u,w\rangle+v(\alpha_{u})t\geq 0 \right\}.\]
If \(\Gamma\) is discrete then Gordan's lemma shows that \(K[M]^{\sigma}\) is a finitely generated \(K^{\circ}\)-algebra. If \(\Gamma\) is not discrete then \(K[M]^{\sigma}\) is finitely generated as a \(K^{\circ}\)-algebra if and only if all of the vertices of \(\sigma\cap N_{\mathbb{R}}\times\{1\}\) are in \(N_{\Gamma}\times\{1\}\); see [1, Proposition 6.9].
If \(\tau\) is a face of a \(\Gamma\)-admissible cone \(\sigma\) then the inclusion \(K[M]^{\sigma}\subset K[M]^{\tau}\) induces a \(\mathbb{T}\)-equivariant open immersion \(\mathcal{U}(\tau)\hookrightarrow\mathcal{U}(\sigma)\). Given a \(\Gamma\)-admissible fan \(\Delta\) in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\), the normal \(\mathbb{T}\)-toric scheme \(\mathcal{Y}(\Delta)\) corresponding to \(\Delta\) is obtained by gluing the schemes \(\mathcal{U}(\sigma)\) for \(\sigma\in\Delta\) along the open immersions \(\mathcal{U}(\tau)\hookrightarrow\mathcal{U}(\sigma)\) for \(\tau\leq\sigma\). The following theorem, which classifies the \(\mathbb{T}\)-toric varieties, was first shown in [1, 1] in the case where \(\Gamma\) is discrete and was then shown in [1, Theorem 3] in the case where \(\Gamma\) is not discrete.
**Theorem 2.1**.: _If \(\Gamma\) is discrete then \(\Delta\mapsto\mathcal{Y}(\Delta)\) gives a bijection from the set of finite \(\Gamma\)-admissible fans in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) to the set of isomorphism classes of normal \(\mathbb{T}\)-toric varieties. If \(\Gamma\) is not discrete then \(\Delta\mapsto\mathcal{Y}(\Delta)\) gives a bijection from the set of finite \(\Gamma\)-admissible fans \(\Delta\) in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) such that all vertices of \(\Delta|_{N_{\mathbb{R}}\times\{1\}}\) are in \(N_{\Gamma}\) to the set of isomorphism classes of normal \(\mathbb{T}\)-toric varieties._
Recall that \(\mathbb{Q}\Gamma\subset\mathbb{R}\) denotes the divisible hull of \(\Gamma\). For any \(\Gamma\)-admissible fan \(\Delta\) in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\), all of the vertices of \(\Delta|_{N_{\mathbb{R}}\times\{1\}}\) are in \(N_{\mathbb{Q}\Gamma}\). In particular, if \(\Gamma\) is divisible then \(\Delta\mapsto\mathcal{Y}(\Delta)\) gives a bijection from the set of finite \(\Gamma\)-admissible fans in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) to the set of isomorphism classes of normal \(\mathbb{T}\)-toric varieties.
Let \(\Delta\) be a \(\Gamma\)-admissible fan in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\). For any \(\sigma\in\Delta\), the generic fiber of \(\mathcal{U}(\sigma)\) is the affine toric variety over \(K\) associated to \(\sigma\cap(N_{\mathbb{R}}\times\{0\})\), viewed as a rational cone in \(N_{\mathbb{R}}\). So if \(\Sigma:=\Delta|_{N_{\mathbb{R}}\times\{0\}}\) is finite, then the generic fiber of \(\mathcal{Y}(\Delta)\) is the toric variety \(Y(\Sigma)\) over \(K\) associated to \(\Sigma\). More generally, if \(\Sigma\) is not finite, then the generic fiber of \(\mathcal{Y}(\Delta)\) is the toric scheme \(Y(\Sigma)\) locally of finite type over \(K\) associated to \(\Sigma\) as in [1, Theorem 4.1].
### \(\mathbb{T}\)-toric schemes locally of finite type
We now extend Theorem 2.1 to a classification of normal \(\mathbb{T}\)-toric schemes locally of finite type over \(K^{\circ}\). First, we need some general results about actions of group schemes over a base.
**Lemma 2.2**.: _Let \(S\) be a scheme and let \(G\) be a group \(S\)-scheme which is universally open. If \(\varphi\colon G\times_{S}X\to X\) is an action of \(G\) on an \(S\)-scheme \(X\), then \(\varphi\) is open._
Proof.: This claim appears in [10, Ch. 0 SS2 Remark (4)]. In that remark there are other standing hypotheses, but those hypotheses are not needed for the relevant part of the remark.
**Proposition 2.3**.: _Let \(S\) be a scheme and let \(G\) be a group \(S\)-scheme which is universally open and quasicompact over \(S\). If \(\varphi\colon G\times_{S}X\to X\) is an action of \(G\) on an \(S\)-scheme \(X\), then every point of \(X\) is contained in a \(G\)-invariant quasicompact open subscheme of \(X\)._
Proof.: Given \(x\in X\), let \(U^{\prime}\) be an affine open neighborhood of \(x\). Then \(G\times_{S}U^{\prime}\) is a quasicompact scheme. Let \(U\) be the set-theoretic image \(\varphi(G\times_{S}U^{\prime})\). Then by Lemma 2.2, \(U\) is an open neighborhood of \(x\). Since \(G\times_{S}U^{\prime}\) is quasicompact and \(\varphi\) is continuous, \(U\) is quasicompact.
**Remark 2.4**.: By [11, Theoreme 2.4.6], the hypotheses of Proposition 2.3 are satisfied if \(G\) is flat and of finite presentation over \(S\).
We now apply Proposition 2.3 to the case of \(\mathbb{T}\)-toric schemes locally of finite type over \(K^{\circ}\).
**Proposition 2.5**.: _Any normal \(\mathbb{T}\)-toric scheme which is locally of finite type over \(K^{\circ}\) is covered by \(\mathbb{T}\)-invariant affine open subschemes._
Proof.: Let \(\mathcal{Y}\) be a normal \(\mathbb{T}\)-toric scheme which is locally of finite type over \(K^{\circ}\). Since \(\mathbb{T}\) is flat and of finite presentation over \(K^{\circ}\), Proposition 2.3 and Remark 2.4 tell us that any point \(y\in\mathcal{Y}\) is contained in a \(\mathbb{T}\)-invariant quasicompact open subscheme \(\mathcal{U}\). Then \(\mathcal{U}\) is a normal \(\mathbb{T}\)-toric variety and so, by [1, Theorem 2]\(y\) is contained in a \(\mathbb{T}\)-invariant affine open subscheme.
**Remark 2.6**.: Proposition 2.3 can also be used to remove the hypothesis that \(X\) is quasicompact from other theorems about the existence of certain types of open covers, such as [23, Corollary 3.11]. To relax the finite type hypothesis in that result to the hypothesis that \(X\) is locally of finite type, the only additional fact needed is that \(X\) is covered by invariant open subschemes that are quasicompact over the base scheme \(S\). However, this is immediate if \(S\) is quasiseparated, as is the case in the aforementioned result, where \(S\) is assumed to be noetherian.
**Theorem 2.7**.: _If \(\Gamma\) is discrete then \(\Delta\mapsto\mathcal{Y}(\Delta)\) gives a bijection from the set of \(\Gamma\)-admissible fans in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) to the set of isomorphism classes of normal \(\mathbb{T}\)-toric schemes locally of finite type over \(K^{\circ}\). If \(\Gamma\) is not discrete then \(\Delta\mapsto\mathcal{Y}(\Delta)\) gives a bijection from the set of \(\Gamma\)-admissible fans \(\Delta\) in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) such that all vertices of \(\Delta|_{N_{\mathbb{R}}\times\{1\}}\) are in \(N_{\Gamma}\) to the set of isomorphism classes of normal \(\mathbb{T}\)-toric schemes locally of finite type over \(K^{\circ}\)._
Proof.: The proof is exactly the same as the proof of [1, Theorem 3], except that we use Proposition 2.5 rather than [1, Theorem 2] to see that the \(\mathbb{T}\)-toric scheme in question has an open cover by affine \(\mathbb{T}\)-toric varieties.
## 3. Raynaud's formal models
We briefly recall the gluing procedure for \(K\)-analytic spaces from [1, SS1.3], the construction of generic fibers of formal schemes, and the relation to Berkovich analytification. We refer the reader to [1] and [1] for background on Berkovich analytic spaces and to [12] for an introduction to the subject with many exercises.
Suppose we have a family \((\mathscr{X}_{i})_{i\in I}\) of \(K\)-affinoid spaces and for each \(i,j\in I\) we have an affinoid domain \(\mathscr{X}_{ij}\subset\mathscr{X}_{i}\) and an isomorphism \(\phi_{ji}\colon\mathscr{X}_{ij}\to\mathscr{X}_{ji}\) such that \(\mathscr{X}_{ii}=\mathscr{X}_{i}\), \(\phi_{ji}(\mathscr{X}_{ij}\cap\mathscr{X}_{ik})=\mathscr{X}_{ji}\cap\mathscr{ X}_{jk}\), and \(\phi_{ki}=\phi_{kj}\circ\phi_{ji}\) on \(\mathscr{X}_{ij}\cap\mathscr{X}_{ik}\). Assume that, for each \(i\in I\), all but finitely many of the \(\mathscr{X}_{ij}\)s are empty. A _gluing of the \(\mathscr{X}_{i}\)s along the \(\mathscr{X}_{ij}s\)_ is a \(K\)-analytic space \(\mathscr{X}\) together with maps \(\phi_{i}\colon\mathscr{X}_{i}\to\mathscr{X}\) identifying \(\mathscr{X}_{i}\) with an affinoid domain in \(\mathscr{X}\) and satisfying
* \(\phi_{i}(\mathscr{X}_{ij})=\phi_{i}(\mathscr{X}_{i})\cap\phi_{j}(\mathscr{X}_{ j})\),
* \(\phi_{i}=\phi_{j}\circ\phi_{ji}\) on \(\mathscr{X}_{ij}\), and
* \(\{\phi_{i}(\mathscr{X}_{i})\mid i\in I\}\) is a quasinet on \(\mathscr{X}\), i.e., every point \(x\in\mathscr{X}\) has a neighborhood of the form \(\bigcup_{j=1}^{n}\phi_{i_{j}}(\mathscr{X}_{i_{j}})\) with \(x\in\bigcap_{j=1}^{n}\phi_{i_{j}}(\mathscr{X}_{i_{j}})\).
By [1, Proposition 1.3.3(b)], such a gluing exists and is unique up to unique isomorphism. Furthermore, [1, Proposition 1.3.2] tells us that maps from \(\mathscr{X}\) to any \(K\)-analytic space \(\mathscr{Y}\) are naturally in bijection with families \((f_{i})_{i\in I}\) of maps \(f_{i}\colon\mathscr{X}_{i}\to\mathscr{Y}\) such that \(f_{i}=f_{j}\circ\phi_{ji}\) on \(\mathscr{X}_{ij}\).
In order to ensure that the above gluing process can be used to construct the generic fiber of a formal scheme as a Berkovich space we will need to impose a topological condition on our formal schemes. Following [1, SS8.2 Definition 12], we say that a topological space \(X\) is _quasi-paracompact_ if \(X\) admits a cover \(\{U_{i}\}_{i\in I}\) consisting of quasicompact open sets such that, for each \(i\in I\), \(U_{i}\cap U_{j}=\emptyset\) for all but finitely many \(j\in I\).
We briefly recall the definition of admissible formal \(K^{\circ}\)-schemes. A topological \(K^{\circ}\)-algebra \(A\) is _admissible_ if there is some nonzero \(\alpha\in K^{\circ\circ}\) such that \(A\) is \(\alpha\)-torsion free, has the \(\alpha\)-adic topology, and is isomorphic as a \(K^{\circ}\)-algebra to a quotient of \(K^{\circ}\left\langle\zeta_{1},\dots,\zeta_{n}\right\rangle\), the \(\alpha\)-adic completion of the polynomial ring \(K^{\circ}[\zeta_{1},\dots,\zeta_{n}]\)[1, SS7.3, Definition 3 and Corollary 5]. Each of these properties is independent of the choice of \(\alpha\). A formal \(K^{\circ}\)-scheme is _admissible_ if it has an open cover by formal spectra of admissible \(K^{\circ}\)-algebras.
We now recall the construction of the generic fiber of a suitable formal scheme as a Berkovich analytic space. We refer the reader to [1, SS7.4] for details and an accessible presentation of the analogous construction as a rigid space. If \(A\) is an admissible \(K^{\circ}\)-algebra then \(K\otimes_{K^{\circ}}A\) is a \(K\)-affinoid algebra. This gives rise to a functor from affine admissible formal schemes over \(K^{\circ}\) to \(K\)-affinoid spaces, sending \(\operatorname{Spf}A\) to the Berkovich spectrum \(\mathscr{M}(K\otimes_{K^{\circ}}A)\) of \(K\otimes_{K^{\circ}}A\). This functor sends inclusions of affine open formal subschemes to inclusions of \(K\)-affinoid domains. This can be extended to a functor that sends a separated, quasi-paracompact, admissible formal \(K^{\circ}\)-scheme \(\mathfrak{X}\) to a \(K\)-analytic spaces \(\mathfrak{X}_{\eta}\), as follows. Because \(\mathfrak{X}\) is quasi-paracompact and admissible, there is a cover \(\{\mathfrak{U}_{i}\mid i\in I\}\) of \(\mathfrak{X}\) by formal spectra \(\mathfrak{U}_{i}=\operatorname{Spf}A_{i}\) of admissible \(K^{\circ}\)-algebras \(A_{i}\) such that, for each \(i\in I\), \(\mathfrak{U}_{i}\cap\mathfrak{U}_{j}=\emptyset\) for all but finitely many \(j\in I\). Moreover, because \(\mathfrak{X}\) is separated, each \(\mathfrak{U}_{i}\cap\mathfrak{U}_{j}\) is also the formal spectrum of an admissible \(K^{\circ}\)-algebra \(A_{ij}\). We obtain \(\mathfrak{X}_{\eta}\) by gluing the \(\mathscr{M}(K\otimes_{K^{\circ}}A_{i})\)s along the \(\mathscr{M}(K\otimes_{K^{\circ}}A_{ij})\)s. We call
the _generic fiber_ of \(\mathfrak{X}\). Note that if \(\mathfrak{X}\) is an admissible formal scheme which is not quasi-paracompact, the generic fiber may not exist as a Berkovich analytic space.
Given a \(K\)-analytic space \(\mathscr{Z}\), a _formal model of \(\mathscr{Z}\)_ consists of a separated, quasi-paracompact, admissible formal \(K^{\circ}\)-scheme \(\mathfrak{Z}\) and an isomorphism \(\mathscr{Z}\stackrel{{\sim}}{{\longrightarrow}}\mathfrak{Z}_{\eta}\).
Let \(\mathcal{X}\) be a scheme which is separated, flat, and locally of finite type over \(K^{\circ}\). Suppose that the special fiber \(\mathcal{X}_{s}\) of \(\mathcal{X}\) is quasi-paracompact. Let \(\mathfrak{X}\) be the formal completion of \(\mathcal{X}\) along the special fiber, by which we mean the \(\alpha\)-adic completion for any nonzero \(\alpha\in K^{\circ\circ}\), and let \(X=\mathcal{X}_{K}\) be the generic fiber of \(\mathcal{X}\). Note that the hypotheses above guarantee that \(\mathfrak{X}\) is a separated, quasi-paracompact, admissible formal \(K^{\circ}\)-scheme. There is a natural map \(\iota=\iota_{\mathcal{X}}\colon\mathfrak{X}_{\eta}\to X^{\mathrm{an}}\) from the generic fiber of \(\mathfrak{X}\) to the analytification of \(X\), defined as follows. If \(\mathcal{X}=\operatorname{Spec}A\) is affine then there is a natural identification \(\iota\) of \(\mathfrak{X}_{\eta}\) with the affinoid domain
\[\{x\in X^{\mathrm{an}}\mid\text{for all $f\in A$, $|f(x)|\leq 1$}\};\]
see [1, SS5] or [1, SS4.13]. Furthermore, if \(\mathcal{U}\) is an affine open subscheme of \(\mathcal{X}\) with generic fiber \(U\) and formal completion \(\mathfrak{U}\) along its special fiber, then the diagram
commutes, where the vertical maps are induced by the inclusion \(\mathcal{U}\hookrightarrow\mathcal{X}\). For an arbitrary \(\mathcal{X}\), the previous sentence gives us that for any open affine subschemes \(\mathcal{U}_{1},\mathcal{U}_{2}\subset\mathcal{X}\) the maps \((\mathfrak{U}_{i})_{\eta}\stackrel{{\iota_{\mathcal{U}_{i}}}}{{ \longrightarrow}}U_{i}^{\mathrm{an}}\to X^{\mathrm{an}}\) for \(i=1,2\) agree on \((\mathfrak{U}_{1}\cap\mathfrak{U}_{2})_{\eta}\), so the universal property of gluing gives us the map \(\iota\colon\mathfrak{X}_{\eta}\to X^{\mathrm{an}}\).
If \(\mathcal{U}_{1}=\operatorname{Spec}A_{1}\) and \(\mathcal{U}_{2}=\operatorname{Spec}A_{2}\) are open affine subschemes of \(\mathcal{X}\) then because \(\mathcal{X}\) is separated we have that \(\mathcal{U}_{12}:=\mathcal{U}_{1}\cap\mathcal{U}_{2}\) is affine with coordinate ring \(A_{12}\) which is generated as a \(K^{\circ}\)-algebra by the images of \(A_{1}\) and \(A_{2}\). Note that \(\iota((\mathfrak{U}_{1})_{\eta})\cap\iota((\mathfrak{U}_{2})_{\eta})\) is contained in \(U_{1}^{\mathrm{an}}\cap U_{2}^{\mathrm{an}}=U_{12}^{\mathrm{an}}\) where it takes the form
\[\{x\in U_{12}^{\mathrm{an}}\mid\text{for all $f\in A_{1}\cup A_{2}$, $|f(x)|\leq 1$}\}.\]
Since \(A_{1}\cup A_{2}\) generates \(A_{12}\), we get that \(\iota((\mathfrak{U}_{1})_{\eta})\cap\iota((\mathfrak{U}_{2})_{\eta})=\iota(( \mathfrak{U}_{12})_{\eta})\).
**Remark 3.1**.: Let \(\mathcal{X}\) have generic fiber \(X\). We will say that \(\mathcal{X}\) satisfies condition (*) if there is a collection \(\{\mathcal{U}_{i}\mid i\in I\}\) of affine open subsets of \(\mathcal{X}\) such that \(\bigcup_{i\in I}\mathcal{U}_{i}\) contains the special fiber of \(\mathcal{X}\), for each \(i\in I\), \(\mathfrak{U}_{i}\) meets only finitely many \(\mathfrak{U}_{j}\) for \(j\in I\) and \(\{\iota((\mathfrak{U}_{i})_{\eta})\mid i\in I\}\) is a quasinet on \(\iota(\mathfrak{X}_{\eta})\subset X^{\mathrm{an}}\). In this situation \(\iota(\mathfrak{X}_{\eta})\) is an analytic domain in \(X^{\mathrm{an}}\) and, in light of the previous paragraph, we see that \(\iota(\mathfrak{X}_{\eta})\) is a gluing of the \((\mathfrak{U}_{i})_{\eta}\)s along the \((\mathfrak{U}_{i}\cap\mathfrak{U}_{j})_{\eta}\)s, so \(\iota\) gives an isomorphism from \(\mathfrak{X}_{\eta}\) to \(\iota(\mathfrak{X}_{\eta})\). If \(\mathcal{X}\) satisfies condition (*) with the collection \(\{\mathcal{V}_{i}\mid i\in I\}\), then for any closed subscheme \(\mathcal{Z}\subset\mathcal{X}\) which is flat over \(K^{\circ}\), the collection \(\{\mathcal{U}_{i}:=\mathcal{V}_{i}\cap\mathcal{Z}\mid i\in I\}\) shows that \(\mathcal{Z}\) satisfies condition (*). Moreover, in this case we have \(\iota_{\mathcal{Z}}(\mathfrak{Z}_{\eta})=Z^{\mathrm{an}}\cap\iota_{\mathcal{X }}(\mathfrak{X}_{\eta})\) in \(X^{\mathrm{an}}\). Here \(Z\) is the generic fiber of \(\mathcal{Z}\), and \(\mathfrak{Z}\) is the formal completion of \(\mathcal{Z}\) along its special fiber. However, in the absence of condition (*) there are examples where \(\iota\) is not an isomorphism on to an analytic domain in \(X^{\mathrm{an}}\); see Example 4.2.
## 4. Proofs of main theorems
We now prove Theorems 1.1 and 1.2. Let \(\mathcal{Y}=\mathcal{Y}(\Delta)\) be a normal \(\mathbb{T}\)-toric scheme which is locally of finite type over \(K^{\circ}\), given by a \(\Gamma\)-admissible fan \(\Delta\) in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\). We begin by discussing how one computes the generic fiber of the completion of a \(\mathcal{Y}\) along its special fiber. We then give the proofs of the main theorems.
We begin by discussing some finiteness conditions on \(\Delta\). Let \(\Sigma:=\Delta|_{N_{\mathbb{R}}\times\{0\}}\) and \(\Phi:=\Delta|_{N_{\mathbb{R}}\times\{1\}}\). Firstly, notice that the generic fiber \(\mathcal{Y}\) is a variety if and only if \(\Sigma\) is finite, in which case the generic fiber is the toric variety given by \(\Sigma\), \(Y(\Sigma)\). Because we are only interested in \(\mathcal{Y}\) with generic fiber a variety, we will assume \(\Sigma\) is finite. Secondly, Theorem 1.2 is proved by modifying the special fiber of \(\mathcal{Y}\); to construct and compute the formal completion along the special fiber as in Lemma 4.3, we will need to consider some finiteness conditions on \(\Phi\). We say that \(\Phi\) in \(N_{\mathbb{R}}\) is _combinatorially locally finite_ if every polyhedron in \(\Phi\) meets only finitely many other polyhedra in \(\Phi\). An even stronger condition on \(\Phi\) would be local finiteness. Let \(Z\) be a topological space and \(\mathcal{A}\) a collection of subsets of \(Z\). Then we say \(\mathcal{A}\) is _locally finite_ if every point of \(Z\) has a neighborhood that meets at most finitely elements of \(\mathcal{A}\). Locally finite implies combinatorially locally finite but the converse is not true as the following example shows.
**Example 4.1**.: Let \(N=\mathbb{Z}\) and let \(\Phi\) be the polyhedral complex in \(N_{\mathbb{R}}=\mathbb{R}\) whose maximal cells are given by \([\frac{1}{n+1},\frac{1}{n}]\) for \(n\in\mathbb{Z}_{>0}\) and the vertex \(0\). Then \(\Phi\) is combinatorially locally finite, but is not locally finite at \(0\).
For each \(P\in\Phi\) set \(\mathcal{U}_{P}:=\mathcal{U}(c(P))\) where \(c(P)\subset N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) is the closed cone over \(P\). Note that, for \(\sigma,\tau\in\Delta\), \(\mathcal{U}(\sigma)\cap\mathcal{U}(\tau)=\mathcal{U}(\sigma\cap\tau)\) and \(\mathcal{U}(\sigma)\) is contained in the generic fiber of \(\mathcal{Y}\) if and only if \(\sigma\) is contained in \(N_{\mathbb{R}}\times\{0\}\). So the special fiber \(\mathcal{Y}_{s}\) of \(\mathcal{Y}\) has the affine open cover \(\{(\mathcal{U}_{P})_{s}\mid P\in\Phi\}\) and, for any \(P_{1},P_{2}\in\Phi\), \((\mathcal{U}_{P_{1}})_{s}\) meets \((\mathcal{U}_{P_{2}})_{s}\) if and only if \(P_{1}\) meets \(P_{2}\). From this we see that when \(\Phi\) is combinatorially locally finite in \(N_{\mathbb{R}}\) the special fiber \(\mathcal{Y}_{s}\) is quasi-paracompact. Let \(\mathfrak{Y}=\mathfrak{Y}(\Delta)\) be the formal completion of \(\mathcal{Y}\) along its special fiber. When \(\Delta\) is locally combinatorially finite, we can consider the generic fiber \(\mathfrak{Y}_{\eta}\) of \(\mathfrak{Y}\) and the natural map \(\iota\) from \(\mathfrak{Y}_{\eta}\) to the analytification of \(Y(\Sigma)\).
In order to study \(\iota\) we briefly recall the construction of the (extended) tropicalization map, referring the reader to [10, SS3] for more detail. Denote the generic fiber of \(\mathbb{T}\) by \(T\) and its character lattice by \(M\). So \(Y(\Sigma)\) has dense torus \(T\). For any \(\sigma\in\Sigma\) there is a corresponding open affine subset \(\operatorname{Spec}K[S_{\sigma}]=U(\sigma)\subset Y(\Sigma)\), where \(S_{\sigma}\) is the submonoid of \(M\) determined by \(\sigma\). There is also a continuous map \(\operatorname{trop}_{\sigma}\colon U(\sigma)^{\operatorname{an}}\to N_{ \mathbb{R}}(\sigma)=\operatorname{Hom}_{\operatorname{mon}}(S_{\sigma}, \overline{\mathbb{R}})\) defined by sending \(x\in U(\sigma)^{\operatorname{an}}\) to the homomorphism \(S_{\sigma}\to\overline{\mathbb{R}}\) given by \(u\mapsto-\log|u(x)|\). The maps \(\operatorname{trop}_{\sigma}\) for \(\sigma\in\Sigma\) glue to give a continuous map
\[\operatorname{trop}\colon Y(\Sigma)^{\operatorname{an}}\to N_{\mathbb{R}}( \Sigma),\]
called the _tropicalization map_. For \(\sigma\in\Sigma\) and \(w\in N_{\mathbb{R}}(\sigma)\subset N_{\mathbb{R}}(\Sigma)\) we have \(\operatorname{trop}_{\sigma}^{-1}(w)=\operatorname{trop}^{-1}(w)\). Note that \(N_{\mathbb{R}}(\{0\})=N_{\mathbb{R}}\) and \(N_{\mathbb{R}}(\Sigma)\) is a partial compactification of \(N_{\mathbb{R}}\).
For \(P\in\Phi\), let \(\mathfrak{U}_{P}\) be the formal completion of \(\mathcal{U}_{P}\) along its special fiber. By [13, Proposition 6.9] and [14, Proposition 6.19], \(\iota\) maps \((\mathfrak{U}_{P})_{\eta}\) isomorphically onto the affinoid domain \(\operatorname{trop}^{-1}(\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}(P))\subset Y (\Sigma)^{\operatorname{an}}\). Here \(\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}\) denotes the closure in \(N_{\mathbb{R}}(\Sigma)\). In the case where \(\Delta\) is finite, Gubler studied \(\mathcal{Y}\) and the
generic fiber \(\mathfrak{Y}_{\eta}\cong\operatorname{trop}^{-1}\left(\bigcup_{P\in\Phi}\operatorname {cl}_{N_{\mathbb{R}}(\Sigma)}(P)\right)\) in [10]. Generalizing this to the case where \(\Phi\) is locally finite will be very similar to the proof in the finite case. Let \(|\Phi|=\cup_{P\in\Phi}P\). We note that we do need to assume that \(\Phi\) is locally finite in Lemma 4.3 and not just combinatorially locally finite, as the following example shows.
**Example 4.2**.: Let \(N=\mathbb{Z}\) and say \(\Gamma\) is \(\mathbb{Z}\) or \(\mathbb{Q}\). Let \(\Phi\) be as in Example 4.1, and let \(\Delta\) be the fan in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) whose maximal cones are \(c(P)\) for \(P\in\Phi\) maximal. Then \(\mathfrak{Y}_{\eta}\) is the disjoint union of \(\operatorname{trop}^{-1}(0)\) and \(\operatorname{trop}^{-1}((0,1])\). Since \(\operatorname{trop}\) is continuous, admits a continuous section [10, SS3.3], and has connected fibers, [10, Ch. 1, SS3.5, Proposition 9 and Ch. 1, SS11.3, Proposition 7] show that \(\operatorname{trop}^{-1}([0,1])\) is connected while \(\mathfrak{Y}_{\eta}\) has two connected components, so they are not isomorphic.
**Lemma 4.3**.: _Suppose that \(\mathcal{A}:=\{\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}(P)\mid P\in\Delta|_{ N_{\mathbb{R}}\times\{1\}}\}\) is locally finite in \(|\mathcal{A}|:=\cup_{A\in\mathcal{A}}A\). Then for any closed subscheme \(\mathcal{X}\subset\mathcal{Y}(\Delta)\) which is flat over \(K^{\circ}\) with generic fiber \(X\) and formal completion \(\mathfrak{X}\) along the special fiber, the natural map \(\iota_{\mathcal{X}}\colon\mathfrak{X}_{\eta}\to X^{\operatorname{an}}\) identifies \(\mathfrak{X}_{\eta}\) with the analytic domain \(\operatorname{trop}^{-1}(|\mathcal{A}|)\cap X^{\operatorname{an}}\)._
Proof.: Because \(\iota\) maps \((\mathfrak{U}_{P})_{\eta}\) onto \(\operatorname{trop}^{-1}(\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}(P))\) for each \(P\in\Phi\), Remark 3.1 gives us that it suffices to show that \(\mathcal{Y}\) satisfies condition (*) with the collection \(\{\mathcal{U}_{P}\mid P\in\Phi\}\). We already know that this is a collection of affine opens containing the special fiber and that each \(\mathfrak{U}_{P}\) only meets finitely many \(\mathfrak{U}_{P^{\prime}}\) for \(P^{\prime}\in\Phi\). Thus it remains only to show that
\[\{\iota((\mathfrak{U}_{P})_{\eta})\mid P\in\Phi\}=\{\operatorname{trop}^{-1}( \operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}(P))\mid P\in\Phi\}\]
is a quasinet on \(\iota(\mathfrak{Y}_{\eta})=\operatorname{trop}^{-1}(|\mathcal{A}|)\subset Y( \Sigma)^{\operatorname{an}}\). Since \(\{\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}(P)\mid P\in\Phi\}\) is a locally finite cover of \(|\mathcal{A}|\) by closed subsets, it is a quasinet on \(|\mathcal{A}|\). So because \(\operatorname{trop}^{-1}(|\mathcal{A}|)\to|\mathcal{A}|\) is continuous, \(\{\operatorname{trop}^{-1}(\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}(P))\mid P \in\Phi\}\) is a quasinet on \(\operatorname{trop}^{-1}(|\mathcal{A}|)\).
**Corollary 4.4**.: _Suppose that \(\Phi\) is locally finite in \(N_{\mathbb{R}}(\Sigma)\). Then for any closed subscheme \(\mathcal{X}\subset\mathcal{Y}(\Delta)\) which is flat over \(K^{\circ}\) with generic fiber \(X\) and formal completion \(\mathfrak{X}\) along the special fiber, the natural map \(\iota_{\mathcal{X}}\colon\mathfrak{X}_{\eta}\to X^{\operatorname{an}}\) identifies \(\mathfrak{X}_{\eta}\) with the analytic domain \(\operatorname{trop}^{-1}(\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}|\Phi|)\cap X ^{\operatorname{an}}\)._
Proof.: Since \(\Phi\) is locally finite in \(N_{\mathbb{R}}(\Sigma)\), so is \(\{\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}(P)\mid P\in\Phi\}\). So by Lemma 4.3 we only need to show that \(\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}|\Phi|=\bigcup_{P\in\Phi} \operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}(P)\). But this follows from the fact that \(\Phi\) is locally finite in \(N_{\mathbb{R}}(\Sigma)\)[10, Ch. 1, SS1.5, Proposition 4].
Given the hypotheses in Lemma 4.3 and Corollary 4.4, one might hope that it would be enough to have \(\Phi\) locally finite in \(|\Phi|\). As the following example shows, this is not sufficient.
**Example 4.5**.: Let \(N=\mathbb{Z}^{2}\) and say \(\Gamma\) is \(\mathbb{Z}\) or \(\mathbb{Q}\). For any positive integer \(n\) let \(P_{n}:=\{(x,y)\in\mathbb{R}^{2}\mid\frac{1}{n+1}\leq x\leq\frac{1}{n},y\geq(2n +1)-n(n+1)x\}\). That is, \(P_{n}\) is the polyhedron in \(\mathbb{R}^{2}\) with vertices \((\frac{1}{n},n)\) and \((\frac{1}{n+1},n+1)\) and recession cone \(\{(0,y)\in\mathbb{R}^{2}\mid y\geq 0\}\). Let \(\Sigma\) be the fan whose maximal cone is \(\{(0,y)\in\mathbb{R}^{2}\mid y\geq 0\}\), let \(P_{0}:=\{(0,y)\in\mathbb{R}^{2}\mid y\geq 0\}\), and let \(\Phi\) be the polyhedral complex whose maximal faces are \(P_{n}\) for \(n\geq 0\). Then \(\Phi\) is locally finite in \(N_{\mathbb{R}}\), but not in \(N_{\mathbb{R}}(\Sigma)\). So if we let \(\Delta\) be the fan in \(\mathbb{R}^{2}\times\mathbb{R}_{\geq 0}\) whose maximal cones are \(c(P_{n})\) for \(n\geq 0\), then, as in Example 4.2, we find that \(\mathfrak{Y}_{\eta}\) has two connected components while \(\operatorname{trop}^{-1}\left(\bigcup_{n\geq 0}\operatorname{cl}_{N_{\mathbb{R}}( \Sigma)}(P_{n})\right)\) is connected, so they are not isomorphic.
We now have all of the ingredients we need to prove Theorem 1.2.
Proof of Theorem 1.2.: We first consider the case in which \(\Gamma\) is discrete or divisible. By [1, Theorem 1.1] there is a \(\Gamma\)-rational completion \(\overline{\Phi}\) of \(\Phi\) such that \(\{\operatorname{rec}P\mid P\in\overline{\Phi}\}=\Sigma\) and \(\overline{\Phi}\) is locally finite in \(N_{\mathbb{R}}(\Sigma)\). Define \(\overline{\Delta}:=\{c(P)\mid P\in\overline{\Phi}\}\cup\{\sigma\times\{0\}\mid \sigma\in\Sigma\}\). Then \(\overline{\Delta}\) is a fan by [1, Lemma 4.6]. This Lemma applies because for each cone \(c(P)\) we have that \(c(P)\cap(N_{\mathbb{R}}\times\{0\})=\operatorname{rec}P\), where \(\operatorname{rec}P\) denotes the recession cone of \(P\) as in Subsection 2.1 of [1]. Furthermore \(\overline{\Delta}\) is \(\Gamma\)-admissible and satisfies \(\overline{\Delta}|_{N_{\mathbb{R}}\times\{0\}}=\Sigma\) and \(\overline{\Delta}|_{N_{\mathbb{R}}\times\{1\}}=\overline{\Phi}\). Letting \(\overline{\mathcal{Y}}\) be the normal \(\mathbb{T}\)-toric scheme corresponding to \(\overline{\Delta}\), because \(\Gamma\) is discrete or divisible we have that \(\overline{\mathcal{Y}}\) is locally of finite type over \(K^{\circ}\). Since \(\Delta\) is a subfan of \(\overline{\Delta}\), there is a \(\mathbb{T}\)-equivariant open immersion \(\mathcal{Y}\hookrightarrow\overline{\mathcal{Y}}\), and the induced map on generic fibers is an isomorphism because \(\overline{\Delta}|_{N_{\mathbb{R}}\times\{0\}}=\Delta|_{N_{\mathbb{R}}\times\{ 0\}}\). Letting \(Y=Y(\Sigma)\) be the common generic fiber of \(\mathcal{Y}\) and \(\overline{\mathcal{Y}}\), and letting \(\overline{\mathfrak{Y}}\) be the formal completion of \(\overline{\mathcal{Y}}\) along its special fiber, Corollary 4.4 tells us that \(\iota_{\overline{\mathcal{Y}}}\colon\overline{\mathfrak{Y}}_{\eta}\to Y^{ \operatorname{an}}\) identifies \(\overline{\mathfrak{Y}}_{\eta}\) with \(\operatorname{trop}^{-1}(\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}|\overline{ \Phi}|)=\operatorname{trop}^{-1}(\operatorname{cl}_{N_{\mathbb{R}}(\Sigma)}N_ {\mathbb{R}})=\operatorname{trop}^{-1}(N_{\mathbb{R}}(\Sigma))=Y^{ \operatorname{an}}\).
Now suppose that \(\Gamma\) is neither discrete nor divisible. By [10, Theorem 1.1] there is a finite separable totally ramified extension \(L/K\) of valued fields such that \(\mathcal{Y}_{L^{\circ}}\) admits a normal \(\mathbb{T}_{L^{\circ}}\)-equivariant completion. So by making the base-change to \(L^{\circ}\) we may assume without loss of generality that \(\mathcal{Y}\) admits a normal \(\mathbb{T}\)-equivariant completion, i.e., a \(\mathbb{T}\)-equivariant open immersion \(\mathcal{Y}\hookrightarrow\mathcal{Y}^{\prime}\) with \(\mathcal{Y}^{\prime}\) a normal \(\mathbb{T}\)-toric variety which is proper over \(K^{\circ}\). Let \(\Delta^{\prime}\) be the finite \(\Gamma\)-admissible fan in \(N_{\mathbb{R}}\times\mathbb{R}_{\geq 0}\) corresponding to \(\mathcal{Y}^{\prime}\) and let \(\Pi:=\Delta^{\prime}|_{N_{\mathbb{R}}\times\{1\}}\). Because \(\mathcal{Y}^{\prime}\) is proper over \(K^{\circ}\), [1, Proposition 11.8] tells us that \(\Delta^{\prime}\) is complete, so \(\Pi\) is also complete. As \(\Gamma\) is not discrete and \(\mathcal{Y}^{\prime}\) is of finite type over \(K^{\circ}\), \(\Pi\) is a finite completion of \(\Phi\) whose vertices are all in \(N_{\Gamma}\). Thus, Theorem [1, Theorem 1.1] tells us that there is a \(\Gamma\)-rational completion \(\overline{\Phi}\) of \(\Phi\) such that \(\{\operatorname{rec}P\mid P\in\overline{\Phi}\}=\Sigma\), \(\overline{\Phi}\) is locally finite in \(N_{\mathbb{R}}(\Sigma)\), and all of the vertices of \(\overline{\Phi}\) are in \(N_{\Gamma}\). The remainder of the proof is exactly as in the previous case, with the exception that the justification of the fact that \(\overline{\mathcal{Y}}\) is locally of finite type in this case is that all of the vertices of \(\overline{\Phi}\) are in \(N_{\Gamma}\).
Finally, we can prove Theorem 1.1.
Proof of Theorem 1.1.: Let \(X\) be a closed subscheme of a normal toric variety \(Y\) over \(K\). Letting \(T\) be the torus acting on \(Y\), \(M\) the character lattice of \(T\), and \(N\) the cocharacter lattice of \(T\), we have that \(Y=Y(\Sigma)\) for a finite rational fan \(\Sigma\) in \(N_{\mathbb{R}}\). If we let \(\mathbb{T}:=\operatorname{Spec}K^{\circ}[M]\) then we can also view \(Y\) as the \(\mathbb{T}\)-toric variety associated to \(\Delta:=\{\sigma\times\{0\}\mid\sigma\in\Sigma\}\).
By [1, Page 18] there is a finite rational completion \(\Sigma^{\prime}\) of \(\Sigma\). Considering the fan \(\Delta^{\prime}:=\{c(\sigma)\mid\sigma\in\Sigma^{\prime}\}\cup\{\sigma\times\{ 0\}\mid\sigma\in\Sigma^{\prime}\}\), we have that \(\mathcal{Y}(\Delta^{\prime})\) is a normal equivariant completion of \(Y\), viewed as a \(\mathbb{T}\)-toric variety. So by Theorem 1.2 there is a normal \(\mathbb{T}\)-toric scheme \(\overline{\mathcal{Y}}\) locally of finite type over \(K^{\circ}\) and a \(\mathbb{T}\)-equivariant open immersion \(Y\hookrightarrow\overline{\mathcal{Y}}\) identifying \(Y\) with the generic fiber of \(\overline{\mathcal{Y}}\). Furthermore we have that for any closed subscheme \(\mathcal{X}\subset\overline{\mathcal{Y}}\) which is flat over \(K^{\circ}\) with generic fiber \(\mathcal{X}_{K}\) and formal completion \(\mathfrak{X}\) along the special fiber, \(\iota_{\mathcal{X}}\colon\mathfrak{X}_{\eta}\to(\mathcal{X}_{K})^{ \operatorname{an}}\) is an isomorphism.
Let \(\mathcal{X}\) be the closure of \(X\) in \(\overline{\mathcal{Y}}\), i.e., the scheme-theoretic image of the inclusion morphism \(X\hookrightarrow\overline{\mathcal{Y}}\). Then \(\mathcal{X}\) is a closed subscheme of \(\overline{\mathcal{Y}}\) which is flat over \(K^{\circ}\) and
has generic fiber \(X\)[1, Remark 4.6]. So \(\iota_{\mathcal{X}}\colon\mathfrak{X}_{\eta}\to X^{\mathrm{an}}\) is an isomorphism. Thus \(\mathfrak{X}\) is an algebraizable formal model of \(X^{\mathrm{an}}\).
|
2309.06576 | Nonlocal Quantum Field Theory and Quantum Entanglement | We discuss the nonlocal nature of quantum mechanics and the link with
relativistic quantum mechanics such as formulated by quantum field theory. We
use here a nonlocal quantum field theory (NLQFT) which is finite, satisfies
Poincar\'e invariance, unitarity and microscopic causality. This nonlocal
quantum field theory associates infinite derivative entire functions with
propagators and vertices. We focus on proving causality and discussing its
importance when constructing a relativistic field theory. We formulate scalar
field theory using the functional integral in order to characterize quantum
entanglement and the entanglement entropy of the theory. Using the replica
trick, we compute the entanglement entropy for the theory in 3 + 1 dimensions
on a cone. The result is free of UV divergences and we recover the area law. | Robin Landry, John Moffat | 2023-07-21T20:42:07Z | http://arxiv.org/abs/2309.06576v3 | # Nonlocal Quantum Field Theory and Quantum Entanglement
###### Abstract
We discuss the nonlocal nature of quantum mechanics and the link with relativistic quantum mechanics such as formulated by quantum field theory. We use here a nonlocal quantum field theory (NLQFT) which is finite, satisfies Poincare invariance, unitarity and microscopic causality. This nonlocal quantum field theory associates infinite derivative entire functions with propagators and vertices. We focus on proving causality and discussing its importance when constructing a relativistic field theory. We formulate scalar field theory using the functional integral in order to characterize quantum entanglement and the entanglement entropy of the theory. Using the replica trick, we compute the entanglement entropy for the theory in \(3+1\) dimensions on a cone. The result is free of UV divergences and we recover the area law.
[email protected]_
[email protected]_
###### Contents
* 1 Introduction
* 2 Quantum Mechanics, Quantum Entanglement and Nonlocality
* 2.1 Quantum entanglement
* 2.2 Bell's theorem
* 3 Nonlocal Quantum Field Theory
* 3.1 Nonlocal Scalar Field Theory
* 3.2 Causality
* 3.3 Path integral formulation of NLQFT
* 3.4 Entanglement entropy in NLQFT
* 4 Conclusions
Introduction
Quantum entanglement is one of the most bizarre features of quantum mechanics. When two quantum systems interact and become entangled, their quantum states are correlated in a non-classical way. Measuring a property of one system seems to instantaneously influence the other, even if they are separated by a large distance. Bell's theorem states that the predictions made by quantum mechanics, concerning correlations between different measurements performed on physically separate systems cannot be reproduced by any local hidden variable theory, because these predictions are in fact incompatible with Bell's inequality [1]. The violation of Bell's inequality has been experimentally verified, separately by Aspect, Clauser and Zeilinger [3, 5, 7].
Quantum field theory is a more fundamental theory to explain the microscopic world than quantum mechanics. One of the most fundamental aspects of the microscopic world and which is highlighted by quantum mechanics is nonlocality, that is to say the existence of quantum correlations between entangled systems.
Quantum mechanics is an inherently nonlocal theory that violates causality. We will explore a nonlocal quantum field theory (QFT), formulated as a scalar field theory. In this formulation, nonlocality is an inherent property of the field operators. For a field taken at a point x, the field depends on the entire spacetime. This is achieved by regularizing the fields by an entire function distribution operator. The nonlocal quantum field theory satisfies microscopic causality [11, 12].
The primary motivation of this formulation did not originally reside in the nonlocality of its operators, but in the fact that this nonlocal regularization makes the theory finite to all orders of perturbation theory. There is therefore no need to involve infinite renormalization, since the ultraviolet divergences do not intervene. The nonlocal regularization of quantum field theory preserves Poincare invariance, unitarity and microcausality. The problem of infinite renormalizability is resolved by regulated propagators and vertices for the Feynman loop diagrams of QED, QCD and weak interactions [11]. The propagators are regulated by an infinite derivative entire function \(\mathcal{E}(p^{2})\), which is analytic and holomorphic in the complex \(p^{2}\) plane with a pole and/or an essential singularity at infinity. This finite Poincare invariant and unitary QFT allows us to formulate the standard model of particle physics [12] and can be extended to quantum gravity [14, 15]. This choice of a finite theory is therefore to be considered as a potential candidate for a more complete formulation of quantum field theory [11, 12].
In order to study quantum entanglement, one of the most meaningful ways is to study entanglement entropy. This entropy quantifies the quantum entanglement between entangled systems. By calculating it for our quantum field theory, we can therefore observe the direct consequences of taking nonlocality as a founding hypothesis for quantum field theory. To access the entanglement entropy of the scalar field theory, we have to determine the partition function of the theory. The easiest way to do this is to use the path integral formalism.
Currently, quantum entanglement is explained in the framework of algebraic quantum field theory (AQFT), a local theory in the sense of operator algebras, and quantum entanglement is explained via the Reeh-Schlieder theorem [16]. It implies that by acting on the vacuum with any local operator supported in a small region, we can create whatever state we wish in a spacelike separated region. Whereas the entanglement entropy is always UV divergent in local QFT, and therefore in AQFT, we show that it is finite using a nonlocal QFT. And because it is the nonlocal assumption that brings finiteness in the theory, we interpret quantum correlations between entangled systems, i.e quantum entanglement, as a direct consequence of nonlocality.
In the first part of this paper, we discuss the nonlocal interpretation of quantum mechanics. We define quantum entanglement and review Bell's inequality. In the second part of the paper, we explore the consequences of taking a nonlocal quantum field theory to describe quantum entanglement. We propose a nonlocal scalar QFT as an example of a causal nonlocal theory. We then show that it satisfies microscopic causality. Finally, we implement the theory using the path integral formalism, which allows us to calculate the entanglement entropy in the theory.
Quantum Mechanics, Quantum Entanglement and Nonlocality
### Quantum entanglement
In quantum mechanics, in order to define quantum entanglement, we have to consider the pure state of a composite system A and B :
\[|\psi\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}. \tag{1}\]
If this pure state can be written in the form of a tensor product between a state A and a state B,
\[|\psi\rangle=|\phi\rangle_{A}\otimes|\phi\rangle_{B}, \tag{2}\]
then \(|\psi\rangle\) is a separable state and is therefore not entangled.
Whereas quantum entanglement is the mathematical formulation of the inseparability of certain quantum states, nonlocality describes the physical correlations between two separable states. Nonlocality implies quantum entanglement and nonlocality is an interesting path to study quantum entanglement in quantum field theory.
One way to learn whether two systems are entangled is through entanglement entropy. Entanglement entropy makes it possible to quantify the entanglement of a composite system AB. It is a quantity that vanishes, if and only if the composite system AB is not entangled.
The entanglement entropy is the von Neumann entropy of the reduced state of each subsystem A and B. One interesting and useful property of entanglement entropy is that the von Neumann entropy of each subsystem of a composite system is the same. For a composite system AB :
\[S=-Tr(\rho_{A}\ln(\rho_{A}))=-Tr(\rho_{B}\ln(\rho_{B})), \tag{3}\]
where \(\rho_{A}\) and \(\rho_{B}\) are the reduced density matrices of the subsystems A and B.
The calculation of this quantity is a non-trivial problem in quantum field theory. We will see that it is more convenient to opt for a functional integral approach by reformulating the entropy of entanglement in terms of the partition function of the theory.
### Bell's theorem
Bell's theorem states that the predictions made by quantum mechanics, concerning correlations between different measurements performed on physically separate systems cannot be reproduced by any local hidden variable theory, because these predictions are in fact incompatible with Bell's inequalities [1, 2].
Bell's inequalities show that the principle of locality dictates that correlations between different measurements performed on physically separate systems must satisfy certain conditions. Bell's research highlighted the conditions imposed on quantum mechanics by local causality. These Bell inequalities group together a number of inequalities reproduced by local hidden variable theories.
Experiments [3, 5, 7] have verified the violation of Bell's inequalities, excluding local theories of hidden variables and focus on the nonlocal nature of quantum mechanics.
## 3 Nonlocal Quantum Field Theory
We adopt here the Minkowski metric convention \(\eta_{\mu\nu}=\mbox{diag}(+1,-1,-1,-1)\), and we set \(\hbar=c=k_{B}=1\).
### Nonlocal Scalar Field Theory
Let us consider a nonlocal interacting scalar field theory, which describes a spin\(-0\) particle with a mass \(m\). It is the scalar version of the more general particle physics Nonlocal Quantum Field Theory (NLQFT) developed in [8, 10, 11, 12]. The lagrangian density of our theory is given by
\[{\cal L}=\frac{1}{2}\partial_{\mu}\tilde{\phi}\partial^{\mu}\tilde{\phi}-\frac{ 1}{2}m^{2}\tilde{\phi}^{2}-\frac{\lambda}{4!}\tilde{\phi}^{4}, \tag{4}\]
where we define the nonlocal field operator
\[\tilde{\phi}(x)=\int d^{4}x^{\prime}{\cal F}(x-x^{\prime})\phi(x^{\prime})={ \cal F}(\Box_{x})\phi(x). \tag{5}\]
In this expression, \(\phi(x)\) denotes the local field operator and \(\Box_{x}=\partial_{\mu}\partial^{\mu}\).
The regularized position propagator \(\tilde{\Delta}(x-x^{\prime})\) in Minkowski spacetime is the Green's function \(G(x,x^{\prime})\) for the Klein-Gordon equation [11]:
\[(\Box_{x}+m^{2})\tilde{\Delta}(x-x^{\prime})\equiv{\cal E}(x-x^{\prime})=- \frac{1}{4\pi\Lambda_{x}^{4}}\exp\left(-(x-x^{\prime})^{2}/2\Lambda_{x}^{2} \right). \tag{6}\]
We note that taking the limit \(\Lambda_{x}\to 0\), we find the local Klein-Gordon equation :
\[(\Box_{x}+m^{2})\Delta(x-x^{\prime})=-\delta^{4}(x-x^{\prime}), \tag{7}\]
where \(\Delta(x-x^{\prime})\) is the local position propagator.
We have replaced the local dynamics of fields by a nonlocal one. The field at one point of spacetime depends on all other positions in spacetime. By using entire function distribution operators our theory is made to be finite, which means there is no need for infinite renormalization, because the theory is free of UV-divergences.
### Causality
Let us consider the transition amplitude from \(x\) to \(x^{\prime}\) for two spacelike separated points in spacetime. For a free particle \(\langle x^{\prime}|\mbox{e}^{-iHt}|x\rangle\), one can show that the amplitude is non-zero in the case of non-relativistic quantum mechanics as well as in relativistic quantum mechanics. In nonrelativistic quantum mechanics we have \(E=\frac{\overrightarrow{p}^{2}}{2m}\) and obtain [17]:
\[\langle\overrightarrow{x}^{\prime}|\mbox{e}^{-i\frac{p^{2}}{2m}t}| \overrightarrow{x}\rangle=\left(\frac{m}{2\pi it}\right)^{\frac{3}{2}}\mbox{e} ^{im\frac{\langle\overrightarrow{x}^{\prime}-\overrightarrow{x}^{\prime} \rangle^{2}}{2t}}. \tag{8}\]
The transition amplitude is nonzero, which means that a particle can propagate between two points in an arbitrarily short time. For two points outside the light cone, this leads to a violation of causality.
Now let us consider the case of relativistic quantum mechanics where \(E=\sqrt{\overrightarrow{p}^{2}+m^{2}}\). Similar calculations as previously lead to a result expressed in term of Bessel functions. By looking at the asymptotic behavior outside the light cone : \(x^{2}\gg t^{2}\), we obtain [17]:
\[\langle\overrightarrow{x}^{\prime}|\mbox{e}^{-i\sqrt{\overrightarrow{p}^{2}+m ^{2}}t}|\overrightarrow{x}\rangle\sim\mbox{e}^{-m\sqrt{\overrightarrow{x}^{ \prime}-t^{2}}}. \tag{9}\]
The transition amplitude is nonzero and therefore causality is violated. The introduction of creation and annihilation operators in manifestly Lorentz invariant QFT solves the causality problem and allows us to describe multiparticles states.
Let us review the proofs that that two nonlocal field operators \(\tilde{\phi}(x)\) and \(\tilde{\phi}(x^{\prime})\) commute at spacelike separated points \(x\) and \(x^{\prime}\)[11, 12]. We have
\[[\tilde{\phi}(x),\tilde{\phi}(x^{\prime})] =\langle 0|[\tilde{\phi}(x),\tilde{\phi}(x^{\prime})]|0\rangle \tag{10}\] \[=\tilde{\Delta}(x-x^{\prime})-\tilde{\Delta}(x^{\prime}-x). \tag{11}\]
where \(\tilde{\Delta}(x-x^{\prime})\) is the nonlocal propagator. Note that the two right-hand side terms are manifestly Lorentz invariant. Hence, by taking a spacelike spacetime interval, \((x-x^{\prime})^{2}<0\) and by performing a Lorentz transformation on the second term on the right-hand side: \((x-x^{\prime})\to-(x-x^{\prime})\), the two propagators become equal and the commutator vanishes. Micro-causality and therefore causality are preserved [17]. It means that two measurements cannot affect one another outside the light cone.
Let us consider an alternative proof of nonlocal microcausality [12]. The local field operator satisfies the commutation relation
\[[\phi(x),\phi(x^{\prime})]=i\bar{\Delta}(x-x^{\prime}). \tag{12}\]
Here, \(\bar{\Delta}(x-x^{\prime})\) is the Pauli-Jordan propagator define by
\[\bar{\Delta}(x-x^{\prime})=\int\frac{d^{4}p}{(2\pi)^{4}}\mathrm{e}^{-ip\cdot( x-x^{\prime})}\epsilon(p^{0})2\pi\delta(p^{2}-m^{2}), \tag{13}\]
where
\[\epsilon(p^{0})=\theta(p^{0})-\theta(-p^{0})=\begin{cases}+1&\text{ if }p^{0}>0\\ -1&\text{ if }p^{0}<0\end{cases}, \tag{14}\]
and
\[\theta(p^{0})=\begin{cases}1&\text{ if }p^{0}>0\\ 0&\text{ if }p^{0}<0\end{cases}. \tag{15}\]
The Pauli-Jordan propagator vanishes outside the light cone:
\[[\phi(x),\phi(x^{\prime})]=0,\ (x-x^{\prime})^{2}<0. \tag{16}\]
We define [12]:
\[\mathcal{F}(x-x^{\prime})=\mathcal{F}(\Box_{x})\delta^{4}(x-x^{\prime}), \tag{17}\]
where
\[\tilde{\phi}(x)=\int d^{4}x^{\prime}\mathcal{F}(x-x^{\prime})\phi(k)\exp(-ik \cdot x)=\int\frac{d^{4}k}{(2\pi)^{4}}\phi(k)\mathcal{F}(-k^{2})\exp(-ik \cdot x). \tag{18}\]
The nonlocal field operator commutator is given by
\[[\tilde{\phi}(x),\tilde{\phi}(x^{\prime})]=[\mathcal{F}(\Box_{x})\phi(x), \mathcal{F}(\Box_{x^{\prime}})\phi(x^{\prime})]=\mathcal{F}(\Box_{x}) \mathcal{F}(\Box_{x^{\prime}})[\phi(x),\phi(x^{\prime})]. \tag{19}\]
We now obtain
\[[\tilde{\phi}(x),\tilde{\phi}(x^{\prime})] =\mathcal{F}(\Box_{x})\mathcal{F}(\Box_{x^{\prime}})\int\frac{d^{ 4}k}{(2\pi)^{4}}\exp(-ik\cdot(x-x^{\prime}))\epsilon(k^{0})2\pi\delta(k^{2}-m ^{2})\] \[=\int\frac{d^{4}k}{(2\pi^{4})}\mathcal{F}^{2}(-m^{2})\exp(-ik \cdot(x-x^{\prime}))\epsilon(k^{0})2\pi\delta(k^{2}-m^{2}) \tag{20}\] \[=\mathcal{F}^{2}(-m^{2})(-i)\bar{\Delta}(x-x^{\prime}). \tag{21}\]
For the nonlocal field operator \(\tilde{\phi}(x)\), it follows that
\[[\tilde{\phi}(x),\tilde{\phi}(x^{\prime})]=0,\quad(x-x^{\prime})^{2}<0. \tag{22}\]
This proves that the nonlocal QFT satisfies microscopic causality.
We have demonstrated that the NLQFT is causal, no information is exchanged at a speed faster than light. The nonlocal nature of field operators due to entire function distribution operators at vertices and in propagators does not lead to a violation of causality.
### Path integral formulation of NLQFT
In this section, we formulate our nonlocal scalar field theory in the path integral formalism. The path integral is based on the Lagrangian formalism and the relation between the Hamiltonian and the Lagrangian of our scalar field theory is the canonical one, allowing us to write the transition amplitude in the following way:
\[\langle\tilde{\phi}(x_{1}),t_{1}|\tilde{\phi}(x_{2}),t_{2}\rangle=\int{\cal D} \tilde{\phi}\,\exp\left[i\int_{t_{1}}^{t_{2}}d^{4}x\left(\frac{1}{2}\partial_{ \mu}\tilde{\phi}\partial^{\mu}\tilde{\phi}-\frac{1}{2}m^{2}\tilde{\phi}^{2}- \frac{\lambda}{4!}\tilde{\phi}^{4}\right)\right]. \tag{23}\]
We observe that taking the continuous limit in this finite, nonlocal theory the measure \({\cal D}\tilde{\phi}\) does not lead to UV divergences, because the theory is UV complete.
It is useful to consider the partition function \(Z[\tilde{J}]\), which is the time ordered Green's function in the vacuum. Our goal by computing this partition function is to access the entanglement entropy of our nonlocal theory and \(\tilde{J}\) is a nonlocal external source current. We use the generating functional method, which consists of adding a nonlocal source term \(\tilde{J}(x)\tilde{\phi}(x)\) in the original path integral (23) :
\[Z[\tilde{J}] =\int{\cal D}\tilde{\phi}\,\exp\left[i\int_{t_{1}}^{t_{2}}d^{4}x \left(\frac{1}{2}\partial_{\mu}\tilde{\phi}\partial^{\mu}\tilde{\phi}-\frac{ 1}{2}m^{2}\tilde{\phi}^{2}-\frac{\lambda}{4!}\tilde{\phi}^{4}+\tilde{J}\tilde{ \phi}\right)\right] \tag{24}\] \[=Z_{0}[0]\exp\left[\frac{-i\lambda}{4!}\int d^{4}x\tilde{\phi}^{ 4}\right]\exp\left[\frac{-i}{2}\int d^{4}x_{1}d^{4}x_{2}\tilde{J}(x_{1}) \tilde{\Delta}_{F}(x_{1}-x_{2})\tilde{J}(x_{2})\right]. \tag{25}\]
where \(Z_{0}[0]=\langle 0|0\rangle_{J=0}=1\) is the functional integral of the free field.
The nonlocal Feynman propagator in Euclidean position space is given by
\[\tilde{\Delta}_{F}(x-x^{\prime})=\int\frac{d^{4}p}{(2\pi)^{4}}\frac{{\rm e}^{ -(p^{2}+m^{2})/2\Lambda_{p}^{2}}}{p^{2}+m^{2}}{\rm e}^{-ip\cdot(x-x^{\prime})}, \tag{26}\]
where we have adopted the entire function distribution function \({\cal E}(p^{2})=\exp(-(p^{2}+m^{2})/2\Lambda_{p}^{2})\). The UV completion of the nonlocal theory leads to a finite result for \(\Delta_{F}(0)\), a notable difference with a local scalar field theory, where \(\Delta_{F}(0)\) diverges quadratically in perturbation theory and requires an infinite renormalization to give a valid physical result.
To obtain an analytical form for the entanglement entropy, we have to study further the partition function. As the scalar field theory does not have a closed form expression for \(Z[\tilde{J}]\), we have to evaluate it perturbatively. We perform the following transformation [18]:
\[\tilde{\phi}(x)\rightarrow\frac{\delta}{\delta\tilde{J}(x)}. \tag{27}\]
A first-order Taylor expansion in \(\lambda\) gives
\[Z[\tilde{J}]=Z_{0}[0]\left(1-\frac{i\lambda}{4!}\int d^{4}x\frac{\delta^{4}}{ \delta\tilde{J}^{4}(x)}\right)\exp\left[\frac{-i}{2}\int d^{4}x_{1}d^{4}x_{2} \tilde{J}(x_{1})\tilde{\Delta}_{F}(x_{1}-x_{2})\tilde{J}(x_{2})\right]. \tag{28}\]
Let us consider the case with no external source current, \(\tilde{J}(x)=0\) :
\[Z[0]=Z_{0}[0]\left(1+\frac{i\lambda}{8}\int d^{4}x\tilde{\Delta}_{F}(0)\tilde {\Delta}_{F}(0)\right). \tag{29}\]
By choosing the spherical polar coordinates \(\phi,\theta,\chi,\kappa\), we obtain from (26):
\[\tilde{\Delta}_{F}(0) =\frac{1}{(2\pi)^{4}}\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta\sin \theta\int_{0}^{\pi}d\chi\sin^{2}\chi\int_{0}^{\infty}d\kappa\;\kappa^{3}\frac{ \mathrm{e}^{-(\kappa^{2}+m^{2})/2\Lambda_{\mathrm{c}}^{2}}}{\kappa^{2}+m^{2}} \tag{30}\] \[=\frac{1}{8\pi^{2}}\int_{0}^{\infty}d\kappa\;\kappa^{3}\frac{ \mathrm{e}^{-(\kappa^{2}+m^{2})/2\Lambda_{\mathrm{c}}^{2}}}{\kappa^{2}+m^{2}}\] (31) \[=\frac{1}{8\pi^{2}}\left(\Lambda_{p}^{2}\mathrm{e}^{-m^{2}/2 \Lambda_{p}^{2}}-m^{2}Ei\left(1,m^{2}/2\Lambda_{p}^{2}\right)\right), \tag{32}\]
where \(Ei\left(1,m^{2}/2\Lambda_{p}^{2}\right)\) is the exponential integral. This expression is finite. Let us consider the ultra-relativistic limit for large \(\Lambda_{p}\) :
\[\tilde{\Delta}_{F}(0)\approx\frac{\Lambda_{p}^{2}}{8\pi^{2}}. \tag{33}\]
This expression for the Feynman propagator at \((x-x^{\prime})=0\) is finite in the limit \(p^{2}>>m^{2}\), which corresponds to the high energy limit. This difference is notable when compared to local QFT, because the propagator does not lead to UV divergences. Moreover, the propagator is explicitly expressed in terms of the parameter \(\Lambda_{p}\). This parameter is associated with the entire function distribution \(\mathcal{E}(p^{2})=\exp\left(-(p^{2}+m^{2})/2\Lambda_{p}^{2}\right)\); it describes the uncertainty in momentum space. Associated with the uncertainty in position space \(\Lambda_{x}\), it can be related to \(\Lambda_{p}\) by the Heisenberg uncertainty principle [11] :
\[\Lambda_{p}\Lambda_{x}\geq 1. \tag{34}\]
### Entanglement entropy in NLQFT
Let \(\Omega\) be the entire space, let also \(A\) and \(\bar{A}\) be two complementary half-spaces, therefore, \(\Omega=A\cup\bar{A}\). In quantum field theory, the entanglement entropy corresponds to the correlations between the vacuum fluctuations. We define it as the von Neumann entropy [23]:
\[S=-Tr(\rho_{A}\ln(\rho_{A}))=-Tr(\rho_{\bar{A}}\ln(\rho_{\bar{A}}), \tag{35}\]
where \(\rho_{A}=Tr_{\bar{A}}(\rho_{\Omega})\) is the reduced density matrix and we have traced on the complementary region \(\bar{A}\).
We compute the entanglement entropy \(S\) by using the replica trick with regards to scalar fields in their vacuum state. We consider the Euclidean action on an \(n\)-sheeted Riemann surface with one cut along the negative real axis. The theory is defined on a cone with deficit angle \(\delta=2\pi(1-n)\). We denote by \(Z_{n}\) the partition function on the \(n\)-sheeted geometry. In order to compute the partition function, we use perturbation theory in powers of the coupling \(\lambda\). Therefore, we can rewrite the entanglement entropy as a sum of the contributions of each order of the expansion. We have
\[S=\sum_{k=0}^{+\infty}S_{k}, \tag{36}\]
where
\[S_{k}=-\frac{\partial}{\partial n}(\ln Z_{n,k}-n\ln Z_{k,k})_{n \to 1}. \tag{37}\]
Before starting the computation of the partition function of the Euclidean action, let us consider the two-point correlation function on the cone. It is the nonlocal Green's function that satisfies the following equation :
\[(-\nabla_{\mathrm{x}}^{2}+m^{2})\tilde{G}_{n}(\mathrm{x},\mathrm{ x}^{\prime})=\mathcal{F}(\mathcal{E}(\mathrm{x}))\delta^{3}(\mathrm{x}-\mathrm{x}^{ \prime}), \tag{38}\]
where \(\nabla^{2}_{\rm x}\) is the Laplacian differential operator.
The two-point correlation function on the cone breaks translation invariance. Therefore, it is a function of \({\rm x}\) and \({\rm x}^{\prime}\) separately instead of a function of \(|{\rm x}-{\rm x}^{\prime}|\) as in flat space. As in [21, 22], we write our coordinates the following way: \({\rm x}=\{r,\theta,x\}\), where \(\{r,\theta\}\) are the polar coordinates on the cone and \(\{x\}\) are the coordinates on the transverse space. In this case, we work in \(3+1\) dimensions, so the dimension of the transverse space is \(2\). The nonlocal Green's function on the cone is given by [23]:
\[\tilde{G}_{n}({\rm x},{\rm x}^{\prime})=\int\frac{d^{2}p}{(2\pi)^{2}}\left[ \sum_{k=0}^{+\infty}d_{k}\int_{0}^{\infty}\frac{dq}{2\pi n}\;q\frac{J_{k/n}( qr)J_{k/n}(qr^{\prime})}{q^{2}+p^{2}+m^{2}}\cos(k(\theta-\theta^{\prime})/n){ \rm e}^{ip\cdot(x-x^{\prime})}\right]{\rm e}^{\frac{-(p^{2}+m^{2})}{2\lambda _{p}^{2}}}, \tag{39}\]
where \(J\) is the Bessel function of the first kind. The coefficients \(d_{k}\) depend on \(k\); \(\forall k\geq 1,d_{k}=2\) and \(d_{0}=1\). Let us consider the coincident points, using the Euler-Maclaurin formula. The two-points correlation function simplifies to :
\[\begin{split}\tilde{G}_{n}({\rm x},{\rm x})=\frac{1}{2\pi n}\int \frac{d^{2}p}{(2\pi)^{2}}\left[2\int_{0}^{\infty}dk\;I_{k/n}(r\sqrt{p^{2}+m^{2 }})K_{k/n}(r\sqrt{p^{2}+m^{2}})\right.\\ +\left.\frac{1}{6n}K_{0}^{2}(r\sqrt{p^{2}+m^{2}})\right]{\rm e}^ {-(p^{2}+m^{2})/2\Lambda_{p}^{2}},\end{split} \tag{40}\]
where \(I\) is the modified Bessel function of the first kind and \(K\) is the modified Bessel function of the second kind.
The use of the Euler-Maclaurin formula allows to re-express the sum over the index \(k\) in terms of an integral over \(k\) and a sum over a dummy index \(j\). For \(j>1\), the sum contribution does not lead to divergences in the local QFT. Therefore, we forget this sum. For practical reasons, we rewrite
\[\tilde{G}_{n}({\rm x},{\rm x})=\tilde{G}({\rm x},{\rm x})+\tilde{f}_{n}(r), \tag{41}\]
where \(\tilde{G}({\rm x},{\rm x})\) is the flat space Green's function at coincident points and \(\tilde{f}_{n}(r)\) is given by
\[\tilde{f}_{n}(r)=\frac{1}{2\pi n}\frac{1-n^{2}}{6n}\int\frac{d^{2}p}{(2\pi)^{ 2}}K_{0}^{2}(r\sqrt{p^{2}+m^{2}}){\rm e}^{-(p^{2}+m^{2})/2\Lambda_{p}^{2}}. \tag{42}\]
By translation invariance, \(\tilde{G}({\rm x},{\rm x})=\tilde{G}(x,x)=\tilde{G}(0)\). Therefore, we rewrite
\[\tilde{G}_{n}({\rm x},{\rm x})=\tilde{G}(0)+\tilde{f}_{n}(r). \tag{43}\]
In order to compute the entanglement entropy in the scalar field version of the NLQFT, we will limit ourselves to first order in the expansion of \(\lambda\). Therefore, the partition function is given by
\[\ln Z_{n} =\ln\int{\cal D}\phi\;{\rm e}^{-S_{E}[\phi]} \tag{44}\] \[=\ln Z_{n,0}-\ln Z_{n,1}\] (45) \[=\ln Z_{n,0}-\frac{\lambda}{4!}\int_{n}d^{4}x\tilde{\phi}^{4}(x)\] (46) \[=\ln Z_{n,0}-\frac{3\lambda}{4!}\int_{n}d^{4}x\tilde{G}_{n}^{2}( x,x), \tag{47}\]
where \(Z_{n,0}\) is the partition function of the massive free scalar theory. The 3-factor is a consequence of the application of Wick's theorem. By using the equation (43), we have
\[\ln Z_{n}=\ln Z_{n,0}-\frac{3\lambda}{4!}\int_{n}d^{4}x(\tilde{G}^{2}(0)+2 \tilde{G}(0)\tilde{f}_{n}(r)+\tilde{f}_{n}^{2}(r)). \tag{48}\]
To get the \(S_{1}\) contribution for the entanglement entropy, such as defined by the replica trick, we have to subtract \(n\) times the partition function \(Z_{1,1}\) :
\[\ln Z_{n,1}-n\ln Z_{1,1} = -\frac{3\lambda}{4!}\left[\int_{n}d^{4}x(\tilde{G}^{2}(0)+2\tilde{G }(0)\tilde{f}_{n}(r)+\tilde{f}_{n}^{2}(r))-n\int d^{4}x\tilde{G}^{2}(0)\right] \tag{49}\] \[= -\frac{3\lambda}{4!}\left[\int_{n}d^{4}x(2\tilde{G}(0)\tilde{f}_ {n}(r)+\tilde{f}_{n}^{2}(r))\right]\] (50) \[= -\frac{3\lambda}{4!}2\pi n\left[\int d^{2}x\int_{0}^{\infty}dr\;r (2\tilde{G}(0)\tilde{f}_{n}(r)+\tilde{f}_{n}^{2}(r))\right]\] (51) \[= -\frac{3\lambda}{4!}2\pi nA\left[\frac{2\tilde{G}(0)}{2\pi n} \frac{1-n^{2}}{6n}\int\frac{d^{2}p}{(2\pi)^{2}}\int_{0}^{\infty}dr\;rK_{0}^{ 2}(r\sqrt{p^{2}+m^{2}})\mbox{e}^{\frac{-(p^{2}+m^{2})}{2\Lambda_{p}^{2}}}\right.\] (52) \[\left.\qquad+\int_{0}^{\infty}dr\;r\tilde{f}_{n}^{2}(r)\right]\] \[= -\frac{3\lambda}{4!}A\left[\tilde{G}(0)\frac{1-n^{2}}{6n}\int \frac{d^{2}p}{(2\pi)^{2}}\frac{\mbox{e}^{-(p^{2}+m^{2})/2\Lambda_{p}^{2}}}{p^ {2}+m^{2}}+2\pi n\int_{0}^{\infty}dr\;r\tilde{f}_{n}^{2}(r)\right], \tag{53}\]
where \(A=\int d^{2}x\) is the area resulting from the integration over the transverse space.
The last step to get the expression for \(S_{1}\) is to differentiate with respect to \(n\) and take the limit \(n\to 1\). The second integral vanishes and we are left with the following derivation :
\[-\frac{\partial}{\partial n}\left(\frac{1-n^{2}}{6n}\right)_{n\to 1}=\frac{1}{3}. \tag{54}\]
Finally the \(S_{1}\) term is given by
\[S_{1}=-\frac{\lambda}{4!}A\tilde{G}(0)\int\frac{d^{2}p}{(2\pi)^{2}}\frac{ \mbox{e}^{-(p^{2}+m^{2})/2\Lambda_{p}^{2}}}{p^{2}+m^{2}}. \tag{55}\]
The solution for the free term is given by
\[\ln Z_{n,0}=-\frac{1}{2}\ln\det(-\nabla_{\rm x}^{2}+m^{2}). \tag{56}\]
We can link this quantity with the two-point correlation function by using its derivative with respect to \(m^{2}\). We obtain :
\[\frac{\partial}{\partial m^{2}}\ln Z_{n,0}=-\frac{1}{2}\int_{n}d^{4}x\tilde{ G}_{n}({\rm x},{\rm x}). \tag{57}\]
To get \(S_{0}\), we proceed by analogy with (49) :
\[\frac{\partial}{\partial m^{2}}(\ln Z_{n,0}-n\ln Z_{1,0}) = -\frac{1}{2}\left[\int_{n}d^{4}x(\tilde{G}({\rm x},{\rm x})-n \int d^{4}x\tilde{G}(0)\right] \tag{58}\] \[= -\frac{1}{2}\frac{1-n^{2}}{6n}\int d^{2}x\int\frac{d^{2}p}{(2\pi )^{2}}\int_{0}^{\infty}dr\;rK_{0}^{2}(r\sqrt{p^{2}+m^{2}})\mbox{e}^{-(p^{2}+m^{ 2})/2\Lambda_{p}^{2}}\] (59) \[= -A\frac{1-n^{2}}{24n}\int\frac{d^{2}p}{(2\pi)^{2}}\frac{1}{p^{2}+ m^{2}}\mbox{e}^{-(p^{2}+m^{2})/2\Lambda_{p}^{2}}, \tag{60}\]
using the fact that \(\int_{0}^{\infty}dr\;rK_{0}^{2}(r\sqrt{p^{2}+m^{2}})=\frac{1}{2}\frac{1}{p^{2} +m^{2}}\).
To get the \(S_{0}\) term, we have to differentiate with respect to \(n\) and to integrate with respect to \(m^{2}\). The integration with respect to \(m^{2}\) also gives a constant term independent of the mass \(m^{2}\). In the
local theory, this term is UV divergent. The UV divergence does not appear in the NLQFT due to the nonlocal regularization. In the following, we do not consider this constant term. Finally \(S_{0}\) is given by
\[S_{0}=-\frac{A}{12}\int\frac{d^{2}p}{(2\pi)^{2}}\ln(p^{2}+m^{2}) \mathrm{e}^{-(p^{2}+m^{2})/2\Lambda_{p}^{2}}. \tag{61}\]
The expression for the entanglement entropy for the scalar nonlocal field theory, at the first order of the coupling expansion, is given by
\[S =-\frac{A}{12}\int\frac{d^{2}p}{(2\pi)^{2}}\left[\ln(p^{2}+m^{2}) +\frac{\lambda}{2}\frac{\tilde{G}(0)}{p^{2}+m^{2}}\right]\mathrm{e}^{-(p^{2}+ m^{2})/2\Lambda_{p}^{2}}+\mathcal{O}(\lambda^{2}) \tag{62}\] \[=-\frac{A}{48\pi}\left(Ei(1,m^{2}/2\Lambda_{p}^{2})\left(\frac{ \lambda\tilde{G}(0)}{4}+\Lambda_{p}^{2}\right)+\Lambda_{p}^{2}\ln(m^{2}) \mathrm{e}^{-m^{2}/2\Lambda_{p}^{2}}\right)+\mathcal{O}(\lambda^{2}). \tag{63}\]
Note that because of the convergence of the nonlocal Feynman propagator at coincident points, the two-point correlation function at coincident points \(\tilde{G}(0)\) is not divergent. Therefore, this expression for the entanglement entropy computed in \(3+1\) dimensions on the cone is finite.
This is a significant difference with local QFT. Indeed, as discussed in [22, 24], in any local QFT, the entanglement entropy is a UV divergent quantity and this result is true for any manifold without boundary. Not only the term \(S_{0}\) is logarithmically divergent but the term \(S_{1}\) is also divergent, because the two-point correlation function at coincident points is quadratically divergent. In the local theory, it is therefore necessary to introduce a cutoff \(\epsilon\) in order to regulate the divergences and extract a finite quantity from the entanglement entropy.
In this section, we computed the entanglement entropy using the partition function which is based on the Lagrangian formalism. Therefore, we preserved all the symmetries of the theory. Because NLQFT is Lorentz invariant and causal, our result for the entanglement entropy is too. Our result has the advantage of being Lorentz invariant and finite. These two facts are consequences of nonlocal regularization using entire function distribution operators. The nonlocality taken as a hypothesis to build the theory makes it possible to obtain satisfactory results in the case that we have considered, that is to say a scalar field theory. In this nonlocal QFT, quantum correlations between entangled systems, i.e quantum entanglement, are direct consequences of the causal nonlocal assumption on which the theory is built.
An interesting result is that the area law of entanglement entropy is preserved for this nonlocal QFT [19, 22, 26].
Conclusions
The field theory we consider is built from nonlocal field operators, the field at one point of spacetime depending on all other positions in spacetime. By using entire function distribution operators, our theory is made to be finite, which means there is no need for infinite renormalization, because the theory is free of UV-divergences. The nonlocality aims to reconcile the approach of relativistic field theory with the nonlocality observed experimentally on entangled quantum systems. The nonlocal QFT satisfies microcausality and no signals faster than light can be transmitted between two spacelike separated events.
The nonlocality results in the presence of an entire function distribution operator in the theory \(\mathcal{E}=\exp(-(p^{2}-m^{2})/2\Lambda_{p}^{2})\), making it possible to obtain a finite partition function and entanglement entropy.
By considering a local QFT, we will always find that the entanglement entropy is UV divergent. Nonlocality in NLQFT makes it possible to obtain finite results, preserving Lorentz invariance and causality. We interpret quantum correlations between entangled systems as a direct consequences of nonlocality.
A similar study can be conducted for fermionic fields. The problem of infinite renormalizability is resolved by regulated propagators and vertices for the Feynman loop diagrams of QED, QCD and weak interactions [11, 12], and we expect to obtain finite entanglement entropies by considering configurations similar to the one presented in this paper.
It would be interesting to consider the framework of AQFT extended to nonlocal quantum field theory. AQFT would therefore be a nonlocal theory in the sense of operator algebras. The Reeh-Schlieder theorem will not be modified in the nonlocal QFT formulation.
## Acknowledgment
We thank Laurent Freidel, Ivan Agullo, Viktor Toth and Martin Green for helpful discussions. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science. |
2303.02993 | Modified Green-Hyperbolic Operators | Green-hyperbolic operators - partial differential operators on globally
hyperbolic spacetimes that (together with their formal duals) possess advanced
and retarded Green operators - play an important role in many areas of
mathematical physics. Here, we study modifications of Green-hyperbolic
operators by the addition of a possibly nonlocal operator acting within a
compact subset $K$ of spacetime, and seek corresponding '$K$-nonlocal'
generalised Green operators. Assuming the modification depends holomorphically
on a parameter, conditions are given under which $K$-nonlocal Green operators
exist for all parameter values, with the possible exception of a discrete set.
The exceptional points occur precisely where the modified operator admits
nontrivial smooth homogeneous solutions that have past- or future-compact
support. Fredholm theory is used to relate the dimensions of these spaces to
those corresponding to the formal dual operator, switching the roles of future
and past. The $K$-nonlocal Green operators are shown to depend holomorphically
on the parameter in the topology of bounded convergence on maps between
suitable Sobolev spaces, or between suitable spaces of smooth functions. An
application to the LU factorisation of systems of equations is described. | Christopher J. Fewster | 2023-03-06T09:40:34Z | http://arxiv.org/abs/2303.02993v2 | # Modified Green-hyperbolic operators
###### Abstract
Green-hyperbolic operators - partial differential operators on globally hyperbolic spacetimes that (together with their formal duals) possess advanced and retarded Green operators - play an important role in many areas of mathematical physics. Here, we study modifications of Green-hyperbolic operators by the addition of a possibly nonlocal operator acting within a compact subset \(K\) of spacetime, and seek corresponding '\(K\)-nonlocal' generalised Green operators. Assuming the modification depends holomorphically on a parameter, conditions are given under which \(K\)-nonlocal Green operators exist for all parameter values, with the possible exception of a discrete set. The exceptional points occur precisely where the modified operator admits nontrivial smooth homogeneous solutions that have past- or future-compact support. Fredholm theory is used to relate the dimensions of these spaces to those corresponding to the formal dual operator, switching the roles of future and past. The \(K\)-nonlocal Green operators are shown to depend holomorphically on the parameter in the topology of bounded convergence on maps between suitable Sobolev spaces, or between suitable spaces of smooth functions. An application to the LU factorisation of systems of equations is described.
_Dedicated to Christian Bar on the occasion of his sixtieth birthday._
## 1 Introduction
Linear partial differential operators are the workhorses of mathematical physics, providing the simplest models of classical and quantum field theories from which more complicated interacting models may be built. In General Relativity or other nonlinear theories, linear operators appear whenever the theory is linearised, for example, to study the stability of solutions, or the propagation of gravitational waves.
A particularly useful general class of operators acting between spaces of smooth sections of vector bundles over globally hyperbolic Lorentzian manifolds has been introduced by Bar [2] under the name 'Green-hyperbolic operators'. Green-hyperbolicity is a generalisation, rather than a specialisation, of hyperbolicity: a Green-hyperbolic operator need not be hyperbolic, and there are examples that are elliptic, or of indefinite type. The defining property of a Green hyperbolic operator is that it should possess advanced and retarded Green operators, along with its formal dual, and from this simple algebraic requirement many other properties flow, as described elegantly by Bar. In particular, the Green operators are unique, continuous, and have extensions that are continuous inverses to (extensions of) the Green-hyperbolic operator on various spaces of smooth or distributional sections.
Examples of Green-hyperbolic operators include (a) normally hyperbolic operators, such as the wave operator \(\Box\) and its variants allowing for the inclusion of potentials and external vector potentials, (b) first order symmetric hyperbolic systems on globally hyperbolic spacetimes [2]; (c) any operator whose square is Green-hyperbolic, thus incorporating Dirac type operators; (d) operators whose solution theory may be related to one of the above, such as the (non-hyperbolic) Proca operator \(-\delta\mathrm{d}+m^{2}\), whose Green operators are obtained from those of \(-(\delta\mathrm{d}+\mathrm{d}\delta)+m^{2}\) on smooth 1-form fields. See [2] for these and other examples.
The purpose of this paper is to study modifications of Green-hyperbolic operators, that can lead outside the class of partial differential operators. For simplicity, we mainly study the case of scalar operators but there is no hindrance (beyond those of notation!) to extending our results to the general bundle case. The operators we consider are of the form \(P+A\), where \(P\) is Green-hyperbolic and \(A\) is a continuous linear self-map of \(C^{\infty}(M)\) (not necessarily a differential operator) whose range is contained in \(C^{\infty}_{K}(M)\), the smooth functions supported in a compact subset \(K\subset M\). Without loss of generality, we may always assume that \(K\) is _topologically regular_, that is, equal to the closure of its interior. An operator of this type is potentially nonlocal, though the nonlocality is, as it were, localised within \(K\). By the kernel theorem, any such operator can be represented as
\[A\phi=\int_{M}T(x,y)\phi(y)\mu_{y}, \tag{1.1}\]
where \(\mu\) is a smooth density, and \(T\in\mathcal{D}^{\prime}(M\times M)\) has support in \(K\times M\) and is semi-regular in its first slot (i.e., \(T\) belongs to the nuclear tensor product \(C^{\infty}(M)\widehat{\otimes}\mathcal{D}^{\prime}(M)\)). Operators of this type include, but go beyond, differential and pseudodifferential operators, in which the singular support of \(T\) is confined to the diagonal.
There are several applications for operators of this type. In perturbative algebraic quantum field theory (pAQFT) they arise from the class of regular interactions, and the results proved here establish the existence of suitable Green operators needed in [15], for example. Another application is to noncommutative potential scattering [21], where equations such as
\[P\phi+w\star\phi=0 \tag{1.2}\]
appear as toy models for the dynamics of classical and quantum fields on a noncommutative space-time. Here \(P\) is a Green hyperbolic operator, \(w\) is a fixed smooth function, and \(\star\) is a noncommutative deformation of multiplication, differing only from pointwise multiplication inside a compact set \(K\)[22]. The application of the results obtained here to such models will be discussed elsewhere [11].
One may in fact develop an entire theory of nonlocal Green-hyperbolic operators and this is done in the companion paper [12]. The purpose of this paper is to investigate the technical issue of the existence and properties of (a suitably generalised concept of) Green operators \(E^{\pm}_{P+A(\lambda)}\) for \(P+A(\lambda)\) under suitable conditions on \(P\) and \(A(\lambda)\) for \(\lambda\in\mathbb{C}\). A particular focus will be on the analytic dependence of the resulting Green operators on \(\lambda\) within suitable locally convex spaces and on a suitable domain in \(\mathbb{C}\). These results are useful even in situations where the operators \(P+A(\lambda)\) are local Green-hyperbolic operators. For example, they have been applied in [10] in the context of measurement schemes for observables in QFT. The Green operators we study have properties similar to those of Green-hyperbolic operators, with the following generalised support property:
\[\mathrm{supp}\,E^{\pm}_{P+A(\lambda)}f\subset\begin{cases}J^{\pm}(\mathrm{ supp}\,f)&J^{\pm}(\mathrm{supp}\,f)\cap K=\emptyset\\ J^{\pm}(\mathrm{supp}\,f\cup K)&\text{otherwise,}\end{cases} \tag{1.3}\]
for compactly supported \(f\), where \(J^{+/-}(S)\) are the causal future/past of a set \(S\). This support property characterises what we call \(K\)_-nonlocal Green operators_, set out precisely in Definition 2.4 below.
The main result of this paper is Theorem 3.1, which sets out conditions on \(P\) and \(A(\lambda)\) under which suitable Green operators for \(P+A(\lambda)\) exist. The principal hypothesis on \(P\) is that it is a Green-hyperbolic operator whose Green operators have extensions to continuous maps between the Sobolev spaces \(H^{s}_{0}(M)\) and \(H^{s+\beta}_{\mathrm{loc}}(M)\) for all sufficiently large \(s\) and some fixed \(\beta\). This hypothesis is valid for second order normally hyperbolic operators with \(\beta=1\) (see [9, Thm 6.5.3]). The main hypothesis on \(A(\lambda)\) is that each \(A(\lambda)\) is a continuous linear self-map of \(C^{\infty}(M)\) with continuous extensions mapping between Sobolev spaces \(H^{s}_{\mathrm{loc}}(M)\) to \(H^{s+\gamma}_{K}(M)\) for all sufficiently large \(s\) and some fixed \(\gamma>-\beta\). This implies that the compositions \(A(\lambda)E^{\pm}\) are compact maps between \(H^{s}_{0}(M)\) and \(H^{s}_{K}(M)\) and it is required that they depend holomorphically on \(\lambda\) in the topology of bounded convergence on linear maps between these spaces. A brief summary of the various topological spaces and topologies used in this work is provided in Appendix A.
Given the above assumptions, the analytic Fredholm theorem can be used to find inverses \((I+A(\lambda)E^{\pm})^{-1}\) on \(H^{s}_{K}(M)\) for all sufficiently large \(s\in\mathbb{R}\) and all complex \(\lambda\) in an open neighbourhood of zero, whose (possibly empty) complement is a discrete subset \(S\) of \(\mathbb{C}\). Exceptional values \(\lambda\in S\) occur precisely when there exist nontrivial _smooth_ solutions to \((P+A(\lambda))\phi=0\), whose support is either past- or future-compact, i.e., nontrivial solutions that vanish identically at early or late times, representing spontaneously appearing or disappearing disturbances; they are clearly excluded if any sort of energy estimate is available. For \(\lambda\in\mathbb{C}\setminus S\), one may use the inverses to construct \(K\)-nonlocal Green operators for \(P+A(\lambda)\). It is also proved that the resulting \(K\)-nonlocal Green operators are holomorphic on \(\mathbb{C}\setminus S\), with respect to the topology of bounded convergence on linear maps between \(C^{\infty}_{0}(M)\) and \(C^{\infty}(M)\). The power series expansion of the Green operators about \(\lambda=0\) corresponds to a Born expansion of the Green operators.
In the situation where our main hypotheses are also satisfied for the formal duals of \(P\) and \(A(\lambda)\), we apply Fredholm index theory to show that the dimension of the space of spontaneously appearing (resp., disappearing) solutions for \(P+A(\lambda)\) is equal to the dimension of the space of spontaneously disappearing (resp., appearing) solutions for its formal dual,
\[\dim\ker(P+A(\lambda))|_{C^{\infty}_{p/k}}=\dim\ker({}^{t}P+{}^{t}A(\lambda) )|_{C^{\infty}_{k/p}}, \tag{1.4}\]
for all \(\lambda\in\mathbb{C}\). Consequently, the spaces of appearing and disappearing solutions of a formally self-dual operator have equal dimension.
The paper is structured as follows: in Section 2, we recall the definition and main properties of Green-hyperbolic operators and develop the appropriate notion of \(K\)-nonlocal Green operators. Section 3 contains the statement of our main result, Theorem 3.1 and illustrates it with examples of how it may be used and of the necessity of some of its hypotheses. Section 4 provides an application of our result to the LU factorisation and solution of certain systems of nonlocal equations. The main result is proved in Section 5, by a sequence of results initially in Sobolev spaces and then for smooth functions, while (1.4) is proved in Section 6 using Fredholm theory and the results of Section 5. Appendix A collects some necessary background on the topological spaces and topologies appearing in the text.
## 2 Green-hyperbolic operators
PreliminariesWe begin by recalling the general setting of Green-hyperbolic operators [2]. Let \(M\) be a smooth finite-dimensional manifold, allowing the possibility that \(M\) has finitely many connected components with possibly different dimensions, and let \(g\) be a smooth Lorentzian metric on \(M\) of signature \(+--\cdots\). With these structures, \(M\) is automatically Hausdorff and paracompact [13]. We assume that \((M,g)\) is time-orientable and that a time-orientation has been chosen. To minimise notation, we denote the Lorentzian spacetime formed by the manifold, metric and time
orientation with the single symbol \(M\). The volume measure induced by the metric will be denoted \(\mu\). On other points of notation, the symbol \(\subset\) will always allow for the possibility of equality, while \(\mathbb{N}_{0}\) and \(\mathbb{N}\) denote the natural numbers with or without zero, respectively.
As usual, the causal future/past of a point \(x\in M\) is denoted \(J^{\pm}(x)\) and comprises all points (including \(x\)) that may be reached from \(x\) along smooth future/past-directed curves. (Throughout this paper, we tacitly order alternatives labelled by \(\pm\) or \(\mp\) so that the alternative labelled by the upper symbol comes first.) If \(S\subset M\) then one writes \(J^{\pm}(S)=\cup_{x\in S}J^{\pm}(x)\), and \(J(S)=J^{+}(S)\cup J^{-}(S)\). The spacetime is globally hyperbolic if it contains no causal curves and \(J^{+}(K)\cap J^{-}(K)\) is compact for all compact sets \(K\subset M\)[6]. Globally hyperbolic spacetimes can be foliated into smooth spacelike Cauchy surfaces [5]; it is also the case that \(J^{\pm}(K)\) are closed whenever \(K\) is compact. We adopt the following terminology: \(S\subset M\) is _spacelike compact_ if \(S\) is closed and \(S\subset J(K)\) for some compact \(K\); \(S\) is _future/past-compact_ if \(S\cap J^{\pm}(x)\) is compact for all \(x\in M\); \(S\) is _strictly future/past-compact_ if \(S\subset J^{\mp}(K)\) for some compact \(K\).
If \(B\) is a vector bundle, then \(\Gamma^{\infty}(B)\) will denote the corresponding space of smooth sections and \(\Gamma^{\infty}_{0/pc/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\! /\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\! \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\! \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\! \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/ \!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\!/\
_(iii) the extension \(P:\mathcal{D}^{\prime}_{pc/fc}(B_{1})\to\mathcal{D}^{\prime}_{pc/fc}(B_{2})\) (see below)._
_Then, in each case, \(P\) has continuous inverses, denoted \(\tilde{E}^{\pm}\), \(\overline{E}^{\pm}\) and \(\widehat{E}^{\pm}\) in cases (i), (ii), (iii) respectively, and which are successive extensions of \(E^{\pm}\). Each of these inverses has the support property G3, replacing \(\Gamma_{0}^{\infty}(B_{2})\) by the appropriate domain. In particular, \(E^{\pm}\) are uniquely determined by \(P\) and are continuous._
In (iii) the space of distributional sections \(\mathcal{D}^{\prime}(B)\) of a bundle \(B\) over \(M\) is defined as the topological dual of \(\Gamma_{0}^{\infty}(B^{*}\otimes\Omega)\), where \(\Omega\) is the bundle of weight-1 densities over \(M\). As sections of \(B\) and \(B^{*}\otimes\Omega\) can be paired to give a density, there is an obvious embedding of \(\Gamma^{\infty}(B)\) in \(\mathcal{D}^{\prime}(B)\), with \(\phi\in\Gamma^{\infty}(B)\) corresponding to the distribution \(f\mapsto\int_{M}\langle\phi,f\rangle\) acting on \(f\in\Gamma_{0}^{\infty}(B^{*}\otimes\Omega)\). The map \(P\) in (iii) is the restriction of the dual map \((\mu(^{t}P)\mu^{-1})^{\prime}\) to elements of \(\mathcal{D}^{\prime}(B_{1})\) with past- or future-compact support. (Recall that the formal dual is defined relative to the specific density \(\mu\).) In [2], Bar defines distributional sections of \(B\) as the topological dual of \(\Gamma_{0}^{\infty}(B^{*})\), which would be distribution densities in our terminology. The metric density provides an isomorphism between the spaces and operators that he considers and those that we do. As we have in mind potential applications where more than one metric might be in use, we have elected not to make a fixed identification between distributions and distribution densities.
The _second main theorem of Green-hyperbolicity_ provides two exact sequences that are highly useful in applications, combining (and very mildly extending)2 theorems 3.22 & 4.3 of [2].
Footnote 2: The extension is the right-most arrow, asserting surjectivity of \(P\) onto spacelike compact (distributional) sections. But any such smooth section can be split as \(f=f^{+}+f^{-}\) where \(f^{\pm}\) has past/future-compact support, whereupon \(f=P(\overline{E}^{+}f^{+}+\overline{E}^{-}f^{-})\); the argument is identical for distributions, replacing \(\overline{E}^{\pm}\) by \(\widehat{E}^{\pm}\).
**Theorem 2.3**.: _Let \(P:\Gamma^{\infty}(B_{1})\to\Gamma^{\infty}(B_{2})\) be Green-hyperbolic and define the advanced-minus-retarded operators \(E=E^{-}-E^{+}:\Gamma_{0}^{\infty}(B_{2})\to\Gamma_{sc}^{\infty}(B_{1})\) and \(\widehat{E}=\widehat{E}^{-}-\widehat{E}^{+}\), using the notation of Theorem 2.2. Then there are two exact sequences_
\[\begin{CD}0@>{}>{}>\Gamma_{0}^{\infty}(B_{1})@>{P}>{}>\Gamma_{0}^{\infty}(B_{ 2})@>{E}>{}>\Gamma_{sc}^{\infty}(B_{1})@>{P}>{}>\Gamma_{sc}^{\infty}(B_{2})@>{ }>{}>0\\ 0@>{}>{}>\mathcal{D}^{\prime}_{0}(B_{1})@>{P}>{}>\mathcal{D}^{\prime}_{0}(B_{ 2})@>{\widehat{E}}>{}>\mathcal{D}^{\prime}_{sc}(B_{1})@>{P}>{}>\mathcal{D}^{ \prime}_{sc}(B_{2})@>{}>{}>0,\end{CD} \tag{2.2}\]
_in which the downward arrows are the natural embeddings of smooth into distributional sections._
A direct consequence is that the solution space \(\mathrm{Sol}(P)=\{\phi\in\Gamma_{sc}^{\infty}(B):P\phi=0\}\) is given by \(\mathrm{Sol}(P)=E\Gamma_{0}^{\infty}(B)\). The special properties of Green-hyperbolic operators are not confined to the statement of Theorems 2.2 and 2.3. As shown in [2], products and direct sums of Green-hyperbolic operators are Green-hyperbolic, and indeed any operator whose square is Green-hyperbolic is Green-hyperbolic. Turning to physical applications, suppose that a bundle \(B\) admits a nondegenerate bilinear form, or equivalently a base-point preserving vector bundle isomorphism \(\mathcal{I}:B\to B^{*}\), and an antilinear base-point preserving conjugation \(\Gamma:B\to B\). Then \(P:\Gamma(B)\to\Gamma(B)\) is said to be _formally self-adjoint_ if \(P=\mathcal{I}^{-1}P\mathcal{I}\), and real if \(P=\Gamma P\Gamma\). If a formally self-adjoint operator \(P\) admits advanced and retarded Green operators then \(P\) is Green hyperbolic and moreover \(\mathrm{Sol}(P)\) admits a symplectic form
\[\sigma(Ef_{1},Ef_{2})=\langle\mathcal{I}f_{1},Ef_{2}\rangle\qquad(f_{i}\in \Gamma_{0}^{\infty}(B)). \tag{2.3}\]
If \(P\) is also real then there is an associated bosonic QFT described by a unital \(*\)-algebra of observables, generated by symbols \(\Phi(f)\) (\(f\in\Gamma_{0}^{\infty}(B)\)) and subject to the relations:
* \(f\mapsto\Phi(f)\) is complex linear (linearity)
* \(\Phi(f)^{*}=\Phi(\Gamma f)\) (Hermiticity)
* \(\Phi(Pf)=0\) (field equation)
* \([\Phi(f),\Phi(h)]=i\sigma(Ef,Eh)\,\mathbf{1}\) (canonical commutation relations)
for all \(f,h\in\Gamma_{0}^{\infty}(B)\). (If \(P\) is not real, then there is a quantisation as a bosonic complex field.) This essentially functorial quantisation is one of the main applications for the theory of Green hyperbolic operators. Under more restrictive circumstances first order Green-hyperbolic operators can also admit fermionic quantisation [3].
Modified Green-hyperbolic operatorsTurning to the subject of this paper, suppose \(P:\Gamma^{\infty}(B_{1})\to\Gamma^{\infty}(B_{2})\) is Green-hyperbolic, with Green operators \(E^{\pm}\). Let \(A:\Gamma^{\infty}(B_{1})\to\Gamma^{\infty}(B_{2})\) be linear, with range contained in \(\Gamma_{K}^{\infty}(B_{2})\) for some compact, topologically regular \(K\subset M\).
Although \(P+A\) is not necessarily a differential operator, we wish to state conditions analogous to G1-3 that can characterise suitable Green operators. To gain some insight, let us assume for the moment that for each \(h\in\Gamma_{0}^{\infty}(B_{2})\), the equation \((P+A)\phi=h\) has unique solutions with past/future-compact support given by \(\phi=E_{P+A}^{\pm}h\) where \(E_{P+A}^{\pm}:\Gamma_{0}^{\infty}(B_{2})\to\Gamma^{\infty}(B_{1})\) are linear maps with ranges necessarily contained in \(\Gamma_{pc/fc}^{\infty}(B_{1})\). Then we may write
\[PE_{P+A}^{\pm}f=f-AE_{P+A}^{\pm}f=PE_{P}^{\pm}(f-AE_{P+A}^{\pm}f)\qquad(f\in \Gamma_{0}^{\infty}(B_{2})), \tag{2.4}\]
using our assumptions on \(P\) and \(A\). As \(P\) is invertible on \(\Gamma_{pc/fc}^{\infty}(B_{1})\) by Theorem 2.2(ii), we have
\[E_{P+A}^{\pm}f=E_{P}^{\pm}(f-AE_{P+A}^{\pm}f) \tag{2.5}\]
with support determined using condition G3 for \(P\),
\[\operatorname{supp}E_{P+A}^{\pm}f\subset J^{\pm}(\operatorname{supp}(f-AE_{P+ A}^{\pm}f))\subset J^{\pm}(\operatorname{supp}f\cup K). \tag{2.6}\]
By imposing further conditions on \(A\) we may refine the information available. Specifically, suppose that \(\phi|_{K}\equiv 0\) implies that \(A\phi\equiv 0\). Observe that
\[(P+A)E_{P}^{\pm}f=f+AE_{P}^{\pm}f\in\Gamma_{0}^{\infty}(B_{1}) \tag{2.7}\]
by G2 for \(E_{P}^{\pm}\) and the definition of \(A\); by assumption on \(E_{P+A}^{\pm}\) we now have
\[E_{P}^{\pm}f=E_{P+A}^{\pm}(f+AE_{P}^{\pm}f) \tag{2.8}\]
and deduce that \(E_{P+A}^{\pm}f=E_{P}^{\pm}f\) for all \(f\in\Gamma_{0}^{\infty}(B_{2})\) such that \(J^{\pm}(\operatorname{supp}f)\cap K\) is empty (because \(AE_{P}^{\pm}f\) vanishes). Together with our earlier observation, \(E_{P+A}^{\pm}\) satisfy the modified support property
* for all \(f\in\Gamma_{0}^{\infty}(B_{2})\), \[\operatorname{supp}E_{P+A}^{\pm}f\subset\begin{cases}J^{\pm}(\operatorname{ supp}f)&J^{\pm}(\operatorname{supp}f)\cap K=\emptyset\\ J^{\pm}(\operatorname{supp}f\cup K)&\text{otherwise.}\end{cases}\] (2.9)
With this intuition established, we can drop the assumption that \((P+A)\phi=h\) has unique solutions with past/future-compact support. The standing assumptions are now
* \(P:\Gamma^{\infty}(B_{1})\to\Gamma^{\infty}(B_{2})\) is Green-hyperbolic
* \(A:\Gamma^{\infty}(B_{1})\to\Gamma^{\infty}(B_{2})\) is linear, with range contained in \(\Gamma_{K}^{\infty}(B_{2})\)
* For \(\phi\in\Gamma^{\infty}(B_{1})\), \(\phi|_{K}\equiv 0\) implies \(A\phi\equiv 0\).
We now make the following definition.
**Definition 2.4**.: _Subject to assumptions A1-A3, linear maps \(E^{\pm}_{p+A}:\Gamma^{\infty}_{0}(B_{2})\to\Gamma^{\infty}(B_{1})\) are said to be retarded/advanced \(K\)-nonlocal Green operators for \(P+A\) if they satisfy G1 and G2 (with \(P+A\) replacing \(P\) and \(E^{\pm}_{P+A}\) replacing \(E^{\pm}\)) and G3\({}^{\prime}\). If both \(P+A\) and \({}^{t}P+{}^{t}A\) admit retarded and advanced \(K\)-nonlocal Green operators then \(P+A\) is called \(K\)-nonlocally Green-hyperbolic._
As with Green-hyperbolic operators, the above definition implies considerably more and indeed the analogue of Theorem 2.2 holds with G3 replaced by G3\({}^{\prime}\). These results will be proved in [12]. Our main goal here will be to give sufficient conditions for the existence of \(K\)-nonlocal Green operators. For simplicity of presentation, we restrict to operators acting on spaces of smooth functions, rather than bundle sections. We will also establish some continuity results for the \(K\)-nonlocal Green operators, both from \(C^{\infty}_{0}(M)\) to \(C^{\infty}(M)\) and between various Sobolev spaces, and address the holomorphicity of the \(K\)-nonlocal Green operators with respect to a parameter.
At a formal level it is straightforward to see what the Green operators of \(P+A\) should be, if they exist: if \(\phi\in C^{\infty}_{pc/fc}(M)\) solves \((P+A)\phi=f\in C^{\infty}_{0}(M)\), then
\[\phi=E^{\pm}_{P}(f-A\phi)=E^{\pm}g, \tag{2.10}\]
where \(g=f-A\phi=f-AE^{\pm}_{P}g\), so \(g=(I+AE^{\pm}_{P})^{-1}f\). Formally, therefore, the Green operators for \(P+A\) are
\[E^{\pm}_{P+A}=E^{\pm}_{P}(I+AE^{\pm})^{-1}, \tag{2.11}\]
and the technical task is to make this formula rigorous, where possible, and to establish that the resulting operators \(E^{\pm}_{P+A}\) are indeed \(K\)-nonlocal Green operators for \(P+A\). Note that the inversion of \(I+AE^{\pm}\) must be performed in \(C^{\infty}_{0}(M)\). We will accomplish this by first inverting in Sobolev spaces, where Hilbert space techniques can be used, and then boosting the result up to \(C^{\infty}_{0}(M)\).
## 3 Main result and remarks
Let \(M\) be a globally hyperbolic spacetime with at most finitely many connected components, and let \(K\) be a fixed, topologically regular, compact subset of \(M\). For brevity, we write \(C^{\infty}\) for \(C^{\infty}(M)\) and so on; for a closed set \(A\), \(C^{\infty}_{A}\) denotes the space of smooth functions with support in \(A\), and the same convention is used for other spaces of distributions and Sobolev spaces. If \(X\) and \(Y\) are locally convex topological spaces, \(\mathcal{L}_{b}(X,Y)\) denotes the space of linear maps between them, equipped with the topology of bounded convergence (see Appendix A); for linear maps between normed spaces, this coincides with the usual operator norm topology. As usual, we write \(\mathcal{L}_{b}(X)\) for \(\mathcal{L}_{b}(X,Y)\).
The main result of this paper is the following.
**Theorem 3.1**.: _Let \(P\) be a Green-hyperbolic operator and suppose \(A(\lambda)\) (\(\lambda\in\mathbb{C}\)) is a family of linear maps \(A(\lambda):C^{\infty}\to C^{\infty}\) whose ranges are contained in \(C^{\infty}_{K}\) and with \(A(0)=0\). Suppose that, for some \(\beta,\gamma\in\mathbb{R}\) with \(\delta=\beta+\gamma>0\), and some \(s_{*}\in\mathbb{R}\),_
* _the Green operators_ \(E^{\pm}\) _of_ \(P\) _extend to linear maps_ \(\mathcal{E}^{\prime}\to\mathcal{D}^{\prime}\) _with continuous restrictions mapping_ \(H^{s}_{0}\to H^{s+\beta}_{\mathrm{loc}}\) _for all_ \(s\geq s_{*}\)_;_
* _each map_ \(A(\lambda)\) _extends to a linear map_ \(\mathcal{D}^{\prime}\to\mathcal{D}^{\prime}_{K}\) _with continuous restrictions mapping_ \(H^{s}_{\mathrm{loc}}\to H^{s+\gamma}_{K}\) _for all_ \(s\geq s_{*}+\beta\) _(consequently, the continuous maps_ \(A(\lambda)E^{\pm}:H^{s}_{0}\to H^{s+\delta}_{K}\) _induce compact maps_ \(A(\lambda)E^{\pm}:H^{s}_{0}\to H^{s}_{K}\) _due to the Sobolev embedding theorems and_ \(\delta>0\)_);_
_._
3. _for all_ \(s\geq s_{*}\)_, the compact map_ \(H_{0}^{s}\to H_{K}^{s}\) _induced by the compositions_ \(A(\lambda)E^{\pm}\) _is holomorphic in_ \(\lambda\in\mathbb{C}\) _with respect to the topology of_ \(\mathcal{L}_{b}(H_{0}^{s},H_{K}^{s})\)_;_
4. _if_ \(f\in C^{\infty}\) _vanishes identically on_ \(K\) _then_ \(Af=0\)_._
_Then_
1. \(\ker(P+A(\lambda))|_{C^{\infty}_{\rho_{\epsilon/\ell}}}\) _has finite dimension for each_ \(\lambda\in\mathbb{C}\)_;_
2. _the sets_ \[S^{\pm}=\{\lambda\in\mathbb{C}:\ker(P+A(\lambda))|_{C^{\infty}_{\rho_{\epsilon/ \ell}}}\neq 0\}\qquad\text{and}\qquad S=S^{+}\cup S^{-}\] (3.1) _are discrete subsets of_ \(\mathbb{C}\)_, whose complements are open neighbourhoods of zero;_
3. \(P+A(\lambda)\) _has advanced and retarded_ \(K\)_-nonlocal Green operators for all_ \(\lambda\in\mathbb{C}\setminus S\)_, that are holomorphic in_ \(\lambda\) _on this domain with respect to the topology of_ \(\mathcal{L}_{b}(C^{\infty}_{0},C^{\infty})\)_;_
4. _the Green operators possess continuous extensions mapping_ \(H_{0}^{s}\to H_{\mathrm{loc}}^{s+\beta}\) _for all_ \(s\geq s_{*}\) _and which are holomorphic in_ \(\lambda\in\mathbb{C}\setminus S\) _with respect to the topology of_ \(\mathcal{L}_{b}(H_{0}^{s},H_{\mathrm{loc}}^{s+\beta})\)_;_
5. _if (d) is replaced by_ _(d')__._ \(\operatorname{supp}A(\lambda)f\subset\operatorname{supp}f\) _for all_ \(f\in C^{\infty}\)_, and_ \(\lambda\in\mathbb{C}\)__ _then_ \(P+A(\lambda)\) _has advanced and retarded Green operators in the usual sense for_ \(\lambda\in\mathbb{C}\setminus S\)_._
Of course, in the situation where the hypotheses also apply to the formal duals of \(P\) and \(A(\lambda)\) then the upshot is that \(P+A(\lambda)\) and \({}^{t}P+{}^{t}A(\lambda)\) both admit advanced and retarded \(K\)-nonlocal Green operators for all \(\lambda\in\mathbb{C}\) in an open \(0\)-neighbourhood with discrete complement given by the union of \(S\) with the corresponding set for \({}^{t}P\). For the non-exceptional values, \(P+A(\lambda)\) is \(K\)-nonlocally Green-hyperbolic. The general theory of such operators and their properties is developed in detail in [12]. We note in particular that when the ranges of both \(A(\lambda)\) and \({}^{t}A(\lambda)\) are contained in \(C^{\infty}_{K}\) then assumption (d) holds automatically for both operators.
In many situations the holomorphicity assumption (c) is easily verified, e.g., where \(A(\lambda)\) is a polynomial in \(\lambda\). Precomposing with the continuous embedding \(H_{K}^{s}\to H_{0}^{s}\), it follows that the restrictions of \(A(\lambda)E^{\pm}\) to \(H_{K}^{s}\) (\(s\geq s_{*}\)) are holomorphic with respect to the norm topology of \(\mathcal{L}(H_{K}^{s})\). On the other hand, postcomposing with the embedding \(H_{K}^{s}\to H_{0}^{s}\), we also have holomorphicity of \(A(\lambda)E^{\pm}\) with respect to \(\mathcal{L}_{b}(H_{0}^{s})\) (e.g., see Lemma A.2).
An immediate application of Theorem 3.1 is:
**Corollary 3.2**.: _Let \(P\) be a second-order normally hyperbolic operator on \(M\) and let \(K\subset M\) be compact. Let \(\rho:\mathbb{C}\to C^{\infty}(K\times K)\) be a polynomial and define \(A(\lambda):C^{\infty}(M)\to C^{\infty}_{K}(M)\) by_
\[(A(\lambda)\phi)(x)=\int_{M}\rho(\lambda)(x,y)\phi(y)\mu_{y},\qquad(\phi\in C^ {\infty}(M)). \tag{3.2}\]
_Then \(P\) and \(A(\lambda)\) satisfy the hypotheses of Theorem 3.1. In particular, \(P+\lambda A_{\rho}\) has advanced and retarded \(K\)-nonlocal Green operators for all \(\lambda\) in an open \(0\)-neighbourhood with a discrete complement in \(\mathbb{C}\)._
Proof.: \(P\) is Green-hyperbolic, and its Green operators extend to maps \(E^{\pm}:H_{0}^{s}\to H_{\mathrm{loc}}^{s+1}\), i.e., they improve smoothness by one order, by [9, Thm 6.5.3] (see [19] for a bundle version). One easily checks that the technical conditions on \(A(\lambda)\) are certainly met by integral operators with kernels in \(C^{\infty}_{K\times K}(M\times M)\)
Theorem 3.1 is proved in the next section, after a number of further remarks and examples. First, we note that Theorem 3.1 is far from the most general statement that could be made. For instance, at the cost of more notation but no new ideas, it generalises immediately to operators acting on sections in finite-dimensional vector bundles over \(M\). This can also be shown by applying Theorem 3.1 in its current form, using a method of LU factorisation described in Section 4.
Second, generalisations to multi-variable modifications \(P+\sum_{j=1}^{r}A_{j}(\lambda_{j})\) are possible. In this case, the discrete exceptional sets are replaced by sets in \(\mathbb{C}^{r}\) that, locally, are vanishing sets of holomorphic functions in \(r\) variables. See [14] for the appropriate multi-variable Fredholm theorem and [20] for an exposition.
Third, the significance of holomorphicity in the topology of bounded convergence is that compositions of such functions are also holomorphic in the topology of bounded convergence, with derivatives given by the usual Leibniz rule. This is explained in more detail in Appendix A.3.
Fourth, Theorem 3.1 shows that the obstruction to the existence of \(K\)-nonlocal Green functions for \(P+\lambda A\) is provided by nontrivial smooth solutions to \((P+\lambda A)\phi=0\) with past- or future-compact support (which, if present, span finite dimensional spaces). Any sort of energy estimate would be sufficient to exclude nontrivial solutions of this type, which indicate unphysical behaviour in a closed system. Mathematical examples are easily constructed, however.
**Example 3.3**.: _Let \(P\) be Green-hyperbolic on \(M\) with Green operators \(E^{\pm}\). Fix any nontrivial \(f,h\in C^{\infty}_{K}\) and define \(A:C^{\infty}\to C^{\infty}_{0}\) by_
\[A\phi=-\left(\int h\phi\,\mu\right)f, \tag{3.3}\]
_noting that the range of \(A\) is contained in \(C^{\infty}_{K}\). If \(\phi\in C^{\infty}_{pc/fc}\) obeys \((P+\lambda A)\phi=0\), then \(P\phi=-(\lambda\int h\phi\,\mu)f\) and we deduce that \(\phi\) is a constant multiple of \(E^{\pm}f\); hence it must also be that \((P+\lambda A)E^{\pm}f=0\) or equivalently \(f=\lambda\nu^{\pm}f\) where \(\nu^{\pm}=\int hE^{\pm}f\mu\). Thus a necessary condition for the existence of nontrivial \(\phi\in\ker(P+\lambda A)|_{C^{\infty}_{pc/fc}}\) is that \(\lambda\nu^{\pm}=1\), and it is easily seen that this condition is sufficient. Summarising,_
\[\ker(P+\lambda A)|_{C^{\infty}_{pc/fc}}=\begin{cases}0&\lambda\nu^{\pm}\neq 1 \\ \mathbb{C}E^{\pm}f&\lambda\nu^{\pm}=1.\end{cases} \tag{3.4}\]
_Next, note that \({}^{t}\!A\) takes the same form as \(A\) but with \(f\) and \(h\) exchanged. Therefore \(\ker^{t}(P+\lambda A)|_{C^{\infty}_{pc/fc}}\) is nontrivial if and only if \(\lambda^{t}\nu^{\pm}=1\), where \({}^{t}\nu^{\pm}=\int fE^{\pm}_{t}h\mu\) and \(E^{\pm}_{t}\) are the Green operators of \({}^{t}P\). But \(E^{\pm}_{t}\) is the formal dual of \(E^{\mp}\), so in fact \({}^{t}\nu^{\pm}=\nu^{\mp}\) and_
\[\ker^{t}(P+\lambda A)|_{C^{\infty}_{pc/fc}}=\begin{cases}0&\nu^{\mp}\lambda \neq 1\\ \mathbb{C}E^{\pm}h&\nu^{\mp}\lambda=1\end{cases} \tag{3.5}\]
_and consequently_
\[\dim\ker(P+\lambda A)|_{C^{\infty}_{pc/fc}}=\dim\ker({}^{t}P+\lambda^{t}A)|_{ C^{\infty}_{pc/fc}}. \tag{3.6}\]
_Thus there are at most two values of \(\lambda\) for which \(P+\lambda A\) can fail to be \(K\)-nonlocal Green-hyperbolic. In the case where \(P\) obeys the hypothesis (a) of Theorem 3.1 (e.g., if \(P\) is the Klein-Gordon operator) then \(P+\lambda A\) is \(K\)-nonlocally Green-hyperbolic for all \(\lambda\in\mathbb{C}\setminus\{(\nu^{\pm})^{-1}\}\)._
The example above illustrates a general result based on Fredholm index theory, which was prompted by an insightful question posed to the author by Bar, and is proved in section 6.
**Theorem 3.4**.: _Suppose that the hypotheses of Theorem 3.1 are met for both \(P\) and \(A(\lambda)\), and \({}^{t}P\) and \({}^{t}A(\lambda)\), with \(\beta\geq 0\) and \(s_{*}\leq-\beta<\gamma\). Then, for all \(\lambda\in\mathbb{C}\),_
\[N^{\pm}(\lambda):=\dim\ker(P+A(\lambda))|_{C^{\infty}_{p/\varepsilon/ \varepsilon}}=\dim\ker({}^{t}P+{}^{t}A(\lambda))|_{C^{\infty}_{\xi/p \varepsilon}}. \tag{3.7}\]
_In particular, \(N^{+}(\lambda)=N^{-}(\lambda)\) in the case where \(P\) and \(A(\lambda)\) are formally self-dual._
At first sight it is quite surprising that the spaces of spontaneously appearing and disappearing solutions have equal dimension for the self-dual case. At its core is the fundamental fact about Lorentzian causality that \(x\in J^{\pm}(y)\) if and only if \(y\in J^{\mp}(x)\), and its consequence that the advanced and retarded Green operators of a Green hyperbolic operator \(P\) are formal duals of the retarded and advanced Green operators of the formal dual \({}^{t}P\). This by itself is not enough for the result above, which also makes essential use of the that \(A(\lambda)E^{\pm}\) is compact, because it improves regularity.
The last remark prompts one to consider situations where \(A(\lambda)E^{\pm}\) does not improve regularity.
**Example 3.5**.: _Consider the Green-hyperbolic operator \(P=\partial_{u}\partial_{v}\) with respect to \((u,v)\) coordinates on \(\mathbb{R}^{2}\), regarding vectors with nonnegative components in these coordinates as future-pointing and causal. We may write \(P=\partial\otimes\partial\) in an obvious tensor product notation. Let \(f\), \(g\) and \(h\) be smooth real-valued functions on \(\mathbb{R}\), so that \(f\), \(g^{\prime}\) and \(h\) have support contained in \([-2,2]\), \(\operatorname{supp}g\subset[-2,\infty)\), \(f\equiv 1\) on a neighbourhood of \([-1,1]\) and \(\langle h\mid g\rangle=1\) in the usual \(L^{2}(\mathbb{R})\) inner product. Define \(T=-|g^{\prime}\rangle\langle h\mid\) and note that_
\[g^{\prime}+Tg=0. \tag{3.8}\]
_Setting \(K=[-2,2]\times[-2,2]\), the operator_
\[A=(f\partial)\otimes T \tag{3.9}\]
_maps \(C^{\infty}(\mathbb{R}^{2})\) continuously to \(C^{\infty}_{0}(K)\), vanishes on \(\phi\in C^{\infty}(M)\) with \(\phi|_{K}\equiv 0\), and extends to a continuous map \(H^{s}_{\mathrm{loc}}\to H^{s-1}_{K}\) for any \(s\in\mathbb{R}\). Thus \(AE^{\pm}:H^{s}_{K}\to H^{s}_{K}\). It is now easily seen that the equation_
\[P\varphi+A\varphi=0 \tag{3.10}\]
_is solved by any distribution \(\varphi=\upsilon\otimes g\), where \(\upsilon\in\mathcal{D}^{\prime}(\mathbb{R})\) is supported in \([-1,1]\). Such \(\varphi\) have past compact support, contained in \([-1,1]\times[-2,\infty)\)._
_This example shows that the regularity-improving nature of \(AE^{\pm}\) is responsible for both the smoothness of past-future-compact solutions to \((P+A)\phi=0\) and the finite-dimensionality of the corresponding solution spaces._
In the light of this example it is clear that further conditions would be needed to deal with modifications of first order derivative operators. For example, if \(D\) is a Dirac operator then there is a companion operator \(\tilde{D}\) so that \(P=D\tilde{D}\) and \(\tilde{P}=\tilde{D}D\) are second order Green-hyperbolic with Green operators \(E^{\pm}\) and \(\tilde{E}^{\pm}\) respectively that improve regularity by one order. Then \(G^{\pm}=\tilde{D}E^{\pm}\) and \(\tilde{G}^{\pm}=D\tilde{E}^{\pm}\) are Green operators for \(D\) and \(\tilde{D}\) respectively. Similarly, \(G^{\pm}_{t}={}^{t}\tilde{D}E^{\pm}_{t}\) and \(\tilde{G}^{\pm}_{t}={}^{t}D\tilde{E}^{\pm}_{t}\) are Green operators for \({}^{t}D\) and \({}^{t}\tilde{D}\), where \(E^{\pm}_{t}\) and \(\tilde{E}^{\pm}_{t}\) denote the Green operators for \({}^{t}P\) and \({}^{t}\tilde{P}\). Thus \(D\) and \(\tilde{D}\) are Green-hyperbolic, but we cannot assume that their Green operators improve regularity.
Now consider a \(K\)-nonlocal modification \(D+A\) of \(D\), suppressing \(\lambda\)-dependence in the following. If \(A\) and \({}^{t}A\) have range in \(C^{\infty}_{K}\) and _improve_ regularity, then Theorem 3.1 applies and shows that \(D+A\) is \(K\)-nonlocally Green-hyperbolic provided that there are no nontrivial past-compact or future-compact solutions to \((D+A)\phi=0\) or \(({}^{t}D+{}^{t}A)\phi=0\). More generally, a natural strategy to find Green operators for the modified operator is to seek a \(K\)-nonlocal operator \(\tilde{A}\) so that
\[P_{A}=(D+A)(\tilde{D}-\tilde{A})=D\tilde{D}+A\tilde{D}-D\tilde{A}-A\tilde{A} \tag{3.11}\]
\[\tilde{P}_{A}=(\tilde{D}-\tilde{A})(D+A)=\tilde{D}D+\tilde{D}A-\tilde{A}D-\tilde{A}A \tag{3.12}\]
are \(K\)-nonlocally Green-hyperbolic. If \(A\) is regularity-preserving and one may find a regularity-preserving \(\tilde{A}\) such that \(A\tilde{D}-D\tilde{A}\) and \(\tilde{D}A-\tilde{A}D\) lose _strictly less_ than one order of regularity (for all \(\lambda\in\mathbb{C}\)), then Theorem 3.1 could be applied to \(P_{A}\) and \(\tilde{P}_{A}\) and their Green operators (when they exist) may be used to construct Green operators for \(D+A\) and \(\tilde{D}+\tilde{A}\) as before. Lacking such a choice, one potentially falls victim to the behaviour in Example 3.5, unless Theorem 3.1 can be extended to situations in which \(AE^{\pm}\) does not improve regularity. As these questions are best pursued in the context of specific models, we leave the discussion here and turn to the proof of Theorem 3.1.
## 4 Application: LU factorisation method for systems
LU factorisation is a standard technique in linear algebra for solving system of linear equations \(Mx=y\) for \(x,y\in\mathbb{C}^{N}\). In situations where an invertible matrix \(M=LU\) with \(L\) lower-triangular and \(U\) upper-triangular, the solution is given by solving the triangular systems \(Lz=y\), \(Ux=z\). The method can be generalised in many ways, see, e.g. [18]. Here, we discuss its use to solve systems of Green-hyperbolic equations, or \(K\)-nonlocal generalisations thereof.
To see why this cannot be done straightforwardly, consider a system \(\mathcal{P}\Phi=F\) where \(\mathcal{P}:C^{\infty}(M;\mathbb{C}^{N})\to C^{\infty}(M;\mathbb{C}^{N})\) is a \(N\times N\) matrix of linear self-maps of \(C^{\infty}(M)\), where \(M\) is a globally hyperbolic spacetime as usual and \(N\geq 2\). We adopt a block form
\[\mathcal{P}=\begin{pmatrix}P&R\\ S&Q\end{pmatrix}, \tag{4.1}\]
where \(Q\) is an \((N-1)\times(N-1)\) block, fixing the dimensions of the other blocks accordingly, so \(P=\mathcal{P}_{11}\) is a self-map of \(C^{\infty}(M)\). If \(P\) is a differential operator with retarded and advanced Green operators \(E_{P}^{\pm}\) and \(RC^{\infty}(M;\mathbb{C}^{N-1})\subset C_{0}^{\infty}(M;\mathbb{C})\), one may factorise \(\mathcal{P}\) on \(C_{0}^{\infty}(M;\mathbb{C}^{N})\) as
\[\mathcal{P}=\begin{pmatrix}1&0\\ SE_{P}^{\pm}&1\end{pmatrix}\begin{pmatrix}P&R\\ 0&Q-SE_{P}^{\pm}R\end{pmatrix}. \tag{4.2}\]
This is the first step towards an LU factorisation: the first factor is indeed lower-triangular, but the second is not generally upper-triangular. The problem is that even if all the individual matrix elements of \(\mathcal{P}\) are differential operators, the factorised form involves the typically nonlocal operators \(SE_{P}^{\pm}\) and \(SE_{P}^{\pm}R\). Therefore to proceed with this strategy one should rather phrase the problem from the start in terms of suitable nonlocal operators. (Modulo smoothing operators, one could employ pseudodifferential operators if the leading diagonal operators were elliptic, but here we require something more.)
We describe how an exact LU factorisation can be achieved using Theorem 3.1. Let \(K\) be a fixed compact, topologically regular subset of \(M\), and let \(s_{*}\in\mathbb{R}\). Fix \(\beta,\gamma\in\mathbb{R}\) with \(\beta+\gamma>0\). Let \(\mathcal{A}\) be the space of maps \(A:\mathbb{C}\to\mathcal{L}(C^{\infty}(M))\) so that, for all \(\lambda\in\mathbb{C}\),
* \(A(\lambda)C^{\infty}(M)\subset C_{K}^{\infty}(M)\);
* \(A(\lambda)f\equiv 0\) if \(f\in C^{\infty}(M)\) vanishes identically on \(K\);
* \(A(\lambda)\) extends to a linear map \(\mathcal{D}^{\prime}(M)\to\mathcal{D}^{\prime}_{K}(M)\) with continuous restrictions mapping \(H^{s}_{\text{loc}}\to H^{s+\gamma}_{K}\) for all \(s\geq s_{*}+\beta\);
and so that \(\lambda\to A(\lambda)\) is holomorphic with respect to the topology of \(\mathcal{L}_{b}(H^{s}_{\mathrm{loc}},H^{s+\gamma}_{K})\). Consider a system \(\mathcal{P}\) depending on a parameter \(\lambda\) so that every off-diagonal component \(\mathcal{P}_{ij}\) (\(i\neq j\)) of \(\mathcal{P}\) is a map in \(\mathcal{A}\), and the diagonal components take the form
\[\mathcal{P}_{ii}=P_{i}+A_{i}, \tag{4.3}\]
where each \(P_{i}\) is a \(\lambda\)-independent Green-hyperbolic operator and each \(A_{i}\) is a map in \(\mathcal{A}\). Assume finally that the Green operators of the \(P_{i}\) extend to maps \(H^{s}_{0}\to H^{s+\beta}_{\mathrm{loc}}\) for all \(s\geq s_{*}\).
By Theorem 3.1, \(P(\lambda)=P_{1}+A_{1}(\lambda)\) has retarded and advanced \(K\)-nonlocal Green operators for all \(\lambda\) in an open \(0\)-neighbourhood with discrete complement in \(\mathbb{C}\). We may therefore factorise \(\mathcal{P}\) as in (4.2). A key point is that the \((N-1)\times(N-1)\) dimensional systems
\[\mathcal{P}^{\pm}(\lambda)=Q(\lambda)-S(\lambda)E^{\pm}_{P(\lambda)}R(\lambda) \tag{4.4}\]
obey the same assumptions as the original system, noting that the matrix components of \(SE^{\pm}_{P}R\) determine continuous maps \(H^{s}_{0}\to H^{s+\beta+\delta}_{K}\hookrightarrow H^{s+\beta}_{K}\). Theorem 3.1 implies that the leading diagonal component of \(\mathcal{P}^{+}(\lambda)\) (resp., \(\mathcal{P}^{-}(\lambda)\)) has retarded (resp., advanced) \(K\)-nonlocal Green operators for all \(\lambda\) outside a possibly enlarged exceptional set, so we may factor each of \(\mathcal{P}^{\pm}(\lambda)\). Repeating the process, this leads to two LU factorisations of \(\mathcal{P}(\lambda)\), which differ only in whether advanced or retarded Green operators are used in their construction. At each stage in the process we may gain more exceptional points, but the overall exceptional set is still discrete and excludes zero. From now on, we suppress the parameter \(\lambda\) in the notation.
For non-exceptional \(\lambda\), we may now use the LU factorisation to obtain retarded and advanced Green operators for \(\mathcal{P}\). We proceed inductively in \(N\). When \(N=1\) we are precisely in the situation of Theorem 3.1. Now suppose that Green operators can be constructed for \(N-1\)-dimensional systems of the type considered, where \(N\geq 2\). To establish the inductive step we return to the factorisation (4.2), in which we may now assume that \(Q-SE^{\pm}_{P}R\) has retarded/advanced Green operators and that \(P\) has both retarded and advanced Green operators. We claim that
\[E^{\pm}_{\mathcal{P}}=\begin{pmatrix}E^{\pm}_{P}&-E^{\pm}_{P}RE^{\pm}_{Q-SE^ {\pm}_{P}R}\\ 0&E^{\pm}_{Q-SE^{\pm}_{P}R}\end{pmatrix}\begin{pmatrix}1&0\\ -SE^{\pm}_{P}&1\end{pmatrix} \tag{4.5}\]
are retarded/advanced Green operators for \(\mathcal{P}\). To check this, we first observe that the lower-triangular factor in (4.2), which we denote \(L\), is invertible on \(C^{\infty}_{0}(M;\mathbb{C}^{N})\) with inverse
\[L^{-1}=\begin{pmatrix}1&0\\ -SE^{\pm}_{P}&1\end{pmatrix}. \tag{4.6}\]
Then, we compute on the one hand, that
\[\mathcal{P}E^{\pm}_{\mathcal{P}}F=\begin{pmatrix}P&R\\ S&Q\end{pmatrix}\begin{pmatrix}E^{\pm}_{P}&-E^{\pm}_{P}RE^{\pm}_{Q-SE^{\pm}_{ P}R}\\ 0&E^{\pm}_{Q-SE^{\pm}_{P}R}\end{pmatrix}L^{-1}F=\begin{pmatrix}1&0\\ SE^{\pm}_{P}&1\end{pmatrix}L^{-1}F=F \tag{4.7}\]
and on the other, that
\[E^{\pm}_{\mathcal{P}}\mathcal{P}F=\begin{pmatrix}E^{\pm}_{P}&-E^{\pm}_{P}RE^{ \pm}_{Q-SE^{\pm}_{P}R}\\ 0&E^{\pm}_{Q-SE^{\pm}_{P}R}\end{pmatrix}L^{-1}L\begin{pmatrix}P&R\\ 0&Q-SE^{\pm}_{P}R\end{pmatrix}F=F \tag{4.8}\]
for any \(F\in C^{\infty}_{0}(M;\mathbb{C}^{N})\) in each case. Finally, the support property G3\({}^{\prime}\) is clear, provided one takes a consistent choice of \(+\) or \(-\) in (4.5). Thus \(\mathcal{P}\) has retarded and advanced \(K\)-nonlocal Green operators, which is the inductive step, and shows that all finite-dimensional systems of the type considered possess advanced and retarded \(K\)-nonlocal Green operators; furthermore, these will vary holomorphically in \(\lambda\), in the topology of \(\mathcal{L}_{b}(C^{\infty}_{0}(M;\mathbb{C}^{N}),C^{\infty}(M;\mathbb{C}^{N}))\) outside the exceptional set. In particular, this justifies the treatment of such systems in a recent paper on measurement in QFT [10], where interactions between a'system' QFT and one or more 'probe QFT's are analysed.
Proof of Theorem 3.1
The proof of Theorem 3.1 starts by establishing an inversion result that will later be applied to \(I+A(\lambda)E^{\pm}\), where \(E^{\pm}\) are the Green operators of \(P\). This is proved first in Sobolev spaces \(H^{s}_{K^{\prime}}\), using the analytic Fredholm theorem, and then extended to \(C^{\infty}_{K}\). The main part of the proof then uses this information to define Green operators for \(P+A(\lambda)\).
**Theorem 5.1**.: _Let \(K\subset M\) be a fixed topologically regular compact set. Suppose \(Y(\lambda)\) (\(\lambda\in\mathbb{C}\)) is a family of linear maps \(Y(\lambda):\mathcal{D}^{\prime}_{K}\to\mathcal{D}^{\prime}_{K}\), each restricting to continuous maps \(Y_{s}(\lambda):H^{s}_{K}\to H^{s+\delta}_{K}\) for all \(s\geq s_{*}\in\mathbb{R}\) and some fixed \(\delta>0\). Assume also that \(Y(0)=0\). Let \(\hat{Y}_{s}(\lambda)\in\mathcal{L}(H^{s}_{K})\) denote the compact maps obtained by composing \(Y_{s}(\lambda)\) with the embedding \(H^{s+\delta}_{K}\hookrightarrow H^{s}_{K}\). Suppose that \(\lambda\to\hat{Y}_{s}(\lambda)\) is holomorphic on \(\mathbb{C}\) in \(\mathcal{L}_{b}(H^{s}_{K})\) for all \(s\geq s_{*}\). Then:_
1. _The set_ \[S=\{\lambda\in\mathbb{C}:\ker(I+Y(\lambda))|_{C^{\infty}_{K}}\neq 0\}\] (5.1) _is a discrete subset of_ \(\mathbb{C}\)_, and_ \(\mathbb{C}\setminus S\) _is an open_ \(0\)_-neighbourhood. Furthermore,_ \(\ker(I+Y(\lambda))|_{C^{\infty}_{K}}\) _is finite-dimensional, and equal to_ \(\ker(I+\hat{Y}_{s}(\lambda))\) _for each_ \(s\geq s_{*}\)_._
2. _For all_ \(\lambda\in\mathbb{C}\setminus S\) _and all_ \(s\geq s_{*}\)_, the map_ \(I+\hat{Y}_{s}(\lambda)\) _is continuously invertible for all_ \(s\geq s_{*}\)_, and_ \(\lambda\mapsto(I+\hat{Y}_{s}(\lambda))^{-1}\) _is holomorphic in_ \(\mathbb{C}\setminus S\)_. Moreover,_ \((I+\hat{Y}_{s}(\lambda))^{-1}\) _is the restriction of_ \((I+\hat{Y}_{s_{*}}(\lambda))^{-1}\) _to_ \(H^{s}_{K}\) _for each_ \(s\geq s_{*}\)_,_ \(\lambda\in\mathbb{C}\setminus S\)_._
3. _(i) Suppose that_ \(f:\mathbb{C}\setminus S\to H^{s_{*}}_{K}\) _is holomorphic and there is a compact subset_ \(K_{f}\subset K\) _so that_ \(\operatorname{supp}(\hat{Y}_{s_{*}}(\lambda))^{r}f(\lambda)\subset K_{f}\) _for all_ \(r\in\mathbb{N}_{0}\) _and_ \(\lambda\in\mathbb{C}\setminus S\)_. Then_ \(\operatorname{supp}(I+\hat{Y}_{s_{*}}(\lambda))^{-1}f(\lambda)\subset K_{f}\) _for all_ \(\lambda\in\mathbb{C}\setminus S\)_._ _(ii) If_ \(Y(\lambda)C^{\infty}_{K}\subset C^{\infty}_{K^{\prime}}\) _for some compact_ \(K^{\prime}\subset K\)_, then_ \(\operatorname{supp}(I+\hat{Y}_{s_{*}}(\lambda))^{-1}f\subset K^{\prime}\) _for all_ \(\lambda\in\mathbb{C}\setminus S\) _and all_ \(f\in C^{\infty}_{K}\)_, and_ \(\ker(I+\hat{Y}_{s_{*}}(\lambda))\) _is a finite-dimensional subspace of_ \(C^{\infty}_{K^{\prime}}\) _for all_ \(\lambda\in\mathbb{C}\)_._ _NB In (c),'support' is to be understood as distributional support._
Proof.: (a) Compactness of \(\hat{Y}_{s}(\lambda)\) follows from the Sobolev embedding theorems. The function \(\lambda\mapsto I+\hat{Y}_{s_{*}}(\lambda)\) is an analytic function on \(\mathbb{C}\) with values in the Fredholm operators on \(H^{s_{*}}_{K}\), and which is invertible for \(\lambda=0\) because \(\hat{Y}_{s_{*}}(0)=0\). By the analytic Fredholm theorem [24, Thm VI.14], \(I+\hat{Y}_{s_{*}}(\lambda)\) is invertible for all \(\lambda\in\mathbb{C}\) with the exception of the (possibly empty) set \(S_{s_{*}}\) of \(\lambda\in\mathbb{C}\) for which \(\ker(I+\hat{Y}_{s_{*}}(\lambda))\) is nontrivial (and necessarily finite-dimensional by [24, Thm VI.15]). Furthermore, \(S_{s_{*}}\) is a discrete subset of \(\mathbb{C}\), whose complement is an open \(0\)-neighbourhood, and the inverse \((I+\hat{Y}_{s_{*}}(\lambda))^{-1}\) is meromorphic on \(\mathbb{C}\) and holomorphic on \(\mathbb{C}\setminus S_{s_{*}}\).
If \(f\in\ker(I+\hat{Y}_{s_{*}}(\lambda))\) then \(f=-\hat{Y}_{s_{*}}(\lambda)f\in H^{s_{*}+\delta}_{K}\) obeys \(f\in\ker(I+\hat{Y}_{s_{*}+\delta}(\lambda))\); iterating, \(f\in\bigcap_{s\geq s_{*}}H^{s}_{K}=C^{\infty}_{K}\) and \(f\in\ker(I+Y(\lambda))|_{C^{\infty}_{K}}\). As the converse inclusion is trivial, we deduce that \(\ker(I+\hat{Y}_{s_{*}}(\lambda))=\ker(I+Y(\lambda))|_{C^{\infty}_{K}}\) for all \(\lambda\in\mathbb{C}\) and hence that \(S_{s_{*}}=S\) defined in (5.1). In particular, for each fixed \(\lambda\in\mathbb{C}\), all the kernels \(\ker(I+\hat{Y}_{s}(\lambda))\) are equal for \(s\geq s_{*}\).
(b) The first statement is immediate by the Fredholm theorem and part (a); the second follows because \(I+\hat{Y}_{s}(\lambda)\) coincides with \(I+\hat{Y}_{s_{*}}(\lambda)\) on \(H^{s}_{K}\) and both operators are invertible for \(\lambda\in\mathbb{C}\setminus S\).
(c)(i) Let \(\chi\in C^{\infty}_{0}\) vanish on \(K_{f}\). Regarding each \((\hat{Y}_{s_{*}}(\lambda))^{n}f(\lambda)\) as a distribution with support in \(K_{f}\), we see (e.g., by Theorem 2.3.3 in [16]) that \((\hat{Y}_{s_{*}}(\lambda)^{n}f(\lambda))(\chi)=0\) for all \(n\in\mathbb{N}_{0}\) and so the holomorphic function on \(\mathbb{C}\setminus S\) defined by \(\lambda\mapsto((I+\hat{Y}_{s_{*}}(\lambda))^{-1}f(\lambda))(\chi)\) vanishes in a neighbourhood of the origin on which the resolvent may be expanded as a convergent geometric series in powers of \(\hat{Y}_{s_{*}}(\lambda)\), recalling that \(\hat{Y}_{s_{*}}(\lambda)\to 0\) as \(\lambda\to 0\). By holomorphicity, it follows that \(((I+\hat{Y}_{s_{*}}(\lambda))^{-1}f(\lambda))(\chi)=0\) for all \(\lambda\in\mathbb{C}\setminus S\). Allowing \(\chi\) to vary, we conclude that \(\operatorname{supp}(I+\hat{Y}_{s_{*}}(\lambda))^{-1}f(\lambda)\subset K_{f}\).
(c)(ii) The assumption implies that of (c)(i) for all \(f\in C_{K}^{\infty}\) (regarded as constant functions from \(\mathbb{C}\setminus S\) to \(H_{K}^{s_{*}}\)) with \(K_{f}=K^{\prime}\), so \(\operatorname{supp}(I+\hat{Y}_{s_{*}}(\lambda))^{-1}f\subset K^{\prime}\) for all \(\lambda\in\mathbb{C}\setminus S\) and all \(f\in C_{K}^{\infty}\). Finally, if \(g\in\ker(I+\hat{Y}_{s_{*}}(\lambda))=\ker(I+Y(\lambda))|_{C_{K}^{\infty}}\) (already known to be finite-dimensional) then \(g\in C_{K}^{\infty}\) and \(g=-Y(\lambda)g\in C_{K^{\prime}}^{\infty}\).
The hypotheses of Theorem 5.1 entail that each \(Y(\lambda)\) restricts to a continuous endomorphism of \(C_{K}^{\infty}\). To see this, note that for any \(k\in\mathbb{N}_{0}\) and any integer \(s>k+n/2\), where \(n\) is the maximum dimension of any component of \(M\), there are continuous maps
\[C_{K}^{s}\longrightarrow H_{K}^{s}\overset{Y(\lambda)}{\longrightarrow}H_{K} ^{s}\longrightarrow C_{K}^{k}, \tag{5.2}\]
where the unlabelled arrows are Sobolev embeddings. Thus for all \(k\in\mathbb{N}_{0}\) there is \(s\in\mathbb{N}_{0}\) and a constant \(C\) such that \(\|Y(\lambda)f\|_{K,k}\leq C\|f\|_{K,s}\) for all \(f\in C_{K}^{\infty}\). This observation allows the following conclusion to be drawn.
**Corollary 5.2**.: _In the notation of Theorem 5.1, and for \(\lambda\in\mathbb{C}\setminus S\), \(I+Y(\lambda)\) restricts to a homeomorphism of \(C_{K}^{\infty}\), with inverse to be denoted \(R(\lambda)\). The map \(\lambda\mapsto R(\lambda)\) is holomorphic on \(\mathbb{C}\setminus S\) in the topology of \(\mathscr{L}_{b}(C_{K}^{\infty})\). If, additionally, for some holomorphic function \(f:\mathbb{C}\setminus S\to C_{K}^{\infty}\) there is a compact subset \(K_{f}\subset K\) so that \(\operatorname{supp}Y(\lambda)^{\prime}f(\lambda)\subset K_{f}\) for all \(r\in\mathbb{N}_{0}\) and \(\lambda\in\mathbb{C}\setminus S\), then \(\operatorname{supp}R(\lambda)f(\lambda)\subset K_{f}\) for all \(\lambda\in\mathbb{C}\setminus S\). If \(Y(\lambda)C_{K}^{\infty}\subset C_{K^{\prime}}^{\infty}\) for some compact \(K^{\prime}\subset K\), then \(\ker(I+Y(\lambda))|_{C_{K}^{\infty}}\) is a finite-dimensional subspace of \(C_{K^{\prime}}^{\infty}\) for all \(\lambda\in\mathbb{C}\)._
Proof.: Suppose \(f\in C_{K}^{\infty}\) and \(\lambda\notin S\). Then, for all \(s\geq s_{*}\), we have \(f\in H_{K}^{s}\) and \((I+\hat{Y}_{s}(\lambda))^{-1}f=(I+\hat{Y}_{s_{*}}(\lambda))^{-1}f\), and consequently
\[(I+\hat{Y}_{s_{*}}(\lambda))^{-1}f\in\bigcap_{s\geq s_{*}}H_{K}^{s}=C_{K}^{ \infty}. \tag{5.3}\]
Thus \((I+\hat{Y}_{s_{*}}(\lambda))^{-1}\) restricts to linear self-map of \(C_{K}^{\infty}\) which (because \(Y(\lambda)\) and \(\hat{Y}_{s_{*}}(\lambda)\) agree on \(C_{K}^{\infty}\)) is a linear inverse to the restriction of \(I+Y(\lambda)\). Accordingly, \(I+Y(\lambda)\) restricts to a continuous bijection of \(C_{K}^{\infty}\), and since the latter is a Frechet space the inverse mapping theorem implies that the inverse \(R(\lambda)\) is continuous. As every \(C_{K}^{\infty}\) semi-norm is dominated by a Sobolev norm and vice versa, convergence in the operator norm of every \(H_{K}^{s}\) (\(s\geq s_{*}\)) implies convergence in \(\mathscr{L}_{b}(C_{K}^{\infty})\), by Corollary A.5 in the Appendix. It follows that \(\lambda\mapsto R(\lambda)\) is holomorphic on \(\mathbb{C}\setminus S\) in the topology of \(\mathscr{L}_{b}(C_{K}^{\infty})\). The remaining statements are immediate from parts (a,c) of Theorem 5.1 and the fact that distributional support of a smooth function is exactly its usual support.
We need the following elementary observation, which will be used for \(F=C_{K}^{\infty}\), \(G=C_{0}^{\infty}\).
**Lemma 5.3**.: _Let \(F\) and \(G\) be topological vector spaces and suppose that the diagram_
(5.4)
_of continuous linear maps commutes. If \(\operatorname{id}_{F}+S\) is continuously invertible, then \(\operatorname{id}_{G}+T\) is continuously invertible with inverse_
\[(\operatorname{id}_{G}+T)^{-1}=\operatorname{id}_{G}-T+\imath S(\operatorname {id}_{F}+S)^{-1}\hat{T}. \tag{5.5}\]
Proof.: Assuming that \(\mathrm{id}_{F}+S\) is continuously invertible, we compute
\[(\mathrm{id}_{G}+T)(\mathrm{id}_{G}-T+\iota S(\mathrm{id}_{F}+S)^{-1}\hat{T})= \mathrm{id}_{G}-T^{2}+\iota(\mathrm{id}_{F}+S)S(\mathrm{id}_{F}+S)^{-1}\hat{T}= \mathrm{id}_{G}\]
and
\[(\mathrm{id}_{G}-T+\iota S(\mathrm{id}_{F}+S)^{-1}\hat{T})(\mathrm{id}_{G}+T)= \mathrm{id}_{G}-T^{2}+\iota S(\mathrm{id}_{F}+S)^{-1}(\mathrm{id}_{F}+S)\hat{T} =\mathrm{id}_{G}.\]
using in both cases the identities \(\iota S=T\iota\) and hence \(\iota S\hat{T}=T\iota\hat{T}=T^{2}\), and also \(\hat{T}T=\hat{T}\iota\hat{T}=S\hat{T}\). Therefore \(\mathrm{id}_{G}+T\) has a linear inverse, given by a manifestly continuous expression.
We come to the proof of the main result.
Proof of Theorem 3.1.: The proof involves several steps, and uses some technical lemmas from the Appendix.
_1. Preliminary observations_ Define \(Y^{\pm}(\lambda)\in\mathcal{L}(C_{0}^{\infty},C^{\infty})\) by
\[Y^{\pm}(\lambda)=A(\lambda)E^{\pm}. \tag{5.6}\]
Owing to assumptions (a) and (b), we may extend \(Y^{\pm}(\lambda)\) to linear maps \(\mathcal{E}^{\prime}\to\mathcal{D}^{\prime}_{K}\) with continuous restrictions mapping \(H_{0}^{s}\to H_{K}^{s+\delta}\) for all \(s\geq s_{*}\); hence there are also continuous restrictions \(Y_{s}^{\pm}(\lambda):H_{K}^{s}\to H_{K}^{s+\delta}\) for \(s\geq s_{*}\), and by the Sobolev embedding theorems, compact maps \(\hat{Y}_{s}^{\pm}(\lambda):H_{K}^{s}\to H_{K}^{s}\) for \(s\geq s_{*}\), which are holomorphic with respect to the operator norm topology, as noted after the statement of Theorem 3.1.
In fact, if \(K^{\prime}\) is any compact topologically regular set, the same argument shows that \(Y^{\pm}(\lambda)\) define compact maps depending holomorphically on \(\lambda\) in the topology of \(\mathcal{L}_{b}(H_{K\cup K^{\prime}}^{s})\) for all \(s\geq s_{*}\). By Corollary A.5, it follows that \(Y^{\pm}(\lambda)\) is holomorphic in \(\lambda\) with respect to the \(\mathcal{L}_{b}(C_{K\cup K^{\prime}}^{\infty})\) topology. As \(C_{K}^{\infty}\) and \(C_{K^{\prime}}^{\infty}\) are continuously embedded topological subspaces of \(C_{K\cup K^{\prime}}^{\infty}\), and \(\mathrm{Ran}Y^{\pm}(\lambda)\subset C_{K}^{\infty}\), we may use Lemma A.2(a),(c) to deduce that the \(Y^{\pm}(\lambda)\) is holomorphic in \(\lambda\) with respect to the topology of \(\mathcal{L}_{b}(C_{K^{\prime}}^{\infty},C_{K}^{\infty})\). By Lemma A.6 it follows that \(A(\lambda)E^{\pm}\) are also holomorphic with respect to \(\mathcal{L}_{b}(C_{0}^{\infty},C_{K}^{\infty})\) and \(\mathcal{L}_{b}(C_{0}^{\infty})\); using Lemma A.2(a), we also have holomorphicity with respect to \(\mathcal{L}_{b}(C_{K}^{\infty},C_{0}^{\infty})\).
_2. Finite dimensionality of \(\ker(P+A(\lambda))|_{C_{pc/fc}^{\infty}}\)_. Next, observe that \(P\) induces bijections between \(\ker(P+A(\lambda))|_{C_{pc/fc}^{\infty}}\) and \(\ker(I+Y^{\pm}(\lambda))|_{C_{K}^{\infty}}\), with inverses given by the restrictions of \(E^{\pm}\). For if \((P+A(\lambda))\phi=0\) with \(\phi\in C_{pc/fc}^{\infty}\) then \(P\phi=-A(\lambda)\phi\in C_{K}^{\infty}\) and \(\phi=E^{\pm}P\phi\), so \(P\phi\in\ker(I+A(\lambda)E^{\pm})|_{C_{K}^{\infty}}\); conversely, if \((I+A(\lambda)E^{\pm})h=0\) with \(h\in C_{K}^{\infty}\), then \(PE^{\pm}h=h=-A(\lambda)E^{\pm}h\), so \(E^{\pm}h\in\ker(P+A(\lambda))|_{C_{pc/fc}^{\infty}}\). Thus
\[S^{\pm}:=\{\lambda\in\mathbb{C}:\ker(P+A(\lambda))|_{C_{pc/fc}^{\infty}}\neq 0 \}=\{\lambda\in\mathbb{C}:\ker(I+Y^{\pm}(\lambda))|_{C_{K}^{\infty}}\neq 0\}. \tag{5.7}\]
In combination with Theorem 5.1(a), we also have
\[\dim\ker(P+A(\lambda))|_{C_{pc/fc}^{\infty}}=\dim\ker(I+Y_{s}^{\pm}(\lambda)) |_{C_{K}^{\infty}}=\dim\ker(I+\hat{Y}_{s}^{\pm}(\lambda))<\infty \tag{5.8}\]
(the latter kernel taken in \(H_{K}^{s}\)) for all \(s\geq s_{*}\). Part (A) of the Theorem is thus proved.
_3. Construction of holomorphic candidate \(K\)-nonlocal Green operators._ Applying Theorem 5.1 and Corollary 5.2 to \(Y^{\pm}(\lambda)\), one finds that \(S^{\pm}\) are discrete subsets of \(\mathbb{C}\), whose complements in \(\mathbb{C}\) are open \(0\)-neighbourhoods, and the operators \(I+Y^{\pm}(\lambda)\) are continuously invertible on \(C_{K}^{\infty}\) for \(\lambda\in\mathbb{C}\setminus S^{\pm}\), with inverses that are holomorphic on \(\mathbb{C}\setminus S^{\pm}\) in the topology of \(\mathcal{L}_{b}(C_{K}^{\infty})\). All these properties hold also for \(S=S^{+}\cup S^{-}\), so part (B) is proved.
Fixing any \(\lambda\in\mathbb{C}\setminus S\), the diagram of continuous maps
(5.9)
commutes, where the unlabelled arrows are the canonical inclusions, \(T_{0}\) and \(T_{K}\) are the respective restrictions of \(Y^{\pm}(\lambda)\) to \(C_{0}^{\infty}\), and \(C_{K}^{\infty}\) and \(\hat{T}_{0}\) exists because \(A(\lambda)C^{\infty}\subset C_{K}^{\infty}\). It follows from Lemma 5.3 that \(I+Y^{\pm}(\lambda)\) are continuously invertible on \(C_{0}^{\infty}\). By abuse of notation we write the inverses as \((I+A(\lambda)E^{\pm})^{-1}\); Lemma 5.3 now gives the identity
\[(I+A(\lambda)E^{\pm})^{-1}=I-A(\lambda)E^{\pm}+A(\lambda)E^{\pm}(I+A(\lambda) E^{\pm})^{-1}A(\lambda)E^{\pm}, \tag{5.10}\]
on \(C_{0}^{\infty}(K)\), where we suppress notation for inclusions and restrictions. Note that the inverse on the right-hand side is taken in \(\mathcal{L}(C_{K}^{\infty})\), while the left-hand side is an inverse in \(\mathcal{L}(C_{0}^{\infty})\). Because the former inverse is holomorphic in \(\mathcal{L}_{b}(C_{K}^{\infty})\), the Leibniz rule (see Corollary A.3) implies that the left-hand side is holomorphic in \(\mathcal{L}_{b}(C_{0}^{\infty})\); here, we have also used the holomorphicity of \(A(\lambda)E^{\pm}\) in \(\mathcal{L}_{b}(C_{0}^{\infty},C_{K}^{\infty})\), \(\mathcal{L}_{b}(C_{0}^{\infty})\) and \(\mathcal{L}_{b}(C_{K}^{\infty},C_{0}^{\infty})\) established in step 1 of the proof. It follows that the operators
\[\tilde{E}_{\lambda}^{\pm}=E^{\pm}(I+A(\lambda)E^{\pm})^{-1}\in\mathcal{L}(C_{0 }^{\infty},C^{\infty}), \tag{5.11}\]
which are the candidate Green operators for \(P+A(\lambda)\), are holomorphic in \(\lambda\) on \(\mathbb{C}\setminus S\) with respect to the topology of \(\mathcal{L}_{b}(C_{0}^{\infty},C^{\infty})\). To prove part (C) it is now enough to check that \(\tilde{E}_{\lambda}^{\pm}\) are indeed \(K\)-nonlocal Green operators.
_4. Verification that \(\tilde{E}_{\lambda}^{\pm}\) are \(K\)-nonlocal Green operators._ Given any \(f\in C_{0}^{\infty}\),
\[g=(I+A(\lambda)E^{\pm})^{-1}f \tag{5.12}\]
is the unique element of \(C_{0}^{\infty}\) obeying
\[g+A(\lambda)E^{\pm}g=f, \tag{5.13}\]
whereupon we deduce that \(\operatorname{supp}g\subset\operatorname{supp}f\cup K\) and that
\[\varphi=E^{\pm}g=E^{\pm}(I+A(\lambda)E^{\pm})^{-1}f=\tilde{E}_{\lambda}^{\pm}f \tag{5.14}\]
satisfies
\[P\varphi+A(\lambda)\varphi=g+A(\lambda)E^{\pm}g=f,\qquad\operatorname{supp} \varphi\subset J^{\pm}(\operatorname{supp}g)\subset J^{\pm}(K\cup\operatorname {supp}f). \tag{5.15}\]
In the special case \(f=(P+A(\lambda))h\), for \(h\in C_{0}^{\infty}\), we have
\[f=(P+A(\lambda))E^{\pm}Ph=Ph+A(\lambda)E^{\pm}Ph \tag{5.16}\]
and the unique solution to (5.13) is clearly \(g=Ph\). Thus
\[Ph=g=(I+A(\lambda)E^{\pm})^{-1}f=(I+A(\lambda)E^{\pm})^{-1}(P+A(\lambda))h; \tag{5.17}\]
consequently \(E^{\pm}(I+A(\lambda)E^{\pm})^{-1}(P+A(\lambda))h=h\). In combination with (5.15), we have shown
\[(P+A(\lambda))\tilde{E}_{\lambda}^{\pm}f=f=\tilde{E}_{\lambda}^{\pm}(P+A( \lambda))f,\qquad\text{and}\qquad\operatorname{supp}\tilde{E}_{\lambda}^{\pm }f\subset J^{\pm}(K\cup\operatorname{supp}f) \tag{5.18}\]
for all \(f\in C_{0}^{\infty}\). Now suppose more specifically that \(J^{\pm}(\operatorname{supp}f)\cap K=\emptyset\) for \(f\in C_{0}^{\infty}\). Then \(A(\lambda)E^{\pm}f=0\), due to assumption (d), and (5.13) is solved by \(g=f\), so \(\tilde{E}_{\lambda}^{\pm}f=E^{\pm}f\) has support contained in \(J^{\pm}(\operatorname{supp}f)\). Accordingly,
\[\operatorname{supp}\tilde{E}_{\lambda}^{\pm}f\subset\begin{cases}J^{\pm}( \operatorname{supp}f)&J^{\pm}(\operatorname{supp}f)\cap K=\emptyset\\ J^{\pm}(K\cup\operatorname{supp}f)&\text{otherwise},\end{cases} \tag{5.19}\]
so \(\tilde{E}_{\lambda}^{\pm}\) are \(K\)-nonlocal Green operators for \(P+A(\lambda)\). Part (C) is complete.
_5. Continuous extension._ Due to (5.10), one has the formula
\[\tilde{E}_{\lambda}^{\pm}=E^{\pm}-E^{\pm}A(\lambda)E^{\pm}+E^{\pm}A(\lambda)E ^{\pm}(I+A(\lambda)E^{\pm})^{-1}A(\lambda)E^{\pm}, \tag{5.20}\]
in which the three terms on the right-hand side have continuous extensions as maps from \(H_{0}^{s}\) to \(H_{\text{loc}}^{s+\beta}\), \(H_{\text{loc}}^{s+\beta+\delta}\) and \(H_{\text{loc}}^{s+\beta+2\delta}\) respectively. Because \(\delta>0\), we deduce that \(\tilde{E}_{\lambda}^{\pm}\) extends continuously to a map \(H_{0}^{s}\to H_{\text{loc}}^{s+\beta}\) as required. By the Leibniz rule (Corollary A.3), this extension is holomorphic in \(\lambda\) with respect to the topology of \(\mathcal{L}_{b}(H_{0}^{s},H_{\text{loc}}^{s+\beta})\). This proves part (D).
_6. Support non-increasing modifications._ Finally, suppose condition (d\({}^{\prime}\)) holds, so that one has \(\operatorname{supp}A(\lambda)f\subset\operatorname{supp}f\) as well as \(\operatorname{supp}A(\lambda)f\subset K\) for all \(f\in C^{\infty}\). Then (d) also holds, either as a consequence of Peetre's theorem [23] or by the following direct argument: if \(f\in C^{\infty}\) vanishes identically on \(K\), then the carrier of \(A(\lambda)f\in C_{K}^{\infty}\) satisfies both
\[\operatorname{carr}A(\lambda)f\subset\operatorname{int}(K) \tag{5.21}\]
and
\[\operatorname{carr}A(\lambda)f\subset\operatorname{supp}A(\lambda)f\subset \operatorname{supp}f=\overline{\operatorname{carr}f}\subset\overline{M\setminus K }=M\setminus\operatorname{int}(K) \tag{5.22}\]
and we deduce that \(\operatorname{carr}A(\lambda)f\) is empty, i.e., \(A(\lambda)f\) vanishes identically. Therefore all the conclusions reached previously still hold.
It further follows that
\[\operatorname{supp}Y^{\pm}(\lambda)f=\operatorname{supp}A(\lambda)E^{\pm}f \subset K\cap\operatorname{supp}E^{\pm}f\subset K\cap J^{\pm}(\operatorname{ supp}f)\qquad\text{for all }f\in C_{0}^{\infty}; \tag{5.23}\]
iterating, we have in particular that
\[\operatorname{supp}(Y^{\pm}(\lambda))^{\prime}f\subset K_{f}^{\pm}:=K\cap J^ {\pm}(\operatorname{supp}f)\qquad\text{for all }r\in\mathbb{N},\,f\in C_{0}^{\infty}\text{ and }\lambda\in\mathbb{C}\setminus S. \tag{5.24}\]
Therefore \((Y^{\pm}(\lambda))^{\prime}Y^{\pm}(\lambda)f\in C_{K_{f}^{\pm}}^{\infty} \subset C_{K}^{\infty}\) for all \(r\in\mathbb{N}_{0}\), \(f\in C_{0}^{\infty}\), \(\lambda\in\mathbb{C}\setminus S\). Using Corollary 5.2, applied to the function \(\lambda\mapsto Y^{\pm}(\lambda)f\in C_{K}^{\infty}\), it follows that both \(\operatorname{supp}(I+Y^{\pm}(\lambda))^{-1}Y^{\pm}(\lambda)f\) and
\[\operatorname{supp}Y^{\pm}(\lambda)(I+Y^{\pm}(\lambda))^{-1}Y^{\pm}(\lambda) f=\operatorname{supp}(I-(I+Y^{\pm}(\lambda))^{-1})Y^{\pm}(\lambda)f \tag{5.25}\]
are contained in \(K_{f}^{\pm}\) for all \(\lambda\in\mathbb{C}\setminus S\). Using the identity (5.10), it now follows that
\[\operatorname{supp}(I+A(\lambda)E^{\pm})^{-1}f\subset\operatorname{supp}f \cup\big{(}K\cap J^{\pm}(\operatorname{supp}f)\big{)}\subset J^{\pm}( \operatorname{supp}f) \tag{5.26}\]
and hence \(\operatorname{supp}\tilde{E}_{\lambda}^{\pm}f\subset J^{\pm}(J^{\pm}( \operatorname{supp}f))=J^{\pm}(\operatorname{supp}f)\), which is the support property G3 for standard Green operators, completing the proof of part (E) and therefore the whole theorem.
Proof of Theorem 3.4
The aim of this section is to prove a relation between the spontaneously appearing and disappearing solutions for the operators \(P+A(\lambda)\) and \({}^{t}P+{}^{t}A(\lambda)\) stated in Theorem 3.4.
Starting with some preliminaries, let \(E^{\pm}\) and \(E^{\pm}_{t}\) be the Green operators for \(P\) and \({}^{t}P\). Because \(s_{*}\leq-\beta\leq 0\), \(E^{\pm}\) and \(E^{\pm}_{t}\) have extensions from \(C^{\infty}_{K}\) to continuous maps in both \(\mathscr{L}(H^{0}_{0},H^{\beta}_{\mathrm{loc}})\) and \(\mathscr{L}(H^{\gamma}_{0},H^{\delta}_{\mathrm{loc}})\), while \(A(\lambda)\) and \({}^{t}A(\lambda)\) have extensions from \(C^{\infty}\) to continuous maps in both \(\mathscr{L}(L^{2}_{\mathrm{loc}},H^{\gamma}_{K})\) (and consequently \(\mathscr{L}(L^{2}_{K},H^{\gamma}_{K})\)) and \(\mathscr{L}(H^{\beta}_{\mathrm{loc}},H^{\delta}_{K})\). We also write \(Y^{\pm}(\lambda)=A(\lambda)E^{\pm}\) and \(Y^{\pm}_{t}={}^{t}A(\lambda)E^{\pm}_{t}\) as compact operators on any \(H^{s}_{K}\) (the value of \(s\) will be clear from context, and we suppress embedding maps between various \(H^{s}_{K}\) spaces).
We generally abuse notation by using the same notation for \(A(\lambda)\) whether it operates on \(H^{s}_{K}\) or \(H^{s}_{\mathrm{loc}}\) and regardless of \(s\), but our arguments will take proper account of the domains concerned. We make use of two technical facts in the \(s=0\) case.
**Lemma 6.1**.: _Under the stated assumptions on \(A(\lambda)\) and \(Y^{\pm}(\lambda)\): (a) the identity_
\[A(\lambda)f=A(\lambda)(f|_{K}) \tag{6.1}\]
_holds for all \(f\in L^{2}_{\mathrm{loc}}\), where we understand the map \(A(\lambda)\in\mathscr{L}(L^{2}_{\mathrm{loc}},H^{\gamma}_{K})\) on the left-hand side and \(A(\lambda)\in\mathscr{L}(L^{2}_{K},H^{\gamma}_{K})\) on the right-hand side; (b) for the operators \(Y^{\pm}_{t}(\lambda)\in\mathscr{L}(L^{2}_{K},L^{2}_{K})\), we have_
\[(Y^{\pm}_{t}(\lambda))^{*}f=(\Gamma E^{\mp}A(\lambda)\Gamma f)|_{K},\qquad f \in L^{2}_{K}, \tag{6.2}\]
_where \(\Gamma\) denotes complex conjugation._
Proof.: We suppress the \(\lambda\) dependence in this proof. For part (a), first choose \(f_{n}\in C^{\infty}\) with \(f_{n}\to f\) in \(L^{2}_{\mathrm{loc}}\). Then for any \(g\in C^{\infty}_{0}\) the distributional action of \(Af\) on \(\mu g\) (recalling that \(\mu\) is the volume density) is given by
\[(Af)(\mu g)=\lim_{n\to\infty}\int_{M}\mu g(Af_{n})=\lim_{n\to\infty}\int_{M} \mu({}^{t}Ag)f_{n}=\langle\overline{{}^{t}Ag}\mid f|_{K}\rangle, \tag{6.3}\]
where the inner product \(\langle\cdot\mid\cdot\rangle\) is that of \(L^{2}_{K}\) and we use the fact that \({}^{t}Ag\in C^{\infty}_{K}\). Now choose a new sequence \(f_{n}\in C^{\infty}_{K}\) with \(f_{n}\to f|_{K}\) in \(L^{2}_{K}\). Then
\[(Af)(\mu g)=\langle\overline{{}^{t}Ag}\mid f|_{K}\rangle=\lim_{n\to\infty}\int _{M}\mu({}^{t}Ag)f_{n}=\lim_{n\to\infty}\int_{M}\mu g(Af_{n})=(Af|_{K})(\mu g). \tag{6.4}\]
As \(g\) was arbitrary, \(Af\) and \(Af|_{K}\) define the same distribution and hence the same element of \(H^{\gamma}_{K}\).
For part (b), we compute for \(f,h\in C^{\infty}_{K}\) that
\[\langle(Y^{\pm}_{t})^{*}f\mid h\rangle=\langle f\mid Y^{\pm}_{t}h\rangle=\int_ {M}\mu\overline{{}^{t}A}E^{\pm}_{t}h=\int_{M}\mu(E^{\mp}A\overline{{}^{t}A} \overline{{}^{t}})h=\langle(\Gamma E^{\mp}A\Gamma f)|_{K}\mid h\rangle.\]
Thus we have \((Y^{\pm}_{t}(\lambda))^{*}f=(\Gamma E^{\mp}A\Gamma f)|_{K}\) for all \(f\in C^{\infty}_{K}\) and hence for all \(f\in L^{2}_{K}\) by continuity.
Proof of Theorem 3.4.: Let \({}^{t}N^{\pm}(\lambda)=\dim\ker(P+A(\lambda))|_{C^{\infty}_{\kappa/\kappa}}\). By the remark (5.8) in the proof of Theorem 3.1, applied to \(Y^{\pm}_{t}\) in the case \(s=0\), we have \({}^{t}N^{\pm}(\lambda)=\dim\ker(I+Y^{\pm}_{t}(\lambda))\) in \(L^{2}_{K}\). As \(Y^{\pm}_{t}\) are compact and holomorphic in the suppressed parameter \(\lambda\), Fredholm theory implies that the index
\[\mathrm{Index}(Y^{\pm}_{t}(\lambda))=\dim\ker(I+Y^{\pm}_{t}(\lambda))-\dim \ker(I+(Y^{\pm}_{t}(\lambda))^{*}) \tag{6.5}\]
is independent of \(\lambda\) and therefore vanishes on considering \(\lambda=0\) - see e.g., Theorem 4.3.12 in [7] - so
\[{}^{t}N^{\pm}(\lambda)=\dim\ker(I+(Y^{\pm}_{t}(\lambda))^{*}). \tag{6.6}\]
Below, we will show that \(A\Gamma\) induces an antilinear injection between the kernels of \(I+(Y^{\pm}_{t}(\lambda))^{*}\) and \(I+Y^{\mp}(\lambda)\) in \(L^{2}_{K}\); swapping the roles of \(P\), \(A(\lambda)\) and \({}^{t}P\), \({}^{t}A(\lambda)\) we find that the spaces have equal (finite) dimension. Consequently,
\[{}^{t}N^{\pm}(\lambda)=\dim\ker(I+Y^{\mp}(\lambda))=N^{\mp}(\lambda), \tag{6.7}\]
again using remark (5.8) in the proof of Theorem 3.1, which is the desired result.
It remains to prove that \(A\Gamma\) provides the required antilinear injection. Suppose that there is \(f\in L^{2}_{K}\setminus\{0\}\) with \(\Gamma f=-(Y^{\pm}_{t}(\lambda))^{*}\Gamma f\) for some fixed \(\lambda\). Then by Lemma 6.1(b)
\[f=-(E^{\mp}A(\lambda)f)|_{K} \tag{6.8}\]
and since \(f\neq 0\) it also follows that \(A(\lambda)f\neq 0\) (otherwise \(f=-E^{\mp}A(\lambda)f|_{K}=0\)). Furthermore,
\[A(\lambda)f=-A(\lambda)(E^{\mp}A(\lambda)f)|_{K}=-A(\lambda)E^{\mp}A(\lambda) f=-Y^{\mp}(\lambda)A(\lambda)f \tag{6.9}\]
holds in \(H^{Y}_{K}\), so \(A(\lambda)f\in\ker(I+Y^{\mp}(\lambda))\) in \(H^{Y}_{K}\), and therefore also in \(L^{2}_{K}\) (since \(\gamma>0\geq s_{*}\) the kernels are equal by Theorem 5.1(a)) Accordingly, \(A(\lambda)\Gamma\) is an antilinear injection from the \(L^{2}_{K}\) kernel of \(I+({}^{t}Y^{\pm}(\lambda))^{*}\) to the \(L^{2}_{K}\) kernel of \(I+Y^{\mp}(\lambda)\).
_Acknowledgement_ It is a pleasure to thank Rainer Verch for useful discussions at various stages of this work and Maximilian Ruep for a careful reading and comments on a draft of the manuscript. I particularly thank Christian Bar for asking the question that prompted Theorem 3.4 and also Lashi Bandara and other participants of the conference 'Global analysis on manifolds' held in Bar's honour (Freiburg, September 2022) for useful remarks and conversations.
## Appendix A Some topological vector spaces
We briefly rehearse the definition and main properties of the various \(C^{k}\) and Sobolev spaces encountered in the text, broadly following [2, 4], before turning to some properties of the topology of bounded convergence that are also needed. No originality is claimed for the material given here.
### Spaces of smooth and \(C^{k}\) functions
Let \(M\) be a smooth manifold and let \(C^{k}(M)\) (\(k\in\mathbb{N}_{0}\cup\{\infty\}\)) be the vector space of complex-valued \(k\)-times continuously differentiable functions on \(M\). For each \(k\in\mathbb{N}_{0}\) and compact \(K\subset M\), one has a seminorm
\[\|f\|_{K,k}=\max_{0\leq r\leq k}\max_{x\in K}|(\nabla^{r}f)(x)|_{r}\] (A.1)
on \(C^{k}(M)\), where \(\nabla\) is an arbitrarily chosen connection on \(M\) and \(|\cdot|_{r}\) an arbitrarily chosen norm making \(T^{*}M^{\otimes r}\) a (finite dimensional) Banach bundle; different choices result in equivalent seminorms. The collection of seminorms \(\|\cdot\|_{K,k}\) as \(K\) runs over compact subsets of \(M\) and \(k\in\mathbb{N}_{0}\) provides a Frechet topology on \(C^{\infty}(M)\); similarly, we obtain a Frechet topology on \(C^{k}(M)\) (\(k\in\mathbb{N}_{0}\)) using the seminorms \(\|\cdot\|_{K,k}\).
If \(A\) is closed, we define \(C^{\infty}_{A}(M)=\{f\in C^{\infty}(M):\operatorname{supp}f\subset A\}\) with the relative topology. Thus the topology is defined by the seminorms \(\|\cdot\|_{K,k}\) as \(K\) runs over compact subsets \(K\subset A\) and \(k\in\mathbb{N}_{0}\); if \(A\) is compact, it is sufficient to use the seminorms \(\|\cdot\|_{A,k}\) (\(k\in\mathbb{N}_{0}\)). As \(C^{\infty}_{A}(M)\) is closed subspace of a Frechet space, it is also Frechet. Defining \(C^{k}_{A}(M)\) as the analogous subspace of \(C^{k}(M)\), the topology is generated by seminorms \(\|\cdot\|_{K,k}\) for compact \(K\subset A\), or just the single seminorm \(\|\cdot\|_{A,k}\) (which is a norm on \(C^{k}_{A}(M)\)) in the case that \(A\) is compact.
A _support system_[2] is a subset \(\mathcal{A}\) of the set of all closed subsets on \(M\), which is closed under finite unions and has the property that for each \(A\in\mathcal{A}\) it holds that (i) \(A\subset\operatorname{int}\left(A^{\prime}\right)\) for some \(A^{\prime}\in\mathcal{A}\) and (ii) if \(A^{\prime}\) is a closed subset of \(M\) with \(A^{\prime}\subset A\) then \(A^{\prime}\in\mathcal{A}\). Any support system is a directed system with respect to inclusion and we write
\[C^{\infty}_{A}(M)=\bigcup_{A\in\mathcal{A}}C^{\infty}_{A}(M)\] (A.2)
with the locally convex inductive limit topology, so that a convex set \(U\subset C^{\infty}_{A}(M)\) is a neighbourhood of \(0\) if and only if \(U\cap C^{\infty}_{A}(M)\) is a neighbourhood of \(0\) in \(C^{\infty}_{A}(M)\) for every \(A\in\mathcal{A}\); because \(\mathcal{A}\) is directed, one also has that a convex set \(O\) is open if and only if \(O\cap A\) is open in each \(C^{\infty}_{A}(M)\).
Examples of support systems include the set of compact sets, leading to the space of compactly supported functions \(C^{\infty}_{0}(M)\), and the sets of (strictly) future/past/spatially-compact sets, giving rise to \(C^{\infty}_{\nicefrac{{3}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{ {3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{ \!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{ {3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\! \!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{ {3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! /}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{ {3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\! \!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{ {3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\! \!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3 }}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! \!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{ 3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! \!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{ \!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3 }}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! \!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3 }}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! \!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3 }}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! \!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3 }}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! \/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! \/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! \/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\! \/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\/}}}\nicefrac{{{3}}}{{ \operatorname{\!\!\!\!/}}}\nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}}} \nicefrac{{{3}}}{{\operatorname{\!\!\!\!/}
and, as a topological vector space, is independent of the choice of auxiliary Riemannian metric involved in its construction (of course, the specific norm \(\|\cdot\|_{H^{s}(M)}\) and its compatible Hilbert space inner product are metric-dependent). Indeed, any positive second-order elliptic operator that is essentially self-adjoint on \(C^{\infty}(M)\) could be used in place of \(-\triangle\). Evidently \(T^{s}\) extends to an isometry from \(H^{s}(M)\) to \(L^{2}(M)\), which may be used to embed \(H^{s}(M)\) in \(\mathcal{D}^{\prime}(M)\) so that \(u\in H^{s}(M)\) corresponds to the distribution \(f\mapsto\langle T^{-s}\overline{f/\nu}\mid T^{s}u\rangle\) (\(f\in\Gamma_{0}^{\infty}(\Omega_{1})\)), where \(\nu\) is the density corresponding to the volume element of the auxiliary metric used to define \(L^{2}(M)\). Restricted to \(u\in C^{\infty}(M)\), this embedding is consistent with the embedding of \(C^{\infty}(M)\) in \(\mathcal{D}^{\prime}(M)\) already mentioned. As \(T^{q}\) is compact on \(L^{2}(M)\) for \(q<0\), it follows that \(H^{s}(M)\subset H^{t}(M)\) for all \(s>t\), with a compact inclusion map. More generally. \(P:H^{s+m}(M)\to H^{s}(M)\) is continuous for any partial differential operator of order \(m\) with smooth coefficients (or indeed any pseudodifferential operator \(P\in\Psi^{m}(M)\)), because \(T^{-m}P\in\Psi^{0}(M)\) extends to a bounded operator on \(L^{2}(M)\).
DualityFor any \(s\in\mathbb{R}\), suppose that \(\ell\in H^{s}(M)^{\prime}\), so there is a constant \(c\) so that \(|\ell(u)|\leq c\|u\|_{H^{s}(M)}\) for all \(u\in H^{s}(M)\). Then \(|\ell(T^{-s}f)|\leq c\|f\|_{L^{2}(M)}\) for \(f\in L^{2}(M)\) and consequently there is \(w\in L^{2}(M)\) so that \(\ell(u)=\langle w\mid T^{s}u\rangle_{L^{2}(M)}\) for all \(u\in H^{s}(M)\). Choose a sequence \(w_{n}\to w\) in \(L^{2}(M)\) with \(w_{n}\in C^{\infty}(M)\) and note that \(T^{s}w_{n}\in C^{\infty}(M)\) is a Cauchy sequence with respect to the \(H^{-s}(M)\) norm, converging to some \(v\in H^{-s}(M)\) for which \(T^{-s}v=\lim_{n}w_{n}=w\). Thus \(\ell(u)=\langle T^{-s}v\mid T^{s}u\rangle\) for all \(u\in H^{s}(M)\) and \(|\ell(u)|\leq\|v\|_{H^{-s}(M)}\|u\|_{H^{s}(M)}\). Noting that
\[\frac{\ell(T^{-s}w_{n})}{\|T^{-s}w_{n}\|_{H^{s}(M)}}=\frac{\langle w\mid w_{n} \rangle_{L^{2}(M)}}{\|w_{n}\|_{L^{2}(M)}}\to\|w\|_{L^{2}(M)}=\|v\|_{H^{-s}(M)},\] (A.4)
we see that \(\|\ell\|=\|v\|_{H^{-s}(M)}\), and we have established that the space \(H^{-s}(M)\) is anti-isomorphic to \(H^{s}(M)^{\prime}\) with the operator norm topology (i.e., the strong dual) with respect to the sesquilinear pairing \(\langle v,u\rangle=\langle T^{-s}v\mid T^{s}u\rangle\) (\(v\in H^{-s}(M)\), \(u\in H^{s}(M)\)). To be precise: every choice of Riemannian metric on \(M\) induces an anti-isomorphism of this type, and the pairing depends on the choice made.
Embedding theoremsThe relationship between \(C^{k}\) and Sobolev spaces is given as follows. In one direction, the formula \(-\triangle=\delta d\) (on \(0\)-forms) gives the estimates
\[0\leq\langle f\mid(-\triangle)^{2r}f\rangle=\|(-\triangle)^{r}f\|_{L^{2}(M)} ^{2}\leq C\|f\|_{M,2r}^{2}\] (A.5)
and
\[0\leq\langle f\mid(-\triangle)^{2r+1}f\rangle=\|d(-\triangle)^{r}f\|_{\Lambda ^{1}(M)}^{2}\leq C\|f\|_{M,2r+1}^{2},\] (A.6)
where \(\Lambda^{1}(M)\) is the Hilbert space of square-integrable \(1\)-forms (with respect to the auxiliary Riemannian metric) and \(f\in C^{\infty}(M)\). From this we find
\[\|f\|_{H^{k}(M)}^{2}=\langle f\mid(-\triangle+I)^{k}f\rangle=\sum_{j=0}^{k} \binom{k}{j}\langle f\mid(-\triangle)^{j}f\rangle\leq C\|f\|_{M,k}^{2}\] (A.7)
for all \(k\in\mathbb{N}_{0}\). Thus \(C^{k}(M)\) is continuously embedded in \(H^{s}(M)\) for all \(s\geq k\).
On the other hand, now let \(n\) be the maximum dimension of any component of \(M\). For \(\operatorname{Re}s>n/2\), the integral kernel \(Z_{s}(p,q)\) of the operator \((-\triangle+I)^{-s}=T^{-2s}\) on \(L^{2}(M)\) is continuous, \(Z_{s}\in C(M\times M)\) and can also be written in terms of the spectral decomposition of \((-\triangle+I)\) as
\[Z_{s}(p,q)=\sum_{r}\frac{e_{r}(p)\overline{e_{r}(q)}}{\lambda_{r}^{s}},\] (A.8)
where \(e_{r}\in L^{2}(M)\) are a basis of smooth orthonormal eigenfunctions for \(-\triangle+I\) with corresponding eigenvalues \(\lambda_{r}\) - see [25]. In particular,
\[v_{p}=\sum_{r}\frac{\overline{e_{r}(p)}e_{r}}{\lambda_{r}^{s/2}}\] (A.9)
belongs to \(L^{2}(M)\) with \(\|v_{p}\|^{2}=Z_{s}(p,p)\) by the Pythagoras theorem. For \(\operatorname{Re}s>n/2\), it follows that \(\|e_{r}\|_{\infty}\leq C_{s}|\lambda_{r}^{s/2}|\), where \(C_{s}=\sup_{p\in M}|Z_{s}(p,p)|^{1/2}\). Moreover, if \(f\in C^{\infty}(M)\),
\[|\langle e_{r}\mid f\rangle|=\lambda_{r}^{-k}|\langle e_{r}\mid T^{k}f\rangle| \leq\lambda_{r}^{-k}\|f\|_{H^{k}(M)}\qquad(k\in\mathbb{N})\] (A.10)
so \(\langle e_{r}\mid f\rangle\) decays faster than any inverse power of \(\lambda_{r}\) and \(\langle v_{p}\mid T^{s}f\rangle=\sum_{r}e_{r}(p)\langle e_{r}\mid f\rangle=f(p)\) for all (not merely almost all) \(p\in M\). Consequently,
\[\|f\|_{\infty}=\sup_{p\in M}|\langle v_{p}\mid f\rangle|\leq C_{s}\|f\|_{H^{s} (M)}\qquad(f\in C^{\infty}(M)).\] (A.11)
By density of \(C^{\infty}(M)\) in \(H^{s}(M)\) it follows that there is a continuous embedding \(H^{s}(M)\to C(M)\) if \(s>n/2\); considering \(\|Pf\|_{\infty}\) in a similar way for differential operators \(P\), one sees that \(H^{s}(M)\) is continuously embedded in \(C^{k}(M)\) for all \(s>k+n/2\). It follows that \(\cap_{s\in\mathbb{R}}H^{s}(M)=C^{\infty}(M)\).
If \(u\in\mathcal{D}^{\prime}(M)=\Gamma_{0}^{\infty}(\Omega_{1})^{\prime}\) is a distribution then, owing to compactness of \(M\), there is a partial differential operator \(P\) so that \(|u(\nu f)|\leq\|Pf\|_{\infty}\leq C_{s}\|Pf\|_{H^{s}(M)}\) for any \(s>n/2\) and all \(f\in C^{\infty}(M)\). Thus, with \(t=s+\operatorname{ord}(P)\), \(u(\nu f)=\langle\overline{f},w\rangle=\langle T^{t}\overline{f}\mid T^{-t}w \rangle_{L^{2}(M)}\) for some \(w\in H^{-t}(M)\), which is evidently compatible with the embedding of \(H^{-t}(M)\) in \(\mathcal{D}^{\prime}(M)\) described earlier. In this sense,
\[\mathcal{D}^{\prime}(M)=\bigcup_{s\in\mathbb{R}}H^{s}(M).\] (A.12)
#### a.2.2 General manifolds
For a general (not-necessarily compact) smooth manifold \(M\) with at most finitely many components, we proceed as follows. If \(K\subset M\) is compact and topologically regular,3 we may diffeomorphically identify \(K\) with a compact subset \(\hat{K}\) of a compact manifold \(N\) (for example, by 'doubling' a compact set that contains \(K\) in its interior and has a smooth boundary [4]; for \(K\) contained in a coordinate chart one could take \(N\) to be a torus). Then the Sobolev space \(H^{s}_{K}(M)\) is defined as the completion of \(C^{\infty}_{K}(M)\) with respect to the pull back of (some choice of) the \(H^{s}(N)\) norm. The space \(H^{s}_{K}(M)\) can be identified with a subspace of \(\mathcal{D}^{\prime}_{K}(M)\), the distributions with support contained in \(K\). As a topological vector space, it is independent of the choices used in its construction. However, by making such a choice one may endow \(H^{s}_{K}(M)\) with a norm, denoted \(\|\cdot\|_{H^{s}_{K}(M)}\), and indeed a compatible Hilbert space structure. Estimates of the form \(\|f\|_{H^{k}_{K}(M)}\leq C\|f\|_{K,k}\) (\(f\in C^{\infty}_{K}(M)\)), and hence continuity of the embedding \(C^{k}_{K}\to H^{k}_{K}\), carry over from the compact case for each fixed \(k\in\mathbb{N}_{0}\), as does compactness of the embedding \(H^{s}_{K}(M)\to H^{t}_{K}(M)\) for \(s>t\) and the continuous embedding of \(H^{s}_{K}(M)\) in \(C^{k}_{K}(M)\) for \(s>k+n/2\), where again \(n\) is the maximum dimension of any component of \(M\). Consequently one has \(\cap_{s\in\mathbb{R}}H^{s}_{K}(M)=C^{\infty}_{K}(M)\).
Footnote 3: Note that if \(K\) is any compact subset, then the closure of the interior of \(K\) is compact and topologically regular.
Next, we define (abbreviating 'compact and topologically regular' by c.t.r.)
\[H^{s}_{0}(M)=\bigcup_{\text{c.t.r. }K\subset M}H^{s}_{K}(M),\] (A.13)
with the locally convex inductive limit topology. As \(M\) admits countable compact exhaustions (and consequently, countable c.t.r. exhaustions), \(H^{s}_{0}(M)\) may be realised as a countable strict inductive
limit - i.e., it is a \(LF\) space. A fact to be used later is that, by [26, Prop. 14.6] (see also Prop. 4 in [8]) a subset \(B\) of \(H^{s}_{0}(M)\) is bounded if and only if \(B\) is a bounded subset of \(H^{s}_{K}(M)\) for some compact \(K\). It is easily shown that
\[\mathcal{E}^{\prime}(M)=\bigcup_{s\in\mathbb{R}}H^{s}_{0}(M).\] (A.14)
Finally, the local Sobolev space \(H^{s}_{\mathrm{loc}}(M)\) is defined as
\[H^{s}_{\mathrm{loc}}(M)=\{u\in\mathcal{D}^{\prime}(M):\chi u\in H^{s}_{0}(M) \text{ for all }\chi\in C^{\infty}_{0}(M)\},\] (A.15)
and is equipped with the Frechet topology induced by the seminorms \(\|\chi\cdot\|_{H^{s}_{K}(M)}\) as \(K\) runs over compact topologically regular subsets of \(M\) and \(\chi\) runs over \(C^{\infty}_{K}(M)\). The inclusion \(H^{s}_{0}(M)\hookrightarrow H^{s}_{\mathrm{loc}}(M)\) is continuous for all \(s\) and indeed we have
\[H^{s}_{0}(M)=H^{s}_{\mathrm{loc}}(M)\cap\mathcal{E}^{\prime}(M).\] (A.16)
Thus, if \(M\) is compact, \(H^{s}_{\mathrm{loc}}(M)\) and \(H^{s}_{0}(M)\) coincide as topological vector spaces. The construction above produces the same spaces as the chart-based approach taken in [17].
The Sobolev embedding theorems already mentioned entail the existence of continuous embeddings of \(C^{k}(M)\) in \(H^{k}_{\mathrm{loc}}(M)\) for all \(k\in\mathbb{N}_{0}\) and of \(H^{s}_{\mathrm{loc}}(M)\) in \(C^{k}(M)\) for all \(s>k+n/2\), where \(n\) is the maximum dimension of any component of \(M\). Consequently, \(\cap_{s\in\mathbb{R}}H^{s}_{\mathrm{loc}}(M)=C^{\infty}(M)\).
### The topology of bounded convergence
A general reference for the following is Treves [26, Ch. 32].
If \(E\) and \(F\) are Hausdorff locally convex topological spaces then the topology of bounded convergence on \(\mathcal{L}(E,F)\), the space of all continuous linear maps \(E\to F\), is defined by the neighbourhood base of zero, consisting of sets
\[\mathcal{U}(B;V)=\{T\in\mathcal{L}(E,F):T(B)\subset V\}\] (A.17)
as \(B\) runs over the bounded subsets of \(E\) and \(V\) runs over any neighbourhood base of zero in the topology of \(F\). The notation \(\mathcal{L}_{b}(E,F)\) denotes \(\mathcal{L}(E,F)\) equipped with the topology of bounded convergence. Thus a net \(T_{\alpha}\) converges to \(0\) in \(\mathcal{L}_{b}(E,F)\) if and only if, for every bounded \(B\subset E\) and neighbourhood base set \(V\), \(T_{\alpha}\) eventually maps \(B\) into \(V\). This topology makes \(\mathcal{L}(E,F)\) Hausdorff and locally convex (inherited from \(F\)[26, p.336]). We record some basic facts.
**Lemma A.2**.: _Let \(E,F,G\) be Hausdorff locally convex topological spaces. (a) If \(T\in\mathcal{L}(E,F)\) and the net \(S_{\alpha}\to S\) in \(\mathcal{L}_{b}(F,G)\) then \(S_{\alpha}T\to ST\) in \(\mathcal{L}_{b}(E,G)\); (b) if the net \(T_{\alpha}\to T\) in \(\mathcal{L}_{b}(E,F)\) and \(S\in\mathcal{L}(F,G)\) then \(S_{\alpha}\to ST\) in \(\mathcal{L}_{b}(E,G)\); (c) if \(T_{\alpha}\to T\) in \(\mathcal{L}_{b}(E,F)\) and \(\operatorname{Ran}T_{\alpha}\subset\hat{F}\), where \(\hat{F}\) is a topological subspace of \(F\), then \(T_{\alpha}\to T\) in \(\mathcal{L}_{b}(E,\hat{F})\); (d) if \(F\) is barrelled and \(T_{n}\to T\) and \(S_{n}\to S\) are convergent sequences in \(\mathcal{L}_{b}(E,F)\) and \(\mathcal{L}_{b}(F,G)\), then \(S_{n}T_{n}\to ST\) in \(\mathcal{L}_{b}(E,G)\)._
Note that Hilbert, Banach, Frechet and LF spaces are all barrelled [26, Ch. 33].
Proof.: (a) It is enough to prove this in the case \(S=0\); taking any bounded \(B\) in \(E\) with \(0\in B\), and any \(0\)-neighbourhood \(V\) in \(G\), note that \(T(B)\) is bounded in \(F\) and deduce that \(S_{\alpha}T(B)\) is eventually contained in \(V\). Thus \(S_{\alpha}T\to 0\) in \(\mathcal{L}_{b}(E,G)\). (b) Without loss, suppose \(T=0\) and with \(B\) and \(V\) as before, we note that \(S^{-1}(V)\) is a \(0\)-neighbourhood in \(F\) and again deduce that \(ST_{\alpha}(B)\) is eventually contained in \(V\), so \(ST_{\alpha}\to 0\).
(c) Again, it is enough to treat \(T=0\). Take any bounded \(B\subset E\) and \(0\)-neighbourhood \(\hat{V}\) in \(\hat{F}\). Then \(\hat{V}=\hat{F}\cap V\) for an \(0\)-neighbourhood \(V\) in \(F\) because \(\hat{F}\) carries the subspace topology. We know
that, eventually, \(T_{\alpha}(B)\subset V\), and as \(\operatorname{Ran}T_{\alpha}\subset\hat{F}\), we have, eventually, that \(T_{\alpha}(B)\subset V\cap\hat{F}=\hat{V}\). Thus \(T_{\alpha}\to 0\) in \(\mathscr{L}_{b}(E,\hat{F})\).
(d) Suppose first that \(T=0\) and let \(0\in B\) be any bounded subset of \(E\) and \(V\) any \(0\)-neighbourhood in \(G\). Then the \(S_{n}\), together with \(S\), form a bounded subset in \(\mathscr{L}_{b}(F,G)\) (which, we recall, is Hausdorff). As \(F\) is barrelled, they form an equicontinuous set of maps by the Banach-Steinhaus theorem [26, Thm 33.1]. Thus there exists a \(0\)-neighbourhood \(W\) in \(F\) such that \(S_{n}(W)\subset V\) for all \(n\). As \(T_{n}(B)\subset W\) for all sufficiently large \(n\), we have \(S_{n}T_{n}(B)\subset W\) for such \(n\). Thus \(S_{n}T_{n}\to 0\) in this case. If \(T\neq 0\), we now know that \(S_{n}(T_{n}-T)\to 0\), and as \(S_{n}T\to ST\) we find \(S_{n}T_{n}\to ST\) as required.
Typical applications of Lemma A.2(a,b) are to show, for instance, that a holomorphic function from a domain in \(\mathbb{C}\) to \(\mathscr{L}_{b}(H^{s}_{0}(M),H^{s}_{K}(M))\) is also holomorphic as a function to \(\mathscr{L}_{b}(H^{s}_{K})\) and \(\mathscr{L}_{b}(H^{s}_{0}(M))\) due to continuous embedding of \(H^{s}_{K}(M)\) in \(H^{s}_{0}(M)\). Part (d) has the following use.
**Corollary A.3**.: _Let \(E,F,G\) be Hausdorff locally convex topological spaces, with \(F\) being barrelled. If \(\lambda\mapsto T(\lambda)\) and \(\lambda\mapsto S(\lambda)\) are functions from a domain in \(\mathbb{C}\) to \(\mathscr{L}(E,F)\) and \(\mathscr{L}(F,G)\) respectively, then (a) if \(T\) and \(S\) are both continuous in the topology of bounded convergence (at \(\mu\)), so is \((ST)(\lambda)=S(\lambda)T(\lambda)\) (at \(\mu\)): (b) if \(T\) and \(S\) are differentiable at \(\mu\) in the topology of bounded convergence then so is \(ST\), with a derivative given by the Leibniz rule \((ST)^{\prime}(\mu)=S^{\prime}(\mu)T(\mu)+S(\mu)T^{\prime}(\mu)\)._
Proof.: As \(\mathbb{C}\) is first-countable, questions of continuity and differentiability may be reduced to sequential considerations, and the result follows using Lemma A.2(d) and the standard proof of the Leibniz formula.
It is also useful to have some sufficient conditions for convergence in \(\mathscr{L}_{b}(E,F)\) topology where \(E\) and \(F\) are Frechet or countable strict inductive limits thereof (LF spaces). If \(F\) is Frechet, then the bounded subsets are precisely the subsets \(B\) so that \(\sup_{f\in B}\rho_{j}(f)<\infty\) for every seminorm \(\rho_{j}\) defining the topology of \(F\), while sets \(V_{j,\varepsilon}=\{f\in F:\rho_{j}(f)<\varepsilon\}\), for arbitrary \(\varepsilon>0\) and defining seminorm \(\rho_{j}\), provide a basis of neighbourhoods of zero. On the other hand, if \(E\) is an LF space with defining sequence \(E_{n}\) of Frechet spaces, then the bounded subsets \(B\) of \(E\) comprise precisely those subsets that are bounded subsets of some \(E_{n}\)[26, Prop. 14.6], while a neighbourhood base is provided by convex sets \(V\subset E\) so that each \(V\cap E_{n}\) is an open neighbourhood of zero in \(E_{n}\). We recall that continuous linear maps between topological vector spaces preserve boundedness [26, Prop. 14.2]. The following results are used in the text.
**Lemma A.4**.: _Suppose \(T_{\alpha}\in\mathscr{L}_{b}(F)\) is a net of operators on a Frechet space \(F\) with defining seminorms \(\rho_{j}\). If, for each \(j\), there exists \(k(j)\), such that for all \(\varepsilon>0\), it is eventually true that \(\rho_{j}(T_{\alpha}f)<\varepsilon\rho_{k(j)}(f)\) for all \(f\in F\), then \(T_{\alpha}\to 0\) in \(\mathscr{L}_{b}(F)\)._
Proof.: Given any bounded set \(B\) and neighbourhood \(V_{j,\varepsilon}\), we set \(C=\sup_{f\in B}\rho_{k(j)}(f)\) and apply the given property to \(\varepsilon/C\) to find that, eventually,
\[\rho_{j}(T_{\alpha}f)<\varepsilon C^{-1}\rho_{k(j)}(f)\leq\varepsilon\qquad \forall f\in B.\] (A.18)
We have shown that, eventually, \(T_{\alpha}\in\mathcal{U}(B;V_{j,\varepsilon})\); as \(j\) and \(\varepsilon\) were arbitrary, \(T_{\alpha}\to 0\) in \(\mathscr{L}_{b}(F)\).
**Corollary A.5**.: _Consider a net \(T_{\alpha}\in\mathscr{L}(C^{\infty}_{K})\) such that each \(T_{\alpha}\) extends to an operator in \(\mathscr{L}(H^{s}_{K})\) (which we also denote \(T_{\alpha}\)) for all \(s\geq s_{*}\). Suppose also that \(T_{\alpha}\to 0\) in every \(\mathscr{L}(H^{s}_{K})\), \(s\geq s_{*}\). Then \(T_{\alpha}\to 0\) in \(\mathscr{L}_{b}(C^{\infty}_{K})\)._
Proof.: any \(C^{\infty}_{K}\) seminorm is bounded by an \(H^{s}_{K}\) norm and vice versa, so, for any \(C^{\infty}_{K}\)-seminorm \(\|\cdot\|_{K,j}\) we may estimate
\[\|T_{\alpha}f\|_{K,j}\leq c\|T_{\alpha}f\|_{H^{s(j)}_{K}}\leq c\|T_{\alpha}\|_{ \mathscr{L}(H^{s(j)}_{K})}\|f\|_{H^{s(j)}_{K}}\leq c^{\prime}\|T_{\alpha}\|_{ \mathscr{L}(H^{s(j)}_{K})}\|f\|_{K,k(s(j))}\] (A.19)
for suitable choices of \(s(j)\) and \(k(s(j))\), uniformly in \(f\in C_{K}^{\infty}\) and \(\alpha\). As \(T_{\alpha}\to 0\) in \(\mathscr{L}(H_{K}^{s(j)})\), for any \(\varepsilon>0\) it is eventually true that \(\|T_{\alpha}f\|_{K,j}\leq\varepsilon\|f\|_{K,k(s(j))}\) for all \(f\in C_{K}^{\infty}\). Hence \(T_{\alpha}\to 0\) in \(\mathscr{L}_{b}(C_{K}^{\infty})\) by Lemma A.4.
**Lemma A.6**.: _Suppose \(T_{\alpha}\) is a net of operators on the LF space \(E=\bigcup_{n\in\mathbb{N}}E_{n}\). If for some fixed \(n\), one has \(\operatorname{Ran}\left(T_{\alpha}\right)\subset E_{n}\) and \(T_{\alpha}|_{E_{m}}\to 0\) in \(\mathscr{L}_{b}(E_{m},E_{n})\) for all \(m\), then \(T_{\alpha}\to 0\) in both \(\mathscr{L}_{b}(E,E_{n})\) and \(\mathscr{L}_{b}(E)\)._
Proof.: Consider any neighbourhood \(V\) of zero in the standard neighbourhood base of \(E_{n}\) and any bounded set \(B\subset E\), necessarily obeying \(B\subset E_{m}\) for some \(m\). Then \(T_{\alpha}\) eventually maps \(B\) into \(V\); as \(B\) and \(V\) were arbitrary, \(T_{\alpha}\to 0\) in \(\mathscr{L}_{b}(E,E_{n})\). Post-composing with the continuous embedding of \(E_{n}\) in \(E\), \(T_{\alpha}\to 0\) in \(\mathscr{L}_{b}(E)\) as well.
|
2304.09861 | IoT-based Wearables: A comprehensive Survey | A substantial amount of growth is being achieved by businesses through
IoT-based services. The emergent of small electronic devices capable of
computing, which are commonly known as wearables in IoT domain has proven to
have huge impact in people's life. Theses wearables are capable of collecting
vital information about a person's activities and behaviours regularly. This
makes them suitable for many applications in health monitoring, fitness,
sports, education and some industry related applications. To this end, in this
paper, we aim to provide a general review on IoT-based wearables, the sensors
adopted for several categorized wearables, the communication technologies
adopted and the most widely adopted data processing techniques for wearables.
Furthermore, we present the challenges faced for wide adoption of wearables and
the future research directions. | Yahuza Bello, Emanuel Figetakis | 2023-04-12T13:50:06Z | http://arxiv.org/abs/2304.09861v1 | # IoT-based Wearables: A comprehensive Survey
###### Abstract
A substantial amount of growth is being achieved by businesses through IoT-based services. The emergent of small electronic devices capable of computing, which are commonly known as wearables in IoT domain has proven to have huge impact in people's life. Theses wearables are capable of collecting vital information about a person's activities and behaviours regularly. This makes them suitable for many applications in health monitoring, fitness, sports, education and some industry related applications. To this end, in this paper, we aim to provide a general review on IoT-based wearables, the sensors adopted for several categorized wearables, the communication technologies adopted and the most widely adopted data processing techniques for wearables. Furthermore, we present the challenges faced for wide adoption of wearables and the future research directions.
Internet of things, wearables, communication technologies, data analytic
## I Introduction
In today's modern era, several applications are developed with the need of connectivity to the internet to function effectively. The Internet of Things (IoT) is the concept of having embedded systems interconnected together and communicating through the Internet [1]. This provides the required connectivity to the internet for such applications. The IoT domain has been thoroughly investigated in both academia and industry and its development through the years was rapid because of its wide range of applications in various fields such as health, IoT-enabled sports, IoT-enabled factories, agriculture, IoT-enabled cities, traffic management, smart supply chain and smart grid to name a few as depicted in Figure 1.
A substantial amount of growth is being achieved by businesses through IoT-based services. For example, the biggest economic impact will be from healthcare and manufacturing applications. The global economy is expected to generate about \(1.1-2.5\) trillion US dollars in growth annually by 2025 as a result of IoT-based healthcare applications and services, such as mobile health (mHealth) and telecare services that improves diagnosis, treatment and other monitoring services. IoT is estimated to generate a total economic impact amounting to $2.7 trillion - $6.2 trillion annually by 2025 [2]. In Figure 2, dominant IoT applications (i.e., applications in healthcare, manufacturing, electricity, urban infrastructure, security, resource extraction, agriculture, retail and vehicles) are shown with their projected market share [3].
A typical IoT consists of 6 building blocks as shown in Figure 3. These buildings blocks are identification, sensing, communication, computation/processing, services and semantics. By knowing these building blocks, you can gain a deeper understanding of how IoT works and what it means. Identifying services and matching them with customer demands is crucial for IoT. Electronic product codes (EPCs) and ubiquitous codes (uCodes) are two most adopted identification methods for the IoT systems. Sensors in the IoT gather data from related objects and send it to data warehouses, databases, or the cloud. Based on the analysis performed on the collected data, specific actions are taken according to the services that are required. There are a variety of IoT sensors, including smart sensors, actuators, and wearable sensors. We will discuss sensors in the context of wearable devices in section IV. Using multiple communication technologies, heterogeneous IoT objects can be connected to provide specific smart services. The different communication technologies used in the context of wearable devices will be discussed in detail in section V. In IoT, processors, microcontrollers, System-on-Chips (SoCs), and Field-Programmable Gate Arrays (FPGAs) comprise the "brain" and computational power. Arduino, UDOO, FriendlyARM, Intel Galileo, Raspberry PI, Gadgeteer, BeagleBone, Cubieboard, Z1, WiSense, Mulle, and T-Mote Sky are some of the hardware platforms designed to run IoT applications.
The emergent of small electronic devices capable of computing, which are commonly known as wearables in IoT domain has proven to have huge impact in peoples life [4]. Theses wearables are capable of collecting vital information about a person's activities and behaviours regularly. This makes them suitable for many applications in health monitoring, fitness, sports, education and some industry related applications. Wearables are often worn as additional accessories on individual's clothing, as implants on certain part of the body, or even tattoed to the skin. Being part of the IoT ecosystem, wearables are connected to the Internet in order to gather and transmit vital information. Additionally, the mobility of people and animals makes wearable devices increasingly important since they can collect, send, and receive data from the Internet in real time, and thus help us to make better decisions to improve our lifestyle.
A wide variety of wearable products such as smart jewelleries, smart wristbands, smart watches, smart glasses, smart shoes and smart belts are already available in the markets as illustrated in Figure 4. According to International Data Corporations (IDC), the global shipments of wearable products are projected to grow from 66.5 million units as reported in 2019 to 105.3 million units by the end of 2024 [5]. Wearable devices first emerged as fitness activity trackers, then comes other applications such as Bluetooth headsets, smartwatches, and web-enabled glasses shortly [6]. Afterwards, virtual reality headsets and augmented reality headsets were introduced in the gaming industry. However, health monitoring and medical usage cases are the most important life-changing applications of IoT-based wearable technologies.
Wearable devices are a rapidly developing field with the potential to open up new applications. This review is motivated by the potential of wearable devices to be able to open up new applications in various fields. Specifically, in this review, we will focus mainly on identifying the different wearables in the IoT domain, discuss the different sensors adopted for these wearables and then cover the different data processing techniques and communication technologies that are widely used for the IoT
based wearables.
Although there are several survey papers [4, 8, 9, 10, 11, 12, 13] that addressed the concept of wearables and the ongoing research efforts, most of them focused on specific application such as applications in fitness and health monitoring neglecting the sensors behind each wearable, the various communication techniques adopted and data processing techniques that are commonly used for IoT-based wearables. To this end, in this paper, we aim to provide a general review on IoT-based wearables, the sensors adopted for several categorized wearables, the communication technologies adopted and the most widely adopted data processing techniques for wearables.
The rest of the paper is structured as follows: Section II presents the relevant survey papers published in the literature. Section III introduces IoT-based wearables and categorized them according to the targeted application. In section IV, the most commonly used sensors for IoT-based wearables are discussed. Section V discusses the various communication technologies adopted for wearables and the various data analytic techniques used for data processing. Section VI presents the challenges faced for wide adoption of wearables and the future research directions. Section VII concludes the paper.
## II Related Works
A number of other research studies have examined wearable technologies in different ways. For example, the survey paper in [4] provides academics seeking further research topics with an in-depth understanding of smart wearables concept. The authors present an analysis of behavioral predictors of smart wearable adoption and adoption intentions among people. In [8], using signals and sensors, the authors examines context-based pairing in wearable devices. The author in [9] examine each type of IoT
Fig. 1: Internet of things Applications [7]
Fig. 2: Projected market share of dominant IoT applications by 2025 [3]
architecture as well as different methods of data transfer, data processing, and computing paradigms. Then the IoT-assisted wearable sensor systems and their various applications in healthcare as well as the various communication technologies adopted are presented in the paper. The authors in [10] reviews both scientific papers and commercial efforts related to wearable health care devices. The paper focuses on the most important wearable devices that directly measure health status parameters. An overview of existing literature on intelligent wearables is presented in [11]. The authors went further to provide a review of the risks of using intelligent wearables and explain what risks were considered in previous research. The authors in [12] discuss and explore several communication and artificial intelligence techniques, which are suitable for the next generation of wearable devices. These techniques when fully adopted will enable emergence of innovative services. An overview of recent research on wearables and IoT technologies used for fitness assessment is provided in [13].
## III Wearables in IoT
There are many applications in the field of IoT that can be enhanced by wearable technology. However, wearable devices will become worth their weight in gold when they are integrated into a true IoT system. As a result, most research papers currently published in the literature connect wearable devices to the Internet in one of two ways. Either the wearable devices send data to the cloud or to an Internet server for offline processing or some of the data processing is done locally on the wearable device. Having integrated IoT platforms and addressing many issues pertaining to data ownership, data sharing policies, privacy, and safety will fully realize the prevalence of wearable devices.
In many cases, a mobile network is needed to support these wearable devices. Data is also a concern when dealing with sensors that collect many readings. In works [14, 15, 16, 17, 18] the authors create specialized networks that can handle the needs of wearable devices. These networks also address several concerns such as security and data storage. In, [19] shows that edge nodes can also play an important role with IoT devices and data collection. It can reduce load on the server and save storage space.
Several research works have categorized IoT-based wearables by considering multiple factors. As part of its ongoing effort to standardize wearable electronic devices and technologies, Technical Commission (TC) 124 of the IEC (International Electrotechnical Commission) identifies four types of smart wearables: accessory wearables, textile/fabric wearables, patchable wearables and implantable wearables. For details on this categorization, refer to [20]. The IEC Standardization Group (SG) 10 on wearable smart devices also indicated that these wearable devices can be categorized in accordance with their location within, on, or near an individual. The categories are near-body wearables, in-body wearables, on-body wearables and electronic textiles [21]. According to [22], IoT-based wearable devices can be categorized into wrist-worn devices (such as smart watches and wrist bands), haem-mounted devices (such as smart eyewear, headset and earbuds), e-textile (such as smart garments), e-patches (such as sensor patches and e-tattoo) and others (this category covers that do not fall under the other categories such as smart jewellery and straps).
In this paper, we categorized wearable devices according to the targeted applications in IoT domain similar to the work in [23]. As depicted in Figure 5, IoT-based wearables can be classified according to the targeted applications as Health-based wearables, safety-based wearables, sports and activity recognition wearables and tracking and localization wearables.
Fig. 4: IoT-based Wearable products
Fig. 3: IoT components [3]
### _Health-based wearables_
In the health sector, Wearable IoT devices are usually used to monitor and treat patients remotely, and in some cases for rehabilitation. Data about a patient's health is collected through sensors, and the wearable device may perform small analysis before sending the data to the Internet for further processing. Additionally, these wearables are capable of receiving additional inputs to aid for further analysis. Often, health-based wearable devices are typically connected to smartphones for analyzing data and sending it to cloud computing frameworks like Azure or Amazon Web Services (AWS) for handling, storing, and processing. The use of mobile health applications can provide insight about a patient's health and provide a visual representation of the analysis of the data. The wearable also can be programmed to respond to special commands, such as heating up the body or applying shocks, based on the analyzed data during treatment.
Health-based applications of wearables can be broadly categorized as health treatment and rehabilitation wearable systems and health monitoring wearable systems [23]. Several studies have demonstrated the significance of health treatment and rehabilitation wearable systems.
By using rehabilitation wearables, disabled patients can maintain or improve their mental or physical abilities. The authors in [24] presents a walker-based physiotherapy system for monitoring and evaluating movement metrics is proposed, which sends the data to the cloud for analysis, and displayed on a mobile application in real time. In [25], a stroke rehabilitation framework that utilizes smart wearable armbands, machine learning algorithms, and 3-D-printed robot hands to assist stroke patients was proposed. An IoT sensing device was integrated with textile electrodes to develop the SWA, which measures, pre-processes, and wirelessly transmits biopotential information. The work in [26] presents a framework for an m-Health monitoring system based on a cloud computing platform (Cloud-MHMS) for achieving personalised and high-quality health monitoring using new technologies, such as mobile networks and cloud computing. A system using IoT data to support emergency medical services is presented as a demonstration of how data can be collected, integrated, and inter-operated flexibly in [27]. The authors develop a Resource-based Data Accessing-IoT (UDA-IoT) method that allow access to IoT data resources in a universal manner in order to enhance the accessibility of IoT data resources. The authors in [28] propose a new cloud-based wheelchair assist system, which supports impaired drivers by providing safe driving conditions. The authors utilized an embedded mobile System-On-a-Chip (SoC) and android based mobile application for the proposed system. A gaming-based rehabilitation system integrating wearable technology and IoT is presented in [29] in order to help stroke patients who, suffer from upper limb disabilities.
In the same fashion, health monitoring wearable systems are categorized based on the kind of sensor adopted as Bio-potential sensors, motion sensors, environmental sensors and chemical sensors [23]. The different kind of sensors used in most IoT-based wearables will be discussed in detail in the next section.
### _Safety-based wearables_
The wearables in the safety category are used to provide a safe environment for users. For example, mines can benefit from the use of safety-based wearable devices that monitor air quality to protect workers and reduce costs incurred by employers and the workers. Safety-based wearable devices are used in many applications to detect or prevent falls, especially in elderly people. Several studies in the literature have investigated and proposed numerous techniques and algorithms for fall detection.
Fig. 5: Application-based Classification of Wearable Devices
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Wearable categories & Description & Related works \\ \hline Health-based applications & Wearable IoT devices used to monitor and treat patients remotely, and in some cases for rehabilitation. & [23, 24, 25, 26, 27, 28, 29] \\ \hline Safety Applications & wearables in the safety category are used to provide a safe environment for users. & [30, 31, 32, 33, 34, 35] \\ \hline Sports and activity recognition & This category pertains to applications where wearables are worn during sport activities to track different metrics of user/athlete activity to improve their performance. & [36, 37] \\ \hline Tracking and localization & Tracking people and animals online is the most common use of this category. & [38, 39] \\ \hline \end{tabular}
\end{table} TABLE I: Summary of the Categorized Wearables and the relavant related works
Using pretested templates for each type of fall and comparing angular velocities and angles between falls and normal activities, [30] propose a fall detection framework. In [31], deep learning and activity characteristics are used to develop a novel method for detecting human falls on furniture using scene analysis that adopts Region-based Convolutional Neural Network (R-CNN). A fall detection system incorporated tri-axial accelerometers, gyroscopes, and magnetometers to obtain fall parameters was proposed by [32]. Based on a reverse approach, the authors in [33] propose a novel autonomous fall detection system. The proposed system uses a camera attached to the subject rather than static sensors at fixed locations. Therefore, monitoring is not limited to areas where the sensors are located but includes all areas where the subject travels. [34] presents a novel system that uses a smart camera to detect falls and classify activities around the waist. An EEG-based BCI prototype is proposed [35] to detect whether an on-site worker is sleep-deprived or not in an elegant manner. Modified safety helmets with a discreetly placed signal acquisition device are worn by workers to ensure their safety. Table I presents a summary of the categorized wearables and the relevant related works published in the literature.
## IV Sensors in IoT-based Wearable
Sensors are a crucial part of IoT-based wearable, they allow for the data collection of a certain action that can then be analyzed and measured. They are many different sensors, and each single sensor can have multiple different applications.
### _Sensors found in Sports_
Wearables that are used in sports to gain metrics and about an athlete's performance are categorized under Inertial Measurement Unit (IMU) sensors and help with the study of motions, kinematics [40] The sensors that are categorized under IMUs are magnetometer, accelerometer, and gyroscopes. Magnetometer helps determine orientation of an object which in sports has many applications, the sensors themselves are small require low energy to power and can communicate over I2C or SPI [41]. Accelerometers can measure movement in a given direction, they can be as precise to measure movement such as wrist tilt [42]. Gyroscopes measure orientation as well as angular velocity, it also has the ability to maintain orientation as well [43]. Combining the sensors into a single device can help measure and study kinematics in a sport setting. Examples of the sensors being used in a device can be found in [44] and [45] where the sensors are used for their advantages in capturing certain ranges of motions. The data is then captured and analyzed by their respective data analysis programs. Sensors in IoT are very violate especially in Wearables, many sensors have more than one application, for example IMU sensors have applications in wearables outside of sports. For example, in [46] IMU sensors are used to create 3D maps, an application that is used outside of the scope of motion.
### _Sensors found in Healthcare_
When it comes to healthcare there is a long list of sensors that are found in wearables. Wearables in health care can be used to monitor or treat patients, which are powered by different types of sensors. Sensors already discussed [41][42][43] also have use in healthcare since they deal with the study of motion with the human body [40].
Looking at table III there are many different sensors found in different wearable devices for healthcare. Many sensors are present in many different devices, and sometimes are even combined with other sensors from other categories of wearables. Looking at the airflow sensor, it has a straightforward purpose within healthcare, to help a patient breathe. They can be found mostly in ventilators[48] as well as devices for oxygen therapy[50]. A pressure sensor have an unexpectedly large impact on devices, they are used in ventilators, oxygen therapy, sleep therapy[51], and automation of drug infusion[52]. Another sensor that is common throughout healthcare is oxygen sensors, which measure a patients oxygen level. The sensor can be found in devices that attach to a patient's finger [54]. EEG sensors are also used in healthcare, they are used in research for the human brain to help gain a better understanding and can also be used to help diagnose any issues. Glucose sensors mostly benefit patients with diabetes, the sensor has become non-invasive and implemented on IoT devices such as smartwatches that can automatically send alerts to let the patient know any imbalances are occurring. The ECG sensor might be the most know sensor in healthcare, it is present in every hospital and almost all smart watches/ring[59][60][61] feature an ECG sensor. The ECG measures heart rate and can collect information about a patient throughout the day to see if any irregularities are occurring.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Type of Technology & Range & Data Rates & Frequency Band (s) & Topology \\ \hline ZigBee & 10 to 100 meters & 250kbps & 2GHz & Star, ad hoc, and mesh \\ \hline WiFi & 10 to 100 meters & 6.75Gbps & 2.4GHz, 5GHz & Point Hub \\ \hline Bluetooth & 10 to 100 meters & 2.1Mbps & 2.402GHz to 2.408GHz & Point to point, point to multi-point \\ \hline LoRaWan & 15 to 20 Kilometers & 250bps to 5.5kbps & 169 MHz (Asia), 868MHz (Europe) 91MHz (North America) & Star \\ \hline \end{tabular}
\end{table} TABLE II: Common Communication Technologies used in IoT-based Wearables
### _Sensors found in Safety_
Wearables in the area of safety are also finding applications, the scenarios range from at home to work environment safety. Wearable devices are fitted with sensors that can check the environment, make sure certain equipment is being handled properly, and prevent and kind of injury. Taking a look at [62] the authors used different sensors in a wearable device to determine harmful environmental conditions. The sensors used in their wearable are temperature, C02, UV, and CO [63][64][65][66]. When it comes to safety most sensors can be implemented in wearables, for sports, health, and tracking and localization they can be added as preventative measures. For example, in sports, accelerometers can be used to detected impacts which can cause injuries, in healthcare ECG sensors can be used to detect abnormal heart rates, and in tracking sensors can be implemented to prevent someone from getting lost.
### _Sensors found in Tracking/Localization_
Werables that are categorized in tracking and localization help with determining the environment conditions by utilizing sensors as well determining location [67]. Tracking is a category with a wide area of applications, it can range from actual geo-locating to tracking a person's movements or specific body part. From a medical/sport standpoint sensor like accelerometers[42] can track movement to help determine the kind of motion, this also applies to magnetometer and gyroscopes. For determine location via satellite location sensors are used in wearables [68], this helps with everything between loss prevention to navigation. For more local mapping that does not need communication to satellite, like somewhere in a house or small community, accelerometers are used to measure distance between certain points to create a map. This comes into play in applications with smart homes, as well as mobile devices to help predict a user's habits.
## V Data Analysis and Communication technologies for IoT-based Wearables
In this section, we will first introduce the various communication technologies that are widely adopted in IoT-based wearables. Then the various data analytic techniques in the context of wearables will be discussed.
### _Communication Technologies_
Various wireless communication protocols support different communication ranges in wearables to offer a wide range of applications, as shown in Figure 7. Bluetooth, ZigBee and ANT belong to the short range class, Wi-Fi and cellular, LoRaWan belongs to the long range class and the ultra short range class consists of NFC.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Sensor & Product & Description \\ \hline Airflow Sensor[47] & [48] & Helps correct a patients irregular breathing to supply enough oxygen to the body \\ \hline Pressure Sensor[49] & [48][50][51][52] & Regulates and creates pressure in various applications \\ \hline Oxygen Sensor[53] & [48][54] & Measures a patients oxygen level as well as pulse rate \\ \hline Electroencephalography(EEG)[55] & [56] & Measures a patients brain activity \\ \hline Glucose Sensor & [57] & Measures a patients blood sugar level \\ \hline Electrocardiogram(ECG) Sensor [58] & [59][60][61] & Measures a patients heart rate \\ \hline \end{tabular}
\end{table} TABLE III: Sensors Found in Healthcare IOT Wearables
Fig. 6: Example of Safety Wearable with sensors[62]
Fig. 7: Communication Technologies for IoT-based Wearables [22]
#### Iv-A1 ZigBee
ZigBee technology is built on the IEEE 802.15.4 standard. Resource-constrained environments and devices with limited power provide the perfect environment for this kind of technology to thrive. In spite of being standardized in 2003, it was originally conceived in 1998 [69]. ZigBee has a range of approximately ten to a hundred meters and consumes an insignificant amount of power. ZigBee uses the direct sequence spread spectrum (DSSS) technique and adopts the ad hoc, mesh and star topology. In ZigBee, there are three types of devices: the coordinator (ZC), the router (ZR), and the end-device (ZED) [70].
#### Iv-A2 Wi-Fi
A wireless communication standard based on IEEE802.11, wireless fidelity (Wi-Fi) is one of the most popular wireless technologies. In order to access radio channels, Wi-Fi uses the CSMA/CA protocol. With a range up to 10 meters and a high -power consumption, Wi-Fi is a highly efficient wireless network. There are four types of configurations available with this network: infrastructure, ad hoc, bridge, and repeater, which has a maximum data transfer rate of 6.75Gbps and 140MHz of channel capacity for a very high price [71].
#### Iv-A3 Bluetooth
Wireless personal area networks (WPANs) with high security can be formed using Bluetooth, a proprietary open standard developed for mobile and fixed devices. Using a master-slave (client-server) model, Bluetooth is a packet-oriented protocol that is connection-oriented, packet-based. Bluetooth were limited by the initial specifications' high-power consumption, long communication set-up time, and high transmission latency (about 100 ms). Due to these limitations, Bluetooth Low Energy (BLE) was developed to overcome them. BLE is designed specifically for applications relating to health, sports, and fitness applications.
#### Iv-A4 LoRaWan
IoT networks have become increasingly interested in LoRaWan (low-power wide-area networks) because it is suitable for LPWAN (low-power wide-area networks). LoRaWan is very suitable for wireless sensor networks due to its robustness and range. Frequency Shift Keying (FSK) or Chirp Spread Spectrum (CSS) are the main modes of communication used in LoRa radios. A node and a gateway are the two main types of devices in this technology. Gateways are connected to thousands of nodes at once, each sending and receiving information from the gateway in a LoRaWan setup.
In the wearable device domain, Bluetooth Classic (standardized but no longer maintained by IEEE 802.15.1), Bluetooth Smart and Wi-Fi have become the most common standards [72] used for connectivity to the Internet. A comparative analysis of the different adopted communication technologies used in IoT-based wearables in terms of range, data rates, frequency bands and topology is presented in Table II.
### _Data Analytics_
Data analytics has a large application within wearables in IoT, since many new methods of machine learning are being developed, they can be used within these applications. In a system there is the device with its many sensors taking constant measurements it is the analytics that take the measurements and make them a readable output. IoT-based Wearables are classified in to four applications, health applications, sports applications, safety applications, tracking and localization applications. Wearables from each application can differ slightly in design and include the same sensors, it is how the data is analyzed that differs from each application. For example, for tracking and localization wearable can feature an accelerometers to track distance traveled but a wearable from healthcare application could be using an accelerometers to track the motion of a patient for physical therapy.
Machine learning can play an important role in data analysis for wearables. It can help learn a person's habits and find what is normal for the user. In [73] it comes in the form of personalized medicine, where a machine learning algorithm can learn about a patient and then keep checks on the patient for irregularities outside of their normal. The same type of concept can be used in all applications of wearables. Another example of data analysis comes from [74] where a wearable is being used to help diagnose early stages of Parkinson's disease. The sensors collect data which is then analyzed and compared to the early signs of the disease.
In many of the systems, the data analysis doesn't take place on the device itself. The computation needed cannot be met, there edge nodes and cloud -based systems are put in place to analyze the data. However, this doesn't come without challenges, many forms of data analytics require a large amount of data also many sensors are collecting data constantly throughout the entire day. Over time this accumulates into terabytes of data that can fill up storage drive quickly. Another challenge that must be addressed in privacy, the data that is being collected about a person's habits/lifestyle is very detailed and can be shared between different companies, making privacy an issue. In [75] an approach is taken with data analytics to unlabeled the data to help privacy. By doing this they are not disclosing where the data is coming from and
Fig. 8: System used for collecting data and analyzing data from a sensor [74]
keeping to anonymous, the only competent that is receiving the labelled data is the algorithm doing the analysis.
## VI Challenges and Future Research Directions
There are several challenges in the domain of IoT-based wearables. To fully utilize the benefits of wearables devices in the community, these challenges need to be addressed urgently.
### _Battery-related Issues_
Since batteries have limited working times, wearable devices often have a short sustainable working time, which causes inconvenience to people's daily lives. Therefore, when designing wearables, special considerations need to be taken into account to minimize human interaction and ensure that the batteries last for many hours without needing to be replaced or recharged. For example, there are a variety of systems available, such as those that use low power consumption or techniques that harvest energy such as thermo-electric and piezo-electric that can be utilized in wearable domain.
### _Communication-related Issues_
Currently, most wearable devices connect to the Internet primarily through proxy devices such as smartphones or personal computers. For example, most fitness tracking wearable applications still communicate with clouds through smartphones. This shows a gap in the adoption of direct communication with the internet, which hinders the development of some specific delay sensitive applications in the wearable domain. Potential reasons for this lack of support may be the absence of secure direct communication devices built into wearable operating systems or the slower development of third-party applications for wearables. However, the demand for wearable devices capable of direct communication with the internet is on the rise [76]. This is a potential area of research to explore the design of wearable devices with such capability.
### _Trust-related Issues for Medical Use Cases_
There is lack of trust in the sensitive data produced by consumer wearables in patients monitoring applications. For example, data related to heart rate, pulse rate, and other health metrics is sensed by wearables using consumer hardware. Therefore, physicians are reluctant to use these data for critical diagnostics because they much rely on the accuracy of the hardware used in making the wearables as well as the accuracy of using it properly. A critical challenge would therefore be to improve health-related data accuracy cheaply.
### _Privacy-related Issues_
It is possible that privacy breaches could occur as a result of an exchange of personal data between wearables and IoT hubs, including vital health signals, dosage, and location. It is typical for wearable IoT devices to operate in broadcast mode so that other network nodes can discover them easily. As a result, unauthorised users can intercept data in the form of eavesdropping attacks, which leads to privacy violation. A number of questions remain unanswered concerning the effective protection of users' privacy. This is an interesting research problem that is on-going not only in the wearable domain but in other security-related domains.
## VII Conclusion
From fitness and sport to health monitoring, wearable devices are becoming increasingly popular. In this paper, we provided a comprehensive review of the most important research efforts from the literature in IoT-based wearables. We categorized the wearables according to their applicable applications. Additionally, the sensors, communication technologies and data analytic techniques adopted in the IoT-based wearables is investigated and presented by surveying multiple papers published in the literature. Additionally, the challenges as well as the future research directions in IoT-based wearables is presented. In terms of communication technologies, Bluetooth Classic, Bluetooth Smart and Wi-Fi are the most common standards adopted for wearables connectivity to the Internet.
|
2301.06354 | When it counts -- Econometric identification of the basic factor model
based on GLT structures | Despite the popularity of factor models with sparse loading matrices, little
attention has been given to formally address identifiability of these models
beyond standard rotation-based identification such as the positive lower
triangular (PLT) constraint. To fill this gap, we review the advantages of
variance identification in sparse factor analysis and introduce the generalized
lower triangular (GLT) structures. We show that the GLT assumption is an
improvement over PLT without compromise: GLT is also unique but, unlike PLT, a
non-restrictive assumption. Furthermore, we provide a simple counting rule for
variance identification under GLT structures, and we demonstrate that within
this model class the unknown number of common factors can be recovered in an
exploratory factor analysis. Our methodology is illustrated for simulated data
in the context of post-processing posterior draws in Bayesian sparse factor
analysis. | Sylvia Frühwirth-Schnatter, Darjus Hosszejni, Hedibert Freitas Lopes | 2023-01-16T10:54:45Z | http://arxiv.org/abs/2301.06354v1 | # When it counts - Econometric identification of the basic factor model based on GLT structures
###### Abstract
Despite the popularity of factor models with sparse loading matrices, little attention has been given to formally address identifiability of these models beyond standard rotation-based identification such as the positive lower triangular (PLT) constraint. To fill this gap, we review the advantages of variance identification in sparse factor analysis and introduce the generalized lower triangular (GLT) structures. We show that the GLT assumption is an improvement over PLT without compromise: GLT is also unique but, unlike PLT, a non-restrictive assumption. Furthermore, we provide a simple counting rule for variance identification under GLT structures, and we demonstrate that within this model class the unknown number of common factors can be recovered in an exploratory factor analysis. Our methodology is illustrated for simulated data in the context of post-processing posterior draws in Bayesian sparse factor analysis.
_Keywords:_ Identifiability; sparsity; rank deficiency; rotational invariance; variance identification
JEL classification: C11, C38, C63
## 1 Introduction
Ever since the pioneering work of Thurstone (1935, 1947), factor analysis has been a popular method to model the covariance matrix \(\mathbf{\Omega}\) of correlated, multivariate observations \(\mathbf{y}_{t}\) of dimension \(m\), see e.g.
Anderson (2003) for a comprehensive review. Assuming \(r\) uncorrelated factors, the basic factor model yields the representation \(\mathbf{\Omega}=\mathbf{\Lambda}\mathbf{\Lambda}^{\top}+\mathbf{\Sigma}_{0}\), with a \(m\times r\) factor loading matrix \(\mathbf{\Lambda}\) and a diagonal matrix \(\mathbf{\Sigma}_{0}\). The considerable reduction of the number of parameters compared to the \(m(m+1)/2\) elements of an unconstrained covariance matrix \(\mathbf{\Omega}\) is the main motivation for applying factor models to covariance estimation, especially if \(m\) is large; see, among many others, Fan et al. (2008) in finance and Forni et al. (2009) in economics. In addition, shrinkage estimation has been shown to lead to very efficient covariance estimation, see, for example, Kastner (2019) in Bayesian factor analysis and Ledoit and Wolf (2020) in a non-Bayesian context.
In numerous applications, factor analysis reaches beyond covariance modelling. From the very beginning, the goal of factor analysis has been to extract the underlying loading matrix \(\mathbf{\Lambda}\) to understand the driving forces behind the observed correlation between the features, see e.g. Owen and Wang (2016) for a recent review. However, also in this setting, the only source of information is the observed covariance of the data, making the decomposition of the covariance matrix \(\mathbf{\Omega}\) into the cross-covariance matrix \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) and the variance \(\mathbf{\Sigma}_{0}\) of the idiosyncratic errors more challenging than estimating only \(\mathbf{\Omega}\) itself.
A huge literature, dating back to Koopmans and Reiersol (1950) and Reiersol (1950), has addressed this problem of identification which can be resolved only by imposing additional structure on the factor model. Anderson and Rubin (1956) considered identification as a two-step procedure, namely identification of \(\mathbf{\Sigma}_{0}\) from \(\mathbf{\Omega}\) (variance identification) and subsequent identification of \(\mathbf{\Lambda}\) from \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) (solving rotational invariance). The most popular constraint in econometrics, statistics and machine learning for solving rotational invariance is to consider positive lower triangular loading matrices, see e.g. Geweke and Zhou (1996); West (2003); Lopes and West (2004), albeit other strategies have been put forward, see e.g. Neudecker (1990), Bai and Ng (2013), Assmann et al. (2016), Chan et al. (2018), and Williams (2020). Only a few papers have addressed variance identification (e.g. Bekker, 1989) and to the best of our knowledge so far no structure has been put forward that simultaneously addresses both identification problems.
In this work, we discuss a new identification strategy based on generalized lower triangular (GLT) structures, see Figure 1 for illustration. This concept was originally introduced as part of an MCMC sampler for sparse Bayesian factor analysis where the number of factors is unknown in the (unpublished) work of Fruhwirth-Schnatter and Lopes (2018). In the present paper, GLT structures are given a full and comprehensive mathematical treatment and are applied in Fruhwirth-Schnatter et al. (2022) to develop an efficient reversible jump MCMC (RJMCMC) sampler for sparse Bayesian factor analysis under very general shrinkage priors. It will be proven that GLT structures simultaneously address rotational invariance and variance identification in factor models. Variance identification relies on a counting rule for the number of non-zero elements in the loading matrix \(\mathbf{\Lambda}\), which is a sufficient condition that extends previous work by Sato (1992).
In addition, we will show that GLT structures are useful in exploratory factor analysis where the
factor dimension \(r\) is unknown. Identification of the number of factors in applied factor analysis is a notoriously difficult problem, with considerable ambiguity which method works best, be it BIC-type criteria (Bai and Ng, 2002), marginal likelihoods (Lopes and West, 2004), techniques from Bayesian nonparametrics involving infinite-dimensional factor models (Bhattacharya and Dunson, 2011; Rockova and George, 2017; Legramanti et al., 2020) or more heuristic procedures (Kaufmann and Schuhmacher, 2019). Imposing an unordered GLT structure in exploratory factor analysis allows to identify the true loading matrix \(\boldsymbol{\Lambda}\) and the matrix \(\boldsymbol{\Sigma}_{0}\) and to easily spot all spurious columns in a possibly overfitting model. This strategy underlies the RJMCMC sampler of Fruhwirth-Schnatter et al. (2022) to estimate the number of factors.
The paper is structured as follows. Section 2 reviews the role of identification in factor analysis using illustrative examples. Section 3 introduces GLT structures, proves identification for sparse GLT structures and shows that any unconstrained loading matrix has a unique representation as a GLT matrix. Section 4 addresses variance identification under GLT structures. Section 5 discusses exploratory factor analysis under unordered GLT structures, while Section 6 presents an illustrative application. Section 7 concludes.
Figure 1: Left: ordered sparse GLT matrix with six factors. Center: one of the \(2^{6}\cdot 6!\) corresponding unordered sparse GLT matrices. Right: a corresponding sparse PLT matrix, i.e. enforced non-zeros on the main diagonal. The pivot rows \((l_{1},\ldots,l_{6})=(1,3,10,11,14,17)\) are marked by triangles. Non-zero loadings are marked by circles, zero loadings are left blank.
## 2 The role of identification in factor analysis
Let \(\mathbf{y}_{t}=(y_{1t},\ldots,y_{mt})^{\top}\) be an observation vector of \(m\) measurements, which is assumed to arise from a multivariate normal distribution, \(\mathbf{y}_{t}\sim\mathcal{N}_{m}\left(\mathbf{0},\mathbf{\Omega}\right)\), with zero mean and covariance matrix \(\mathbf{\Omega}\). In factor analysis, the correlation among the observations is assumed to be driven by a latent \(r\)-variate random variable \(\mathbf{f}_{t}=(f_{1t},\ldots,f_{rt})^{\top}\), the so-called common factors, through the following observation equation:
\[\mathbf{y}_{t}=\mathbf{\Lambda}\mathbf{f}_{t}+\boldsymbol{\epsilon}_{t}, \tag{1}\]
where the \(m\times r\) matrix \(\mathbf{\Lambda}\) containing the factor loadings \(\Lambda_{ij}\) is of full column rank, \(\text{rk}\left(\mathbf{\Lambda}\right)=r\), equal to the factor dimension \(r\). In the present paper, we focus on the so-called basic factor model where the vector \(\boldsymbol{\epsilon}_{t}=(\epsilon_{1t},\ldots,\epsilon_{mt})^{\top}\) accounts for independent, idiosyncratic variation of each measurement and is distributed as \(\boldsymbol{\epsilon}_{t}\sim\mathcal{N}_{m}\left(\mathbf{0},\mathbf{\Sigma} _{0}\right)\), with \(\mathbf{\Sigma}_{0}=\text{Diag}\big{(}\sigma_{1}^{2},\ldots,\sigma_{m}^{2} \big{)}\) being a positive definite diagonal matrix. The common factors are orthogonal, meaning that \(\mathbf{f}_{t}\sim\mathcal{N}_{r}\left(\mathbf{0},\mathbf{I}_{r}\right),\) and independent of \(\boldsymbol{\epsilon}_{t}\). In this case, the observation equation (1) implies the following covariance matrix \(\mathbf{\Omega}\), when we integrate w.r.t. the latent common factors \(\mathbf{f}_{t}\):
\[\mathbf{\Omega}=\mathbf{\Lambda}\mathbf{\Lambda}^{\top}+\mathbf{\Sigma}_{0}. \tag{2}\]
Hence, all dependence among the measurements in \(\mathbf{y}_{t}\) is explained through the latent common factors and the off-diagonal elements of \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) define the marginal covariance between any two measurements \(y_{i_{1},t}\) and \(y_{i_{2},t}\):
\[\text{Cov}(y_{i_{1},t},y_{i_{2},t})=\mathbf{\Lambda}_{i_{1},\bullet}\mathbf{ \Lambda}_{i_{2},\bullet}^{\top}, \tag{3}\]
where \(\mathbf{\Lambda}_{i,\bullet}\) is the \(i\)th row of \(\mathbf{\Lambda}\). Consequently, we will refer to \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) as the cross-covariance matrix. Since the number of factors, \(r\), is often considerably smaller than the number of measurements, \(m\), (2) can be seen as a parsimonious representation of the dependence between the measurements, often with considerably fewer parameters in \(\mathbf{\Lambda}\) than the \(m(m-1)/2\) off-diagonal elements in an unconstrained covariance matrix \(\mathbf{\Omega}\).
Since the factors \(\mathbf{f}_{t}\) are unobserved, the only information available to estimate \(\mathbf{\Lambda}\) and \(\mathbf{\Sigma}_{0}\) is the covariance matrix \(\mathbf{\Omega}\). A rigorous approach toward identification of factor models was first offered by Reiersol (1950) and Anderson and Rubin (1956). Identification in the context of a basic factor model means the following. For any pair \(\left(\boldsymbol{\beta},\mathbf{\Sigma}\right)\), where \(\boldsymbol{\beta}\) is an \(m\times r\) matrix and \(\mathbf{\Sigma}\) is a positive definite diagonal matrix, that satisfies (2), i.e.:
\[\mathbf{\Omega}=\boldsymbol{\beta}\boldsymbol{\beta}^{\top}+\mathbf{\Sigma}= \mathbf{\Lambda}\mathbf{\Lambda}^{\top}+\mathbf{\Sigma}_{0}, \tag{4}\]
it follows that \(\boldsymbol{\beta}=\mathbf{\Lambda}\) and \(\mathbf{\Sigma}=\mathbf{\Sigma}_{0}\). Note that both parameter pairs imply the same Gaussian distribution \(\mathbf{y}_{t}\sim\mathcal{N}_{m}\left(\mathbf{0},\mathbf{\Omega}\right)\) for every possible realisation \(\mathbf{y}_{t}\).
Anderson and Rubin (1956) considered identification as a two-step procedure. The first step is identification of the variance decomposition, i.e. identification of \(\mathbf{\Sigma}_{0}\) from (2), which implies identification of \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\). The second step is subsequent identification of \(\mathbf{\Lambda}\) from \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\), also know as solving the rotational invariance problem. The literature on factor analysis often reduces identification of factor models to the second problem, however as we will argue in the present paper, variance identification is equally important.
Rotational invariance.Let us assume for the moment that \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) is identified. Consider, for further illustration, the following factor loading matrix \(\mathbf{\Lambda}\) and a loading matrix \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}_{\alpha b}\) defined as a rotation of \(\mathbf{\Lambda}\):
\[\mathbf{\Lambda}=\left(\begin{array}{cc}\lambda_{11}&0\\ \lambda_{21}&0\\ \lambda_{31}&0\\ 0&\lambda_{42}\\ 0&\lambda_{52}\\ 0&\lambda_{62}\end{array}\right),\quad\mathbf{P}_{\alpha b}=\left(\begin{array} []{cc}\cos\alpha&(-1)^{b}\sin\alpha\\ -\sin\alpha&(-1)^{b}\cos\alpha\end{array}\right),\quad\mathbf{\beta}=\mathbf{\Lambda} \mathbf{P}_{\alpha b}=\left(\begin{array}{cc}\beta_{11}&\beta_{21}\\ \beta_{21}&\beta_{22}\\ \beta_{51}&\beta_{52}\\ \beta_{41}&\beta_{42}\\ \beta_{51}&\beta_{52}\\ \beta_{61}&\beta_{62}\end{array}\right). \tag{5}\]
For any \(\alpha\in[0,2\pi)\) and \(b\in\{0,1\}\), the factor loading matrix \(\mathbf{\beta}\) yields the same cross-covariance matrix for \(\mathbf{y}_{t}\) as \(\mathbf{\Lambda}\), as is easily verified:
\[\mathbf{\beta}\mathbf{\beta}^{\top}=\mathbf{\Lambda}\mathbf{P}_{\alpha b}\mathbf{P}_{ \alpha b}^{\top}\mathbf{\Lambda}^{\top}=\mathbf{\Lambda}\mathbf{\Lambda}^{\top}. \tag{6}\]
The rotational invariance apparent in (6) holds more generally for any basic factor model (1). Take any arbitrary \(r\times r\) rotation matrix \(\mathbf{P}\) (i.e. \(\mathbf{P}\mathbf{P}^{\top}=\mathbf{I}_{r}\)) and define the basic factor model
\[\mathbf{f}_{t}^{\star}\sim\mathcal{N}_{r}\left(\mathbf{0},\mathbf{I}_{r} \right),\quad\mathbf{y}_{t}=\mathbf{\beta}\mathbf{f}_{t}^{\star}+\mathbf{\epsilon}_{t},\quad\mathbf{\epsilon}_{t}\sim\mathcal{N}_{m}\left(\mathbf{0},\mathbf{\Sigma}_{0} \right), \tag{7}\]
where \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}\) and \(\mathbf{f}_{t}^{\star}=\mathbf{P}^{\top}\mathbf{f}_{t}\). Then both models imply the same covariance \(\mathbf{\Omega}\), given by (2). Hence, without imposing further constraints, \(\mathbf{\Lambda}\) is in general not identified from the cross-covariance matrix \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\). If interest lies in interpreting the factors through the factor loading matrix \(\mathbf{\Lambda}\), rotational invariance has to be resolved. The usual way of dealing with rotational invariance is to constrain \(\mathbf{\Lambda}\) in such a way that the only possible rotation is the identity \(\mathbf{P}=\mathbf{I}_{r}\). For orthogonal factors at least \(r(r-1)/2\) restrictions on the elements of \(\mathbf{\Lambda}\) are needed to eliminate rotational indeterminacy (Anderson and Rubin, 1956).
The most popular constraints are positive lower triangular (PLT) loading matrices, where the upper triangular part is constrained to be zero and the main diagonal elements \(\Lambda_{11},\ldots,\Lambda_{rr}\) of \(\mathbf{\Lambda}\) are strictly positive, see Figure 1 for illustration. Despite its popularity, the PLT structure is restrictive, as outlined already by Joreskog (1969). Let \(\mathbf{\beta}\mathbf{\beta}^{\top}\) be an arbitrary cross-covariance matrix with factor loading matrix
\(\mathbf{\beta}\). A PLT representation of \(\mathbf{\beta\beta}^{\top}\) is possible iff a rotation matrix \(\mathbf{P}\) exists such that \(\mathbf{\beta}\) can be rotated into a PLT matrix \(\mathbf{\Lambda}=\mathbf{\beta}\mathbf{P}\). However, as example (5) illustrates this is not necessarily the case. Obviously, \(\mathbf{\Lambda}\) is not a PLT matrix, since \(\Lambda_{22}=0\). Any of the possible rotations \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}_{\alpha b}\) have _non-zero_ elements above the main diagonal and are not PLT matrices either. This example demonstrates that the PLT representation is restrictive. To circumvent this problem in example (5), one could reorder the measurements in an appropriate manner. However, in applied factor analysis, such an appropriate ordering is typically not known in advance and the choice of the first \(r\) measurements is an important modeling decision under PLT constraints, see e.g. Lopes and West (2004) and Carvalho et al. (2008).
We discuss in Section 3 a new identification strategy to resolve rotational invariance in factor models based on the concept of generalized lower triangular (GLT) structures. Loosely speaking, GLT structures generalize PLT structures by freeing the position of the first non-zero factor loading in each column, see the loading matrix \(\mathbf{\Lambda}\) in (5) and Figure 1 for an example. We show in Section 3.1 that a unique GLT structure \(\mathbf{\Lambda}\) can be identified for any cross-covariance matrix \(\mathbf{\beta\beta}^{\top}\), provided that variance identification holds and, consequently, \(\mathbf{\beta\beta}^{\top}\) itself is identified. Even if \(\mathbf{\beta\beta}^{\top}\) is obtained from a loading matrix \(\mathbf{\beta}\) that does not take the form of a GLT structure, such as the matrix \(\mathbf{\beta}\) in (5), we show in Section 3.3 that a _unique_ orthogonal matrix \(\mathbf{G}\) exists which represents \(\mathbf{\beta}\) as a rotation of a unique GLT structure \(\mathbf{\Lambda}\):
\[\mathbf{\Lambda}=\mathbf{\beta}\mathbf{G}, \tag{8}\]
which we call rotation into GLT. Hence, the GLT representation is unrestrictive in the sense of Joreskog (1969) and is, indeed, a new and generic way to resolve rotational invariance for any factor loading matrix.
Sparse factor loading matrices.The factor loading matrix \(\mathbf{\Lambda}\) given in (5) is an example of a sparse loading matrix. While only a single zero loading would be needed to resolve rotational invariance, six zeros are present and each factor loads only on dedicated measurements. Such sparse loading matrices are generated by a binary indicator matrix \(\mathbf{\delta}\) of 0s and 1s of the same dimension as \(\mathbf{\Lambda}\), where \(\Lambda_{ij}=0\) iff \(\delta_{ij}=0\), and \(\Lambda_{ij}\in\mathbb{R}\) is unconstrained otherwise. The binary matrix \(\mathbf{\delta}=\mathbb{I}(\mathbf{\Lambda}\neq 0)\), where the indicator function is applied element-wise, is called the sparsity matrix corresponding to \(\mathbf{\Lambda}\). The sparsity matrix \(\mathbf{\delta}\) contains a lot of information about the structure of \(\mathbf{\Lambda}\), see Figure 1 for illustration. The indicator matrix on the right hand side tells us that \(\mathbf{\Lambda}\) obeys the PLT constraint. The fifth row of the left and center matrices contains only zeros, which tells us that observation \(y_{5t}\) is uncorrelated with the remaining observations, since \(\text{Cov}(y_{it},y_{5t})=0\) for all \(i\neq 5\).
Variance identification.Constraints that resolve rotational invariance typically take variance identification, i.e. identification of \(\mathbf{\Lambda\Lambda}^{\top}\), for granted, see e.g. Geweke and Zhou (1996). Variance identification refers to the problem that the idiosyncratic variances \(\sigma_{1}^{2},\ldots,\sigma_{m}^{2}\) in \(\mathbf{\Sigma}_{0}\) are identified only from the diag
onal elements of \(\mathbf{\Omega}\), as all other elements are independent of the \(\sigma_{i}^{2}\)s; see again (3). To achieve variance identification of \(\sigma_{i}^{2}\) from \(\Omega_{ii}=\mathbf{\Lambda}_{i,\bullet}\mathbf{\Lambda}_{i,\bullet}^{\top}+ \sigma_{i}^{2}\), all factor loadings have to be identified solely from the off-diagonal elements of \(\mathbf{\Omega}\). Variance identification, however, is easily violated, as the following considerations illustrate.
Let us return to the factor model defined in (5). The corresponding covariance matrix \(\mathbf{\Omega}\) is given by:
\[\mathbf{\Omega}=\left(\begin{array}{cccccc}\lambda_{11}^{2}+\sigma_{1}^{2}& \lambda_{11}\lambda_{21}&\lambda_{11}\lambda_{31}&&\\ \lambda_{11}\lambda_{21}&\lambda_{21}^{2}+\sigma_{2}^{2}&\lambda_{21}\lambda_ {31}&\mathbf{0}&\\ \lambda_{11}\lambda_{31}&\lambda_{21}\lambda_{31}&\lambda_{31}^{2}+\sigma_{3} ^{2}&&\\ &&&\lambda_{42}^{2}+\sigma_{4}^{2}&\lambda_{42}\lambda_{52}&\lambda_{42} \lambda_{62}\\ &\mathbf{0}&&\lambda_{42}\lambda_{52}&\lambda_{52}^{2}+\sigma_{5}^{2}&\lambda_ {52}\lambda_{62}\\ &&&\lambda_{42}\lambda_{62}&\lambda_{52}\lambda_{62}&\lambda_{62}^{2}+\sigma_ {6}^{2}\end{array}\right). \tag{9}\]
Let us assume that the sparsity pattern \(\boldsymbol{\delta}=\mathbb{I}(\mathbf{\Lambda}\neq 0)\) of \(\mathbf{\Lambda}\) is known, but the specific values of the unconstrained loadings \((\lambda_{11},\ldots,\lambda_{62})\) are unknown. An interesting question is the following. Knowing \(\mathbf{\Omega}\), can the unconstrained loadings \(\lambda_{11},\ldots,\lambda_{62}\) and the variances \(\sigma_{1}^{2},\ldots,\sigma_{m}^{2}\) be identified uniquely? Given \(\mathbf{\Omega}\), the three nonzero covariances \(\text{Cov}(y_{1t},y_{2t})=\lambda_{11}\lambda_{21}\), \(\text{Cov}(y_{1t},y_{3t})=\lambda_{11}\lambda_{31}\) and \(\text{Cov}(y_{2t},y_{3t})=\lambda_{21}\lambda_{31}\) are available to identify the three factor loadings \((\lambda_{11},\lambda_{21},\lambda_{31})\). Similarly, the nonzero covariances \(\text{Cov}(y_{4t},y_{5t})=\lambda_{42}\lambda_{52}\), \(\text{Cov}(y_{4t},y_{6t})=\lambda_{42}\lambda_{62}\) and \(\text{Cov}(y_{5t},y_{6t})=\lambda_{52}\lambda_{62}\) are available to identify the factor loadings \((\lambda_{42},\lambda_{52},\lambda_{62})\), hence variance identification is given. However, if we remove the last measurement from the loading factor matrix defined in (5), we obtain
\[\mathbf{\Lambda}=\left(\begin{array}{cc}\lambda_{11}&0\\ \lambda_{21}&0\\ \lambda_{31}&0\\ 0&\lambda_{42}\\ 0&\lambda_{52}\end{array}\right),\quad\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P }_{\alpha b}=\left(\begin{array}{cc}\beta_{11}&\beta_{21}\\ \beta_{21}&\beta_{22}\\ \beta_{51}&\beta_{52}\\ \beta_{41}&\beta_{42}\\ \beta_{51}&\beta_{52}\end{array}\right), \tag{10}\]
and the corresponding covariance matrix reads:
\[\mathbf{\Omega}=\left(\begin{array}{cccccc}\lambda_{11}^{2}+\sigma_{1}^{2}& \lambda_{11}\lambda_{21}&\lambda_{11}\lambda_{31}&&\\ \lambda_{11}\lambda_{21}&\lambda_{21}^{2}+\sigma_{2}^{2}&\lambda_{21}\lambda_ {31}&\mathbf{0}&\\ \lambda_{11}\lambda_{31}&\lambda_{21}\lambda_{31}&\lambda_{31}^{2}+\sigma_{3} ^{2}&&\\ &&&\lambda_{42}^{2}+\sigma_{4}^{2}&\lambda_{42}\lambda_{52}\\ &\mathbf{0}&&\lambda_{42}\lambda_{52}&\lambda_{52}^{2}+\sigma_{5}^{2}\end{array} \right).\]
While the three factor loadings \((\lambda_{11},\lambda_{21},\lambda_{31})\) are still identified from the off-diagonal elements of \(\mathbf{\Omega}\) as before, variance identification of \(\sigma_{4}^{2}\) and \(\sigma_{5}^{2}\) fails. Since \(\text{Cov}(y_{4t},y_{5t})=\lambda_{42}\lambda_{52}\) is the only non-zero element that depends on the loadings \(\lambda_{42}\) and \(\lambda_{52}\), infinitely many different parameters \((\lambda_{42},\lambda_{52},\sigma_{4}^{2},\sigma_{5}^{2})\) imply the same covariance matrix \(\mathbf{\Omega}\). From these considerations it is evident that a minimum of three non-zero loadings is necessary in each column to achieve variance identification, a condition which has
been noted as early as Anderson and Rubin (1956). At the same time, this condition is not sufficient, as it is satisfied by the loading matrix \(\boldsymbol{\beta}\) in (10), although variance identification does not hold. In general, variance identification is not straightforward to verify. We will introduce in Section 4.1 a new and convenient way to verify variance identification for GLT structures.
The row deletion property.As explained above, we need to verify uniqueness of the variance decomposition, i.e. the identification of the idiosyncratic variances \(\sigma_{1}^{2},\ldots,\sigma_{m}^{2}\) in \(\boldsymbol{\Sigma}_{0}\) from the covariance matrix \(\boldsymbol{\Omega}\) given in (2). The identification of \(\boldsymbol{\Sigma}_{0}\) guarantees that \(\boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top}\) is identified. The second step of identification is then to ensure uniqueness of the factor loadings, i.e. unique identification of \(\boldsymbol{\Lambda}\) from \(\boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top}\). To verify variance identification, we rely in the present paper on a condition known as _row-deletion property_.
**Definition 1** (Row deletion property **AR**(Anderson and Rubin, 1956)).: An \(m\times r\) factor loading matrix \(\boldsymbol{\Lambda}\) satisfies the row-deletion property if the following condition is satisfied: whenever an arbitrary row is deleted from \(\boldsymbol{\Lambda}\), two disjoint submatrices of rank \(r\) remain.
Anderson and Rubin (1956, Theorem 5.1) prove that the row-deletion property is a sufficient condition for the identification of \(\boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top}\) and \(\boldsymbol{\Sigma}_{0}\) from the marginal covariance matrix \(\boldsymbol{\Omega}\) given in (2). For any (not necessarily GLT) factor loading matrix \(\boldsymbol{\Lambda}\), the row deletion property **AR** can be trivially tested by a step-by-step analysis, where each single row of \(\boldsymbol{\Lambda}\) is sequentially deleted and the two distinct submatrices are determined from examining the remaining matrix, as suggested e.g. by Hayashi and Marcoulides (2006). However, this procedure is inefficient and challenging in higher dimensions.
Hence, it is helpful to have more structural conditions for verifying variance identification under the row deletion property **AR**. The literature provides several necessary conditions for the row deletion property **AR** that are based on counting the number of non-zero factor loadings in \(\boldsymbol{\Lambda}\). Anderson and Rubin (1956), for instance, prove the following necessary conditions for **AR**: for every nonsingular \(r\)-dimensional square matrix \(\mathbf{G}\), the matrix \(\boldsymbol{\beta}=\boldsymbol{\Lambda}\mathbf{G}\) contains in each column _at least 3_ and in each pair of columns _at least 5_ nonzero factor loadings. Sato (1992, Theorem 3.3) extends these _necessary_ conditions in the following way: every subset of \(1\leq q\leq r\) columns of \(\boldsymbol{\beta}=\boldsymbol{\Lambda}\mathbf{G}\) contains _at least \(2q+1\)_ nonzero factor loadings for every nonsingular matrix \(\mathbf{G}\). We call this the \(3579\) counting rule for obvious reasons.
For illustration, let us return to the examples in (5) and (10). First, apply the \(3579\) counting rule to the unrestricted matrix \(\boldsymbol{\beta}\) in (10). Although the variance decomposition \(\boldsymbol{\Omega}=\boldsymbol{\beta}\boldsymbol{\beta}^{\top}+\boldsymbol{ \Sigma}=\boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top}+\boldsymbol{\Sigma}_{0},\) is not unique, the counting rules are not violated, since \(\boldsymbol{\beta}\) has five non-zero rows except for the cases \((\alpha,b)\in\{0,\frac{\pi}{2},\pi,\frac{3\pi}{2}\}\times\{0,1\}\). Only for these eight specific cases, which correspond to the trivial
rotations
\[\left(\begin{array}{cc}\lambda_{11}&0\\ \lambda_{21}&0\\ \lambda_{31}&0\\ 0&\lambda_{42}\\ 0&\lambda_{52}\end{array}\right)\quad\left(\begin{array}{cc}\lambda_{11}&0\\ \lambda_{21}&0\\ \lambda_{31}&0\\ 0&-\lambda_{42}\\ 0&-\lambda_{52}\end{array}\right)\quad\left(\begin{array}{cc}-\lambda_{11} &0\\ -\lambda_{21}&0\\ -\lambda_{31}&0\\ 0&\lambda_{42}\\ 0&\lambda_{52}\end{array}\right)\quad\left(\begin{array}{cc}-\lambda_{11} &0\\ -\lambda_{21}&0\\ -\lambda_{31}&0\\ 0&-\lambda_{42}\\ 0&-\lambda_{52}\end{array}\right) \tag{11}\] \[\left(\begin{array}{cc}0&\lambda_{11}\\ 0&\lambda_{21}\\ 0&\lambda_{31}\\ \lambda_{42}&0\\ \lambda_{52}&0\end{array}\right)\quad\left(\begin{array}{cc}0&-\lambda_{11} \\ 0&\lambda_{21}\\ 0&\lambda_{31}\\ -\lambda_{42}&0\\ -\lambda_{52}&0\end{array}\right)\quad\left(\begin{array}{cc}0&-\lambda_{1 1}\\ 0&-\lambda_{21}\\ 0&-\lambda_{31}\\ -\lambda_{42}&0\\ -\lambda_{52}&0\end{array}\right),\]
we find immediately that the counting rules are violated, since one of the two columns has only two non-zero elements. This example shows the need to check such counting rules not only for a single loading matrix \(\mathbf{\beta}\), but also for all rotations \(\mathbf{\beta}\mathbf{P}\) admissible under the chosen strategy toward rotational invariance. On the other hand, if we apply the \(3579\) counting rule to the unrestricted matrix \(\mathbf{\beta}\) in (5), we find that the _necessary_ counting rules are satisfied for all rotations \(\mathbf{\beta}\mathbf{P}_{\alpha b}\). For this specific example, we have already verified explicitly that variance identification holds and one might wonder if, in general, the \(3579\) counting rule can lead to a _sufficient_ criterion for variance identification under \(\mathbf{AR}\).
Sufficient conditions for variance identification are hardly investigated in the literature. One exception is the popular factor analysis model where \(\mathbf{\Lambda}\) takes the form of a _dense_ PLT matrix, where all factor loadings on and below the main diagonal are left unrestricted and can take any in value in \(\mathbb{R}\). For this model, condition \(\mathbf{AR}\) and hence variance identification holds, except for a set of measure 0, if the condition \(m\geq 2r+1\) is satisfied. Conti et al. (2014) investigate identification of a dedicated factor model, where equation (1) is combined with correlated (oblique) factors, \(\mathbf{f}_{t}\sim\mathcal{N}_{r}\left(\mathbf{0},\mathbf{R}\right)\), and the factor loading matrix \(\mathbf{\Lambda}\) has a perfect simple structure, i.e. each observation loads on at most one factor, as in (5) and (10); however, the exact position of the non-zero elements is unknown. They prove necessary and sufficient conditions that imply uniqueness of the variance decomposition as well as uniqueness of the factor loading matrix, namely: the correlation matrix \(\mathbf{R}\) is of full rank (\(\text{rk}\left(\mathbf{R}\right)=r\)) and each column of \(\mathbf{\Lambda}\) contains at least three nonzero loadings.
In the present paper, we build on and extend this previous work. We provide sufficient conditions for variance identification of a GLT structure \(\mathbf{\Lambda}\). These conditions are formulated as counting rules for the \(m\times r\) sparsity matrix \(\mathbf{\delta}=\mathbb{I}(\mathbf{\beta}\neq 0)\) of \(\mathbf{\beta}\) and are equivalent to the \(3579\) counting rules of Sato (1992, Theorem 3.3). More specifically, if the \(3579\) counting rule holds for the sparsity matrix \(\mathbf{\delta}\) of a GLT matrix \(\mathbf{\Lambda}\), then this is a sufficient condition for the row deletion property \(\mathbf{AR}\) and consequently for variance identification, except for a set of measure 0.
Identification of the number of factors.Identification of the number of factors is a notoriously difficult problem and analysing this problem from the view point of variance identification is helpful in understanding some fundamental difficulties. Assume that \(\mathbf{\Omega}\) has a representation as in (2) with \(r\) factors which is variance identified. Then, on the one hand, no equivalent representation exists with \(r^{\prime}<r\) number of factors. On the other hand, as shown in Reiersol (1950, Theorem 3.3), any such structure \((\mathbf{\Lambda},\mathbf{\Sigma}_{0})\) creates solutions \((\mathbf{\beta}_{k},\mathbf{\Sigma}_{k})\) with \(m\times k\) loading matrices \(\mathbf{\beta}_{k}\) of dimension \(k=r+1,r+2,\ldots,m\) bigger than \(r\) and \(\mathbf{\Sigma}_{k}\) being a positive definite matrix different from \(\mathbf{\Sigma}_{0}\) which imply the same covariance matrix \(\mathbf{\Omega}\) as \((\mathbf{\Lambda},\mathbf{\Sigma}_{0})\), i.e.:
\[\mathbf{\Omega}=\mathbf{\beta}_{k}\mathbf{\beta}_{k}^{\top}+ \mathbf{\Sigma}_{k}. \tag{12}\]
Furthermore, for any fixed \(k>r\), infinitely many such solutions \((\mathbf{\beta}_{k},\mathbf{\Sigma}_{k})\) can be created that satisfy the decomposition (12) which, consequently, no longer is variance identified. This problem is prevalent regardless of the chosen strategy toward rotational invariance. For illustration, we return to example (5) and construct an equivalent solution for \(k=3\). While the first two columns of \(\mathbf{\beta}_{3}\) are equal to \(\mathbf{\Lambda}\), the third column is a so-called _spurious_ factor with a single non-zero loading and \(\mathbf{\Sigma}_{3}\) is defined as follows:
\[\mathbf{\beta}_{3}=\left(\begin{array}{ccc}\lambda_{11}&0&0\\ \lambda_{21}&0&\beta_{23}\\ \lambda_{31}&0&0\\ 0&\lambda_{42}&0\\ 0&\lambda_{52}&0\\ 0&\lambda_{62}&0\end{array}\right),\quad\mathbf{\Sigma}_{3}=\text{Diag}\big{(} \sigma_{1}^{2},\sigma_{2}^{2}-\beta_{23}^{2},\sigma_{3}^{2},\sigma_{4}^{2}, \sigma_{5}^{2},\sigma_{6}^{2}\big{)}\,. \tag{13}\]
We can place the spurious factor loading \(\beta_{i3}\) in any row \(i\) and \(\beta_{i3}\) can take any value satisfying \(0<\beta_{i3}^{2}<\sigma_{i}^{2}\). It is easy to verify that any such pair \((\mathbf{\beta}_{3},\mathbf{\Sigma}_{3})\) indeed implies the same covariance matrix \(\mathbf{\Omega}\) as in (9).
This ambiguity in an overfitting model renders the estimation of true number of factors \(r\) a challenging problem and leads to considerable uncertainty how to choose the number of factors in applied factor analysis. In Section 5, we follow up on this problem in more detail. An important necessary condition for \(k\) to be the true number of factors is that variance identification of \(\mathbf{\Sigma}_{k}\) in (12) holds. Therefore, the counting rules that we introduce in this paper will also be useful in cases where the true number of factors \(r\) is unknown.
Overfitting GLT structures.Finally, we investigate in Section 5 the class of potentially overfitting GLT structures where the matrix \(\mathbf{\beta}_{k}\) in (12) is constrained to be an unordered GLT structure. We apply results by Tumura and Sato (1980) to this class and show how easily spurious factors and the underlying true factor loading matrix \(\mathbf{\Lambda}\) are identified under GLT structures, even if the model is overfitting. Our
strategy relies on the concept of extended variance identification and the extended row deletion property introduced by Tumura and Sato (1980), where more than one row is deleted from the loading matrix. An extended counting rule will be introduced for the sparsity matrix of a GLT loading matrices \(\mathbf{\beta}_{k}\) in Section 4 which is useful in this context.
## 3 Solving rotational invariance through GLT structures
### Ordered and unordered GLT structures
In this work, we introduce a new identification strategy to resolve rotational invariance based on the concept of generalized lower triangular (GLT) structures. First, we introduce the notion of _pivot rows_ of a factor loading matrix \(\mathbf{\Lambda}\).
**Definition 2** (**Pivot rows**).: Consider an \(m\times r\) factor loading matrix \(\mathbf{\Lambda}\) with \(r\) non-zero columns. For each column \(j=1,\ldots,r\) of \(\mathbf{\Lambda}\), the pivot row \(l_{j}\) is defined as the row index of the first non-zero factor loading in column \(j\), i.e. \(\Lambda_{ij}=0,\forall\,i<l_{j}\) and \(\Lambda_{l_{j},j}\neq 0\). The factor loading \(\Lambda_{l_{j},j}\) is called the leading factor loading of column \(j\).
For PLT factor loading matrices the pivot rows lie on the main diagonal, i.e. \((l_{1},\ldots,l_{r})=(1,\ldots,r)\), and the leading factor loadings \(\Lambda_{jj}>0\) are positive for all columns \(j=1,\ldots,r\). GLT structures generalize the PLT constraint by freeing the pivot rows of a factor loading matrix \(\mathbf{\Lambda}\) and allowing them to take arbitrary positions \((l_{1},\ldots,l_{r})\), the only constraint being that the pivot rows are pairwise distinct. GLT structures contain PLT matrices as the special case where \(l_{j}=j\) for \(j=1,\ldots,r\). Our generalization is particularly useful if the ordering of the measurements \(y_{it}\) is in conflict with the PLT assumption. Since \(\Lambda_{jj}\) is allowed to be 0, measurements different from the first \(r\) ones may lead the factors. For each factor \(j\), the leading variable is the response variable \(y_{l_{j},t}\) corresponding to the pivot row \(l_{j}\).
We will distinguish between two types of GLT structures, namely ordered and unordered GLT structures. The following definition introduces ordered GLT matrices. Unordered GLT structures will be motivated and defined below. Examples of ordered and unordered GLT matrices are displayed in Figure 1 for a model with \(r=6\) factors.
**Definition 3** (**Ordered GLT structures**).: An \(m\times r\) factor loading matrix \(\mathbf{\Lambda}\) with full column rank \(r\) has an ordered GLT structure if the pivot rows \(l_{1},\ldots,l_{r}\) of \(\mathbf{\Lambda}\) are ordered, i.e. \(l_{1}<\ldots<l_{r}\), and the leading factor loadings are positive, i.e. \(\Lambda_{l_{j},j}>0\) for \(j=1,\ldots,r\).
Evidently, imposing an ordered GLT structure resolves rotational invariance if the pivot rows are known. For any two ordered GLT matrices \(\mathbf{\beta}\) and \(\mathbf{\Lambda}\) with _identical_ pivot rows \(l_{1},\ldots,l_{r}\), the identity \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}\) evidently holds iff \(\mathbf{P}=\mathbf{I}_{r}\). In practice, the pivot rows \(l_{1},\ldots,l_{r}\) of a GLT structure are unknown and need
to be identified from the marginal covariance matrix \(\mathbf{\Omega}\) for a given number of factors \(r\). Given variance identification, i.e. assuming that the cross-covariance matrix \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) is identified, a particularly important issue for the identification of a GLT factor model is whether \(\mathbf{\Lambda}\) is uniquely identified from \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) if the pivot rows \(l_{1},\ldots,l_{r}\) are _unknown_. Non-trivial rotations \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}\) of a loading matrix \(\mathbf{\Lambda}\) with pivot rows \(l_{1},\ldots,l_{r}\) might exist such that \(\mathbf{\beta}\mathbf{\beta}^{\top}=\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\), while the pivot rows \(\tilde{l}_{1},\ldots,\tilde{l}_{r}\) of \(\mathbf{\beta}\) are different from the pivot rows of \(\mathbf{\Lambda}\). Very assuringly, Theorem 1 shows that this is not the case: not only the pivot rows, but the entire loading matrices \(\mathbf{\Lambda}\) and \(\mathbf{\beta}\) are identical, if \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}=\mathbf{\beta}\mathbf{\beta}^{\top}\) (see Appendix A for a proof).
**Theorem 1**.: _An ordered GLT structure is uniquely identified, provided that uniqueness of the variance decomposition holds, i.e.: if \(\mathbf{\Lambda}\) and \(\mathbf{\beta}\) are GLT matrices, respectively, with pivot rows \(l_{1}<\ldots<l_{r}\) and \(\tilde{l}_{1}<\ldots<\tilde{l}_{r}\) that satisfy \(\mathbf{\beta}\mathbf{\beta}^{\top}=\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\), then \(\mathbf{\beta}=\mathbf{\Lambda}\) and consequently \((\tilde{l}_{1},\ldots,\tilde{l}_{r})=(l_{1},\ldots,l_{r})\)._
Definition 4 introduces, as an extension of Definition 3, unordered GLT structures under which \(\mathbf{\Lambda}\) is identified from \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) only up to signed permutations. A signed permutation permutes the columns of the factor loading matrix \(\mathbf{\Lambda}\) and switches the sign of all factor loadings in any specific column. This leads to a trivial case of rotational invariance. For \(r=2\), for instance, the eight signed permutations of the loading matrix \(\mathbf{\Lambda}\) defined in (10) are depicted in (11). More formally, \(\mathbf{\beta}\) is a signed permutation of \(\mathbf{\Lambda}\), iff
\[\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}_{\pm}\mathbf{P}_{\rho}, \tag{14}\]
where the permutation matrix \(\mathbf{P}_{\rho}\) corresponds to one of the \(r\)! permutations of the \(r\) columns of \(\mathbf{\Lambda}\) and the reflection matrix \(\mathbf{P}_{\pm}=\text{Diag}(\pm 1,\ldots,\pm 1)\) corresponds to one of the \(2^{r}\) ways to switch the signs of the \(r\) columns of \(\mathbf{\Lambda}\). Often, it is convenient to employ identification rules that guarantee identification of \(\mathbf{\Lambda}\) only up to such column and sign switching, see e.g. Conti et al. (2014). Any structure \(\mathbf{\Lambda}\) obeying such an identification rule represents a whole equivalence class of matrices given by all \(2^{r}r!\) signed permutation \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}_{\pm}\mathbf{P}_{\rho}\) of \(\mathbf{\Lambda}\). This trivial form of the rotational invariance does not impose any additional mathematical challenges and is often convenient from a computational viewpoint, in particular for Bayesian inference, see for e.g. Conti et al. (2014) and Fruhwirth-Schnatter et al. (2022).
It is easy to verify how identification up to trivial rotational invariance can be achieved for GLT structures and motivates the following definition of unordered GLT structures as loadings matrices \(\mathbf{\beta}\) where the pivot rows \(l_{1},\ldots,l_{r}\) simply occupy \(r\) different rows. In Definition 4, no order constraint is imposed on the pivot rows and no sign constraint is imposed on the leading factor loadings. This very general structure allows to design highly efficient sampling schemes for sparse Bayesian factor analysis under GLT structures, see Fruhwirth-Schnatter et al. (2022).
**Definition 4** (**Unordered GLT structures)**.: An \(m\times r\) factor loading matrix \(\mathbf{\beta}\) with full column rank \(r\) has an unordered GLT structure if the pivot rows \(l_{1},\ldots,l_{r}\) of \(\mathbf{\beta}\) are pairwise distinct.
Theorem 1 is easily extended to unordered GLT structures. Any signed permutation \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}_{\rho}\mathbf{P}_{\pm}\) of \(\mathbf{\Lambda}\) is uniquely identified from \(\mathbf{\beta}\mathbf{\beta}^{\top}=\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\), provided that \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) is identified. Hence, under unordered GLT
structures the factor loading matrix \(\mathbf{\Lambda}\) is uniquely identified up to signed permutations. Full identification can easily be obtained from unordered GLT structures \(\mathbf{\beta}\). Any unordered GLT structure \(\mathbf{\beta}\) has unordered pivot rows \(l_{1},\ldots,l_{r}\), occupying different rows. The corresponding ordered GLT structure \(\mathbf{\Lambda}\) is recovered from \(\mathbf{\beta}\) by sorting the columns in ascending order according to the pivot rows. In other words, the pivot rows of \(\mathbf{\Lambda}\) are equal to the order statistics \(l_{(1)},\ldots,l_{(r)}\) of the pivot rows \(l_{1},\ldots,l_{r}\) of \(\mathbf{\beta}\), see again Figure 1. This procedure resolves rotational invariance, since the pivot rows \(l_{1},\ldots,l_{r}\) in the unordered GLT structure are distinct. Furthermore, imposing the condition \(\Lambda_{l_{j},j}>0\) in each column \(j\) resolves sign switching: if \(\Lambda_{l_{j},j}<0\), then the sign of all factor loadings \(\Lambda_{ij}\) in column \(j\) is switched.
### Sparse GLT structures
In Definition 3 and 4, "structural" zeros are introduced for a GLT structure for all factor loading above the pivot row \(l_{j}\), while the factor loading \(\Lambda_{l_{j},j}\) in the pivot row is non-zero by definition. We call \(\mathbf{\Lambda}\) a _dense_ GLT structure if all loadings below the pivot row are unconstrained and can take any value in \(\mathbb{R}\).
A _sparse_ GLT structure results if factor loadings at unspecified places below the pivot rows are zero and only the remaining loadings are unconstrained. A sparse loading matrix \(\mathbf{\Lambda}\) can be characterized by the so-called sparsity matrix, defined as a binary indicator matrix \(\mathbf{\delta}\) of 0/1s of the same size as \(\mathbf{\Lambda}\), where \(\delta_{ij}=\mathbb{I}(\Lambda_{ij}\neq 0)\). Let \(\mathbf{\delta}^{\Lambda}\) be the sparsity matrix of a GLT matrix \(\mathbf{\Lambda}\). The sparsity matrix \(\mathbf{\delta}\) corresponding to the signed permutation \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}_{\rho}\mathbf{P}_{\pm}\) is equal to \(\mathbf{\delta}=\mathbf{\delta}^{\Lambda}\mathbf{P}_{\rho}\) and is invariant to sign switching. Hence, for any sparse unordered GLT matrix \(\mathbf{\beta}\), the corresponding sparsity matrix \(\mathbf{\delta}\) obeys an unordered GLT structure with the same pivot rows as \(\mathbf{\beta}\), see Figure 1 for illustration.
In sparse factor analysis, single factor loadings take zero-values with positive probability and the corresponding sparsity matrix \(\mathbf{\delta}\) is a binary matrix that has to be identified from the data. Identification in sparse factor analysis has to provide conditions under which the entire 0/1 pattern in \(\mathbf{\delta}\) can be identified from the covariance matrix \(\mathbf{\Omega}\) if \(\mathbf{\delta}\) is unknown. Whether this is possible hinges on variance identification, i.e. whether the decomposition of \(\mathbf{\Omega}\) into \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) and \(\mathbf{\Sigma}_{0}\) is unique. How variance identification can be verified for (sparse) GLT structures is investigated in detail in Section 4. Let us assume at this point that variance identification holds, i.e. the cross-covariance matrix \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) is identified. Then an important step toward the identification of a sparse factor model is to verify whether the 0/1 pattern of \(\mathbf{\Lambda}\), characterized by \(\mathbf{\delta}\), is uniquely identified from \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\). Very importantly, if \(\mathbf{\Lambda}\) is assumed to be a GLT structure, then the entire GLT structure \(\mathbf{\Lambda}\) and hence the indicator matrix \(\mathbf{\delta}\) is uniquely identified from \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\), as follows immediately from Theorem 1, since \(\delta_{ij}=0\), iff \(\Lambda_{ij}=0\) for all \(i,j\). By identifying the 0/1 pattern in \(\mathbf{\delta}\) we can uniquely identify the pivot rows of \(\mathbf{\Lambda}\) and the sparsity pattern below.
We would like to emphasize that in sparse factor analysis with unconstrained loading matrices \(\mathbf{\Lambda}\) this is not necessarily the case. The indicator matrix \(\mathbf{\delta}\) is in general _not_ uniquely identified from \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\), because (non-trivial) rotations \(\mathbf{P}\) change the zero pattern in \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}\), while \(\mathbf{\beta}\mathbf{\beta}^{\top}=\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\). For illustration,
let us return to the example in (5) where we showed that \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\) is uniquely identified if the true sparsity matrix \(\mathbf{\delta}^{\Lambda}\) is known. Now assume that \(\mathbf{\delta}^{\Lambda}\) is unknown and allow the loading matrix \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}\) to be any rotation of \(\mathbf{\Lambda}\). It is then evident that the corresponding sparsity matrix \(\mathbf{\delta}\) is not unique and two solutions exists. For all rotations where \((\alpha,b)\in\{0,\frac{\pi}{2},\pi,\frac{3\pi}{2}\}\times\{0,1\}\), \(\mathbf{\beta}\) correspond to one of the eight signed permutation of \(\mathbf{\Lambda}\) given in (11) and the sparsity matrix \(\mathbf{\delta}\) is equal to \(\mathbf{\delta}^{\mathbf{\Lambda}}\) up to this signed permutation. For all other rotations, all elements of \(\mathbf{\beta}\) are different from zero and \(\mathbf{\delta}\) is simply a matrix of ones.
### Rotation into GLT
As discussed above, GLT structures generalize the PLT constraint, but one might wonder how restrictive this structure still is. We will show in this section that for a basic factor model with unconstrained loading matrix \(\mathbf{\beta}\) there exists an equivalent representation involving a unique GLT structure \(\mathbf{\Lambda}\) which is related to \(\mathbf{\beta}\) by an orthogonal transformation, provided that uniqueness of the variance decomposition holds.
The proof of this result uses a relationship between a matrix with GLT structure and the so-called reduced row echelon form in linear algebra that results from the Gauss-Jordan elimination for solving linear systems, see e.g. Anton and Rorres (2013). Any transposed GLT loading matrix \(\mathbf{\Lambda}^{\top}\) has a row echelon form which can be turned into a reduced row echelon form (RREF) \(\mathbf{B}=\mathbf{A}^{\top}\mathbf{\Lambda}^{\top}\) with the help of an \(r\times r\) matrix \(\mathbf{A}\) which is constructed from the pivot rows \(l_{1},\ldots,l_{r}\) of \(\mathbf{\Lambda}\) and invertible by definition:
\[\mathbf{A}^{-1}=\left(\begin{array}{c}\mathbf{\Lambda}_{l_{1},\cdot}\\ \vdots\\ \mathbf{\Lambda}_{l_{r},\cdot}\end{array}\right).\]
Since the RREF of any matrix is unique, see e.g. Yuster (1984), we find that the pivot columns of \(\mathbf{B}\) coincide with the pivot rows \(l_{1},\ldots,l_{r}\) of \(\mathbf{\Lambda}\). Hence, for a basic factor model
\[\mathbf{f}_{t}\sim\mathcal{N}_{r}\left(\mathbf{0},\mathbf{I}_{r}\right), \qquad\mathbf{y}_{t}=\mathbf{\beta}\mathbf{f}_{t}+\mathbf{\epsilon}_{t},\]
with an arbitrary, unstructured loading matrix \(\mathbf{\beta}\) with full column rank \(r\), we prove in Theorem 2 that the RREF of \(\mathbf{\beta}^{\top}\) can be used to represent \(\mathbf{\beta}\) as a unique GLT structure \(\mathbf{\Lambda}\), where the pivot rows \(l_{1},\ldots,l_{r}\) of \(\mathbf{\Lambda}\) coincide with the pivot columns of the RREF of \(\mathbf{\beta}^{\top}\) (see Appendix A for a proof).
**Theorem 2** (Rotation into GLT).: _Let \(\mathbf{\beta}\) be an arbitrary loading matrix with full column rank \(r\). Then the following holds:_
1. _There exists an equivalent representation of_ \(\mathbf{\beta}\) _involving a unique GLT structure_ \(\mathbf{\Lambda}\)_,_ \[\mathbf{\beta}=\mathbf{\Lambda}\mathbf{G}^{\top},\] (15) _where_ \(\mathbf{G}\) _is a unique orthogonal matrix._ \(\mathbf{\Lambda}\) _is called the_ GLT representation _of_ \(\mathbf{\beta}\)
2. _Let_ \(l_{1}<\ldots<l_{r}\) _be the pivot columns of the RREF_ \(\mathbf{B}\) _of_ \(\boldsymbol{\beta}^{\top}\) _and let_ \(\boldsymbol{\beta}_{1}\) _be the_ \(r\times r\) _submatrix of_ \(\boldsymbol{\beta}\) _containing the corresponding rows_ \(l_{1},\ldots,l_{r}\)_. The GLT representation_ \(\boldsymbol{\Lambda}=\boldsymbol{\beta}\mathbf{G}\) _of_ \(\boldsymbol{\beta}\) _has pivot rows_ \(l_{1},\ldots,l_{r}\) _and is obtained through rotation into GLT with a rotation matrix_ \[\mathbf{G}=\mathbf{Q},\] (16) _which results from the QR decomposition_ \(\mathbf{Q}\mathbf{R}=\boldsymbol{\beta}_{1}^{\top}\) _of_ \(\boldsymbol{\beta}_{1}^{\top}\)_._
Would it be possible to obtain a similar results with the factor loading matrix \(\boldsymbol{\Lambda}\) being constrained to be a PLT structure? The answer is definitely no, as has already been established in Section 2 for example (5). As mentioned above, GLT structures encompass PLT structures as a special case. Hence, if a PLT representation \(\boldsymbol{\Lambda}\) exists for a loading matrix \(\boldsymbol{\beta}=\boldsymbol{\Lambda}\mathbf{P}\), then the GLT representation in (16) _automatically_ reduces to the PLT structure \(\boldsymbol{\Lambda}\), since \(\mathbf{R}=\boldsymbol{\beta}_{1}^{\top}\) is obtained from the first \(r\) rows of \(\boldsymbol{\beta}\) and the "rotation into GLT" is equal to the identity, \(\mathbf{Q}=\mathbf{I}_{r}\). On the other hand, if the GLT representation \(\boldsymbol{\Lambda}\) differs from a PLT structure, then no equivalent PLT representation exists. Hence, forcing a PLT structure in the representation (1) may introduce a systematic bias in estimating the marginal covariance matrix \(\boldsymbol{\Omega}\).
## 4 Variance identification and GLT structures
As mentioned in the previous sections, constraints imposed on the structure of a factor loading matrix \(\boldsymbol{\Lambda}\) will resolve rotational invariance only if uniqueness of the variance decomposition holds and the cross-covariance matrix \(\boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top}\) is identified. However, rotational constraints alone do not necessarily guarantee uniqueness of the variance decomposition. Consider, for instance, a sparse PLT loading matrix where in some column \(j\) in addition to the diagonal element \(\Lambda_{jj}\) (which is nonzero by definition) only a single further factor loading \(\Lambda_{n_{j},j}\) in some row \(n_{j}>j\) is nonzero. Such a loading matrix obviously violates the necessary condition for variance identification that each column contains at least three nonzero elements. Similarly, while GLT structures resolve rotational invariance, they do not guarantee uniqueness of the variance decomposition either.
In Section 4.1, we derive sufficient conditions for variance identification of GLT structures based on the 3579 counting rule of Sato (1992, Theorem 3.3). In Section 4.2, we discuss how to verify variance identification for sparse GLT structures in practice.
### Counting rules for variance identification
We will show how to verify from the 0/1 pattern \(\boldsymbol{\delta}\) of an unordered GLT structure \(\boldsymbol{\beta}\), whether the row deletion property \(\mathbf{AR}\) holds for \(\boldsymbol{\beta}\) and all its signed permutations. Our condition is a structural counting rule expressed solely in terms of the sparsity matrix \(\boldsymbol{\delta}\) underlying \(\boldsymbol{\beta}\) and does not involve the values of
the unconstrained factor loadings in \(\mathbf{\beta}\), which can take any value in \(\mathbb{R}\). For any factor model, variance identification is invariant to signed permutations. If we can verify variance identification for a single signed permutation \(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}_{\pm}\mathbf{P}_{\rho}\) of \(\mathbf{\Lambda}\), as defined in (14), then variance identification of \(\mathbf{\Lambda}\) holds, since \(\mathbf{\beta}\) and \(\mathbf{\Lambda}\) imply the same cross-covariance matrix \(\mathbf{\Lambda}\mathbf{\Lambda}^{\top}\). Hence, we focus in this section on variance identification of unordered GLT structures.
In Definition 5, we recall the so-called _extended row deletion property_, introduced by Tumura and Sato (1980).
**Definition 5** (Extended row deletion property \(\text{RD}(r,s)\)).: A \(m\times r\) factor loading matrix \(\mathbf{\beta}\) satisfies the row-deletion property \(\text{RD}(r,s)\), if the following condition is satisfied: whenever \(s\in\mathbb{N}_{0}\) rows are deleted from \(\mathbf{\beta}\), then two disjoint submatrices of rank \(r\) remain.
The row-deletion property of Anderson and Rubin (1956) results as a special case where \(s=1\). As will be shown in Section 5, the extended row deletion properties \(\text{RD}(r,s)\) for \(s>1\) are useful in exploratory factor analysis, when the factor dimension \(r\) is unknown. In Definition 6, we introduce a counting rule for binary matrices.
**Definition 6** (Counting rule \(\text{CR}(r,s)\)).: Let \(\mathbf{\delta}\) be an \(m\times r\) binary matrix. For each \(q=1,\ldots,r\), consider all submatrices \(\mathbf{\delta}_{q,\ell}\), \(\ell=1,\ldots,\left(\begin{array}{c}r\\ q\end{array}\right)\), built from \(q\) columns of \(\mathbf{\delta}\). \(\mathbf{\delta}\) is said to satisfy the \(\text{CR}(r,s)\) counting rule for \(s\in\mathbb{N}_{0}\) if the matrix \(\mathbf{\delta}_{q,\ell}\) has at least \(2\ell+s\) nonzero rows for all \((q,\ell)\).
Note that the counting rule \(\text{CR}(r,s)\), like the extended row deletion property \(\text{RD}(r,s)\), is invariant to signed permutations. Lemma 8 in Appendix A summarizes further useful properties of \(\text{CR}(r,s)\).
For a given binary matrix \(\mathbf{\delta}\) of dimension \(m\times r\), let \(\Theta_{\delta}\) be the space generated by the non-zero elements of all unordered GLT structure \(\mathbf{\beta}\) with sparsity matrix \(\mathbf{\delta}\) and all their \(2^{r}r!-1\) trivial rotations \(\mathbf{\beta}\mathbf{P}_{\pm}\mathbf{P}_{\rho}\). We prove in Theorem 3 that for GLT structures the counting rule \(\text{CR}(r,s)\) and the extended row deletion property \(\text{RD}(r,s)\) are equivalent conditions for all loading matrices in \(\Theta_{\delta}\), except for a set of measure 0.
**Theorem 3**.: _Let \(\mathbf{\delta}\) be a binary \(m\times r\) matrix with unordered GLT structure. Then the following holds:_
1. _If_ \(\mathbf{\delta}\) _violates the counting rule_ \(\text{CR}(r,s)\)_, then the extended row deletion property_ \(\text{RD}(r,s)\) _is violated for all_ \(\mathbf{\beta}\in\Theta_{\delta}\) _generated by_ \(\mathbf{\delta}\)_._
2. _If_ \(\mathbf{\delta}\) _satisfies the counting rule_ \(\text{CR}(r,s)\)_, then the extended row deletion property_ \(\text{RD}(r,s)\) _holds for all_ \(\mathbf{\beta}\in\Theta_{\delta}\) _except for a set of measure 0._
See Appendix A for a proof. The special case \(s=1\) is relevant for verifying the row deletion property **AR**. It proves that for unordered GLT structures the 3579 counting rule of Sato (1992) is not only a necessary, but also a _sufficient_ condition for **AR** to hold. In addition, this means that the counting rule
needs to be verified only for the sparsity matrix \(\mathbf{\delta}\) of a _single trivial rotation_\(\mathbf{\beta}=\mathbf{\Lambda}\mathbf{P}_{\pm}\mathbf{P}_{\rho}\) rather than for every nonsingular matrix \(\mathbf{G}\). This result is summarized in Corollary 4.
**Corollary 4** (Variance identification rule for GLT structures).: _For any unordered \(m\times r\) GLT structure \(\mathbf{\beta}\), the following holds:_
1. _If_ \(\mathbf{\delta}\) _satisfies the 3579 counting rule, i.e. every column of_ \(\mathbf{\delta}\) _has at least 3 non-zero elements, every pair of columns at least 5 and, more generally, every possible combination of_ \(q=3,\ldots,r\) _columns has at least_ \(2q+1\) _non-zero elements, then variance identification is given for all_ \(\mathbf{\beta}\in\Theta_{\delta}\) _except for a set of measure 0; i.e. for any other factor decomposition of the marginal covariance matrix_ \(\mathbf{\Omega}=\mathbf{\beta}\mathbf{\beta}^{\top}+\mathbf{\Sigma}=\tilde{\mathbf{\beta}}\tilde{ \mathbf{\beta}}^{\top}+\tilde{\mathbf{\Sigma}}\)_, where_ \(\tilde{\mathbf{\beta}}\) _is an unordered GLT matrix, it follows that_ \(\tilde{\mathbf{\Sigma}}=\mathbf{\Sigma}\)_, i.e._ \(\tilde{\mathbf{\beta}}\tilde{\mathbf{\beta}}^{\top}=\mathbf{\beta}\mathbf{\beta}^{\top}\)_, and_ \(\tilde{\mathbf{\beta}}=\mathbf{\beta}\mathbf{P}_{\pm}\mathbf{P}_{\rho}\)_._
2. _If_ \(\mathbf{\delta}\) _violates the 3579 counting rule, then for all_ \(\mathbf{\beta}\in\Theta_{\delta}\) _the row deletion property_ \(\mathbf{AR}\) _does not hold._
3. _For_ \(r=1\)_,_ \(r=2\)_, and_ \(r=3\)_, condition_ \(\text{CR}(r,1)\) _is both sufficient and necessary for variance identification._
A few comments are in order. If \(\mathbf{\delta}\) satisfies \(\text{CR}(r,1)\), then \(\mathbf{AR}\) holds for all \(\mathbf{\beta}\in\Theta_{\delta}\) and a sufficient condition for variance identification is satisfied. As shown by Anderson and Rubin (1956), \(\mathbf{AR}\) is a _necessary_ condition for variance identification only for \(r=1\) and \(r=2\). Tumura and Sato (1980, Theorem 3) show the same for \(r=3\), provided that \(m\geq 7\). It follows that \(\text{CR}(r,1)\) is a _necessary and sufficient_ condition for variance identification for the models summarized in (c). In all other cases, variance identification may hold for loading matrices \(\mathbf{\beta}\in\Theta_{\delta}\), even if \(\mathbf{\delta}\) violates \(\text{CR}(r,1)\).
The definition of unordered GLT structures given in Section 3 imposes no constraint on the pivot rows \(l_{1},\ldots,l_{r}\) beyond the assumption that they are distinct. This flexibility can lead to GLT structures that can never satisfy the 3579 rule, even if all elements below the pivot rows are non-zero. Consider, for instance, a GLT matrix with the pivot row in column \(r\) being equal to \(l_{r}=m-1\). The loading matrix has at most two nonzero elements in column \(r\) and violates the necessary condition for variance identification. This example shows that there is an upper bound for the pivot elements beyond which the 3579 rule can never hold. This insight is formalized in Definition 7.
**Definition 7**.: An unordered GLT structure \(\mathbf{\beta}\) fulfills condition **GLT-AR** if the following constraint on the pivot rows \(l_{1},\ldots,l_{r}\) of \(\mathbf{\beta}\) is satisfied, where \(z_{j}\) is the rank of \(l_{j}\) in the ordered sequence \(l_{(1)}<\ldots<l_{(r)}\):
\[l_{j}\leq m-2(r-z_{j}+1). \tag{17}\]
Evidently, an ordered GLT structure \(\mathbf{\Lambda}\) fulfills condition **GLT-AR** if the pivot rows \(l_{1},\ldots,l_{r}\) of \(\mathbf{\Lambda}\) satisfy the constraint \(l_{j}\leq m-2(r-j+1)\). For the special case of a PLT structure where \(l_{j}=j\), this constraint
reduces to \(m\geq 2r+1\) which is equivalent to a well-known upper bound for the number of factors. For dense unordered GLT structures with \(m\) (non-zero) rows, condition **GLT-AR** is a sufficient condition for **AR**. For sparse GLT structures **GLT-AR** is only a necessary condition for **AR** and the 3579 rule has to be verified explicitly, as shown by the example discussed above. Very conveniently for verifying variance identification in sparse factor analysis based on GLT structures, Theorem 3 and Corollary 4 operate solely on the sparsity matrix \(\mathbf{\delta}\) corresponding to \(\mathbf{\beta}\).
### Variance identification in practice
To verify CR\((r,s)\) in practice, all submatrices of \(q\) columns have to be extracted from the sparsity matrix \(\mathbf{\delta}\) to verify if at least \(2q+1\) rows of this submatrix are non-zero. For \(q=1,2,r-1,r\), this condition is easily verified from simple functionals of \(\mathbf{\delta}\), see Corollary 5 which follows immediately from Theorem 3 (see Appendix A for details).
**Corollary 5** (**Simple counting rules for CR\((r,s)\))**.: _Let \(\mathbf{\delta}\) be a \(m\times r\) unordered GLT sparsity matrix. The following conditions on \(\mathbf{\delta}\) are necessary for CR\((r,s)\) to hold:_
\[\mathbf{1}_{r\times m}\cdot\mathbf{\delta}+\mathbf{\delta}^{\top}(\mathbf{1}_{m \times r}-\mathbf{\delta})\geq 4+s-2\mathbf{I}_{r}, \tag{18}\] \[\mathbf{1}_{1\times m}\cdot\mathbb{I}(\mathbf{\delta}^{\star}>0)\geq 2r+s, \quad\mathbf{\delta}^{\star}=\mathbf{\delta}\cdot\mathbf{I}_{r\times 1},\] (19) \[\mathbf{1}_{1\times m}\cdot\mathbb{I}(\mathbf{\delta}^{\star}>0)\geq 2(r-1 )+s,\quad\mathbf{\delta}^{\star}=\mathbf{\delta}(\mathbf{1}_{m\times m}-\mathbf{I}_{m}), \tag{20}\]
_where the indicator function \(\mathbb{I}(\mathbf{\delta}^{\star}>0)\) is applied element-wise and \(\mathbf{1}_{n\times k}\) denotes a \(n\times k\) matrix of ones. For \(r\leq 4\), these conditions are also sufficient for CR\((r,s)\) to hold for \(\mathbf{\delta}\)._
Using Corollary 5 for \(s=1\), one can efficiently verify, if the 3579 counting rule and hence the row deletion property **AR** holds for unordered GLT factor models with up to \(r\leq 4\) factors. For models with more than four factors (\(r>4\)), a more elaborated strategy is needed. After checking the conditions of Corollary 5, CR\((r,s)\) could be verified for a given binary matrix \(\mathbf{\delta}\) by iterating over all remaining \(r!/(q!(r-q)!)\) subsets of \(q=3,\ldots,r-2\) columns of \(\mathbf{\delta}\). While this is a finite task, such a naive approach may need to visit \(2^{r}-1\) matrices in order to make a decision and the combinatorial explosion quickly becomes an issue in practice as \(r\) increases. Recent work by Hosszejni and Fruhwirth-Schnatter (2022) establishes the applicability of this framework for large models.
## 5 Identification in exploratory factor analysis
In this section, we discuss how the concept of GLT structures is helpful for addressing identification problems in exploratory factor analysis (EFA). Consider data \(\{\mathbf{y}_{1},\ldots,\mathbf{y}_{T}\}\) from a multivariate Gaussian distribution, \(\mathbf{y}_{t}\sim\mathcal{N}_{m}\left(\mathbf{0},\mathbf{\Omega}\right)\), where an investigator wants to perform factor analysis since she expects
that the covariances of the measurements \(y_{it}\) are driven by common factors. In practice, the number of factors is typically unknown and often it is not obvious, whether all \(m\) measurements in \({\bf y}_{t}\) are actually correlated. It is then common to employ EFA by fitting a basic factor model to the entire collection of measurements in \({\bf y}_{t}\), i.e. assuming the model
\[{\bf y}_{t}=\boldsymbol{\beta}_{k}{\bf f}_{t}+\boldsymbol{\epsilon}_{t},\qquad \boldsymbol{\epsilon}_{t}\sim\mathcal{N}_{m}\left(\mathbf{0},\boldsymbol{ \Sigma}_{k}\right), \tag{21}\]
with an _assumed_ number of factors \(k\), a \(m\times k\) loading matrix \(\boldsymbol{\beta}_{k}\) with elements \(\beta_{ij}\) and a diagonal matrix \(\boldsymbol{\Sigma}_{k}\) with strictly positive entries. The EFA model (21) is potentially overfitting in two ways. First, the true number of factors \(r\) is possibly smaller than \(k\), i.e. \(\boldsymbol{\beta}_{k}\) has too many columns. Second, some measurements in \({\bf y}_{t}\) are possibly irrelevant, which means that \(\boldsymbol{\beta}_{k}\) allows for too many non-zero rows. The goal is then to determine the true number of factors and to identify irrelevant measurements from the EFA model (21).
We will address identification under the assumption that the data are generated by a basic factor model with loading matrix \(\boldsymbol{\beta}_{0}\) with \(r\) factors which implies the following covariance matrix \(\boldsymbol{\Omega}\):
\[\boldsymbol{\Omega}=\boldsymbol{\beta}_{0}\boldsymbol{\beta}_{0}^{\top}+ \boldsymbol{\Sigma}_{0}. \tag{22}\]
Instead of (22), for a given \(k\), the EFA model (21) yields the alternative representation of \(\boldsymbol{\Omega}\):
\[\boldsymbol{\Omega}=\boldsymbol{\beta}_{k}\boldsymbol{\beta}_{k}^{\top}+ \boldsymbol{\Sigma}_{k}. \tag{23}\]
The question is then under which conditions can the true loading matrix \(\boldsymbol{\beta}_{0}\) be recovered from (23). Let us assume for the moment that no constraint that resolves rotational invariance is imposed on \(\boldsymbol{\beta}_{0}\) or \(\boldsymbol{\beta}_{k}\).
"Revealing the truth" in an overfitting EFA model.A fundamental problem in factor analysis is the following. If the EFA model is overfitting, i.e. \(k>r\), could we nevertheless recover the true loading matrix \(\boldsymbol{\beta}_{0}\) directly from \(\boldsymbol{\beta}_{k}\)? We will show how this can be achieved mathematically by combining the important work by Tumura and Sato (1980) with the framework of GLT structures. We have demonstrated in Section 2 using example (13) that solutions in an overfitting model can be constructed by adding spurious columns (Reiersol, 1950; Geweke and Singleton, 1980). Additional solutions are obtained as rotations of such solutions. For instance, one of the following solutions may result:
\[\tilde{\boldsymbol{\beta}}_{3}=\left(\begin{array}{cccc}0&\lambda_{11}&0\\ \beta_{23}&\lambda_{21}&0\\ 0&\lambda_{31}&0\\ 0&0&\lambda_{42}\\ 0&0&\lambda_{52}\\ 0&0&\lambda_{62}\end{array}\right),\quad\tilde{\boldsymbol{\beta}}_{3}=\left( \begin{array}{cccc}-\lambda_{11}\sin\alpha&0&\lambda_{11}\cos\alpha\\ \beta_{23}\cos\alpha-\lambda_{21}\sin\alpha&0&\lambda_{21}\cos\alpha\\ -\lambda_{31}\sin\alpha&0&\lambda_{31}\cos\alpha\\ 0&\lambda_{42}&0\\ 0&\lambda_{52}&0\\ 0&\lambda_{62}&0\end{array}\right),\]
both with the same \(\mathbf{\Sigma}_{3}\) as in (13). The first case is a signed permutation of \(\mathbf{\beta}_{3}\), while the second case combines a signed permutation of \(\mathbf{\beta}_{3}\) with a rotation of the spurious and \(\Lambda\)'s first column involving \({\bf P}_{\alpha b}\). In the first case, despite the rotation, both the spurious column and the columns of \(\Lambda\) are clearly visible, while in the second case the presence of a spurious column is by no means obvious and the columns of \(\Lambda\) are disguised.
In general, for an EFA model that is overfitting by a single column, i.e. \(k=r+1\), and \(\mathbf{\beta}_{k}\) is left unconstrained, infinitely many representations \((\mathbf{\beta}_{k},\mathbf{\Sigma}_{k})\) with covariance matrix \(\mathbf{\Omega}=\mathbf{\beta}_{k}\mathbf{\beta$ }_{k}^{\top}+\mbox{\boldmath$\Sigma}_{k}\) can be constructed in the following way. Let the first \(r\) columns of \(\mathbf{\beta}_{k}\) be equal to \(\mathbf{\beta}_{0}\) and append an extra column to its right. In this extra column, which will be called a spurious column, add a single non-zero loading \(\beta_{l_{k},k}\) in any row \(1\leq l_{k}\leq m\) taking any value that satisfies \(0<\beta_{l_{k},k}^{2}<\sigma_{l_{k}}^{2}\); then reduce the idiosyncratic variance in row \(l_{k}\) to \(\sigma_{l_{k}}^{2}-\beta_{l_{k},k}^{2}\); and finally apply an arbitrary rotation \({\bf P}\):
\[\mathbf{\beta}_{k}=\left(\begin{array}{c|c}&0\\ \mathbf{\beta}_{0}&\beta_{l_{k},k}\\ &0\end{array}\right){\bf P},\qquad\mathbf{\Sigma}_{k}=\mbox{Diag} \big{(}\sigma_{1}^{2},\ldots,\sigma_{l_{k}}^{2}-\beta_{l_{k},k}^{2},\ldots, \sigma_{m}^{2}\big{)}\,. \tag{24}\]
Interesting questions are then the following: under which conditions is (24) an exhaustive representation of all possible solutions \(\mathbf{\beta}_{k}\) in an EFA model where the _degree of overfitting_ defined as \(s=k-r\) is equal to one? How can all solutions \(\mathbf{\beta}_{k}\) be represented if \(s>1\)?
Such identifiability problems in overfitting EFA models have been analyzed in depth by Tumura and Sato (1980). They show that a stronger condition than RD\((r,1)\) is needed for \(\mathbf{\beta}_{0}\) in the underlying variance decomposition (22) to ensure that only spurious and no additional common factors are added in the overfitting representation (23). In addition, Tumura and Sato (1980) provide a general representation of the factor loading matrix \(\mathbf{\beta}_{k}\) in overfitting representation (23) with \(k>r\).
**Theorem 6**.: (Tumura and Sato, 1980, Theorem 1) _Suppose that \(\Omega\) has a decomposition as in (22) with \(r\) factors and that for some \(S\in\mathbb{N}\) with \(m\geq 2r+S+1\) the extended row deletion property RD\((r,1+S)\) holds for \(\mathbf{\beta}_{0}\). If \(\Omega\) has another decomposition such that \(\mathbf{\Omega}=\mathbf{\beta}_{k}\mathbf{\beta$ }_{k}^{\top}+\mbox{\boldmath$\Sigma}_{k}\) where \(\mathbf{\beta}_{k}\) is a \(m\times(r+s)\)-matrix of rank \(k=r+s\) with \(1\leq s\leq S\), then there exists an orthogonal matrix \({\bf T}_{k}\) of rank \(k\) such that_
\[\mathbf{\beta}_{k}{\bf T}_{k}=\left(\begin{array}{cc} \mathbf{\beta}_{0}&{\bf M}_{s}\end{array}\right),\qquad\mathbf{\Sigma}_{k}=\mathbf{\Sigma}_{0}-{\bf M}_{s}{\bf M}_{s}^{ \top}, \tag{25}\]
_where the off-diagonal elements of \({\bf M}_{s}{\bf M}_{s}^{\top}\) are zero._
The \(m\times s\)-matrix \({\bf M}_{s}\) is a so-called _spurious factor loading matrix_ that does not contribute to explaining the covariance in \({\bf y}_{t}\), since
\[\mathbf{\beta}_{k}\mathbf{\beta}_{k}^{\top}+\mbox{\boldmath $\Sigma$}_{k}=\mathbf{\beta}_{k}{\bf T}_{k}{\bf T}_{k}{\bf T}_{k}^{ \top}\mathbf{\beta}_{k}^{\top}+\mathbf{\Sigma}_{k}=\mathbf{\beta}_{0}\mathbf{\beta}_{0}^{\top}+{\bf M}_{s}{\bf M}_{s}^ {\top}+(\mathbf{\Sigma}_{0}-{\bf M}_{s}{\bf M}_{s}^{\top})=\mathbf{\beta}_{0}\mathbf{\beta}_{0}^{\top}+\mathbf{\Sigma}_ {0}=\mathbf{\Omega}.\]
While this theorem is an important result, without imposing further structure on the factor loading matrix \(\mathbf{\beta}_{k}\) in the EFA model it cannot be applied immediately to "recover the truth", as the separation of \(\mathbf{\beta}_{k}\) into the true factor loading matrix \(\mathbf{\beta}_{0}\) and the spurious factor loading matrix \(\mathbf{M}_{s}\) is possible only up to a rotation \(\mathbf{T}_{k}\) of \(\mathbf{\beta}_{k}\). However, the truth" in an overfitting EFA model can be recovered, if Tumura and Sato (1980, Theorem 1) is applied within the class of unordered GLT structures introduced in this paper. If we assume that \(\mathbf{\Lambda}\) is a GLT structure which satisfies the extended row deletion property RD\((r,1+S)\), we prove in Theorem 7 the following result. If \(\mathbf{\beta}_{k}\) in an overfitting EFA model is an unordered GLT structure, then \(\mathbf{\beta}_{k}\) has a representation, where the rotation in (25) is a signed permutation \(\mathbf{T}_{k}=\mathbf{P}_{\pm}\mathbf{P}_{\rho}\). Hence, spurious factors in \(\mathbf{\beta}_{k}\) are easily spotted and \(\mathbf{\Lambda}\) can be recovered immediately from \(\mathbf{\beta}_{k}\).
**Definition 8** (Unordered spurious GLT structure).: A \(m\times s\) unordered GLT factor loading matrix \(\mathbf{M}_{s}^{\Lambda}\) with pivots rows \(\{n_{1},\ldots,n_{s}\}\) is an unordered spurious GLT structure if all columns are spurious columns with a single nonzero loading in the corresponding pivot row.
**Theorem 7**.: _Let \(\mathbf{\Lambda}\) be a \(m\times r\) GLT factor loading matrix with pivot rows \(l_{1}<\ldots<l_{r}\) which obeys the extended row deletion property RD\((r,1+S)\) for some \(S\in\mathbb{N}\). Assume that the \(m\times k\) matrix \(\mathbf{\beta}_{k}\) in the EFA variance decomposition \(\mathbf{\Omega}=\mathbf{\beta}_{k}\mathbf{\beta}_{k}^{\top}+\mathbf{\Sigma}_{k}\) is of rank \(\text{\rm rk}\left(\mathbf{\beta}_{k}\right)=k=r+s\), where \(1\leq s\leq S\). If \(\mathbf{\beta}_{k}\) is restricted to be an unordered GLT matrix, then (25) reduces to_
\[\mathbf{\beta}_{k}\mathbf{P}_{\pm}\mathbf{P}_{\rho}=\left(\begin{array}{cc} \mathbf{\Lambda}&\mathbf{M}_{s}^{\Lambda}\end{array}\right),\quad\mathbf{\Sigma}_ {k}=\mathbf{\Sigma}_{0}-\mathbf{M}_{s}^{\Lambda}(\mathbf{M}_{s}^{\Lambda})^{\top},\]
_where \(\mathbf{M}_{s}^{\Lambda}\) is a spurious ordered GLT structure with pivot rows \(n_{1}<\ldots<n_{s}\) which are distinct from the \(r\) pivot rows in \(\mathbf{\Lambda}\). Hence, \(r\) columns of \(\mathbf{\beta}_{k}\) are a signed permutation of the true loading matrix \(\mathbf{\Lambda}\), while the remaining \(s\) columns of \(\mathbf{\beta}_{k}\) are an unordered spurious GLT structure with pivots \(n_{1},\ldots,n_{s}\)._
See Appendix A for a proof.
Identifying irrelevant variables.In applied factor analysis, the assumption that each measurement \(y_{it}\) is correlated with at least one other measurement is too restrictive, because irrelevant measurements might be present that are uncorrelated with all the other measurements. As argued by Boivin and Ng (2006), it is useful to identify such variables. Within the framework of sparse factor analysis, irrelevant variables are identified in Kaufmann and Schuhmacher (2017) by exploring the sparsity matrix \(\mathbf{\delta}\) of a factor loading matrix \(\mathbf{\beta}_{0}\) with respect to zero rows. Since \(\text{\rm Cov}(y_{it},y_{lt})=0\) for all \(l\neq i\), if the entire \(i\)th row of \(\mathbf{\beta}_{0}\) is zero (see also (3)), the presence of \(m_{0}\) irrelevant measurements causes the corresponding \(m_{0}\) rows of \(\mathbf{\beta}_{0}\) and \(\mathbf{\delta}\) to be zero. As before, we assume that the variance decomposition (22) of the underlying basic factor model is variance identified.
Let us first investigate identification of the zero rows in \(\mathbf{\beta}_{0}\) and the corresponding sparsity matrix \(\mathbf{\delta}\) for the case that the assumed and the true number of factors in the EFA model (21) are identical, i.e.
\(k=r\). Since variance identification of (22) in the underlying model holds, we obtain that \(\mathbf{\Sigma}_{0}=\mathbf{\Sigma}_{r}\), \(\mathbf{\beta}_{0}\mathbf{\beta}_{0}^{\top}=\mathbf{\beta}_{r}\mathbf{\beta}_{r}^{\top}\) and \(\mathbf{\beta}_{r}=\mathbf{\beta}_{0}\mathbf{P}\) is a rotation of \(\mathbf{\beta}_{0}\). Therefore, the position of the zero rows both in \(\mathbf{\beta}_{0}\) and \(\mathbf{\beta}_{r}\) are identical and all irrelevant variables can be identified from \(\mathbf{\beta}_{r}\) or the corresponding sparsity matrix \(\mathbf{\delta}\), regardless of the strategy toward rotational invariance.
What makes this task challenging in applied factor analysis is that in practice only the total number \(m\) of observations is known, whereas the investigator is ignorant both about the number of factors \(r\) and the number of irrelevant measurements \(m_{0}\). In such a situation, variance identification of \(\mathbf{\Sigma}_{k}\) for an EFA model with \(k\) assumed factors is easily lost if too many irrelevant variables are included in relation to \(k\). These considerations have important implication for exploratory factor analysis. While the investigator can choose \(k\), she is ignorant about the number of irrelevant variables and the recovered model might not be variance identified. For this reason, it is relevant to verify in any case that the solution \(\mathbf{\beta}_{k}\) obtained from any EFA model satisfies variance identification.
Under **AR** this means that the loading matrix of the correlated measurements, i.e. the non-zero rows of \(\mathbf{\beta}_{0}\), satisfies RD\((r,1)\). If variance identification relies on **AR**, then a minimum requirement for \(\mathbf{\beta}_{k}\) to satisfy RD\((k,1)\) is that \(2k+1\leq m-m_{0}\). If no irrelevant measurement are present, then the well-known upper bound \(k\leq\frac{m-1}{2}\) results. However, if irrelevant measurements are present, then there is a trade-off between \(m_{0}\) and \(k\): the more irrelevant measurements are included, the smaller the maximum number of assumed factors \(k\) has to be. Hence, the presence of \(m_{0}\) zero rows in \(\mathbf{\beta}_{0}\), while \(\mathbf{\beta}_{k}\) in the EFA model is allowed to have \(m\) potentially non-zero rows requires stronger conditions for variance identification than for an EFA model where the underlying loading matrix \(\mathbf{\beta}_{0}\) contains only non-zero rows. More specifically, for a given number \(m_{0}\in\mathbb{N}\) of irrelevant measurements, variance identification necessitates the more stringent upper bound \(k\leq\frac{m-m_{0}-1}{2}\), where \(m-m_{0}\) is the number of non-zero rows. On the other hand, for a given number of factors \(k\) in an EFA model, the maximum number of irrelevant measurements that can be included is given by \(m_{0}\leq m-(2k+1)\).
Identifying the number of factors through an EFA model.Let us assume that the variance decomposition (22) of the unknown underlying basic factor model is identified. As shown by Reiersol (1950), the true number of factors \(r\) is equal to the smallest value \(k\) that satisfies (23). However, in practice, it is not obvious how to solve this "minimization" problem. As the following considerations show, verifying variance identification for \(\mathbf{\beta}_{k}\) in an EFA model can be helpful in this regard.
If \(r\) is unknown, then we need to find a decomposition of \(\mathbf{\Omega}\) as in (23) where \(\mathbf{\Sigma}_{k}\) is variance identified. Since the true underlying decomposition (22) is variance identified, any solution where \(\mathbf{\Sigma}_{k}\) is not variance identified can be rejected. As has been discussed above, any overfitting EFA model, where \(k>r\), has infinitely many decompositions of \(\mathbf{\Omega}\) and therefore is never variance identified. Hence, if any solution \(\mathbf{\Sigma}_{k}\) of an EFA model with \(k\) assumed factors is not variance identified, then we can deduce that \(k\) is bigger than \(r\). On the other hand, if variance identification holds for \(\mathbf{\Sigma}_{k}\), then the decompositions (22)
and (23) are equivalent and we can conclude that \(r=k\), \(\mathbf{\Sigma}_{0}=\mathbf{\Sigma}_{k}\) and therefore \(\mathbf{\beta}_{0}\mathbf{\beta}_{0}^{\top}=\mathbf{\beta}_{k}\mathbf{\beta}_{k}^{\top}\). As a consequence, we can identify the true loading matrix \(\mathbf{\beta}_{0}=\mathbf{\beta}_{k}\mathbf{P}\) from \(\mathbf{\beta}_{k}\) mathematically up to a rotation \(\mathbf{P}\)(Anderson and Rubin, 1956, Lemma 5.1).
This insight shows that verifying variance identification is relevant beyond resolving rotational invariance and is essential for recovering the true number of factors. This has important implications for applied factor analysis. Most importantly, the _rank_ or the _number of non-zero columns_ of a factor loading matrix \(\mathbf{\beta}_{k}\) recovered from an EFA model with _assumed_ number \(k\) of factors might overfit the true number of factors \(r\), if variance identification for \(\mathbf{\Sigma}_{k}\) is not satisfied and the variance decomposition is not unique. Hence, extracting the number of factors from an EFA model makes only sense in connection with ensuring that variance identification holds.
## 6 Illustrative application
### Sparse Bayesian factor analysis
A common goal of Bayesian factor analysis is to identify the unknown factor dimension \(r\) of a factor loading matrix from the overfitting factor model (21) with potentially \(k>r\) factors, see, among many others, Rockova and George (2017), Fruhwirth-Schnatter and Lopes (2018), and Ohn and Kim (2022). Often, spike-and slab priors are employed, where the elements \(\beta_{ij}\) of the loading matrix \(\mathbf{\beta}_{k}\) apriori are allowed to be exactly zero with positive probability. This is achieved through a prior on the corresponding \(m\times k\) sparsity matrix \(\mathbf{\delta}_{k}\). In each column \(j\), the indicators \(\delta_{ij}\) are active apriori with a column-specific probability \(\tau_{j}\), i.e. \(\text{Pr}(\delta_{ij}=1|\tau_{j})=\tau_{j}\) for \(i=1,\ldots,m\), where the slab probabilities \(\tau_{1},\ldots,\tau_{k}\) arise from an exchangeable shrinkage prior:
\[\tau_{j}|k\sim\mathcal{B}\left(\gamma\frac{\alpha}{k},\gamma\right),\quad j= 1,\ldots,k. \tag{26}\]
If \(\gamma\) is unknown, then (26) is called a two-parameter-beta (2PB) prior. If \(\gamma=1\), then (26) is called a one-parameter-beta (1PB) prior and takes the form:
\[\tau_{j}|k\sim\mathcal{B}\left(\frac{\alpha}{k},1\right),\quad j=1,\ldots,k. \tag{27}\]
Prior (27) converges to the Indian buffet process prior (Teh et al., 2007) for \(k\rightarrow\infty\). As recently shown by Fruhwirth-Schnatter (2022), prior (27) has a representation as a cumulative shrinkage process (CUSP) prior (Legramanti et al., 2020).
This specification leads to a Dirac-spike-and-slab prior for the factor loadings,
\[\beta_{ij}|\kappa,\sigma_{i}^{2},\tau_{j}\sim(1-\tau_{j})\Delta_{ 0}+\tau_{j}\mathcal{N}\left(0,\kappa\sigma_{i}^{2}\right), \tag{28}\] \[\sigma_{i}^{2}\sim\mathcal{G}^{-1}\left(c^{\sigma},b^{\sigma} \right),\quad\kappa\sim\mathcal{G}^{-1}\left(c^{\kappa},b^{\kappa}\right),\]
where the columns of the loading matrix are increasingly pulled toward 0 as the column index increases. In (28), a Gaussian slab distribution is assumed with a random global shrinkage parameter \(\kappa\), although other slab distributions are possible, see e.g. Zhao et al. (2016) and Fruhwirth-Schnatter et al. (2022).
The hyperparameters \(\alpha\) and \(\gamma\) are instrumental in controlling prior sparsity. Choosing \(\alpha=k\) and \(\gamma=1\) leads to a uniform distribution for \(\tau_{j}\), with the _smallest_ slab probability \(\tau_{(1)}=\min_{j=1,\ldots,k}\tau_{j}\) also being uniform, while the largest slab probability \(\tau_{(k)}=\max_{j=1,\ldots,k}\tau_{j}\sim\mathcal{B}\left(k,1\right)\), see Fruhwirth-Schnatter (2022). Such a prior is likely to overfit the number of factors, regardless of all other assumptions. A prior with \(\alpha<k\) and \(\gamma=1\) induces sparsity, since the _largest_ slab probability \(\tau_{(k)}\sim\mathcal{B}\left(\alpha,1\right)\), while the smallest slab probability \(\tau_{(1)}\sim\mathcal{B}\left(\alpha/k,1\right)\). To control the small probabilities, which are important in identifying the true number of factors, \(\alpha\) is assumed to be a random parameter and learnt from the data under the prior \(\alpha\sim\mathcal{G}\left(a^{\alpha},b^{\alpha}\right)\). \(\gamma\) controls the prior information in (26). Priors with \(\gamma>1\) and \(\gamma<1\), respectively, decrease and increase the difference between \(\tau_{(1)}\) and \(\tau_{(k)}\). Typically, \(\gamma\) is unknown and is estimated from the data using the prior \(\gamma\sim\mathcal{G}\left(a^{\gamma},b^{\gamma}\right)\).
MCMC estimation.For a given choice of hyperparameters, Markov chain Monte Carlo (MCMC) methods are applied to sample from the posterior distribution \(p(\boldsymbol{\beta}_{k},\boldsymbol{\Sigma}_{k},\boldsymbol{\delta}_{k}| \mathbf{y})\), given \(T\) multivariate observations \(\mathbf{y}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{T})\), see e.g. Kaufmann and Schuhmacher (2019) among many others. In Fruhwirth-Schnatter et al. (2022), such a sampler is developed for GLT factor models. To move between factor models of different factor dimension, Fruhwirth-Schnatter et al. (2022) exploit Theorem 7 to add and delete spurious columns through a reversible jump MCMC (RJMCMC) sampler. For each posterior draw \(\boldsymbol{\beta}_{k}\), the active columns \(\boldsymbol{\beta}_{r}\) (i.e. all columns with at least 2 non-zero elements) and the corresponding sparsity matrix \(\boldsymbol{\delta}_{r}\) are determined. If \(\boldsymbol{\delta}_{r}\) satisfies the counting rule \(\text{CR}(r,1)\), then \(\boldsymbol{\beta}_{r}\) is a signed permutation of \(\boldsymbol{\Lambda}\) with the corresponding covariance matrix \(\boldsymbol{\Sigma}_{r}=\boldsymbol{\Sigma}_{k}+\mathbf{M}_{s}^{\Lambda}( \mathbf{M}_{s}^{\Lambda})^{\top}\), where \(\mathbf{M}_{s}^{\Lambda}\) contains the spurious columns of \(\boldsymbol{\beta}_{k}\). These variance identified draws are kept for further inference and the number of columns of \(\boldsymbol{\beta}_{r}\) is considered a posterior draw of the unknown factor dimension \(r\). This algorithm is easily extended to EFA models without any constraints.
### An illustrative simulation study
For illustration, we perform a simulation study and consider three different data scenarios with \(m=30\) and \(T=150\). In all three scenarios, \(r_{\text{true}}=5\) factors are assumed, however, the zero/non-zero pattern is quite different. The first setting is a _dedicated_ factor model, where the first 6 variables load on factor 1, the next 6 variables load on factor 2, and so forth, and the final 6 variables load on factor 5. A dedicated factor model has a GLT structure by definition. The second scenario is a _block_ factor model, where the first 15 observations load only on factor 1 and 2, while the remaining 15 observations only load on factor 3, 4 and 5 and the covariance matrix has a block-diagonal structure. All loadings within a block are non
zero. The third scenario is a _dense_ factor loading matrix without any zero loadings and the corresponding GLT representation has a PLT structure. For all three scenarios, non-zero factor loadings are drawn as \(\lambda_{ij}=(-1)^{b_{ij}}(1+0.1\mathcal{N}\left(0,1\right))\), where the exponent \(b_{ij}\) is a binary variable with \(\text{Pr}(b_{ij}=1)=0.2\). In all three scenarios, \(\mathbf{\Sigma}_{0}=\mathbf{I}\). 21 data sets are sampled under these three scenarios from the Gaussian factor model (1).
A sparse overfitting factor model is fitted to each simulated data set with the maximum number of factors \(k=14\) being equal to the upper bound. Regarding the structure, we compare a model where the non-zero columns of \(\boldsymbol{\beta}_{k}\) are left unconstrained with a model where a GLT structure is imposed. Inference is based on the Bayesian approach described in Section 6.1 with two different shrinkage priors on the sparsity matrix \(\boldsymbol{\delta}_{k}\): the 1PB prior (27) with random hyperparameter \(\alpha\sim\mathcal{G}\left(6,2\right)\) and the 2PB prior (26) with random hyperparameters \(\alpha\sim\mathcal{G}\left(6,2\right)\) and \(\gamma\sim\mathcal{G}\left(6,6\right)\). MCMC estimation is run for 3000
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & & & \(M_{V}\) & \(\hat{r}\) & \(p(\hat{r}=r_{\text{\tiny true}}|\mathbf{y})\) & MSE\({}_{\Omega}\) \\ Scenario & & Prior & Med(QR) & Med(QR) & Med(QR) & Med(QR) \\ \hline \hline \multirow{2}{*}{Dedic} & GLT & 1PB & 97.0 (91.5,98.3) & 5 (5,5) & 0.90 (0.94,0.99) & 0.018 (0.014,0.030) \\ & & 2PB & 97.6 (87.7,98.9) & 5 (5,5) & 0.99 (0.83,1.00) & 0.019 (0.016,0.027) \\ \cline{2-7} & EFA & 1PB & - & 5 (5,6) & 0.66 (0.09,0.79) & 0.020 (0.015,0.026) \\ & & 2PB & - & 5 (5,6) & 0.69 (0.36,0.80) & 0.019 (0.014,0.024) \\ \cline{2-7} & EFA-V & 1PB & 80.3 (49.8,87.0) & 5 (5,6) & 0.81 (0.17,0.91) & 0.020 (0.015,0.026) \\ & & 2PB & 82.6 (63.4,87.9) & 5 (5,6) & 0.84 (0.53,0.92) & 0.019 (0.014,0.024) \\ \hline \multirow{2}{*}{Block} & GLT & 1PB & 96.5 (39.4,98.9) & 5 (5,5) & 0.99 (0.28,0.99) & 0.12 (0.08,0.18) \\ & & 2PB & 98.7 (61.9,99.4) & 5 (5,5) & 0.99 (0.54,1.00) & 0.10 (0.08,0.14) \\ \cline{2-7} & EFA & 1PB & - & 5 (4,5) & 0.78 (0.22,0.88) & 0.14 (0.11,0.20) \\ & & 2PB & - & 5 (4,5) & 0.79 (0.08,0.89) & 0.12 (0.08,0.24) \\ \cline{2-7} & EFA-V & 1PB & 87.0 (55.0,91.5) & 5 (4,5) & 0.89 (0.09,0.96) & 0.14 (0.11,0.20) \\ & & 2PB & 85.9 (28.3,90.4) & 5 (4,5) & 0.92 (0.03,0.97) & 0.12 (0.08,0.24) \\ \hline \multirow{2}{*}{Dense} & GLT & 1PB & 95.7 (84.6,98.6) & 5 (5,5) & 0.98 (0.92,0.99) & 0.67 (0.44,1.12) \\ & & 2PB & 99.4 (90.8,99.8) & 5 (5,5) & 0.99 (0.93,1.00) & 0.68 (0.51,1.18) \\ \cline{2-7} & EFA & 1PB & - & 5 (5,6) & 0.76 (0.43,0.85) & 0.54 (0.39,0.76) \\ & & 2PB & - & 5 (5,5) & 0.80 (0.66,0.91) & 0.59 (0.43,0.90) \\ \cline{2-7} & EFA-V & 1PB & 84.4 (76.0,90.2) & 5 (5,6) & 0.89 (0.57,0.95) & 0.54 (0.39,0.76) \\ & & 2PB & 89.7 (80.4,93.9) & 5 (5,5) & 0.93 (0.77,0.98) & 0.59 (0.43,0.90) \\ \hline \hline \end{tabular}
* Med is the median and QR are the 5% and the 95% quantile of the various statistics over the 21 simulated data sets.
\end{table}
Table 1: Sparse Bayesian factor analysis under GLT and unconstrained structures (EFA) under a 1PB prior (\(\alpha\sim\mathcal{G}\left(6,2\right)\)) and a 2PB prior (\(\alpha\sim\mathcal{G}\left(6,2\right),\gamma\sim\mathcal{G}\left(6,6\right)\)). GLT and EFA-V use only the variance identified draws (\(M_{V}\) is the percentage of variance identified draws), EFA uses all posterior draws.
iterations after a burn-in of 2000 using the RJMCMC algorithm of Fruhwirth-Schnatter et al. (2022).
For each of the 21 simulated data sets, we evaluate all 12 combinations of data scenarios, structural constraints (GLT versus unconstrained) and priors on the sparsity matrix (1PB versus 2PB) through Monte Carlo estimates of following statistics: to assess the performance in estimating the true number \(r_{\text{true}}\) of factors, we consider the mode \(\hat{r}\) of the posterior distribution \(p(r|\mathbf{y})\) and the magnitude of the posterior ordinate \(p(\hat{r}=r_{\text{true}}|\mathbf{y})\). To assess the accuracy in estimating the covariance matrix \(\mathbf{\Omega}\) of the data, we consider the mean squared error (MSE) defined by
\[\text{MSE}_{\Omega}=\sum_{i}\sum_{\ell\leq i}\text{E}((\mathbf{\Omega}_{r,i \ell}-\mathbf{\Omega}_{i\ell})^{2}\,|\mathbf{y})/(m(m+1)/2),\]
which accounts both for posterior variance and bias of the estimated covariance matrix \(\mathbf{\Omega}_{r}=\boldsymbol{\beta}_{r}\boldsymbol{\beta}_{r}^{\top}+ \mathbf{\Sigma}_{r}\) in comparison to the true matrix. Table 1 reports, for all 12 combinations the median, the 5% and the 95% quantile of these statistics across all simulated data sets. For inference under GLT structures, posterior draws which are not variance identified have been removed. The fraction of variance identified draws is also reported in the table and is in general pretty high. As common for sparse Bayesian factor analysis with unstructured loading matrices, the posterior draws are not screened for variance identification and inference is based on all draws.
Some interesting conclusions can be drawn from Table 1. First of all, sparse Bayesian factor analysis under the GLT constraint successfully recovers the true number of factors in all three scenarios. For most of the simulated data sets, the posterior ordinate \(p(\hat{r}=r_{\text{true}}|\mathbf{y})\) is larger than 0.9. Sparse Bayesian
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & & \(M_{V}\) & \(\hat{r}\) & \(p(\hat{r}=r_{\text{true}}|\mathbf{y})\) & \(\text{MSE}_{\Omega}\) \\ Scenario & & Med(QR) & Med(QR) & Med(QR) & Med(QR) \\ \hline Dedic & GLT & 50.6 (32.5,62.2) & 6 (5,7) & 0.38 (0.03,0.68) & 0.02 (0.01,0.03) \\ & EFA & - & 7 (6,8) & 0.06 (0,0.12) & 0.02 (0.02,0.03) \\ & EFA-V & 36.6 (24.4,44.8) & 6 (5,7) & 0.17 (0,0.44) & 0.02 (0.02,0.03) \\ \hline Block & GLT & 53.3 (29.3,71.3) & 5 (4,6) & 0.62 (0.18,0.85) & 0.11 (0.08,0.17) \\ & EFA & - & 6 (6,7) & 0.21 (0.00,0.35) & 0.13 (0.10,0.19) \\ & EFA-V & 43.3 (17.3,52.2) & 5 (5,7) & 0.47 (0.01,0.62) & 0.13 (0.11,0.19) \\ \hline Dense & GLT & 62.4 (45.8,71.3) & 5 (5,6) & 0.69 (0.05,0.84) & 0.62 (0.44,1.31) \\ & EFA & - & 6 (6,7) & 0.12 (0.03,0.34) & 0.52 (0.42,0.74) \\ & EFA-V & 48.1 (30.1,56.7) & 5 (5,6) & 0.46 (0.10,0.63) & 0.52 (0.42,0.73) \\ \hline \hline \end{tabular} Med is the median and QR are the 5% and the 95% quantile of the various statistics over the 21 simulated data sets.
\end{table}
Table 2: Bayesian factor analysis under GLT and unconstrained structures (EFA) under a uniform prior on \(\tau_{j}\). GLT and EFA-V use only the variance identified draws (\(M_{V}\) is the percentage of variance identified draws), EFA uses all posterior draws.
factor analysis with unstructured loading matrices is also quite successful in recovering \(r_{\text{true}}\), but with less confidence. Both over- and underfitting can be observed and the posterior ordinate \(p(\hat{r}=r_{\text{true}}|\mathbf{y})\) is much smaller than under a GLT structure. For both structures, the 2PB prior yields higher posterior ordinates than the 1PB prior.
Recently, Hosszejni and Fruhwirth-Schnatter (2022) proved that the counting rule CR\((r,1)\) can also be applied to verify variance identification for unconstrained loading matrices. As is evident from Table 1, the fraction of variance identified draws is however, much smaller than under GLT structures. Nevertheless, inference w.r.t. to the number of factors can be improved also for an unconstrained EFA model by rejecting all draws that do not obey the counting rule CR\((r,1)\).
It should be emphasized that the ability of Bayesian factor analysis to recover the number of factors from an overfitting model is closely tied to choosing a suitable shrinkage prior on the sparsity matrix \(\boldsymbol{\delta}_{k}\). For illustration, we also consider a uniform prior for \(\tau_{j}\) and report the corresponding statistics in Table 2. As expected from the considerations in Section 6.1, considerable overfitting is observed for all simulated data sets, regardless of the chosen structure.
## 7 Concluding remarks
We have given a full and comprehensive mathematical treatment to generalized lower triangular (GLT) structures, a new identification strategy that improves on the popular positive lower triangular (PLT) assumption for factor loadings matrices. We have proven that GLT retains PLT's good properties: uniqueness and rotational invariance. At the same time and unlike PLT, GLT exists for any factor loadings matrix; i.e. it is not a restrictive assumption. Furthermore, we have shown that verifying variance identification under GLT structures is simple and is based purely on the zero-nonzero pattern of the factor loadings matrix. Additionally, we have embedded the GLT model class into exploratory factor analysis with unknown factor dimension and discussed how easily spurious factors and irrelevant variables are recognized in that setup. At the end, we demonstrated the power of the framework in a simulation study.
|
2301.09067 | Polystability of Stokes representations and differential Galois groups | Polystability of (twisted) Stokes representations (i.e. wild monodromy
representations) will be characterised, in terms of the corresponding
differential Galois group (generalising the Zariski closure of the monodromy
group in the tame case). This extends some results of Richardson. Further, the
intrinsic approach to such results will be established, in terms of reductions
of Stokes local systems. | Philip Boalch, Daisuke Yamakawa | 2023-01-22T07:31:06Z | http://arxiv.org/abs/2301.09067v1 | # Polystability of Stokes representations and differential Galois groups
###### Abstract.
Polystability of (twisted) Stokes representations (i.e. wild monodromy representations) will be characterised, in terms of the corresponding differential Galois group (generalising the Zariski closure of the monodromy group in the tame case). This extends some results of Richardson. Further, the intrinsic approach to such results will be established, in terms of reductions of Stokes local systems.
###### Contents
* 1 Introduction
* 2 Twisted version of Richardson's results
* 3 More general set-up
* 4 Application to wild character varieties
* 5 Stability and polystability of Stokes local systems
## 1. Introduction
We continue our investigations of the nonabelian moduli spaces in 2d gauge theory (the theory of connections on curves). This article is essentially an appendix to the paper [13] that completed the construction of the wild character varieties of smooth curves as affine algebraic Poisson varieties (completing the sequence [5, 6, 8]). This is a purely algebro-geometric approach, complementary to the earlier analytic approaches [3, 1]. Here we will give an intrinsic characterisation of the points of the wild character varieties, generalising existing results in the tame case, and characterise the stable points (generalising a result of [8] in the untwisted wild case). For more background and applications see the reviews in [7, 9, 11] (the first large class of examples of wild character varieties is due to Birkhoff [2] and the simplest case underlies \(U_{q}(\mathfrak{g})\), [7] SS4).
First we will recall the basic statements in the tame case. Let \(G\) be a connected complex reductive group, such as \(\operatorname{GL}_{n}(\mathbb{C})\), and let \(\Sigma^{\circ}\) be a smooth complex algebraic curve. Thus \(\Sigma^{\circ}=\Sigma\setminus\alpha\) for some smooth compact complex algebraic curve \(\Sigma\) (i.e. a
## 1. Introduction
Let \(G\) be a group group and let \(\pi_{1}(\Sigma^{\circ},b)\) be a group homomorphism from \(G\) to \(\pi_{1}(\Sigma^{\circ},b)\). We say that \(G\) is _\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-\(G\)-_\(G\)-_\(G\)-_\(G\)-\(G\)-_\(G\)-_\(G\)-_\(G\)-\(G\)-_\(G\)-_\(G\)-\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_\(G\)-_
[11]). The basic notions from the tame case are generalised as follows:
\[\text{local system} \leadsto\text{Stokes local system}\] fundamental group (or groupoid) \[\leadsto\text{wild surface group (or groupoid)}\] fundamental group representation \[\leadsto\text{Stokes representation}.\]
Once these generalisations are understood then the story proceeds similarly to the tame case (defining a wild representation variety \(\mathcal{R}\) parameterising framed Stokes local systems, and then acting by a reductive group to forget the framings). A key novelty in the wild setting is that there is a breaking of structure group ("fission") near the marked points so the group acting involves a (reductive) subgroup of \(G\). This is intimately related to extra generators in Ramis' description of the differential Galois group [25], generalising Schlesinger's density theorem. However, as we will recall, the basic feature of the tame case remains, that \(\mathcal{R}\) can be written explicitly in terms of a product of simpler pieces (doubles \(\mathbb{D}\) or fission spaces \(\mathcal{A}\)), one for each marked point or handle on \(\Sigma\):
\[\mathcal{R}\ \cong\ (\mathbb{D}^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Any framed Stokes \(G\)-local system \(\mathbb{L}\) determines a Stokes representation \(\rho=\rho_{\mathbb{L}}\in\mathcal{R}(\mathbf{\Sigma},\beta)\). Two Stokes local systems are isomorphic if and only if their Stokes representations are in the same \(\mathbf{H}\)-orbit in \(\mathcal{R}(\mathbf{\Sigma},\beta)\).
The notion of \(\rho\) being polystable or stable for the action of \(\mathbf{H}\) on \(\mathcal{R}\) is well-defined, as for any action of a reductive group on an affine variety (as in[26]). The points of the Poisson wild character variety \(\mathcal{M}_{\mathrm{B}}(\mathbf{\Sigma})\) are the polystable (i.e. closed) \(\mathbf{H}\)-orbits.
On the other hand \(\rho\) determines the Galois group \(\mathrm{Gal}(\rho)\subset G\), as in Ramis' density theorem [25] (see SS5.2 below). It involves not just the usual monodromy, but also the formal monodromy, Stokes automorphisms and the Ramis tori.
Finally we can define \(\mathbb{L}\) to be irreducible if it has no proper parabolic reductions, and to be reductive (or "semsimple") if it has an irreducible Levi reduction.
**Theorem 2**.: _Let \(\mathbb{L}\) be a Stokes local system on \(\mathbf{\Sigma}\), and let \(\rho\) be its Stokes representation. The following are equivalent:_
\(\bullet 1)\)_\(\rho\) is polystable,_
\(\bullet 2)\)_\(\mathrm{Gal}(\rho)\) is linearly reductive,_
\(\bullet 3)\)_\(\mathbb{L}\) is a semisimple Stokes local system._
In the tame case (with each irregular class trivial) the groupoid \(\Pi\) becomes Poincare's fundamental groupoid (with a finite number of basepoints), and the theorem is already known [26].
Further we will consider stability (not just polystability). This requires possibly adding a few extra punctures to control the kernel of the action, but no generality is lost (see the discussion after (5)).
**Theorem 3**.: _Let \(\mathbb{L}\) be a Stokes local system on \(\mathbf{\Sigma}\), and let \(\rho\) be its Stokes representation. The following are equivalent:_
\(\bullet 4)\)_\(\rho\) is a stable point of \(\mathcal{R}(\mathbf{\Sigma},\beta)\),_
\(\bullet 5)\)_\(\mathrm{Gal}(\rho)\) is not contained in a proper parabolic subgroup of \(G\),_
\(\bullet 6)\)_\(\mathbb{L}\) is irreducible._
This result was already established in [8] in the case where each irregular class was not twisted.
Recall from [13] that two types of twist are possible: the formal twists (twisted irregular classes, as above), and also interior twists, over the interior of the curve, where we start with a local system of groups \(\mathcal{G}\to\Sigma^{\circ}\), with each fibre isomorphic to \(G\). Similarly we will establish the analogous results in this fully twisted setting. We will suppose that \(\mathcal{G}\) is "out-finite" in the sense of SS5.1 below.
The first main difference (in the presence of interior twists) is that the wild representation variety is replaced by a space of twisted Stokes representations:
\[\mathcal{R}=\mathrm{THom}_{\mathbb{S}}(\Pi,G)\subset\mathrm{Hom}(\Pi,G\ltimes \mathrm{Aut}(G))\]
which is an affine variety equipped with an action of a complex reductive group \(\mathbf{H}\). Secondly \(\operatorname{Gal}(\rho)\) is not a subgroup of \(G\), but rather it comes with a homomorphism \(\operatorname{Gal}(\rho)\to\operatorname{Aut}(G)\), so naturally acts on \(G\) by automorphisms. The image of \(\operatorname{Gal}(\rho)\) in \(\operatorname{Aut}(G)\) will be denoted \(\overline{\operatorname{Gal}}(\rho)\). Then we can define irreducibility (SS5.3) and semisimplicity (SS5.4) for Stokes \(\mathcal{G}\)-local systems, and will prove analogues of the above results:
**Theorem 4**.: _Let \(\mathbf{\Sigma}=(\Sigma,\alpha,\Theta)\) be a wild Riemann surface with group \(\mathcal{G}\to\Sigma^{\circ}\). Let \(\mathbb{L}\) be a Stokes \(\mathcal{G}\)-local system on \(\mathbf{\Sigma}\), and let \(\rho\in\operatorname{THom}_{\mathbb{S}}(\Pi,G)\) be its twisted Stokes representation. The following three conditions are equivalent:_
* \(\rho\) _is a polystable point of_ \(\mathcal{R}(\mathbf{\Sigma},\beta)=\operatorname{THom}_{\mathbb{S}}(\Pi,G)\)_,_
* \(\overline{\operatorname{Gal}}(\rho)\) _is linearly reductive,_
* \(\mathbb{L}\) _is a semisimple Stokes_ \(\mathcal{G}\)_-local system._
_Moreover the following three conditions are also equivalent:_
* \(\rho\) _is a stable point of_ \(\mathcal{R}(\mathbf{\Sigma},\beta)=\operatorname{THom}_{\mathbb{S}}(\Pi,G)\)_,_
* \(\overline{\operatorname{Gal}}(\rho)\subset\operatorname{Aut}(G)\) _does not preserve a proper parabolic subgroup of_ \(G\)_,_
* \(\mathbb{L}\) _is an irreducible Stokes_ \(\mathcal{G}\)_-local system._
In the set-up of Theorems 2,3 with \(\mathcal{G}\) constant it is true that \(a^{\prime})\Leftrightarrow a)\) for all \(a=1,2,\ldots 6\). Thus Thm. 4 implies both Theorems 2,3.
### Layout of the article
Sections 2, 3 generalise some of Richardson's results, in two steps. These are extrinsic results, and may well have further applications, beyond the wild character varieties of curves.
Section 4 then applies these results to the spaces of Stokes representations leading to the wild character varieties \(\mathcal{M}_{\operatorname{B}}(\mathbf{\Sigma})=\mathcal{R}/\mathbf{H}\). The main results are the equivalences \(1^{\prime})\Leftrightarrow 2^{\prime})\) in Cor. 25, and \(4^{\prime})\Leftrightarrow 5^{\prime})\) in Cor. 28.
Section 5 then discusses the intrinsic objects, Stokes local systems, and how stability/polystability can be read off in terms of (twisted) reductions of structure group. The main results are the equivalences \(1^{\prime})\Leftrightarrow 3^{\prime})\) and \(4^{\prime})\Leftrightarrow 6^{\prime})\) in parts 2) and 1) of Thm. 29 respectively.
### Some other directions
Note that for constant general linear groups the irreducible Stokes local systems are equivalent to the input data in the construction of wild harmonic metrics in [27]. Further note that in the fully untwisted case (but any \(G\), as in [8]) the irreducible Stokes local systems fit into the "Betti weight zero" case of the recent extension of the wild nonabelian Hodge correspondence due to Huang-Sun [20] (which looks to be in line with our general conjecture in [10] Rmk. 6, that the "good" meromorphic connections/Higgs fields on parahoric torsors are the right objects to look at).
In another direction one of the key motivations for this work is the fact that any admissible deformation of a wild Riemann surface leads to a local system of wild character varieties, and its monodromy generalises the usual mapping class group actions on character varieties in the tame case. As explained in [3, 4] the motivating examples for this whole line of thought were the Dubrovin-Ugaglia Poisson varieties whose braid group actions come from the braiding of counts of BPS states ([19] Rmk 3.10, related to earlier work of Cecotti-Vafa); these are examples of _twisted_ wild character varieties, involving the non-trivial outer automorphism of \(\operatorname{GL}_{n}(\mathbb{C})\) (so we now have an intrinsic general framework encompassing such examples). Indeed it was by forgetting this twist that the Poisson variety \(G^{*}\) underlying \(U_{q}(\mathfrak{g})\) was recognised and identified as a wild character variety [4]. See e.g. [17, 18, 12] for some recent developments concerning the generalised braid groups that act on wild character varieties, from admissible deformations of more general wild Riemann surfaces.
**Acknowledgements.** These results were completed in 2019 before the first named author moved departments and then learnt of the thesis work leading to [28], that has some overlap with this paper in the tame setting with interior twists, although expressed in a slightly different language. In the intervening years we have not yet managed to incorporate possible simplifications suggested by [28] but thought it reasonable to release our original approach anyway since the scope is larger.
## 2. Twisted version of Richardson's results
As in [26] we use the terminology that an affine algebraic group over \(\mathbb{C}\) is _reductive_ if it is connected and has trivial unipotent radical. It is _linearly reductive_ if its identity component is reductive.
Let \(G\) be a linearly reductive group over \(\mathbb{C}\). Recall that a point \(x\) of an affine \(G\)-variety is said to be _polystable_ if the orbit \(G\cdot x\) is closed. It is said to be _stable_ if it is polystable and the kernel of the action has finite index in its stabiliser \(G_{x}\).
Let \(n\) be a positive integer. For \(\mathbf{x}=(x_{i})_{i=1}^{n}\in G^{n}\), let \(A(\mathbf{x})\subset G\) be the Zariski closure of the subgroup generated by \(x_{1},x_{2},\ldots,x_{n}\). In [26], Richardson examined the simultaneous conjugation action of \(G\) on the product \(G^{n}\) and obtained the following results:
**Theorem 5** ([26, Thm. 3.6]).: _If \(G\) is linearly reductive, a point \(\mathbf{x}\in G^{n}\) is polystable if and only if \(A(\mathbf{x})\) is linearly reductive._
**Theorem 6** ([26, Thm. 4.1]).: _If \(G\) is reductive, a point \(\mathbf{x}\in G^{n}\) is stable if and only if \(A(\mathbf{x})\) is not contained in any proper parabolic subgroup of \(G\)._
In this section we will establish twisted versions of these results, in Thm. 7 and Thm. 12 (2) respectively. Two other characterisations of polystability will also be established, in Thm. 12 (1) and Cor. 18. The subsequent section (SS3) will give a further generalisation.
Assume that \(G\) is reductive and choose \(\phi_{1},\phi_{2},\ldots,\phi_{n}\in\operatorname{Aut}(G)\). Let \(\Gamma\) be the subgroup of \(\operatorname{Aut}(G)\) generated by the inner automorphism group \(\operatorname{Inn}(G)\) and \(\phi_{1},\phi_{2},\ldots,\phi_{n}\)
We assume that the quotient \(\overline{\Gamma}:=\Gamma/\operatorname{Inn}(G)\) is finite and regard \(\Gamma\) as an algebraic group with identity component \(\operatorname{Inn}(G)\cong G/Z(G)\). For any \(\phi\in\Gamma\) let \(G(\phi)\) denote the \(G\)-bitorsor/"twisted group" \(G\times\{\phi\}\subset G\ltimes\Gamma\), as in [13] SS2 (as explained there, the monodromy of a \(\mathcal{G}\)-local system lies in such a space, and \(G(\phi)\) embeds in the group of set-theoretic automorphisms of a fibre). Put
\[X=\prod_{i=1}^{n}G(\phi_{i})\subset(G\ltimes\Gamma)^{n},\]
on which \(G=G(\operatorname{Id})\) acts by the simultaneous conjugation. For \(\mathbf{x}=(x_{i})_{i=1}^{n}\in X\), let \(A(\mathbf{x})\) be the Zariski closure (in the algebraic group \(G\ltimes\Gamma\)) of the subgroup generated by \(x_{1},x_{2},\ldots,x_{n}\). Let \(\overline{A}(\mathbf{x})\subset\Gamma\) be the image of \(A(\mathbf{x})\) under the homomorphism
\[\operatorname{Ad}\colon G\ltimes\Gamma\to\Gamma,\quad x=(g,\phi)\mapsto \operatorname{Ad}(x)=\operatorname{Ad}(g)\circ\phi,\]
where \(\operatorname{Ad}(g)=g(\,\cdot\,)g^{-1}\). Note that \(\overline{A}(\mathbf{x})\) is contained in \(\operatorname{Aut}(G)\) and hence naturally acts on \(G\). Thus in turn, via \(\operatorname{Ad}\), the group \(A(\mathbf{x})\) naturally acts on \(G\).
**Theorem 7**.: _A point \(\mathbf{x}\in X\) is polystable if and only if \(\overline{A}(\mathbf{x})\) is linearly reductive._
_Remark 8_.: The notation \(A(\mathbf{x})\) for the Zariski closure presumably stems from [14] SS3, where \(A\) stands for _adherence_. This may help to avoid possible confusion (since \(\overline{A}(\mathbf{x})\) denotes the image in \(\operatorname{Aut}(G)\) here). Note that [14] SS3.3 shows that if \(H\subset G\) is any subgroup then the Zariski closure of the set \(H\) is a Zariski closed subgroup of \(G\).
To see that Thm. 7 generalises Thm. 5, first note that:
**Proposition 9**.: _Let \(\widetilde{G}\) be a linearly reductive group with identity component \(G\), and let \(A\) be a closed subgroup of \(\widetilde{G}\). Let \(\overline{A}\) be the image of \(A\) under the map \(\operatorname{Ad}\bigr{|}_{G}\colon\widetilde{G}\to\operatorname{Aut}(G)\), and regard it as a quotient algebraic group of \(A\). Then \(A\) is linearly reductive if and only if \(\overline{A}\) is linearly reductive._
**Proof.** Note that we may view \(\operatorname{Ad}\bigr{|}_{G}\) as a map of algebraic groups: \(\operatorname{Aut}(G)\cong\operatorname{Inn}(G)\ltimes\operatorname{Out}(G)\) may have an infinite number of components, but \(\operatorname{Ad}\bigr{|}_{G}(\widetilde{G})\) will only encounter a finite number of them. Let \(K\) be the kernel of the homomorphism \(A\twoheadrightarrow\overline{A}\). Note \(K\cap G\subset Z(G)\), and the identity component \(K^{0}\) of \(K\) is contained in \((\widetilde{G})^{0}=G\). Thus \(K^{0}\subset Z(G)\), which implies \(K^{0}\) is a torus and so \(K\) is linearly reductive. Now [26] 1.2.2 implies \(A\) is linearly reductive if and only if \(\overline{A}\) is linearly reductive.
**Lemma 10**.: _Let \(\widetilde{G}\) be a linearly reductive group with identity component \(G\) and let \(\mathbf{x}\) be a point in an affine \(\widetilde{G}\)-variety \(Y\). Then the following hold._
1. \(\mathbf{x}\) _is polystable for the_ \(\widetilde{G}\)_-action if and only if_ \(\mathbf{x}\) _is polystable for the_ \(G\)_-action._
2. \(\mathbf{x}\) _is stable for the_ \(\widetilde{G}\)_-action if and only if_ \(\mathbf{x}\) _is stable for the_ \(G\)_-action._
**Proof.** The \(\widetilde{G}\)-orbit \(\widetilde{G}\cdot{\bf x}\) is a disjoint union of a finite number of \(G\)-orbits of the same dimension, one of which is \(G\cdot{\bf x}\). Hence \(\widetilde{G}\cdot{\bf x}\) is closed if and only if \(G\cdot{\bf x}\) is closed. The second assertion follows from the equality \(G_{\bf x}=\widetilde{G}_{\bf x}\cap G\) for the stabilisers and a similar one for the kernels. \(\square\)
Thus in particular it is now clear that Thm. 7 generalises Thm. 5.
**Proof** (of Theorem 7). Since \(\overline{\Gamma}\) is finite, we can find a finite subgroup \(\Gamma^{\prime}\) of \(\Gamma\) such that \(\Gamma={\rm Inn}(G)\cdot\Gamma^{\prime}\) thanks to a result of Borel-Serre and Brion (see [15, 16]). For each \(i=1,2,\ldots,n\) take \(g_{i}\in G\) so that \(\phi_{i}^{\prime}:={\rm Inn}(g_{i})\circ\phi_{i}\in\Gamma^{\prime}\). Then we have isomorphisms of bitorsors
\[f_{i}\colon G(\phi_{i}^{\prime})\to G(\phi_{i}),\quad(g,\phi_{i}^{\prime}) \mapsto(gg_{i},\phi_{i})\quad(i=1,2,\ldots,n),\]
which induce a \(G\)-equivariant isomorphism
\[f\colon X^{\prime}:=\prod_{i=1}^{n}G(\phi_{i}^{\prime})\to X,\quad(x_{i}^{ \prime})_{i=1}^{n}\mapsto(f_{i}(x_{i}^{\prime}))_{i=1}^{n}.\]
Since \(f\) is equivariant, a point \({\bf x}\in X\) is stable (resp. polystable) if and only if \(f^{-1}({\bf x})\in X^{\prime}\) is stable (resp. polystable). Also, observe that \({\rm Ad}\circ f_{i}={\rm Ad}\)\((i=1,2,\ldots,n)\) and hence \(\overline{A}({\bf x})=\overline{A}(f^{-1}({\bf x}))\) for all \({\bf x}\in X\). Therefore without loss of generality we may assume that \(\phi_{i}\in\Gamma^{\prime}\) for all \(i=1,2,\ldots,n\).
Put \(\widetilde{G}=G\ltimes\Gamma^{\prime}\). It is a linearly reductive group with identity component \(G\) and \(X\) is a closed subvariety of \(\widetilde{G}^{n}\). By Theorem 5, a point \({\bf x}\in\widetilde{G}^{n}\) is polystable with respect to the simultaneous \(\widetilde{G}\)-conjugation if and only if \(A({\bf x})\subset\widetilde{G}\) is linearly reductive. By Prop. 9 this happens if and only if \(\overline{A}({\bf x})\) is linearly reductive. Together with Lemma 10 this implies the assertion. \(\square\)
Note that in general it really is necessary to work with \(\overline{A}({\bf x})\) rather than \(A({\bf x})\):
**Lemma 11**.: _There are examples of \({\bf x}\in X\) which are polystable but \(A({\bf x})\) is not linearly reductive._
**Proof.** Write \(x_{i}=(g_{i},\phi_{i})\in G(\phi_{i})\). Choose \(g_{1},\ldots,g_{n}\in G\) generating a Zariski dense subgroup of an abelian unipotent subgroup \(U\subset G\), such as a root group. Then define \(\phi_{i}\) to be the inner automorphism \(\phi_{i}(g)=g_{i}^{-1}gg_{i}\). It follows that \(A({\bf x})\cong U\) is not reductive, but \(\overline{A}({\bf x})=\{1\}\) is trivial (and so \({\bf x}\) is polystable by Thm. 7). \(\square\)
**Theorem 12**.: _For any point \({\bf x}\in X\) the following hold:_
1. \({\bf x}\) _is polystable if and only if any_ \(A({\bf x})\)_-invariant parabolic subgroup_ \(P\subset G\) _has an_ \(A({\bf x})\)_-invariant Levi subgroup_ \(L\subset P\)_._
2. \({\bf x}\) _is stable if and only if there are no proper_ \(A({\bf x})\)_-invariant parabolic subgroups of_ \(G\)
We prepare three lemmas.
**Lemma 13**.: _Let \(P\) be a parabolic subgroup of \(G\) and \(L\) be a Levi subgroup of \(P\). Then \(N_{P}(L)=L\)._
**Proof.** Let \(\pi\colon P\to P/R_{u}(P)\) be the quotient of \(P\) by the unipotent radical. Suppose that \(g\in P\) normalises \(L\) and decompose it as \(g=hu\), \(h\in L\), \(u\in R_{u}(P)\). Then for any \(x\in L\) we have \(gxg^{-1}\in L\) and hence \(uxu^{-1}\in L\). Since \(u\in\operatorname{Ker}\pi\) we have \(\pi(uxu^{-1})=\pi(x)\), which implies \(uxu^{-1}=x\) since the restriction of \(\pi\) to \(L\) is injective. Thus \(u\) commutes with \(L\). On the other hand, \(L\) coincides with its centraliser since \(L=C_{G}(S)\) for some torus \(S\subset L\). Thus \(u\in L\) and hence \(u=1\), i.e. \(g\in L\). \(\square\)
**Lemma 14**.: _The kernel of the \(G\)-action on \(X\) is equal to \(Z(G)^{\Gamma}\)._
**Proof.** If \(k\in G\) lies in the kernel, then for any \(g\in G\) and \(i=1,2,\ldots,n\) we have \(k(g,\phi_{i})=(g,\phi_{i})k\), i.e.
\[kg=g\phi_{i}(k).\]
Taking \(g\) to be \(1\) we obtain \(\phi_{i}(k)=k\) (\(i=1,2,\ldots,n\)). Thus \(kg=gk\) for all \(g\in G\), i.e., \(k\in Z(G)\). Since \(\Gamma\) is generated by \(\operatorname{Inn}(G),\phi_{1},\phi_{2},\ldots,\phi_{n}\) and \(\operatorname{Inn}(G)\) trivially acts on \(Z(G)\) we obtain \(k\in Z(G)^{\Gamma}\). The converse is clear. \(\square\)
**Lemma 15**.: \(Z(G)\cap G_{\mathbf{x}}=Z(G)^{\Gamma}\) _for any \(\mathbf{x}\in X\)._
**Proof.** Take any \(\mathbf{x}=(x_{i})_{i=1}^{n}\in X\). For each \(i=1,2,\ldots,n\) we have \(\operatorname{Ad}(x_{i})\phi_{i}^{-1}\in\operatorname{Inn}(G)\), which trivially acts on \(Z(G)\). Thus for any \(g\in Z(G)\cap G_{\mathbf{x}}\) we have
\[\phi_{i}(g)=\operatorname{Ad}(x_{i})(g)=g\quad(i=1,2,\ldots,n),\]
which implies \(g\in Z(G)^{\Gamma}\). The inclusion \(Z(G)^{\Gamma}\subset Z(G)\cap G_{\mathbf{x}}\) is clear. \(\square\)
**Proof** (of Theorem 12). As in the proof of Theorem 7, we may assume that \(\phi_{i}\), \(i=1,2,\ldots,n\) are all contained in a common finite subgroup \(\Gamma^{\prime}\subset\Gamma\) such that \(\Gamma=\operatorname{Inn}(G)\cdot\Gamma^{\prime}\) and put \(\widetilde{G}=G\ltimes\Gamma^{\prime}\). Under this assumption \(\overline{A}(\mathbf{x})\) is linearly reductive if and only if \(A(\mathbf{x})\subset\widetilde{G}\) is linearly reductive, by Prop. 9.
(1) We should show that a subgroup \(H\subset\widetilde{G}\) is linearly reductive if and only if any \(H\)-invariant parabolic in \(G\) has an \(H\)-invariant Levi subgroup. First suppose that \(H\subset\widetilde{G}\) is linearly reductive and \(P\subset G\) is an \(H\)-invariant parabolic subgroup. Since \(H\) is linearly reductive and contained in \(N_{\widetilde{G}}(P)\), it is contained in some Levi subgroup \(\widetilde{L}\) of \(N_{\widetilde{G}}(P)\). Then \(L:=\widetilde{L}\cap G\) is a Levi subgroup of \(N_{\widetilde{G}}(P)\cap G=N_{G}(P)=P\) and normalised by \(H\) (so it is \(H\)-invariant). Conversely, suppose any \(H\)-invariant parabolic in \(G\) has an \(H\)-invariant Levi subgroup. By [26, Prop. 2.6], there exists a
one-parameter subgroup \(\lambda\) of \(G\) such that \(H\subset P_{\widetilde{G}}(\lambda)\) and \(R_{u}(H)\subset U_{\widetilde{G}}(\lambda)\), where
\[P_{\widetilde{G}}(\lambda)=\{\,x\in\widetilde{G}\mid\lim_{t\to 0}\lambda(t)x \lambda(t)^{-1}\text{ exists}\,\}, \tag{1}\]
\[U_{\widetilde{G}}(\lambda)=\{\,x\in\widetilde{G}\mid\lim_{t\to 0}\lambda(t)x \lambda(t)^{-1}=1\,\}, \tag{2}\]
and \(R_{u}(H)\) is the unipotent radical of \(H\). Put \(P=P_{G}(\lambda)=P_{\widetilde{G}}(\lambda)\cap G\), which is a parabolic subgroup of \(G\). Since \(P\) is normalised by \(P_{\widetilde{G}}(\lambda)\), it is normalised by \(H\). Hence \(P\) has an \(H\)-invariant Levi subgroup \(L\) by assumption. We have \(H\subset N_{\widetilde{G}}(L)\) and hence \(R_{u}(H)\subset N_{G}(L)\) (recall that the unipotent radical is connected). Note that \(R_{u}(H)\) is also contained in \(U_{\widetilde{G}}(\lambda)\cap G=R_{u}(P)\), while any non-trivial element of \(R_{u}(P)\) does not normalise the Levi subgroup \(L\) by Lemma 13. Thus \(R_{u}(H)\) is trivial, i.e. \(H\) is linearly reductive.
(2) Suppose that \(\mathbf{x}=(x_{i})_{i=1}^{n}\in X\) is stable and let \(P\) be a \(A(\mathbf{x})\)-invariant parabolic subgroup of \(G\). By Theorem 7, \(\overline{A}(\mathbf{x})\) (and hence \(A(\mathbf{x})\)) is linearly reductive. Since \(A(\mathbf{x})\) normalises \(P\), there exists a Levi subgroup \(\widetilde{L}\) of \(N_{\widetilde{G}}(P)\) containing \(A(\mathbf{x})\). By [26, Prop. 2.4], there exists a one-parameter subgroup \(\lambda\) of \(G\) such that \(N_{\widetilde{G}}(P)=P_{\widetilde{G}}(\lambda)\) and \(\widetilde{L}=C_{\widetilde{G}}(\operatorname{Im}\lambda)\). Since \(A(\mathbf{x})\subset C_{\widetilde{G}}(\operatorname{Im}\lambda)\) each \(x_{i}\) commutes with \(\operatorname{Im}\lambda\) and hence \(\operatorname{Im}\lambda\subset G_{\mathbf{x}}\). The stability now implies that \(\operatorname{Im}\lambda\) is contained in the kernel \(Z(G)^{\Gamma}\) and hence \(P=G\). Conversely, suppose that \(\mathbf{x}\in X\) is not stable. Then if the orbit \(G\cdot\mathbf{x}\) is not closed we argue as follows: By the Hilbert-Mumford criterion, there exists a one-parameter subgroup \(\lambda\) of \(G\) and an element \(\mathbf{y}=(y_{i})_{i=1}^{n}\in X\) such that \(\lim_{t\to 0}\lambda(t)\cdot\mathbf{x}=\mathbf{y}\) and \(G\cdot\mathbf{y}\) is closed. We show that the parabolic subgroup \(P:=P_{G}(\lambda)=P_{\widetilde{G}}(\lambda)\cap G\) is \(A(\mathbf{x})\)-invariant and proper. For any \(g\in P\) and \(i=1,2,\ldots,n\), the limit of
\[\lambda(t)\mathrm{Ad}(x_{i})(g)\lambda(t)^{-1}=\lambda(t)x_{i}\lambda(t)^{-1} \cdot\lambda(t)g\lambda(t)^{-1}\cdot\lambda(t)x_{i}^{-1}\lambda(t)^{-1}\]
as \(t\to 0\) exists. Hence \(P\) is \(A(\mathbf{x})\)-invariant. If \(P\) is not proper, \(\operatorname{Im}\lambda\) is contained in \(Z(G)\cap G_{\mathbf{y}}=Z(G)^{\Gamma}\). Thus \(\lambda(t)\cdot\mathbf{x}=\mathbf{x}\) (\(t\in\mathbb{C}^{*}\)) and hence \(\mathbf{x}=\mathbf{y}\), which contradicts the assumption that \(G\cdot\mathbf{x}\) is not closed. Hence \(P\) is proper. Finally suppose \(G\cdot\mathbf{x}\) is closed, but of the wrong dimension. Then the stabiliser \(G_{\mathbf{x}}\) is linearly reductive ([26] 1.3.3) and the quotient \(G_{\mathbf{x}}/Z(G)^{\Gamma}\) has non-trivial identity component. Hence there exists a one-parameter subgroup \(\lambda\) of \(G_{\mathbf{x}}\) such that \(\operatorname{Im}\lambda\not\subset Z(G)^{\Gamma}\). Since each \(x_{i}\) commutes with \(\lambda\), the parabolic subgroup \(P:=P_{G}(\lambda)\) is \(A(\mathbf{x})\)-invariant. It is proper since Lemma 15 implies \(\operatorname{Im}\lambda\not\subset Z(G)\).
Let us rephrase/abstract the first part of this in a way that will be useful later. Let \(G\) be a reductive group and \(\Lambda\) an algebraic group acting on \(G\) by (algebraic) group automorphisms. Suppose that the action of \(\Lambda\) is effective and the identity component \(\Lambda^{0}\) acts by inner automorphisms. Then we may regard \(\Lambda\) as a subgroup of \(\operatorname{Aut}(G)\) and its image in \(\operatorname{Out}(G)\) is finite.
**Proposition 16**.: \(\Lambda\) _is linearly reductive if and only if any \(\Lambda\)-invariant parabolic subgroup of \(G\) has a \(\Lambda\)-invariant Levi subgroup._
**Proof.** By the result of Borel-Serre and Brion, there exists a finite subgroup \(\Lambda^{\prime}\subset\Lambda\) such that \(\Lambda^{0}\Lambda^{\prime}=\Lambda\). Put \(\widetilde{G}=G\ltimes\Lambda^{\prime}\) and let \(K\subset\widetilde{G}\) be the preimage of \(\Lambda\) under Ad, so that Prop. 9 implies \(\Lambda\) is linearly reductive if and only if \(K\) is linearly reductive. Thus the equivalence follows from the proof of Thm. 12 (1). \(\square\)
In the same setting there is a further characterisation:
**Proposition 17**.: \(\Lambda\) _is linearly reductive if and only if there exists a torus \(S\subset G\) such that \(C_{G}(S)\) is \(\Lambda\)-invariant and has no proper \(\Lambda\)-invariant parabolic subgroups._
**Proof.** Suppose that \(\Lambda\) is linearly reductive. We show that there exists a decreasing sequence of \(\Lambda\)-invariant closed subgroups
\[G=L_{0}\supset P_{1}\supset L_{1}\supset P_{2}\supset L_{2}\supset\cdots \supset P_{r}\supset L_{r}\]
such that each \(P_{i}\) is a proper parabolic subgroup of \(L_{i-1}\), each \(L_{i}\) (\(i>0\)) is a Levi subgroup of \(P_{i}\) and \(L_{r}\) has no proper \(\Lambda\)-invariant parabolic subgroups. If \(L_{0}=G\) has no proper \(\Lambda\)-invariant parabolic subgroups we have nothing to do (just put \(r=0\)). Otherwise we take any proper \(\Lambda\)-invariant parabolic subgroup \(P_{1}\subsetneq L_{0}\). Then by Prop. 16 there exists a \(\Lambda\)-invariant Levi subgroup \(L_{1}\subset P_{1}\). Let \(\Lambda_{1}\) be the quotient of \(\Lambda\) by the kernel of the induced \(\Lambda\)-action on \(L_{1}\), so that \(\Lambda_{1}\) effectively acts on \(L_{1}\). Note that its identity component \(\Lambda_{1}^{0}\) acts by inner automorphisms of \(L_{1}\); indeed, the action of any element of \(\Lambda_{1}^{0}\) is induced from some inner automorphism of \(G\) preserving \(P_{1},L_{1}\) and hence is inner by Lemma 13. Since \(\Lambda\) is linearly reductive \(\Lambda_{1}\) is also linearly reductive, and a subgroup of \(L_{1}\) is \(\Lambda_{1}\)-invariant if and only if it is \(\Lambda\)-invariant as a subgroup of \(L_{0}\). If \(L_{1}\) has no proper \(\Lambda_{1}\)-invariant parabolic subgroup, the sequence \(L_{0}\supset P_{1}\supset L_{1}\) is as desired. Otherwise we take any proper \(\Lambda_{1}\)-invariant parabolic subgroup \(P_{2}\subsetneq L_{1}\). Then by Prop. 16 there exists a \(\Lambda_{1}\)-invariant Levi subgroup \(L_{2}\subset P_{2}\). Iterating this procedure, we obtain a desired decreasing sequence. Since each \(L_{i}\) (\(i>0\)) is a Levi subgroup of \(P_{i}\) there exists a torus \(S_{i}\subset L_{i}\) such that \(L_{i}=C_{L_{i-1}}(S_{i})\). Then \(L_{r}\) is the common centraliser of \(S_{1},S_{2},\ldots,S_{r}\) in \(G\). Hence the torus \(S\subset G\) generated by \(S_{1},S_{2},\ldots,S_{r}\) (note that they commute with each other) is as desired.
Conversely, suppose that there exists a torus \(S\subset G\) such that \(L:=C_{G}(S)\) is \(\Lambda\)-invariant and has no proper \(\Lambda\)-invariant parabolic subgroups. Let \(P\subset G\) be a \(\Lambda\)-invariant parabolic subgroup. Then the intersection \(P\cap L\) is a \(\Lambda\)-invariant parabolic subgroup of \(L\) and hence \(P\cap L=L\), i.e. \(L\subset P\). Since \(L\) is reductive, there exists a Levi subgroup \(M\subset P\) containing \(L\). For any \(\psi\in\Lambda\) the image \(\psi(M)\) of \(M\) is also a Levi subgroup of \(P\) containing \(L\). Since \(L\) contains a maximal torus of \(P\) and any maximal torus of \(P\) is contained in a unique Levi subgroup, we have \(\psi(M)=M\). Hence \(M\) is \(\Lambda\)-invariant, which together with Prop. 16 shows that \(\Lambda\) is linearly reductive. \(\square\)
Applying this to the set-up of the present section (with \(\Lambda=\overline{A}(\mathbf{x})\)) yields:
**Corollary 18**.: _The following are equivalent:_
_0) A point \({\bf x}\in X\) is polystable,_
_1) The group \(\overline{A}({\bf x})\) is linearly reductive,_
_2) Any \(A({\bf x})\)-invariant parabolic in \(G\) has an \(A({\bf x})\)-invariant Levi subgroup,_
_3) There exists a subtorus \(S\subset G\) such that \(C_{G}(S)\) is \(A({\bf x})\)-invariant and has no proper \(A({\bf x})\)-invariant parabolic subgroups,_
_4) There exists an \(A({\bf x})\)-invariant Levi subgroup \(L\) of a parabolic of \(G\), such that \(L\) has no proper \(A({\bf x})\)-invariant parabolic subgroups._
Note that 3) and 4) are trivially equivalent since centralisers of tori in \(G\) are exactly the Levi subgroups of parabolics.
## 3. More general set-up
The results of the previous section will now be generalised, in a form more directly useful in the context of Stokes local systems. Return to the set-up of Thm. 7 (with \(n\geq 1\)), but now further choose an integer \(m\geq 1\) and tori \(\mathbb{T}_{1},\ldots,\mathbb{T}_{m}\subset G\). Write \({\bf T}=\mathbb{T}_{1}\times\cdots\times\mathbb{T}_{m}\subset G^{m}\), let \(H_{i}=C_{G}(\mathbb{T}_{i})\) and \({\bf H}=H_{1}\times\cdots\times H_{m}=C_{G^{m}}({\bf T})\subset G^{m}\). We allow some of the \(\mathbb{T}_{i}\) to be a point, in which case \(H_{i}=G\). In this section we will study the stability and polystability for the action of \({\bf H}\) on
\[X:=G^{m-1}\times\prod_{i=1}^{n}G(\phi_{i}) \tag{3}\]
given by
\[{\bf h}\cdot({\bf C},{\bf M})=(h_{2}C_{2}h_{1}^{-1},\ldots,h_{m}C_{m}h_{1}^{-1 },h_{1}M_{1}h_{1}^{-1},\ldots,h_{1}M_{n}h_{1}^{-1})\]
where \({\bf h}=(h_{1},\ldots,h_{m}),{\bf C}=(C_{2},\ldots,C_{m}),{\bf M}=(M_{1}, \ldots,M_{n}),C_{i}\in G,M_{i}\in G(\phi_{i})\).
Thus if \(m=1\) and \(\mathbb{T}_{1}=1\) we recover the situation of Thm. 7. The case \(m=1\) and \(\mathbb{T}_{1}\) arbitrary but each \(\phi_{i}=1\) was studied by Richardson in [26] Thm. 13.2,14.1 (taking \(S=\mathbb{T}_{1}\) acting on \(G\) by conjugation). More generally, in effect, [8] Cor. 9.6 studied the notion of stability in the case with \(m\) arbitrary and each \(\phi_{i}=1\), making the link to the differential Galois group of irregular connections (whence the \(\mathbb{T}_{i}\) are the Ramis/exponential tori).
For \({\bf x}=({\bf C},{\bf M})\in X\) let \(A({\bf x})\subset G\ltimes\Gamma\) be the Zariski closure of the subgroup generated by
\[M_{1},M_{2},\ldots,M_{n},\mathbb{T}_{1},C_{2}^{-1}\mathbb{T}_{2}C_{2},\ldots, C_{m}^{-1}\mathbb{T}_{m}C_{m}.\]
Let \(\overline{A}({\bf x})\subset\Gamma\) be the image of \(A({\bf x})\) in \(\Gamma\subset{\rm Aut}(G)\), as before. Recall that a subset of \(G\) is \(A({\bf x})\)-invariant if it is preserved by \(\overline{A}({\bf x})\subset{\rm Aut}(G)\).
**Theorem 19**.: _(1) A point \({\bf x}\in X\) is polystable for the \({\bf H}\) action if and only if \(\overline{A}({\bf x})\) is linearly reductive._
_(2) A point \({\bf x}\in X\) is stable for the \({\bf H}\) action if and only if there are no proper \(A({\bf x})\)-invariant parabolic subgroups in \(G\)._
As in the last section, one can rephrase polystability in several different ways:
**Corollary 20**.: _A point \({\bf x}\in X\) is polystable if and only if_
_1) The group \(\overline{A}({\bf x})\) is linearly reductive, or_
_2) Any \(A({\bf x})\)-invariant parabolic in \(G\) has an \(A({\bf x})\)-invariant Levi subgroup, or_
_3) There exists a subtorus \(S\subset G\) such that \(C_{G}(S)\) is \(A({\bf x})\)-invariant and has no proper \(A({\bf x})\)-invariant parabolic subgroups._
**Proof** (of Thm. 19). Let \(G\times{\bf H}\) act on \(\widetilde{X}:=G^{m}\times\prod_{1}^{N}G(\phi_{i})\) via
\[(g,{\bf h})\cdot({\bf C},{\bf M})=(h_{1}C_{1}g^{-1},h_{2}C_{2}g^{-1},\ldots,h_ {m}C_{m}g^{-1},gM_{1}g^{-1},\ldots,gM_{n}g^{-1})\]
where \({\bf h}=(h_{1},\ldots,h_{m}),{\bf C}=(C_{1},\ldots,C_{m}),{\bf M}=(M_{1}, \ldots,M_{n}),C_{i}\in G,M_{i}\in G(\phi_{i})\). (Up to relabelling and incrementing \(m\) this is the special case where \(H_{1}=G\).) Consider the \({\bf H}\)-equivariant map \(\widetilde{X}\to X\) taking \(({\bf C};{\bf M})\) to
\[(C_{2}C_{1}^{-1},\ldots,C_{m}C_{1}^{-1};C_{1}M_{1}C_{1}^{-1},\ldots,C_{1}M_{N} C_{1}^{-1})\in X.\]
It expresses \(\widetilde{X}\) as a principal \(G\)-bundle over \(X\) (the fibres are exactly the \(G\)-orbits). Any point \({\bf x}\in X\) has a unique lift \(\widetilde{{\bf x}}\in\widetilde{X}\) with \(C_{1}=1\). It follows that the \({\bf H}\) orbit of \({\bf x}\in X\) is closed if and only if the \(G\times{\bf H}\) orbit of \(\widetilde{{\bf x}}\) is closed in \(\widetilde{X}\).
Now choose \({\bf t}=(t_{1},\ldots,t_{m})\in{\bf T}\) so that \(t_{i}\) generates a Zariski dense subgroup of \({\mathbb{T}}_{i}\) for each \(i\). In particular \(H_{i}=C_{G}(t_{i})\). Consider the simultaneous conjugation action of \(G\) on \(Y:=G^{m}\times\prod_{1}^{N}G(\phi_{i})\), and the \(G\)-equivariant embedding
\[\pi:\widetilde{X}/{\bf H}\hookrightarrow Y;\quad[({\bf C};{\bf M})]\mapsto({ \bf C}^{-1}{\bf t}{\bf C},{\bf M}).\]
Thus \(\widetilde{X}/{\bf H}\) is identified with a closed subvariety of \(Y\) and \(\widetilde{X}\) is a \(G\)-equivariant principal \({\bf H}\)-bundle over \(\widetilde{X}/{\bf H}\). It follows that the \(G\times{\bf H}\) orbit of \(\widetilde{{\bf x}}\) is closed in \(\widetilde{X}\) if and only if the \(G\) orbit of \(\pi(\widetilde{{\bf x}})\) is closed in \(Y\).
Hence part (1) of the theorem follows from Thm. 7 (applied to \(Y\)). To deal with stability we need to consider the stabilisers and the kernels of the actions.
**Lemma 21**.: _The stabiliser of any point \({\bf x}\in X\) for the \({\bf H}\)-action is canonically isomorphic to the stabiliser of the point \(\pi(\widetilde{{\bf x}})\in Y\) for the \(G\)-action._
**Proof.** Suppose \(g\in G\) fixes \(({\bf C}^{-1}{\bf t}{\bf C},{\bf M})\) and let \(h_{i}=C_{i}gC_{i}^{-1}\) (\(i=1,2,\ldots,m\)) and \({\bf h}=(h_{1},\ldots,h_{m})\). Note \(h_{1}=g\) because \(C_{1}=1\). Since \(g\) commutes with each \(C_{i}^{-1}t_{i}C_{i}\), each \(h_{i}\) commutes with \(t_{i}\), which implies \({\bf h}\in{\bf H}\). We have \(h_{i}C_{i}g^{-1}=C_{i}\) (\(i=1,2,\ldots,m\)) by the definition of \(h_{i}\) and \(gM_{j}g^{-1}=M_{j}\) (\(j=1,2,\ldots,N\)) as \(g\) centralises \({\bf M}\). Hence the pair \((g,{\bf h})\in G\times{\bf H}\) stabilises \(\widetilde{{\bf x}}\in\widetilde{X}\), and hence \({\bf h}\) stabilises
\(\mathbf{x}\in X\). Conversely if \(\mathbf{h}=(h_{1},\ldots,h_{m})\in\mathbf{H}\) fixes \(\mathbf{x}\), then let \(g=h_{1}\). It follows immediately that \((g,\mathbf{h})\in G\times\mathbf{H}\) fixes \(\widetilde{\mathbf{x}}\) and thus that \(g\) fixes \(\pi(\widetilde{\mathbf{x}})\). Clearly the two correspondences are inverses of each other.
As in Lemma 14 one has:
**Lemma 22**.: (1) _The kernel of the \(G\)-action on \(Y\) (or \(\widetilde{X}/\mathbf{H}\)) is the \(\Gamma\)-invariant subgroup \(Z(G)^{\Gamma}\) of the center of \(G\), and (2) The kernel of the \(\mathbf{H}\)-action on \(X\) is the subgroup of elements \((h_{1},\ldots,h_{m})\in\mathbf{H}\) satisfying \(h_{1}=h_{2}=\cdots=h_{m}\in Z(G)^{\Gamma}\)._
Note that these two groups correspond to each other under the correspondence of Lem. 21 (for any \(\mathbf{x}\in X\)). It follows that \(\mathbf{x}\in X\) is stable if and only if \(\pi(\widetilde{\mathbf{x}})\in Y\) is stable. By Thm. 12 (2) applied to \(Y\), this happens if and only if there are no proper \(A(\mathbf{x})\) invariant parabolic subgroups of \(G\).
## 4. Application to wild character varieties
Recall from [8, 13] that the wild character variety \(\mathcal{M}_{\mathrm{B}}(\mathbf{\Sigma})=\mathrm{THom}_{\mathrm{S}}(\Pi,G)/ \mathbf{H}\) is determined by an irregular curve/wild Riemann surface \(\mathbf{\Sigma}=(\Sigma,\alpha,\Theta)\) with group \(\mathcal{G}\), where \(\Sigma\) is a compact smooth complex algebraic curve, \(\alpha\subset\Sigma\) is a non-empty finite subset, \(\mathcal{G}\to\Sigma^{\circ}:=\Sigma\setminus\alpha\) is a local system of groups over the punctured curve (with each fibre isomorphic to some fixed connected complex reductive group \(G\)), and \(\Theta\) consists of the data of an irregular class \(\Theta_{a}\) at each point \(a\in\alpha\) (in the sense of [13] SS3.5--it is the class of a graded \(\mathcal{G}\) local system).
As in [13] SS4, \(\mathbf{\Sigma}\) then determines an auxiliary surface \(\widetilde{\Sigma}\), equipped with boundary circles \(\partial\), halos \(\mathbb{H}\subset\widetilde{\Sigma}\), and tangential punctures \(e(\mathbb{A})\). Further \(\mathbf{\Sigma}\) determines a local system \(\mathbb{T}\to\partial\) of finite dimensional complex tori, the Ramis tori ([13] p.9). Choosing a finite set \(\beta=\{b_{1},\ldots,b_{m}\}\subset\partial\) of basepoints (with one point in each component circle, as in [13] SS4.1) then determines the wild surface groupoid \(\Pi=\Pi_{1}(\widetilde{\Sigma},\beta)\), the fundamental groupoid of the auxiliary surface with these basepoints, as in [13] SS5. The local system \(\mathcal{G}\) is determined by a map \(f:\Pi\to\mathrm{Aut}(G)\) and this determines the space of \(f\)-twisted representations
\[\mathrm{THom}(\Pi,G)=\{\rho\in\mathrm{Hom}(\Pi,G\ltimes\mathrm{Aut}(G))\ \big{|}\ \rho(\gamma)\in G(f(\gamma))\ \mathrm{for\ all}\ \gamma\in\Pi\}\]
of \(\Pi\), as in [13] SS5. Since \(\widetilde{\Sigma}\) is just a punctured real surface with boundary, choosing generating paths in \(\Pi\) yields an isomorphism \(\mathrm{THom}(\Pi,G)\cong G^{N}\) of spaces, for some integer \(N\), so it is a smooth affine variety. In turn the wild representation variety \(\mathcal{R}=\mathrm{THom}_{\mathrm{S}}(\Pi,G)\) is the closed subvariety of \(\mathrm{THom}(\Pi,G)\) cut out by the two Stokes conditions, in [13] Defn. 18. Intrinsically, \(\mathcal{R}\) is the moduli space of framed Stokes local systems, as in [13] Prop. 19, framed via a graded isomorphism to a standard fibre \(\mathcal{F}_{i}\) at each basepoint \(b_{i}\in\beta\) ([13] SS4.1). The group \(H_{i}=\mathrm{GrAut}(\mathcal{F}_{i})=C_{G}(\mathbb{T}_{i})\subset G=\mathrm{ Aut}(\mathcal{F}_{i})\) acts transitively on the set of framings at \(b_{i}\), where \(\mathbb{T}_{i}=\mathbb{T}_{b_{i}}\). Thus the
group \({\bf H}=\prod_{b_{i}\in\beta}H_{i}\) acts naturally on \({\mathcal{R}}\). The wild character variety \({\mathcal{M}}_{\rm B}(\Sigma)\) is the affine geometric invariant theory quotient \({\mathcal{R}}/{\bf H}\), and so its points are the closed \({\bf H}\) orbits in \({\mathcal{R}}\). This leads directly to the key statement:
**Proposition 23**.: _The wild representation variety \({\mathcal{R}}={\rm THom}_{\rm S}(\Pi,G)\) may be embedded in an \({\bf H}\)-equivariant way, as a closed subvariety of the space \(X=G^{m-1}\times\prod_{1}^{n}G(\phi_{i})\) of (3), for suitable \(n\) and automorphisms \(\{\phi_{i}\}\subset{\rm Aut}(G)\), with \(m=\#\alpha\)._
**Proof.** This comes down to considering the inclusion \({\rm THom}_{\rm S}(\Pi,G)\subset{\rm THom}(\Pi,G)\) as a closed subvariety (forgetting the Stokes conditions, as in [13] Defn. 18), and then identifying \({\rm THom}(\Pi,G)\cong X\) in an \({\bf H}\)-equivariant way. Both of these are straightforward. In particular \(M_{j}=\rho(\gamma_{j})\) for generators \(\gamma_{j}\) of \(\pi_{1}(\widetilde{\Sigma},b_{1})\), and \(C_{i}=\rho(\chi_{i})\) for paths \(\chi_{i}\) from \(b_{1}\) to \(b_{i}\), for \(i=2,\ldots,m\). \(\Box\)
In order to define the Galois group \({\rm Gal}(\rho)\) of \(\rho\in{\rm THom}_{\rm S}(\Pi,G)\) we will assume that \({\mathcal{G}}\) is "Out-finite", in the sense that the monodromy group \(f(\pi_{1}(\widetilde{\Sigma},b_{1}))\subset{\rm Aut}(G)\) of \({\mathcal{G}}\) has finite image in \({\rm Out}(G)={\rm Aut}(G)/\,{\rm Inn}(G)\). Then the group \(\Gamma\) generated by \(f(\pi_{1}(\widetilde{\Sigma},b_{1}))\) and \({\rm Inn}(G)\) is an algebraic group, as in SS2 above. Thus any Stokes representation takes values in the algebraic group \(G\ltimes\Gamma\subset G\ltimes{\rm Aut}(G)\), and so we can consider the Zariski closure of its monodromy. The Galois group is defined by adding the Ramis tori as well: If \(t\in{\mathbb{T}}_{i}\subset G\) and \(\chi\) is any path in \(\widetilde{\Sigma}\) from \(b_{1}\) to \(b_{i}\), consider the element
\[C^{-1}tC\in G=G({\rm Id})\subset G\ltimes\Gamma, \tag{4}\]
where \(C=\rho(\chi)\).
**Definition 24**.: The differential Galois group \({\rm Gal}(\rho)\) of \(\rho\) is the Zariski closure of the subgroup of \(G\ltimes\Gamma\) generated by \(\rho(\pi_{1}(\widetilde{\Sigma},b_{1}))\) and all of the tori (4) (as \(i,t,\chi\) vary).
It follows that \({\rm Gal}(\rho)\) acts on \(G\) by group automorphisms, via the adjoint action of \(G\ltimes\Gamma\) on \(G=G({\rm Id})\). Let \(\overline{{\rm Gal}}(\rho)\subset\Gamma\subset{\rm Aut}(G)\) be the resulting image of \({\rm Gal}(\rho)\). This definition is (of course) motivated by Ramis' description ([25], [24] Thm. 21, [22] Thm. III.3.11) of the differential Galois group of an algebraic connection on a vector bundle.
**Corollary 25**.: _A Stokes representation \(\rho\in{\rm THom}_{\rm S}(\Pi,G)\) is polystable for the action of \({\bf H}\) if and only if \(\overline{{\rm Gal}}(\rho)\) is a linearly reductive group._
**Proof.** This now follows from part (1) of Thm. 19, via Prop. 23, since \(\overline{{\rm Gal}}(\rho)\) matches up with \(\overline{A}({\bf x})\). \(\Box\)
Special cases include:
\(\bullet\) If \({\mathcal{G}}\) has finite monodromy then \(\rho\in{\rm THom}_{\rm S}(\Pi,G)\) is polystable if and only if \({\rm Gal}(\rho)\) is a linearly reductive group.
\(\bullet\) If \(\mathcal{G}\) is a constant general linear group then \(\rho\in\operatorname{Hom}_{\mathbb{S}}(\Pi,G)\) is polystable if and only if \(\rho\) is the direct sum of irreducible Stokes representations.
Recall that \(Z(G)^{\Gamma}\) is the \(\Gamma\) invariant subgroup of the centre of \(G\), and it embeds diagonally in \(\mathbf{H}\). To deal with stability we will avoid degenerate cases by assuming:
\[\text{\it The kernel of the action of $\mathbf{H}$ on $\operatorname{THom}_{ \mathbb{S}}(\Pi,G)$ is $Z(G)^{\Gamma}$.} \tag{5}\]
The lemma below shows one can always add one or two punctures to ensure this condition holds. Note that no generality is lost: any symplectic leaf \(\mathcal{M}_{\mathrm{B}}(\boldsymbol{\Sigma},\mathcal{C})\subset\mathcal{M}_{ \mathrm{B}}(\boldsymbol{\Sigma})\) will also be a symplectic leaf of the larger Poisson variety obtained by first making such additional punctures (namely that with trivial monodromy around the new punctures). For example the usual character variety of a genus \(g>0\) compact Riemann surface \(\Sigma\) is a (very special) symplectic leaf of the character variety of \(\Sigma\setminus a\), for any point \(a\in\Sigma\).
**Lemma 26**.: _Suppose \(m\geq 1\) and \(a_{1}\in\alpha\) has trivial irregular class, and if \(g=0\) then \(m\geq 2\). Then \(\operatorname{THom}_{\mathbb{S}}(\Pi,G)\) is a smooth non-empty affine variety and the kernel \(\mathbf{K}\) of the \(\mathbf{H}\) action is \(Z(G)^{\Gamma}\) embedded diagonally in \(\mathbf{H}\)._
**Proof.** It is nonempty as it is the fusion of some fission spaces and some internally fused doubles: Recall that \(\operatorname{THom}_{\mathbb{S}}(\Pi,G)\) can be described as the quasi-Hamiltonian \(G\)-reduction
where each \(\mathbb{D}_{i}\) is a (twisted) internally fused double ([13] p.23, [8] Thm 8.2). Since \(\mathcal{A}(Q_{1})\cong\mathbf{D}(G)\) is the double of \(G\), it follows that
so that \(\operatorname{THom}_{\mathbb{S}}(\Pi,G)\) is the product of some smooth nonempty affine varieties. Note that \(\mathbf{H}\) still acts on \(\operatorname{THom}_{\mathbb{S}}(\Pi,G)\) and this includes \(H_{1}=G\).
Suppose \(\mathbf{k}\in\mathbf{K}\) and suppose (as usual) that the framings of \(\mathcal{G}\) are such that the monodromy in \(\operatorname{Aut}(G)\) of \(\mathcal{G}\) is trivial along the \(m-1\) chosen paths \(\chi_{i}:b_{1}\to b_{i}\). Then \(\mathbf{k}\) acts on \(C_{i}=M_{\chi_{i}}\) as \(k_{i}C_{i}k_{1}^{-1}\). Taking \(C_{2}=C_{3}=\cdots C_{m}=1\) implies \(k_{1}=k_{2}=\cdots=k_{m}\). If \(m\geq 2\) then the fact that all \(C_{2}\in G\) are fixed implies \(k_{1}\in Z(G)\). On the other hand if \(m=1\) then \(g>0\) so looking at \(\mathbb{D}_{1}\) there is \(\phi\in\operatorname{Aut}(G)\) so that \(k_{1}A\phi(k_{1}^{-1})=A\) for all \(A\in G\). This implies \(k_{1}=\phi(k_{1})\) and \(k_{1}\in Z(G)\). Thus in all case \(k_{1}\) is central. Then looking at any loop \(\gamma\) based at \(b_{1}\) leads to a relation of the form \(k_{1}M_{\gamma}\phi_{\gamma}(k_{1}^{-1})=M_{\gamma}\). Thus since \(k_{1}\) is central this implies \(k_{1}=\phi_{\gamma}(k_{1})\), so \(k_{1}\in Z(G)^{\Gamma}\). \(\square\)
_Remark 27_.: Note that it follows in general (as in [8] Thm. 8.2) that if \(\operatorname{THom}_{\mathbb{S}}(\Pi,G)\) is nonempty (and \(m>0\)) then it is a smooth affine variety.
Part (2) of Theorem 12 then implies:
**Corollary 28**.: _A Stokes representation \(\rho\in\operatorname{THom}_{\mathbb{S}}(\Pi,G)\) is stable for the action of \(\mathbf{H}\) if and only if there is no proper parabolic subgroup \(P\subset G\) stabilised by the action of \(\operatorname{Gal}(\rho)\)._
**Proof.** This follows from Thm. 12 since \(\mathcal{R}\) is closed in \(X\), \(\operatorname{Gal}(\rho)\) matches up with \(A(\mathbf{x})\), and the kernel of the \(\mathbf{H}\) action on \(\mathcal{R}\) and on \(X\) is the same. \(\square\)
## 5. Stability and polystability of Stokes local systems
This section will consider the intrinsic objects (Stokes local systems) underlying Stokes representations, and define the notions of "irreducible" and "reductive" for Stokes local systems, in terms of reductions of structure group. Then we will deduce:
**Theorem 29**.: _Suppose \(\mathbb{L}\) is a Stokes local system and \(\rho\in\operatorname{THom}_{\mathbb{S}}(\Pi,G)\) is the monodromy of \(\mathbb{L}\)._
_1) \(\rho\) is stable for the action of \(\mathbf{H}\) if and only if \(\mathbb{L}\) is irreducible,_
_2) \(\rho\) is polystable for the action of \(\mathbf{H}\) if and only if \(\mathbb{L}\) is reductive._
### Graded local systems
A Stokes local system is a special type of \(\mathbb{T}\)-graded local system (in the sense of Defn. 30 below), so to clarify the ideas we will focus on them here--the results for Stokes local systems follow almost immediately.
Let \(S\) be a connected real oriented surface of finite topological type. Let \(\mathbb{H}\subset S\) be an open subset, and let \(\mathbb{T}\to\mathbb{H}\) be a local system of complex tori over \(\mathbb{H}\). We allow the fibres of \(\mathbb{T}\) to have different dimensions in different components of \(\mathbb{H}\). Fix a connected complex reductive group \(G\), and a local system \(\mathcal{G}\to S\) of groups, such that each fibre of \(\mathcal{G}\) is isomorphic to \(G\). We will assume throughout that \(\mathcal{G}\) is "Out-finite". This means that the monodromy of \(\mathcal{G}\) has finite image in \(\operatorname{Out}(G)\). In more detail, given a basepoint \(b\in S\) and a framing \(G\cong\mathcal{G}_{b}\) of \(\mathcal{G}\) at \(b\), then the monodromy representation \(f:\pi_{1}(S,b)\to\operatorname{Aut}(G)\) of \(\mathcal{G}\) is such that the monodromy group \(f(\pi_{1}(S,b))\subset\operatorname{Aut}(G)\) has finite image in \(\operatorname{Out}(G)\). Of course if \(G\) is semisimple then \(\operatorname{Out}(G)\) is finite and so this is no restriction.
Recall that a \(\mathcal{G}\)-local system over \(S\) is a local system \(\mathbb{L}\to S\) which is a \(\mathcal{G}\)-torsor (cf. e.g. [13] SS2.1), and it determines a local system \(\operatorname{Aut}(\mathbb{L})\to S\) of groups (each fibre of which is also isomorphic to \(G\)).
**Definition 30**.: A \(\mathbb{T}\)-graded \(\mathcal{G}\)-local system over \(S\) is a \(\mathcal{G}\)-local system \(\mathbb{L}\to S\) together with an embedding
\[\mathbb{T}\hookrightarrow\operatorname{Aut}(\mathbb{L})\big{|}_{\mathbb{H}}\]
of local systems of groups over \(\mathbb{H}\).
For brevity we will simply call this a "graded local system on \(S\)", and write \(\mathbb{T}\hookrightarrow\operatorname{Aut}(\mathbb{L})\) for the grading. A Stokes local system \(\mathbb{L}\) (in the sense of [8, 13]) is a special type of \(\mathbb{T}\)-graded local system, taking \(S=\widetilde{\Sigma}\) to be the auxiliary surface, \(\mathbb{H}\) to be the
union of the halos, and \(\mathbb{T}\) to be the image in \(\operatorname{Aut}(\mathbb{L})\big{|}_{\mathbb{H}}\) of the exponential torus \(\mathcal{T}\). (Note that \(\mathbb{T}\) is determined just by the irregular class of \(\mathbb{L}\), in the sense of [13] SS3.5, since as explained there the class determines the finite rank local system \(I\subset\mathcal{I}\) of lattices, and \(\mathbb{T}\) is the local system of tori with character lattice \(I\).)
### Galois group
If \(\mathbb{L}\to S\) is a \(\mathbb{T}\)-graded local system, and \(b\in S\) is a basepoint, define \(\operatorname{Gal}(\mathbb{L})\) to be the Zariski closure of the group generated by the monodromy of \(\mathbb{L}\) and all the tori \(\mathbb{T}\) (after transporting them to \(b\)). In more detail, first identify \(\mathcal{G}_{b}\cong G,\mathbb{L}_{b}\cong\mathcal{F}\) (the trivial \(G\)-torsor, as in [13] SS2) and define \(\Gamma\subset\operatorname{Aut}(G)\) to be the group generated by the monodromy of \(\mathcal{G}\) and \(\operatorname{Inn}(G)\) (as in SS2 above). If \(\gamma\in\pi_{1}(S,b)\) let \(\rho(\gamma)\in G(f(\gamma))\subset G\ltimes\Gamma\hookrightarrow\operatorname {Perm}(\mathcal{F})\) be the monodromy of \(\mathbb{L}\) around \(\gamma\) (where \(\operatorname{Perm}(\mathcal{F})\) is the group of all permutations of the fibre \(\mathcal{F}\)). Similarly if \(p\in\mathbb{H},t\in\mathbb{T}_{p}\subset\operatorname{Aut}(\mathbb{L})_{p}\cong G\) and \(\chi\) is any path in \(S\) from \(b\) to \(p\), consider the element
\[C^{-1}tC\in\operatorname{Aut}(\mathcal{F})=G=G(\operatorname{Id})\subset G \ltimes\Gamma, \tag{6}\]
where \(C\) is the transport of \(\mathbb{L}\) along \(\chi\).
**Definition 31**.: The differential Galois group \(\operatorname{Gal}(\mathbb{L})\) of \(\mathbb{L}\) is the Zariski closure of the subgroup of \(G\ltimes\Gamma\) generated by \(\rho(\pi_{1}(S,b))\) and all of the tori (6) (as \(p,t,\chi\) vary).
It follows that \(\operatorname{Gal}(\mathbb{L})\) acts on \(G\) by group automorphisms, via the adjoint action of \(G\ltimes\Gamma\) on \(G=G(\operatorname{Id})\). Let \(\overline{\operatorname{Gal}}(\mathbb{L})\subset\Gamma\subset\operatorname{ Aut}(G)\) be the resulting image of \(\operatorname{Gal}(\mathbb{L})\). Up to isomorphism the affine algebraic group \(\operatorname{Gal}(\mathbb{L})\) and its action on \(G\) do not depend on the choice of basepoint \(b\) or framings.
### Irreducible graded local systems
Define a graded local system \(\mathbb{L}\to S\) to be _reducible_ if \(\operatorname{Aut}(\mathbb{L})\) has a sublocal system of proper parabolic subgroups, containing \(\mathbb{T}\). In other words there is a sublocal system \(\mathcal{P}\subset\operatorname{Aut}(\mathbb{L})\) such that \(1)\) each fibre \(\mathcal{P}_{b}\) is a proper parabolic subgroup of \(\operatorname{Aut}(\mathbb{L})_{b}\), and \(2)\) the grading \(\mathbb{T}\hookrightarrow\operatorname{Aut}(\mathbb{L})\) factors through \(\mathcal{P}\). Such \(\mathbb{L}\) is _irreducible_ if it is not reducible.
**Lemma 32**.: \(\mathbb{L}\) _is reducible if and only if \(\operatorname{Gal}(\mathbb{L})\) preserves a proper parabolic subgroup of \(G\) (recalling that \(\operatorname{Gal}(\mathbb{L})\) naturally acts on \(G\) by group automorphisms)._
**Proof.** Suppose \(P\subset G\cong\operatorname{Aut}(\mathbb{L}_{b})\) is preserved by \(\operatorname{Gal}(\mathbb{L})\). Then \(P\) is the fibre at \(b\) of a local system of parabolic subgroups \(\mathcal{P}\subset\operatorname{Aut}(\mathbb{L})\), since the monodromy of \(\operatorname{Aut}(\mathbb{L})\) is given by the adjoint action of the monodromy of \(\mathbb{L}\), which is in \(\operatorname{Gal}(\mathbb{L})\). Moreover \(\mathbb{T}\hookrightarrow\mathcal{P}\), since the transport to \(b\) of each fibre of \(\mathbb{T}\) is in \(G\) and preserves \(P\) (so is in \(P\), since \(N_{G}(P)=P\)). The converse is similar, taking the fibre at \(b\) of \(\mathcal{P}\subset\operatorname{Aut}(\mathbb{L})\). \(\square\)
This can be related to (twisted) reductions of structure group as follows (compare [13] Defn. 11).
**Definition 33**.: Suppose \(\mathbb{L}\) is a \(\mathbb{T}\)-graded \(\mathcal{G}\)-local system.
\(\bullet\) A _reduction_ of \(\mathbb{L}\) is a \(\mathbb{T}\)-graded \(\mathcal{P}\)-local system \(\mathbb{P}\to S\) (for some local system of groups \(\mathcal{P}\to S\)), such that \(\mathbb{L}\) is a twisted pushout of \(\mathbb{P}\). This means that there is a \(\mathcal{G}\) local system \(\mathbb{M}\) with an embedding \(\mathcal{P}\hookrightarrow\operatorname{Aut}(\mathbb{M})\), together with an isomorphism \(\mathbb{L}\cong\mathbb{P}\times_{\mathcal{P}}\mathbb{M}\) (of graded \(\mathcal{G}\) local systems),
\(\bullet\) A reduction \(\mathbb{P}\) of \(\mathbb{L}\) is a _parabolic reduction_ if the fibres of \(\mathcal{P}\) embed as parabolic subgroups of the fibres of \(\operatorname{Aut}(\mathbb{M})\) (each of which is isomorphic to \(G\)),
\(\bullet\) Similarly it is a _Levi reduction_ if the fibres of \(\mathcal{P}\) embed as Levi subgroups of parabolic subgroups of the fibres of \(\operatorname{Aut}(\mathbb{M})\),
\(\bullet\) The reduction is _proper_ if the fibres of \(\mathcal{P}\) embed as proper subgroups.
**Lemma 34**.: \(\mathbb{L}\) _is reducible if and only if it has a proper parabolic reduction of structure group._
**Proof.** Given \(\mathcal{P}\subset\operatorname{Aut}(\mathbb{L})\) with \(\mathbb{T}\hookrightarrow\mathcal{P}\big{|}_{\mathbb{H}}\), then taking \(\mathbb{P}=\mathcal{P}\) and \(\mathbb{M}=\mathbb{L}\) gives the desired reduction. Conversely given \(\mathbb{M},\mathcal{P},\mathbb{P}\) then \(\operatorname{Aut}(\mathbb{P})\) gives the desired parabolic sublocal system in \(\operatorname{Aut}(\mathbb{L})\). \(\square\)
Note that part 1) of Thm. 29 now follows immediately from Cor. 28, noting that \(\operatorname{Gal}(\mathbb{L})=\operatorname{Gal}(\rho)\).
This irreducibility condition can also be spelt out in terms of Stokes representations and compatible systems of parabolics (as in [8] SS9).
If \(\mathcal{G}\) is constant then we can use the usual (simpler) notion of reduction of structure group (then \(\operatorname{Gal}\subset G\) and we don't need twisted reductions, i.e. we can take \(\mathbb{M}\) to be the trivial \(G\)-torsor).
### Reductive/semisimple graded local systems
If \(\mathbb{L}\to S\) is a graded local system and \(\mathcal{L}\to S\) is a Levi reduction of \(\mathbb{L}\) (in the sense of Defn. 33) then \(\mathcal{L}\) is itself a graded local system, and so we can ask if \(\mathcal{L}\) is reducible or not. Define \(\mathbb{L}\) to be _reductive_ (or "semisimple") if it has an irreducible Levi reduction. Similarly to Lemma 34 one can rephrase this in terms of \(\operatorname{Aut}(\mathbb{L})\):
**Lemma 35**.: \(\mathbb{L}\) _is reductive if and only if there is a sublocal system \(\mathcal{E}\subset\operatorname{Aut}(\mathbb{L})\) such that 1) each fibre \(\mathcal{E}_{p}\) is a Levi subgroup of a parabolic of \(\operatorname{Aut}(\mathbb{L})_{p}\), 2) \(\mathbb{T}\subset\mathcal{E}\), and 3) \(\mathcal{E}\) is irreducible in the sense that it has no proper sublocal systems of parabolic subgroups, containing \(\mathbb{T}\)._
Recall \(\overline{\operatorname{Gal}}(\mathbb{L})\subset\Gamma\subset\operatorname{Aut }(G)\) is the image of \(\operatorname{Gal}(\mathbb{L})\) in \(\Gamma\).
**Proposition 36**.: \(\mathbb{L}\) _is reductive if and only if \(\overline{\operatorname{Gal}}(\mathbb{L})\) is a linearly reductive group._
**Proof.** By Prop. 17\(\overline{\operatorname{Gal}}(\mathbb{L})\) is a linearly reductive group if and only if there is a subgroup \(L\subset G\) such that 1) \(L\) is a Levi subgroup of a parabolic subgroup of \(G\), 2)
\(L\) is preserved by the action of \(\operatorname{Gal}(\mathbb{L})\), and \(3)\)\(L\) has no proper parabolic subgroups that are \(\operatorname{Gal}(\mathbb{L})\) invariant. (This uses the fact that the centralisers of tori in \(G\) are exactly the Levi subgroups of parabolics.) As in Lem. 32 the existence of such \(L\) is the same as \(\mathbb{L}\) having an irreducible Levi reduction.
Part 2) of Thm. 29 now follows immediately from Cor. 25.
Note that if \(\mathcal{G}\) is constant (or has finite monodromy) then this is the same as \(\operatorname{Gal}(\mathbb{L})\) being linearly reductive.
_Remark 37_.: Note that if \(\mathbb{L}\) is a Stokes \(\mathcal{G}\)-local system then it makes no difference if we insist on only looking at reductions that are Stokes local systems: i.e. \(\mathbb{L}\) is semisimple if it has a Levi reduction to an irreducible Stokes \(\mathcal{L}\)-local system, and \(\mathbb{L}\) is reducible if it has a parabolic reduction to a Stokes \(\mathcal{P}\)-local system. To see this we need to check \(1)\) that the Stokes conditions on the monodromies (of the reductions) around the tangential punctures are automatic, and \(2)\) that there is no loss of generality in assuming that the local systems of parabolic/Levi subgroups are untwisted around the tangential punctures. This is now an easy exercise: \(1)\) follows since the Stokes groups are controlled by \(\mathbb{T}\), and \(2)\) follows by considering the proof of Lem. 34, and the analogous proof of Lem. 35.
_Remark 38_.: In the case where \(\mathcal{G}\) is a constant general linear group with fibre \(G=\operatorname{GL}_{n}(\mathbb{C})\) for some \(n\), then a Stokes \(\mathcal{G}\)-local system \(\mathbb{L}\) is equivalent to a Stokes local system \(\mathbb{V}\) of rank \(n\) vector spaces, as in [11]. Then \(\mathbb{L}\) is irreducible if and only if \(\mathbb{V}\) has no nontrival proper Stokes sublocal systems, and it is reductive if and only if \(\mathbb{V}=\bigoplus\mathbb{V}_{i}\) is the direct sum of some irreducible Stokes local systems \(\mathbb{V}_{i}\).
Note that there are thus many simple criteria to ensure points of \(\mathcal{R}\) are stable, and they will be studied systematically elsewhere. For example in the constant \(\operatorname{GL}_{n}(\mathbb{C})\) setting, all the Stokes local systems on \(\boldsymbol{\Sigma}\) are irreducible if at one of the punctures the irregular class just has one Stokes circle \(I\subset\mathcal{I}\) with \(\operatorname{Ram}(I)=n\) (e.g. if \(\operatorname{slope}(I)=k/n\) with \((k,n)=1\) this is Katz's irreducibility criterion [21] (2.2.8)).
|
2301.11628 | Spectroscopic and interferometric signatures of magnetospheric accretion
in young stars | Methods. We use the code MCFOST to solve the non-LTE problem of line
formation in non-axisymmetric accreting magnetospheres. We compute the
Br{\gamma} line profile originating from accretion columns for models with
different magnetic obliquities. We also derive monochromatic synthetic images
of the Br{\gamma} line emitting region across the line profile. This spectral
line is a prime diagnostics of magnetospheric accretion in young stars and is
accessible with the long baseline near-infrared interferometer GRAVITY
installed at the ESO Very Large Telescope Interferometer.
Results. We derive Br{\gamma} line profiles as a function of rotational phase
and compute interferometric observables, visibilities and phases, from
synthetic images. The line profile shape is modulated along the rotational
cycle, exhibiting inverse P Cygni profiles at the time the accretion shock
faces the observer. The size of the line's emission region decreases as the
magnetic obliquity increases, which is reflected in a lower line flux. We apply
interferometric models to the synthetic visibilities in order to derive the
size of the line-emitting region. We find the derived interferometric size to
be more compact than the actual size of the magnetosphere, ranging from 50 to
90\% of the truncation radius. Additionally, we show that the rotation of the
non-axisymmetric magnetosphere is recovered from the rotational modulation of
the Br{\gamma}-to-continuum photo-centre shifts, as measured by the
differential phase of interferometric visibilities. | B. Tessore, A. Soulain, G. Pantolmos, J. Bouvier, C. Pinte, K. Perraut | 2023-01-27T10:14:56Z | http://arxiv.org/abs/2301.11628v1 | # Spectroscopic and interferometric signatures of magnetospheric accretion in young stars
###### Abstract
Context:
Aims:We aim to assess the complementarity between spectroscopic and interferometric observations in the characterisation of the inner star-disc interaction region of young stars.
Methods:We use the code MCFOST to solve the non-LTE problem of line formation in non-axisymmetric accreting magnetospheres. We compute the Br\(\gamma\) line profile originating from accretion columns for models with different magnetic obliquities. We also derive monochromatic synthetic images of the Br\(\gamma\) line emitting region across the line profile. This spectral line is a prime diagnostics of magnetospheric accretion in young stars and is accessible with the long baseline near-infrared interferometer GRAVITY installed at the ESO Very Large Telescope Interferometer.
Results:We derive Br\(\gamma\) line profiles as a function of rotational phase and compute interferometric observables, visibilities and phases, from synthetic images. The line profile shape is modulated along the rotational cycle, exhibiting inverse P Cygni profiles at the time the accretion shock faces the observer. The size of the line's emission region decreases as the magnetic obliquity increases, which is reflected in a lower line flux. We apply interferometric models to the synthetic visibilities in order to derive the size of the line-emitting region. We find the derived interferometric size to be more compact than the actual size of the magnetosphere, ranging from 50 to 90% of the truncation radius. Additionally, we show that the rotation of the non-axisymmetric magnetosphere is recovered from the rotational modulation of the Br\(\gamma\)-to-continuum photo-centre shifts, as measured by the differential phase of interferometric visibilities.
Conclusions:Based on the radiative transfer modelling of non-axisymmetric accreting magnetospheres, we show that simultaneous spectroscopic and interferometric measurements provide a unique diagnostics to determine the origin of the Br\(\gamma\) line emitted by young stellar objects and are ideal tools to probe the structure and dynamics of the star-disc interaction region.
## 1 Introduction
The early evolution of low mass stars (\(M_{*}<2~{}M_{\odot}\)) during the classical T Tauri (CTT) phase depends on the interaction between the star and its accretion disc, on a distance of a few stellar radii. At the truncation radius, matter from the disc surface is channelled onto the stellar surface following the magnetic field lines and forming an accretion funnel or column (Ghosh et al., 1977; Zanni and Ferreira, 2009; Romanova and Owocki, 2016; Pontolmos et al., 2020). The star-disc interaction is responsible for accretion and ejection phenomena that have a strong impact on spectral lines formed in the close vicinity of the star's surface.
Ghosh et al. (1977) developed an analytical model of magnetospheric accretion around a rotating neutron star with a dipolar magnetic field. Hartmann et al. (1994) applied this magnetospheric accretion model to the formation of emission lines in the spectrum of T Tauri stars. This fundamental paper sets the general theoretical framework for the density and temperature distributions in aligned axisymmetric magnetospheres. The coupling between this representation of magnetospheric accretion in T Tauri systems with radiative transfer calculations has provided a crucial tool to interpret spectroscopic, photometric, and interferometric observations. The sensitivity of hydrogen lines to the parameters of the magnetospheric models was studied in detail by Muzerolle et al. (2001), improving the earlier calculations by Hartmann et al. (1994).
Near-infrared observations of the Brackett \(\gamma\) (Br\(\gamma\)) line with the Very Large Telescope Interferometer (VLTI) GRAVITY instrument (Gravity Collaboration et al., 2017) also probe the inner part of the star-disc interaction region (Gravity Collaboration et al., 2020; Bouvier et al., 2020). However, it is still difficult to associate the characteristic sizes derived from interferometry with the actual size of the magnetospheric accretion region, a key parameter in our understanding of the star-disc interaction.
In this paper, we aim at studying the formation of the Br\(\gamma\) line and compute its spectroscopic and interferometric signatures for non-axisymmetric models of the inner star-disc interaction region, akin to state-of-art MHD simulations (Romanova and Owocki, 2016). In particular, we want to clarify the meaning of the sizes inferred through near-infrared interferometric observations and how they compare with the overall size of the magnetospheric accretion region.
In sections SS2 and SS3, we describe the model used to compute the line formation in accreting magnetospheres. We discuss spectroscopic and interferometric signatures in sections SS4 and SS5, respectively.
## 2 Radiative transfer framework
We use the code MCFOST 1(Pinte et al., 2006, 2009; Tessore et al., 2021) to compute emergent line fluxes from multidimen
sional models of magnetospheres for a 20-level hydrogen atom. The atomic model, with 19 bound levels and the ground state of HII, consists of 171 bound-bound transitions (atomic lines) and 19 bound-free transitions (continua). We focus here on the Br\(\gamma\) line at 2.1661 \(\upmu\)m although, the Balmer lines H\(\alpha\) and H\(\beta\) and the Paschen \(\beta\) line (Pa\(\beta\)) are modelled as well. These specific hydrogen lines are commonly used to characterise accretion and ejection phenomena in young systems (Folha & Emerson, 2001; Alencar et al., 2012; Bouvier et al., 2020; Pouilly et al., 2020; Sousa et al., 2021). The method to solve for the non-LTE populations of hydrogen and the microphysics are the same as in Tessore et al. (2021). The updated version of the code we use now simultaneously solves the charge equation and the statistical equilibrium equations, which has been proven to increase the convergence in chromospheric conditions (Leenaarts et al., 2007). We tested our code for different magnetospheric models taken as benchmarks in Muzerolle et al. (2001) and Kurosawa et al. (2006). The results of this comparison are presented and discussed in Appendix A.
## 3 Magnetospheric accretion model
Matter from the circumstellar disc is channelled onto the stellar surface along the dipolar magnetic field lines. The stellar magnetic field truncates the disc at a distance \(R_{t}\) from the star, the truncation radius. In practice, the interaction between the stellar magnetic field and the disc takes place over a small region between \(R_{t}\) and \(R_{t}\) + \(\delta r\). Both \(R_{t}\) and \(\delta r\) are used to define the size of the disc region magnetically connected to the star. As the gas approaches the stellar surface, it decelerates in a shock and is heated at coronal temperatures. Theoretical models of accretion shocks by Calvet & Gullbring (1998) show that the optically thin emission of the pre/post-shock dominates below the Balmer jump and that the optically thick emission of the heated photosphere contributes to the total continuum emission at larger wavelengths. In the following, we only consider the contribution of the heated photosphere to the shock radiation. The shock2 temperature is computed from the energy of the gas infalling onto the stellar surface following the prescription of Romanova et al. (2004) unless specified. This approach assumes energy conservation and that the shock radiates as a black body, meaning that its temperature is determined by the specific kinetic energy and enthalpy of the gas deposited at the stellar surface. The shock temperature hence derived is of the order of 4500 K - 6000 K.
Footnote 2: We assume that the shock region is unresolved and is part of the stellar surface.
### The stellar surface
The stellar surface is considered as the inner boundary of the model and emits as a blackbody whose temperature is determined by the stellar parameters. Throughout the paper, the stellar parameters are \(T_{*}=4,000\) K, \(M_{*}=0.5\)\(M_{\odot}\), and \(R_{*}=2\)\(R_{\odot}\). We set the distance to the star at 140 pc, which is typical of the nearest star forming regions such as Upper Scorpius (\(\approx 146\)\(pc\) Galli et al., 2018) or Taurus (\(\approx 130\)\(pc\) Galli et al., 2018).
### Geometry of the accretion turnels
We consider 3D non-axisymmetric models of the magnetospheric accretion region. These models are parametrised by the same set of parameters as the axisymmetric magnetospheric model of Hartmann et al. (1994) (see also Muzerolle et al., 1998, 2001; Kurosawa et al., 2006; Lima et al., 2010; Kurosawa et al., 2011; Dmitriev et al., 2019).The density and the velocity fields of the accretion columns are fully described with a set of independent parameters: the mass accretion rate \(\dot{M}\), the rotation period \(P_{rot}\), \(R_{t}\), and \(\delta_{r}\).
For our study, the value of \(\dot{M}\), \(R_{t}\) and \(\delta_{r}\), and of the temperature of the magnetosphere are fixed. The impact of these parameters on the line formation has been discussed thoroughly in Muzerolle et al. (1998, 2001, see also App. A). The line's response to the mass accretion rate and to the temperature is an essential proxy for understanding the physics of the star-disc interaction region. We use a mass accretion rate \(\dot{M}=10^{-8}\)\(M_{\odot}\) yr\({}^{-1}\), a truncation radius \(R_{t}=4\)\(R_{*}\), and \(\delta_{r}=1\)\(R_{*}\). The value of the rotation period is deduced from the maximum truncation radius (\(R_{t}+\delta_{r}\)), imposing that stable accretion occurs at 90% of the corotation radius, consistent with the work of Blinova et al. (2016). The rotation period is therefore fixed at \(P_{rot}=6\) days, corresponding to slowly rotating T Tauri stars (see Herbst et al., 2007; Bouvier et al., 2014, for a review). The rotational velocity for that period is thus of the order of 80 km s\({}^{-1}\) at the outer edge of the magnetosphere.
When the magnetic field axis (\(\mu\)) is misaligned with respect to the rotational axis (\(\Omega\)) of the star, the geometry of the accretion flow changes dramatically. The equations for the magnetic field components of a non-axisymmetric dipole, i.e. with a non-zero obliquity, are provided in Mahdavi & Kenyon (1998). The parameter \(\beta_{ma}\) describes the angle between the dipole moment and the star's rotational axis, the magnetic obliquity.
We approximate the density, \(\rho\), along the non-axisymmetric magnetic field lines with,
\[\rho=\alpha\frac{B}{v}=\alpha B\frac{\rho_{ani}}{B_{ani}}, \tag{1}\]
where \(\alpha\) is a constant along a given field line and \(B\) the analytic misaligned dipolar field. \(v\), \(\rho_{asi}\), and \(B_{ani}\) denote the velocity field, density, and dipolar magnetic field, respectively, and they are taken from the axisymmetric model of Hartmann et al. (1994). In other words, the 3D density structure is computed from Eq. (1) under the assumption that the infalling gas has a velocity field on the poloidal plane. The value of \(\alpha\) is computed
Figure 1: Density distribution of a non-axisymmetric model with an obliquity of \(10^{\circ}\). The rotation axis of the star \(\Omega\) is shown with a white arrow and the dipole axis, \(\mu\), with a red arrow. The density is computed from Eqs. (1) and (2). The colour map scales with the density.
from the numerical integration 3 of the mass flux over the shock area,
Footnote 3: For this 3D magnetospheric accretion model, an explicit formula for the shock area does not exist (see also Mahdavi & Kenyon, 1998)
\[\dot{M}=\int\rho\mathbf{v}\cdot\mathbf{dS}, \tag{2}\]
where \(dS\) is the surface element and \(\mathbf{v}\) the velocity field. In our model, the value of \(\dot{M}\) is an input parameter and is held constant. Therefore, \(\alpha\) is obtained to ensure consistency between Eqs. (1) and (2). We compute five models with an obliquity \(\beta_{ma}\) ranging from **five** to forty degrees in step of ten degrees, representative of what has been measured for T Tauri stars with spectroscopy (McGinnis et al., 2020) and spectropolarimetry (Donati et al., 2008, 2010, 2013; Johnstone et al., 2014; Pouilly et al., 2020). For these non-axisymmetric models, the shortest field lines - defining the main accretion columns4 - carry most of the gas density. We remove the longest field lines - the secondary columns - in our modelling as in Esau et al. (2014). This yields models with one crescent-shaped accretion spot per stellar hemisphere reminiscent of numerical simulations of misaligned dipoles (Romanova et al., 2003).
Footnote 4: Geometrically, the shortest field lines obey the following criterion \(\cos\phi^{\prime}\times z>0\) where \(\phi^{\prime}\) is the azimuth in the frame aligned with the dipole axis and \(z\) the coordinate parallel to the rotation axis.
Figure 1 shows the density of a non-axisymmetric magnetosphere with an obliquity of \(10^{\circ}\).
### Temperature of the tunnels
The temperature of the magnetospheric accretion region is not well constrained. The determination of the temperature by Martin (1996) from first principles was not able to reproduce the observations. A self-consistent calculation of the temperature of the magnetosphere is beyond the scope of this paper. Instead, we adopt here the temperature profile of Hartmann et al. (1994), which has been extensively used in the past to model line fluxes from accreting T Tauri stars. The temperature is computed using a volumetric heating rate (\(\propto r^{-3}\)) and balancing the energy input with the radiative cooling rates of Hartmann et al. (1982). The exact balance between the heating and cooling mechanisms is unknown. Instead, the temperature profile is normalised to a free parameter, \(T_{max}\), that sets the value of the maximum temperature in the funnel flow. In the following, we have set the temperature maximum to \(T_{max}=8,000\) K.
## 4 Spectroscopic signatures
Thanks to the Doppler shift of the funnel flow, it is possible to reconstruct the origin of the emission line by looking at the brightness maps in various velocity channels. Figure 2 shows the contribution of the different parts of the magnetosphere to the total integrated Br\(\gamma\) line flux at a given velocity for an inclination of \(30^{\circ}\), matching the model illustrated in Fig. 1. At those density and temperature, the continuum emission comes from the stellar surface (\(\mathrm{I}_{\mathrm{surf}}/\mathrm{I}_{\mathrm{mag}}>100\)). Locally, the continuum emission from the shock is three times larger than the emission from the star. Overall, given the small covering area of the accretion shock (around 1%), the total continuum emission at the frequency of the Br\(\gamma\) line is dominated by the star's radiation, \(\mathrm{F}_{\mathrm{shock}}/\mathrm{F}_{*}=3\%\). The low-velocity components (\(<50\) km s\({}^{-1}\)) of the line form in the regions where the projected velocity along the line-of-sight is close to zero and near the disc. The geometry of the non-axisymmetric model, defined in SS3, is responsible for a rotational modulation of the integrated line flux. Many classical T Tauri stars shows modulated photometric variability (e.g Cody et al., 2014) and, more directly related to the magnetospheric region, many also show rotational modulation of the longitudinal component of the stellar magnetic field (e.g Donati et al., 2020). Indeed, the periodic variability of optical and emission line profiles has been reported in various systems (for instance Sousa et al., 2016; Alencar et al., 2018; Bouvier et al., 2020), which indicates that the emission region is stable on a timescale of several rotation periods.
Figure 3 shows the variability of the Br\(\gamma\) line at different phases of rotation at an inclination of \(60^{\circ}\) for different obliquities. The origin of the rotational phase is defined such that at phase 0.5, the accretion shock is facing the observer. The red-shifted absorption seen for the Br\(\gamma\) line at phases 0.250, 0.47 and 0.69, results from a lower source function of the gas above the shock (see App. A). From observations, red-shifted absorption in the Pa\(\beta\) and Br\(\gamma\) lines are seen in less than 34% and 20% of the line profiles, respectively (Folha & Emerson, 2001). The inverse P Cygni profile disappears when the shock, or a significant fraction of it, is hidden on the opposite side of the star. The line, with either a double-peaked profile or a moderate red-shifted absorption, is reminiscent of Reipurth et al. (1996) cases II and IV. While the profiles with redshifted absorption agree with observations, those that display an M-shape are usually not observed in young stellar objects. This suggests that magnetospheric accretion is not the only contribution to the profile, which can also be impacted by various types of outflows (e.g., stellar, interface, and disk winds Lima et al., 2010; Kurosawa et al., 2011). The optically thick accretion disc is not included in our models. The effect of the disc emission and absorption on the spectroscopic and interferometric observables will be discussed in a subsequent paper.
We also observe a decrease of the line flux as the obliquity increases. Figure 4 shows the radius encompassing 90% of the total line flux, \(R_{90}\), at an inclination of \(60^{\circ}\) for non-axisymmetric models with different obliquities for the H\(\alpha\), H\(\beta\), Pa\(\beta\) and Br\(\gamma\) lines.
As \(\beta_{ma}\) increases, the volume of the magnetospheric accretion region decreases because the arc length of the accreting field lines shortens. Therefore, the total flux, for all lines, decreases accordingly, independently of the viewing angle of the system. However, we also note a dependence of \(R_{90}\) with the line. The
Figure 4: Radius encompassing 90% of the total flux (\(R_{90}\)) for each line as a function of the obliquity, \(\beta_{ma}\). Hydrogen lines are labelled with different colours.
value of \(R_{90}\) represents the size of the emitting region in a given line, which is a function of density and temperature, and of the viewing angle.
## 5 Interferometric signatures
In this section, we compute the size of the Br\(\gamma\) line-emitting region inferred from interferometric observations, and we compare it to model flux radii (see SS4).
### Interferometric observables
The interferometric observables are derived from the radiative transfer (RT) model using the ASPRO25 software developed by the Jean-Marie Mariotti Center (JMMC). These observables represent what would be observed with GRAVITY in the near-infrared. We consider the configuration obtained with the Very Large Telescope (i.e. 4x8m telescopes), encompassing a range of baselines from 35 to 135 m. With a typical night of 8 hours, we compute one observing point per hour for the six baselines of the VLTI to increase the Fourier sampling, namely u-v coverage, which is crucial for the fitting part of our approach. As described in Bourges & Duvert (2016), we derive the observables from the RT images (see Fig. 2) by computing the complex visibility in each spectral channel around the Br\(\gamma\) line and interpolating them to match GRAVITY's spectral resolution (R = 4000). Specifically, we simulate a total of 37 spectral channels (from 2.161 to 2.171 um with a step of 2.8 10\({}^{-4}\) um) for the six projected baselines repeated eight times. Within this range, 31 spectral channels are used to measure the K-band continuum and six channels sample the Br\(\gamma\) line emitting region.
Footnote 5: Available at [https://www.jmmc.fr](https://www.jmmc.fr)
Figure 10 illustrates the resulting u-v plane projected on-sky for a typical object observed at the VLTI with a declination of -34\({}^{\circ}\) (e.g. TW Hydrae).
Figure 10 shows the interferometric observables along the rotational cycle for a model with an inclination of 60\({}^{\circ}\) and an obliquity of 10\({}^{\circ}\). The two main observables are: the modulus of the complex visibility - the visibility amplitude - and the differential phase - its argument - dispersed in wavelength. The phase is normalised to zero in the continuum. The visibility amplitude can then be used to estimate the object's size, while the phase measures the photo-centre shifts between the line-emitting re
Figure 2: Origin of the emission seen across the Brackett \(\gamma\) line. The contribution of individual images to the total line flux is indicated on the central image showing the line profile. The brightness maps are in units of the maximum emission. The emission of the stellar surface is saturated. Orange to red colours indicates the regions of maximum emission. The system is seen at an inclination of 30\({}^{\circ}\) and an rotational phase of \(\sim\)0.25, similar to Fig. 1.
gion and the continuum. The phase can only be used as a relative measurement (e.g. between the line and the continuum), the absolute phase being lost due to a combination of atmospheric and instrumental effects. We repeat the simulated observations and compute nine datasets over a rotational cycle sampled every 40 degrees ( 0.11 in phase). In this study, we are interested in the line's emitting region only. Therefore, we use pure line quantities, instead of total visibilities and phases, to remove the contribution from the stellar surface (see appendix B for the derivation of the pure line interferometric quantities).
### Physical characteristic and sizes
Once the interferometric observables are computed, we apply standard modelling methods to interpret the data (Berger 2003). Firstly, we average the visibility amplitude of the six spectral channels available within the Br\(\gamma\) line6. We use the average visibilities to recover the global size of the Br\(\gamma\) emitting region, where the different velocities probe specific parts of the moving material within the magnetosphere. Then, we fit the averaged visibility amplitude using elongated Gaussian or uniform disc models. Such models are typically used in interferometry to estimate the system's characteristic size and on-sky orientation. The source's brightness distribution is defined by its half-flux radius in the case of a Gaussian disc or its radius for the uniform disc model, and an elongation factor and a position angle. In the following, we adopt the definition of "radius" for both models, which corresponds to the half-flux semi-major axis for the Gaussian model and the semi-major axis for the uniform disc model. The recovered sizes and orientations are represented in the top panel of Fig. 5. While neither model can fully account for the size of the magnetosphere, the uniform disc probes a larger area of the magnetosphere, while the Gaussian disc seems limited to the most luminous parts. We note that the fit of the visibility is equally good for both models and, thus, does not allow us to discriminate between the models from the synthetic visibilities only (middle-top, Fig. 5).
Footnote 6: Five and four spectral channels only were used at phase 0.25 and 0.47, respectively, due to a limited line-to-continuum ratio (see Appendix B for details).
In order to quantify the physical meaning of the interferometric measurements, we compare the interferometric sizes with reference flux radii of the RT models. We set these radii to represent 50, 80, 90, and 99% of the total flux emitted by the magnetospheric accretion region. Figure 6 compares the sizes derived with interferometry to the characteristic radii of the RT models.
We find that the size derived from the uniform disc model is modulated around an average value of 3.5 R\({}_{*}\) corresponding to 90% of the Br\(\gamma\) emitting region. The size obtained by interferometry appears to be modulated by the position of the funnel flows close to the star, with a minimum located around phase 0.8. The Gaussian model exhibits the same modulation but with a lower amplitude (\(2.1\pm 0.4\) R\({}_{*}\)) and appears sensitive to the magnetosphere's innermost region, close to the 50% flux radius. The size derived from the uniform disc model emerges as being the most appropriate to recover the reference model size, accounting for at least 80% of the total flux emitted by the magnetosphere.
Figure 3: Brackett \(\gamma\) line variability along the rotational cycle. Each column corresponds to a specific rotational phase. At phase 0, the shock area is unseen on the stellar surface, while phase of 0.5, the shock is fully seen on the visible hemisphere. The colours correspond to different values of the obliquity. All fluxes are computed with an inclination of 60\({}^{\circ}\).
The derived orientations obtained from interferometry seem to be particularly representative of the position of the accretion funnel flow and the on-sky orientation of the Br\(\gamma\) emitting region (Fig. 5). The measured position angle agrees with the magnetosphere's orientation, particularly when the shock faces the observer (phase = 0.5). Nevertheless, it appears somewhat hazardous to decipher the shape and orientation of the emitting region across the rotational cycle from this observable only, as different magnetospheric configurations can be described by a very similar interferometric model (e.g. phases 0.03 and 0.25). A stronger constraint on the orientation of the funnel flows arises from differential phase measurements.
### Differential phases and photo-centre shifts
From the differential phases, we can derive the photo-centre shift between the continuum and the Br\(\gamma\) line emitting region. In the regime of marginally resolved sources, there is a direct relationship between the projected photo-centre displacement vector (**P**) and the phase along each baseline (Lachaume, 2003):
\[\phi_{i}=-2\pi\frac{B_{i}}{\lambda}\textbf{P}, \tag{3}\]
where \(\phi_{i}\) is the differential phase measured for the \(i\)th baseline, \(B_{i}\) is the length of the corresponding baseline, and \(\lambda\) is the effective wavelength of the spectral channel. A four telescope beam-combiner like GRAVITY gives access to six projected baselines that enable us to accurately retrieve the value and orientation of the photo-centre shifts in each spectral channel (Le Bouquin et al., 2009; Waisberg et al., 2017). Such a measurement results in a position-velocity plot of the displacement of the photo-centre across the Br\(\gamma\) line relative to the continuum. This is illustrated in the bottom panels of Figure 5.
The photo-centre shifts trace the accretion funnel flow's direction and follow the stellar rotation. For instance, when the northern accretion shock (N-shock) is located behind the star (phase = 0), the accreting material falls onto the stellar surface in the direction of the observer. Accordingly, the photo-centre measured in the blue-shifted part of the line profile (\(\simeq\) -75 km s\({}^{-1}\)) lies on the blue-shifted part of the velocity map, corresponding to the approaching funnel flow. Equivalently, the photo-centre measured in positive velocity channels of the line profile (\(\simeq\) +75 km s\({}^{-1}\)) is shifted towards the receding funnel flow. In contrast, when the shock faces the observer (phase = 0.5), the velocity map goes from blue to red in the east-west direction, and the photo-centre shifts recover this trend as demonstrated at phase 0.47.
We can thus identify three privileged directions and shapes of the photo-centre shifts: - linear north-south at phase \(\simeq\) 0 (N-shock behind), - S-shape at phase 0.25 and 0.69 and - linear east-west at phase \(\simeq\) 0.5 (N-shock in front). The differential phase is, therefore, a key ingredient to recover the geometry and orientation of the line-emitting region, tracing the moving material along a rotational cycle.
### Signal-to-noise considerations
As a proof-of-concept, the results presented above assume infinite signal-to-noise ratio. The goal is to predict the spectroscopic and interferometric signatures of the magnetospheric accretion process. Thus, the models predict typical visibility amplitudes ranging from 1 down to 0.97 (see Fig. 5). Such a modest interferometric signal requires a measurement accuracy of about 1% to be securely detected. Similarly, the models predict a deviation of the differential phases by 1 to 2 degrees (Fig. 5), which requires an accuracy of order of a fraction of a degree to yield a robust detection. Recent interferometric studies performed with VLTI/GRAVITY in the K-band demonstrate that these levels of accuracy can be routinely obtained indeed with reasonable exposure times on young stellar objects (e.g. Bouvier et al., 2020; Gravity Collaboration et al., 2020, 2022), or active galactic nuclei (Gravity Collaboration et al., 2018).
## 6 Summary and conclusion
We presented non-LTE radiative transfer modelling of the Brackett \(\gamma\) line emission for non-axisymmetric models of accreting magnetospheres. We used the equations of a misaligned dipolar magnetic field to derive the geometry of the magnetospheric accretion region for different obliquities of the magnetic dipole. We used MCFOST to compute radiative signatures of the Br\(\gamma\) line along a full stellar rotational cycle. Further, we derived near-infrared interferometric observables for the line, comparable to what the GRAVITY instrument has already measured for T Tauri stars.
The main conclusions of this study are the following:
1. The total flux in the line, and the line-to-continuum ratio, depends on the obliquity of the dipole. As the obliquity increases, the size of the emitting region decreases, leading to a lower integrated flux. Also, projection effects make the emission region of lines forming close to the stellar surface appearing narrower.
2. The Br\(\gamma\) line total flux varies with the rotational phase due to the non-axisymmetry of the models induced by the magnetic obliquity. The line profiles exhibit a red-shifted absorption, that is an inverse P Cygni profile, when a significant fraction of the accretion shock is aligned with the observer's line of sight. When the shock is hidden on the opposite side of the star, the line profiles exhibit a double-peaked shape, reminiscent of the lines formed in rotating envelope. The latter is due to the relatively large rotational velocity of the magnetospheric model (\(\sim\)80 km s\({}^{-1}\)).
Figure 6: Interferometric radii as a function of the rotational phase. Uniform and Gaussian disc models are shown with green and yellow markers, respectively. Blue lines correspond to the radii encompassing 50, 80, 90 and 99% of the total RT model’s flux. The blue shaded areas represent the standard deviation of these radii across the rotational phase. The red shaded area indicates the inner (\(R_{t}\)) and outer radius (\(R_{t}\) + \(\delta r\)) of the RT model.
3. Near-infrared interferometric observations in the Br\(\gamma\) line directly probe the size of the magnetospheric accretion region. The Gaussian disc model is sensitive to the brightest parts of the magnetosphere, up to 50% of the truncation radius, while a uniform disc model grasps 90% of the magnetosphere. It is of prime importance to consider this aspect when estimating magnetospheric radius from interferometric measurements. In both cases, the measured radius varies with the rotational phase (due to the non-axisymmetry of the dipole). A robust interferometric estimate of the magnetospheric radius therefore requires monitoring the system over a full rotational cycle.
4. The combined knowledge of the differential phase and the associated photo-centre shifts gives hints on the object orientation and geometry. More specifically, the relative direction of the photo-centre shifts indicates the changing orientation of the accreting material along the rotational cycle in the non-axisymmetric case.
Near-infrared interferometry of the Brackett \(\gamma\) line is used to characterise the inner star-disc interaction region, and offers a good estimate to the size of the line's forming region, at sub-au precision. Comparing this size with reference model radii, such as the truncation radius, allows us to distinguish between multiple origins of the Br\(\gamma\) line, within or beyond these radii (e.g. magnetosphere, stellar and disc winds, jets). Further, simultaneous spectroscopic and interferometric observations along a rotational cycle, have the potential to unveil the geometry and orientation of the line's emitting region. The variability of the line associated with the photo-centre shifts, provides a unique and unambiguous proxy of the physical processes occurring in the magnetosphere of young accreting systems, within a few hundredths of an astronomical unit around the central star.
## Appendix A Benchmark
We present here the comparison between line profiles obtained with MCFOST and previous studies. The magnetospheric model corresponds to the axisymmetric and compact configuration of Muzerolle et al. (2001) with a fixed shock temperature at 8 000 K, a rotation period of 5 days and the following canonical T Tauri parameters: \(M_{*}=0.8\,M_{\odot}\), \(R_{*}=2\,R_{\odot}\) and \(T_{*}=4\,000\,K\). The inclination of the system is 60 degrees. The continuum emission of the stellar surface (shock and photosphere) is constant for all models. Figures 1, 2, 3, and 4 show the H\(\alpha\), H\(\beta\), Pa\(\beta\) and Br\(\gamma\) lines profiles for different values of \(T_{max}\) and \(M\). An inverse P Cygni profile, with a red-shifted absorption, is seen for all lines although it is dependent on the value of the mass accretion rate and of the maximum temperature. For a given mass accretion rate, an increase of the maximum temperature results in a higher line emission peak and a shallower red-shifted absorption. As the temperature increases, the line source function increases, which is the cause of a higher emission above the continuum emission. The appearance of the red-shifted absorption component is caused by absorption from the gas above the stellar surface. It is controlled by the ratio between the source function of the line in the accretion funnel and that of the underlying continuum from the stellar surface, especially at low mass accretion rates and temperatures. Eventually, for the highest mass accretion rate and temperature, the lines become so optically thick that the red-shifted absorption is washed out by the large wings of the line. The red-shifted absorption is more pronounced for lines forming closer to the accretion shock like the H\(\beta\) line. At a temperature larger than 8,000 K and a mass accretion rate above \(10^{-8}\,M_{\odot}\,\mathrm{yr}^{-1}\), the continuum emission from the magnetosphere becomes important and the line-to-continuum ratio decreases. This effect is seen for instance in the H\(\alpha\) line (see Fig. 1). When the mass accretion rate increases for a given temperature, the density of the magnetosphere increases. As a consequence, the line source function increases. At high temperature and high density, the background continuum emission of the magnetosphere dominates for certain wavelengths, and absorption occurs. The latter effect is seen in the Pa\(\beta\) (Fig. 3) and Br\(\gamma\) (Fig. 4) lines where the strong continuum contribution at the disc surface leads to absorption at low velocities, where the lines source function is small. These results are consistent with the previous studies and demonstrate the robustness of our code for modelling the close environment of T Tauri stars (Tessore et al., 2021).
## Appendix B Derivation of the interferometric pure-line phase and visibility
We focus on the magnetospheric emission probed by the Br\(\gamma\) line and, therefore, aim to remove any additional contributions (stellar photosphere, the accretion shocks, dusty disc, etc...). Following Kraus et al. (2008); Bouvier et al. (2020b), we compute the continuum-subtracted observables, the so-called pure line visibility and phase, by using the emission line profiles computed in Sect. 3. This is of prime importance in the case of Br\(\gamma\) line as the magnetospheric emission is quite faint in the infrared (\(\approx 1.3\) excess flux compared to the continuum, Fig. 3). The derivation of the pure line quantities is only possible if the source is marginally resolved (i.e, size \(<\lambda/2B\)).
In this case, the pure line visibility \(V_{line}\) and phase \(\Phi_{line}\) are given by:
\[V_{Line}=\frac{F_{L/C}V_{Tot}-V_{Cont}}{F_{L/C}-1}, \tag{1}\]
\[\Phi_{Line}=arcsin\left(\frac{F_{L/C}}{F_{L/C}-1}\frac{V_{Tot}}{V_{Line}} \sin\Phi_{Tot}\right). \tag{2}\]
Where \(F_{L/C}\) denotes the line-to-continuum flux ratio as taken from the normalised spectrum (Fig. 3), \(V_{Cont}\) is the visibility computed in the continuum (star+shock only), and \(V_{Tot}\), \(\Phi_{Tot}\) are the total complex quantities measured by GRAVITY. In Eq. (1), we note that in the case when \(F_{L/C}\) is close to one, the derived \(V_{Line}\) cannot exist (converges to infinity). Such non-ideal profiles appear if the red absorption becomes too important. Therefore, we assume to discard the affected spectral channels for phases 0.25 and 0.47, where \(F_{L/C}\) is too close to one: - one point (\(v=53\) km s\({}^{-1}\)) at phases 0.25 and - two points (\(v=15\), 53 km s\({}^{-1}\)) at phase 0.47.
Figure 10: Dependence of H\(\alpha\) line flux with mass accretion rates \(\dot{M}\) and maximum temperatures \(T_{max}\). The inclination of the system is \(60^{\circ}\).
Figure 11: Same as Fig. 10 for H\(\beta\)
Figure 4: Same as Fig. 1 for Br\(\gamma\).
Figure 3: Same as Fig. 1 for Pa\(\beta\).
Figure 11: Fourier sampling (u-v coverage) of the simulated data. The different colours correspond to the six different baselines of the VLTI. The eight points per baseline represent a typical observational sequence with one data point per hour. |
2307.04815 | Parameter and coupling estimation in small groups of Izhikevich neurons | Nowadays, experimental techniques allow scientists to have access to large
amounts of data. In order to obtain reliable information from the complex
systems which produce these data, appropriate analysis tools are needed}. The
Kalman filter is a {frequently used} technique to infer, assuming a model of
the system, the parameters of the model from uncertain observations. A
well-known implementation of the Kalman filter, the Unscented Kalman filter
(UKF), was recently shown to be able to infer the connectivity of a set of
coupled chaotic oscillators. {I}n this work, we test whether the UKF can also
reconstruct the connectivity of {small groups of} coupled neurons when their
links are either electrical or chemical {synapses}. {In particular, w}e
consider Izhikevich neurons, and aim to infer which neurons influence each
other, considering {simulated spike trains as the experimental observations
used by the UKF}. First, we {verify} that the UKF can recover the parameters of
a single neuron, even when the parameters vary in time. Second, we analyze
small neural ensembles and}} demonstrate that the UKF allows inferring the
connectivity between the neurons, even for heterogeneous, directed, and
{temporally evolving} networks. {Our results show that time-dependent parameter
and coupling estimation is possible in this nonlinearly coupled system. | R. P. Aristides, A. J. Pons, H. A. Cerdeira, C. Masoller, G. Tirabass | 2023-06-16T10:27:50Z | http://arxiv.org/abs/2307.04815v1 | # Parameter and coupling estimation in small networks of Izhikevich's neurons
###### Abstract
Nowadays, experimental techniques allow scientists to have access to large amounts of data. In order to obtain reliable information from the complex systems which produce these data, appropriate analysis tools are needed. The Kalman filter is a frequently used technique to infer, assuming a model of the system, the parameters of the model from uncertain observations. A well-known implementation of the Kalman filter, the Unscented Kalman filter (UKF), was recently shown to be able to infer the connectivity of a set of coupled chaotic oscillators. In this work, we test whether the UKF can also reconstruct the connectivity of small groups of coupled neurons when their links are either electrical or chemical synapses. In particular, we consider Izhikevich neurons, and aim to infer which neurons influence each other, considering simulated spike trains as the experimental observations used by the UKF. First, we verify that the UKF can recover the parameters of a single neuron, even when the parameters vary in time. Second, we analyze small neural ensembles and demonstrate that the UKF allows inferring the connectivity between the neurons, even for heterogeneous, directed, and temporally evolving networks. Our results show that time-dependent parameter and coupling estimation is possible in this nonlinearly coupled system.
The Kalman filter is a popular technique that can be employed to infer the parameters of a model given uncertain observations, and it has found applications in diverse fields. In the field of neuroscience, it has been used, for example, to estimate the parameters of neural models, and for real-time decoding of brain signals for brain-machine interfaces. However, the neural models that have been considered contain a large number of parameters, which makes a systematic exploration of the parameter space unfeasible. Here we study a neural model, the Izhikevich model, which realistically reproduces many neural states, even though it is computationally low-cost. Having a small number of parameters and, at the same time, showing very rich dynamical regimes, the Izhikevich model is an ideal candidate for a systematic exploration of the parameter space and the study of neurons coupled with different topologies. We analyze the suitability of the Kalman filter to estimate the model's parameters and we discuss its main limitations.
+
Footnote †: preprint: AIP/123-QED
## I Introduction
One of the main challenges that neuroscience has faced for a long time is the determination of brain topology, which is morphologically diverse and complex. Besides, the elements that form the brain network, the neurons, are also diverse and complex. Neurons show reproducible nonlinear responses to stochastic stimuli [1]. Hence, they can be modeled as stochastic nonlinear dynamical systems [2].
Although much progress has been made on the relationship between topology and dynamics in the brain, scientists are still far from having a good understanding [3; 4]. Mathematical models of the whole brain or just of a tiny fraction of billions of neurons, as well as information-based data analysis techniques are powerful tools for shedding light on the above relationship [5].
However, a realistic estimation of the models' states and parameters is a very difficult challenge, and different approaches based on control theory have been developed [6]. A well-known method is the Kalman filter [7; 8; 9].
The Kalman filter allows inferring optimal parameters of a model given uncertain observations, balancing the effects of measurement noise, disturbances, and model uncertainties, and has found applications in many fields of science and technology [10]. In neuroscience, the Kalman filter has been used, for example, for decoding brain signals for brain-machine interfaces [11; 12; 13]. It has also been used to estimate the parameters of neural models [14; 15; 16; 17; 18; 19]. However, the models that have been considered, such as the Morris-Lecar or the Hodgkin-Huxley, contain a large number of parameters that make a systematic exploration of the parameter space unfeasible. Here we study the Izhikevich model [20](IM) because it reproduces many important properties of biological neurons and, at the same time, has a small number of parameters and is computationally low-cost [21]. Therefore, the Izhikevich model is an ideal candidate for a systematic exploration of the parameter space allowing a study of small ensembles of coupled neurons.
We analyze under which conditions a nonlinear version of the Kalman filter, the Unscented Kalman Filter (UKF) [22; 23],
provides a good estimation of the IM parameters and we discuss its main limitations. We show that the UKF is able to recover the parameters of an isolated neuron and the external current that is exciting its activity. We also show that the UKF is able to do so even in the case of time-dependent input currents. Then, we study small networks with different topologies, with both electrical and chemical couplings, and show that UKF is able to recover the topology of the network using observations of the dynamic variables, assuming the coupling strength, electrical or chemical, and all the internal parameters are known.
## II Methods
### Model
The Izhikevich model (IM) was introduced by Eugene M. Izhikevich [20] as an alternative to more realistic but computationally expensive neuron models [24]. Despite its simplicity, it can be used to model a broad variety of neuron types [21] and dynamical regimes. Here will focus on single Izhikevich neurons in the chaotic regime -- that is neurons for which the spiking dynamics is irregular, aperiodic -- and small networks of chaotic neurons linked by electrical or chemical couplings.
The state of an Izhikevich neuron \(i\) is fully specified by two state variables. \(x_{i}\) represents the neuron membrane potential and \(y_{i}\) represents the membrane recovery variable accounting for the activation of the ionic currents.
Let \([x_{1},y_{1},\ldots,x_{i},y_{i},\ldots]^{T}\) be the state vector of the neurons, the equations governing the system are given by
\[\begin{split}\dot{x}_{i}&=0.04\,x_{i}^{2}+5x_{i}+1 40-y_{i}+I+E_{i}+C_{i}+\sigma_{Z}\xi_{i}^{x}\\ \dot{y}_{i}&=a\left(b\,x_{i}-y_{i}\right)+\sigma_{Z }\xi_{i}^{y}\end{split} \tag{1}\]
with the after-spike reset condition:
\[\begin{split}\text{if}\quad x_{i}>30,\quad\text{then}\begin{cases} x_{i}\to c,\\ y_{i}\to y_{i}+d.\end{cases}\end{split} \tag{2}\]
\(a\) is a small parameter representing the slow time-scale of \(y_{i}\), \(b\) is the coupling strength between the state variables, and the external currents are modeled by \(I\). All parameters here, included \(x\), \(y\) and time are dimensionless. The parameters \(a\), \(b\), \(c\) and \(d\) can be fitted to obtain a specific firing pattern of the neuron. The last term in Eq.(1) represents random fluctuations and we refer to it as dynamic or process noise. \(\xi_{i}^{x}\) and \(\xi_{i}^{y}\) represent Gaussian white noises with zero mean and unity variance. \(\sigma_{Z}\) is the noise strength and for simplicity, it is the same for \(x\) and \(y\). For a system of \(N\) neurons, the dynamical noise can be thought as a random \(2N\)-dimensional vector with zero mean and covariance matrix \(\bar{Q}_{Z}=\sigma_{Z}^{2}\mathbb{I}\), where \(\mathbb{I}\) is the identity matrix.
The electrical coupling between neurons is described by a system of ordinary first-order differential equations with different levels of detail that represent various degrees of physiological descriptions [25]. Here we consider the simplest coupling, namely linear diffusive coupling, \(E_{i}\) is given by
\[E_{i}=g_{e}\sum_{j=1}^{N}A_{ij}^{e}(x_{j}-x_{i}). \tag{3}\]
where \(g_{e}\) is the coupling conductance and \(A_{ij}^{e}\) are the coefficients of the adjacency matrix: \(A_{ij}^{e}=1\) whenever neuron \(i\) is connected to neuron \(j\), otherwise \(A_{ij}^{e}=0\).
The coupling \(C_{i}\) comprises inputs delivered through chemical synapses to neuron \(i\) from all other neurons in the network. It is given by [26]:
\[C_{i}=g_{c}(x_{i}-\mu_{s})\sum_{j=1}^{N}A_{ij}^{e}\zeta(x_{j}). \tag{4}\]
\(g_{c}\) is the synaptic coupling strength, \(\mu_{s}\) is the reversal potential, and the sigmoid function \(\zeta(x)\) is defined as
\[\zeta(x_{j})=[1+\exp(-\varepsilon(x_{j}-\theta))]^{-1}, \tag{5}\]
where \(\varepsilon\) controls the slope of the sigmoidal function and \(\theta\) is the synaptic firing threshold. This function represents the activation of the postsynaptic current when a presynaptic neuron sends an action potential, that is when \(x\) becomes larger than \(\theta\). Hence, a neuron \(i\) receives a chemical synapse from a neuron \(j\) only if \(x_{j}\) is larger than \(\theta\). The value of the prefactor \((x_{i}-\mu_{s})\) in Eq. 4 controls whether the synapses are inhibitory or excitatory. In particular, we chose \(\mu_{s}\) such as \((x_{i}-\mu_{s})<0\), that is inhibitory chemical synapses. The numerical values of all parameters are given in Table 12.
Footnote 2: The values of \(\mu_{s}\) are given in Table 12.
Throughout this study, we use symmetric \(A^{e}\) matrices and asymmetric \(A^{e}\) matrices, because the electrical coupling is symmetric, but the chemical coupling is directional. Here we only focus on a proof-of-concept demonstration of the UKF's ability to infer coupling topology; in the future, we plan to study the more challenging (and realistic) scenario of heterogeneous, excitatory, or inhibitory chemical synapses.
### The Unscented Kalman Filter
The Kalman filter makes a prediction for the future state of a system, resulting from the state evolution of a dynamical model, and then corrects it using the information coming from experimental data. Even though it was originally developed for linear systems, soon it was extended to include nonlinearities. Different nonlinear extensions were created. The UKF is one nonlinear version of the filter which has a good performance in terms of computational effort.
Following the notation in Forero et al. [28], we consider the extended state \(\mathbf{\hat{u}}\) as the vector given by the state variables \(x_{i}\) and \(y_{i}\) of the \(N\) neurons \((i=1,...,N)\) and all the parameters we want to retrieve. Our process model to be employed in the UKF will be \(\mathbf{\hat{u}}_{k+1}=\mathbf{\bar{a}}(\mathbf{\hat{u}}_{k})\), where \(k\) is the timestep index. \(\mathbf{\bar{a}}\) is given by the deterministic part of Eq. (1) for the state variables and is the identity operator for the parameters, as we assume they are constant. In the UKF algorithm, the estimation for \(\mathbf{\hat{u}}_{k+1}\) predicted using the stochastic dynamical model
is corrected by an experimental piece of data. However, this experimental data will necessarily have some uncertainty resulting from the measurement process, represented by a measurement function. Our measurement function is a selection of the state variables from the extended vector \(\tilde{\mathbf{u}}_{k}\), which are perturbed by the measurement noise with standard deviation \(\sigma_{\nu}\): \(x_{i}\to x_{i}+\sigma_{\nu}\,\chi_{i}^{x}\) and \(y_{i}\to y_{i}+\sigma_{\nu}\,\chi_{i}^{y}\), where \(\chi\) represents Gaussian white noise. Thus, the covariance matrix of the measurement noise will be \(\bar{Q}_{\nu}=\sigma_{\nu}^{2}\mathbb{I}\). The covariance of the estimated state is \(P=\sigma_{P}\mathbb{I}\), which is evolved by the UKF algorithm from the initial values given in Table 2.
### Implementation
To generate the synthetic data that we use as experimental observations, we numerically solve Eq. (1) with a fourth-order Runge-Kutta method, an integration step of \(dt=0.01\), and the parameters reported in Table 1, keeping the measurements with sampling rate equal to the integration step. With these parameters single (uncoupled) neurons display chaotic dynamics [27], as depicted in Fig. 1(a). Initial conditions for the simulations were drawn from a normal distribution centered at a fixed point of Eq. (1) in the case of no coupling, \((-56.25,-112.5)\), with a standard deviation equal to 1.
Throughout the study, we employ the UKF implemented by the Python package _FilterPy_[29]. The confidence in the process model (\(\bar{Q}_{Z}\)) and the measurements (\(\bar{Q}_{\nu}\)) are kept constant (see Table 1). An initial transient of 50000 timesteps was discarded in all runs.
The UKF requires an initial guess for the parameters that we want to estimate. To test the robustness of the UKF, we consider different initial guesses for each run, which are selected from a uniform distribution in the ranges given in Table 2.
To quantify the performance of the UKF in recovering the adjacency matrix \(g_{e}A^{KF}=G_{e}^{KF}\), we use the Euclidean distance between the original and the recovered matrix:
\[D(G,G^{KF})=\sqrt{\sum_{i,j}(G_{i,j}-G_{i,j}^{KF})^{2}}. \tag{6}\]
We quantify the performance using the full coupling matrix \(G\), as we want to test not only if the UKF is able to reconstruct the connectivity, but also if it can devise the correct coupling strength without being informed that the coupling strength is the same for all links. Also, we chose to use the Euclidean distance because it is a simple, straight-forward measure to compare two graphs of weighted links with a single figure.
## III Results
### Estimation of the parameters of a single neuron
First, we illustrate the effectiveness of the UKF in estimating the parameters of a single Izhikevich neuron. The parameters that we attempt to estimate are \(a,b,c,d,\) and \(I\). Note that the parameters \(c\) and \(d\) only appear in the resetting dynamics, therefore the UKF can only update them in the event of a spike. Moreover, the equation for \(\dot{y}\) contains a product \(ab\), which can increase the uncertainty of the estimation. For example, an underestimation of \(a\) can compensate an overestimation of \(b\). To avoid such problems, we estimated \(ab\) and \(a\) independently.
To recover the unknown parameters, we consider 100 simulated time series as input, each with a different initial parameter guess drawn uniformly from the intervals reported in Table
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \(a\) & \(b\) & \(c\) & \(d\) & \(I\,(I_{s})\) & \(g_{e}\) & \(g_{c}\) & \(\mu_{s}\) & \(\epsilon\) & \(\theta\) & \(\alpha\) & \(\omega\) & \(\sigma_{Z}\) & \(\sigma_{\nu}\) \\ \hline \(0.2\) & \(2\) & \(-56\) & \(-16\) & \(-99\) & \((0,0.1)\) & \((0,0.05)\) & \(35\) & \(7\) & \(0\) & \(3\) & \(0.15\) & \(0.025\) & \(0.15\) \\ \end{tabular}
\end{table}
Table 2: Initial guesses of the parameters used by the UKF.
Figure 1: (a) Time evolution of the variables \(x(t)\) (upper curve) and \(y(t)\) (lower curve) of an isolated Izhikevich neuron, simulated with Eq. (1) with parameters given in Table 1. (b) Response to an external modulated current \(I(t)=I_{s}+\alpha\sin(\omega t)\). In each panel, the current input is represented by the dashed line. The values of the parameters are given in Table 1.
II. These intervals have been chosen because in those ranges the spiking of the neuron will be chaotic, which is a piece of information we can infer from the spike sequence. Results are shown in Fig. 2. For all cases, the real value of the parameter is within the range of the standard deviation. As the estimation of \(c\) and \(d\) is only updated when the neuron spikes, the duration of the simulated time series required to obtain a reliable estimation is larger than for the other parameters.
We point out that using the UKF to estimate \(c\) and \(d\) is unnecessary because a direct estimation of these parameters can be done easily by checking the values of \(x\) and \(y\) after a spike.
Now, we test the estimation of time-varying parameters. Specifically, we consider a sinusoidal external current, \(I(t)=I_{s}+3\alpha\sin(\omega t)\), and estimate \(a\), \(b\) and \(I(t)\) (wrongly assuming that the current is constant). The effect of such a current on the neuron's dynamics is shown in Fig. 1(b), where we see that bursts of spikes are followed by periods of subthreshold oscillations. Since at constant \(I\) the Izhikevich model displays a great variety of dynamical behaviors including bursting [20], the inspection of the time series does not provide evidence of the presence of a sinusoidal input current.
The results of the parameter estimation are shown in Fig. 3. The recovered values of \(a\) and \(b\) are comparable to those obtained in the previous parameter estimation (see Fig. 2). The estimated value of \(I\) oscillates with a frequency equal to \(\omega\), suggesting that \(I\) is not constant.
Next, we substitute the expression of \(I(t)\) in the model, Eq. (1), and separately estimate \(I_{s}\), \(\alpha\) and \(\omega\). In this case, we also need to include time as an additional dimension of the extended vector space, with dynamic equation \(i=1\). The results of this approach are shown in Fig. 4. The UKF can estimate the correct parameters of the oscillation in the majority of cases. However, large departures from the correct values can be observed, which could be due to the fact that the model with constant \(I\) can produce similar output dynamics.
### Estimation of network connectivity
We now consider small networks of Izhikevich neurons, and we investigate the capacity to recover the adjacency matrix \(A\) assuming that the coupling strengths, \(g_{e}\) and \(g_{c}\), and all the internal parameters are known. Of course, this is not possible in experiments, and we use these assumptions as a first step for testing the neural network reconstruction problem using the UKF approach: if, given these assumptions, the network cannot be inferred, we can conclude that the UKF approach is not useful; on the other hand, if we succeed in reconstructing the network with these assumptions, as a next step we will test the UKF approach having less information, for instance, assuming a different neuron model, unknown internal parameters, unknown coupling strengths. We run the UKF algorithm, considering each element of \(A\) as an additional dimension of the extended vector \(\hat{\mathbf{u}}\).
We consider networks with \(N=4\) neurons, which can be seen as building blocks of bigger networks. However, we must keep in mind that complex systems usually display emergent collective behaviour when the number of elements is large enough, and therefore, while the UKF algorithm may succeed in reconstructing the topology of a small network, the collective behaviour that may emerge for a large enough number of neurons (and/or the large number of parameters to be inferred), will probably cause the UKF algorithm to fail. Therefore, the study of the role of the network size is of course important and additional work is planned, that will be reported
Figure 2: Parameter estimation for a single neuron as a function of the simulation time. The colored thick lines represent the median of the estimations computed from 100 runs. The shaded regions represent the first and third quartiles. The dashed lines mark the true values of the parameters. The inset in each subplot shows the distribution of the final estimations. The orange line is the median, the box marks the first and third quartiles and the upper and lower whiskers of the bars represent the maximum and minimum values. The parameter values are given in Table 1.
elsewhere.
The network topologies are shown in Appendix A, see Figs. 8 and 9. The simulated time evolution of the membrane potentials of all nodes for each network topology are also shown in Figs. 8 and 9. The network dynamics differ in their level of synchronization. First, we consider electrical coupling only (\(g_{c}=0\)), such that the adjacency matrix is symmetric and we only need to determine its upper triangular elements \(G_{e}=g_{e}A^{e}\) with \(g_{e}=0.05\).
The results for the different topologies are displayed in Fig. 5, where we see that the UKF gives an excellent estimation of \(G_{e}\), as the Euclidean distance \(D(G_{e},G_{e}^{KF})\) approaches 0.
To study the effect of chemical synaptic coupling in the UKF estimation, we add direct links between some nodes in the formerly symmetric networks, as shown in Appendix A, Fig. 8. We estimate two adjacency matrices, one encoding electrical coupling, \(g_{e}A^{e}=G_{e}\) with \(A^{e}=(A^{e})^{T}\), and the other encoding chemical coupling, \(g_{e}A^{e}=G_{c}\) with \(A^{e}\neq(A^{e})^{T}\). We chose \(g_{e}=0.1\) and \(g_{c}=0.05\). Note that we increase \(g_{e}\) compared to the previous case to test the robustness of the UKF against synchronized states. Since we use inhibitory synapses, the firing rate decreases slightly. In the limit of total synchronization the input through electric coupling, \(E_{i}\) goes to zero, while \(C_{i}\) is exactly the same for each element in the network.
We see in Figs. 6(a) and 6(b) that even with two coupling schemes, the chemical one being nonlinear, the UKF can estimate the correct coupling matrices. All networks are heterogeneous in the sense that the number of connections is not
Figure 3: Parameter estimation for a single neuron with a time-dependent external current, which is modeled as a constant input. The colored thick lines represent the median of the estimations computed from 100 runs. The shaded regions represent the first and third quartiles. The dashed lines mark the true values of the parameters. The inset in each subplot shows the distribution of the final estimations. The orange line is the median, the box marks the first and third quartiles and the upper and lower whiskers of the bars represent the maximum and minimum values. The parameter values are given in Table 1.
Figure 4: As Fig. 3, but explicitly modeling the input current as \(I=I_{s}+\alpha\sin(\alpha t)\). The colored thick lines represent the median of the estimations computed from 100 runs. The shaded regions represent the first and third quartiles. The dashed lines mark the true values of the parameters. The inset in each subplot shows the distribution of the final estimations. The orange line is the median, the box marks the first and third quartiles and the upper and lower whiskers of the bars represent the maximum and minimum values. The parameter values are given in Table 1.
the same for the different neurons. As pointed out by Forero et al. [28], the UKF is robust against synchronization, which is confirmed here.
Here we presented reconstruction results in the case of inhibitory synapses, however, we checked that the UKF provide similar results also for excitatory synapses and a mix of excitatory and inhibitory synapses, provided it knows which synapses are excitatory and which are inhibitory.
We highlight that for all cases the Euclidean distance \(D(G,G^{KF})\) saturates below \(10^{-2}\). Furthermore, to verify that all the links were correctly estimated we classified the performance of the UKF using the Receiver Operating Characteristic (ROC) curve [30]. If the UKF recovers the right connectivity, then the Area Under the ROC Curve (AUC) [30] will be 1. For all cases studied, we obtained an AUC \(>0.99\), implying a perfect reconstruction of the underlying topologies, that is, the UKF predicts a link between two neurons \(i\) and \(j\) only if \(A_{ij}=1\). We believe that the UKF is robust against noise as long as noise can be seen as a small perturbation to the system and the dynamics is not driven by it.
### Estimation of network connectivity in temporal networks
Finally, we consider temporal networks, in which \(G_{e,ij}=g_{e}\,A_{ij}\) varies with time. This is the case in many applications of network theory [31], and in neuroscience, it is especially important since it can be linked to plasticity [32].
We model time-varying networks by considering couplings between neurons that switch on at a simulation time \(t=200\). More precisely, three single neurons connect at \(t=200\) in a linear chain, (\(1\leftrightarrow 2\leftrightarrow 3\)),
\[g_{e}\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix}\to g_{e}\begin{bmatrix}0&1&0\\ 1&0&1\\ 0&1&0\end{bmatrix}\,. \tag{7}\]
We assume that all the internal parameters are known and we only estimate the network's topology.
The results are presented in Fig. 7. Before the coupling is switched on, the UKF has quickly inferred the absence of coupling (as \(D\to 0\)). After the coupling is switched on, \(D\) first increases sharply and then decreases steadily for all values of \(g_{e}\). This means that the UKF can detect the emergence of coupling and estimate the \(G_{e}\) matrix correctly. However, the estimation after the change in the network takes more time than the initial estimation of the null adjacency matrix. This is because the covariance on the matrix coefficients will decrease, meaning high confidence in the inferred matrix before the coupling is switched. When the matrix is changed, the filter has to adjust to the new state, but the low covariance will make the convergence rate slow. Nevertheless, the filter is eventually able to recover the right network structure. Different initial model or state covariances are expected to impact the convergence time, both before and after turning on the coupling. Higher covariances will result in higher variability in the predictions. This variability is more efficient in capturing changes in the parameters. On the contrary, when covariances are smaller, the predictions are less prone to change and thus adapt to new values. The right choice of this parameter will result in a responsive system with sufficiently stable inferred parameters.
## IV Discussion and conclusions
We studied the capability of the UKF for recovering the parameters of a single neuron and of small neural ensembles modeled with the Izhikevich model. We simulated the equa
Figure 5: Evolution of the distance \(D\), which represents the Euclidean distance between real coupling matrix \(G_{e}\) and the estimated one \(G_{e}^{KF}\). N1, N2, N3, N4, and N5 represent different topologies, as shown in Fig. 8. For all topologies \(D\) decreases sharply at first, then it saturates below \(10^{-2}\). The coupling strength is \(g_{e}=0.05\) and the dynamics displayed by the networks are shown in Fig. 8.
Figure 6: Evolution of the distance \(D\) for (a) electrical coupling and (b) chemical coupling, with coupling strengths \(g_{e}=0.1\) and \(g_{c}=0.05\). The distance between the original and the estimated adjacency matrices, \(D(G,G^{KF})\), decreases sharply with simulation time, saturating below \(10^{-2}\) for all network topologies. The dynamics displayed by the networks are shown in Fig. 9.
tions governing the system dynamics and used the simulated time series as experimental observations to feed the UKF algorithm, with confidence regulated by \(\tilde{Q}_{V}\). The IM was the process model with confidence regulated by \(\tilde{Q}_{Z}\).
When the parameters of an isolated neuron are constant in time, the UKF is able to estimate all the parameters. Second, we studied an isolated neuron with a sinusoidal input current, which displayed bursting spike dynamics. Due to the rich variety of dynamical behaviors of the IM, it is not trivial to identify the cause of the bursting activity. Still, even when modeling the current as a constant, the UKF retrieved the neuron parameters and the average value of the current and suggested an oscillating current. When including the oscillating current in the process model, the UKF was able to provide a reasonable estimate of the amplitude (\(\alpha\)), the mean (\(I\)), and frequency of the oscillation (\(\omega\)).
We have also estimated the connectivity of small networks of Izhikevich neurons with known internal parameters. First, we analyzed the five possible network topologies for four neurons with undirected electrical coupling. Then, we added directed chemical connections to the same networks. The UKF was able to recover the connectivity for all the networks regardless of the synchronization level.
Finally, we addressed the problem of temporal networks by analyzing a network of three electrically coupled neurons, in which the topology changed from no coupling to a chain topology. The UKF was able to identify the change in the network and estimate the connectivity correctly
The results presented here were obtained considering measurements of both \(x\) and \(y\). Beyond that, we conducted a preliminary analysis of the applicability of the UKF when only measurements of the \(x\) variable are available. Our results suggest that the UKF is still able to recover the parameters of a single neuron and the network connectivity. However, to obtain good estimates, the UKF hyperparameters had to be carefully tuned, in particular, the standard deviation \(\sigma_{\text{v}}\) and the initial condition for \(\sigma_{P}\).
As in experimental measurements only short time series with limited temporal resolution can be recorded, further work is needed to clarify the impact of the duration of the time series and the sampling time. While the results presented here were obtained using each simulated data point (i.e, using the integration step as sampling time), preliminary studies suggest that the UKF is robust to downsampling up to 1:20, if the time series is long enough.
Future work should also address larger networks and different types of neurons. In fact, as discussed before, complex systems usually display emergent collective behavior when the number of elements is large enough. Therefore, the UKF algorithm may succeed in reconstructing the topology of a small network, but will probably fail for a large number of neurons, or when there is a large number of unknown parameters. Therefore, further work is planned to test the UKF algorithm when the networks are larger and when the internal and coupling parameters are unknown. While we expect that the UKF algorithm will fail to reconstruct the network, it may yield some information that can be useful for inferring some properties of the real network (e.g., the average degree, the degree distribution, the modularity, etc.).
Finally, it will be interesting to check if the UKF can differentiate between inhibitory and excitatory synapses.
###### Acknowledgements.
R.P.A. acknowledges financial support from Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior-Brasil (CAPES), Finance Code 001. H. A. C. thanks ICTP-SAIFR and FAPESP grant 2021/14335-0. G.T. and C.M. acknowledge the support of the ICREA ACADEMIA program of Generalitat de Catalunya and Ministerio de Ciencia e Innovacion, Spain, project PID2021-123994NB-C21.
## Appendix A Topologies and synchronization quantification
The network topologies considered when studying electrical coupling between nodes are presented in Fig. 8. The black links represent undirected connections between nodes, so the resulting adjacency matrices are symmetric (\(A=A^{T}\)). The simulated time evolution of the membrane potentials of all nodes for each network topology is also shown in Fig. 8 on the right side, together with two synchronization measures, the Kuramoto order parameter [33]\(R\) and the synchronization error \(Err\).
To evaluate the Kuramoto order parameter, we assign a phase \(\phi\) for each neuron time series that grows linearly at each spike with a gain of \(2\pi\) as defined in Ivanchenko et al. [34]. The Kuramoto order parameter is given by
\[R=\frac{\left\langle|\sum_{j=1}^{N}e^{i\phi_{j}(t)}|\right\rangle_{t}}{N}\, \tag{10}\]
where \(N\) is the number of oscillators considered in the measure and the average is taken over time. For totally synchronized systems, \(R=1\). For totally unsynchronized systems, \(R\approx 0\).
Likewise, the synchronization error gives us an idea of how synchronized the system is, we apply it directly to the time
Figure 7: Evolution of the distance \(D\) between the adjacency matrix and the estimated one in the case of time-dependent coupling. \(D\) is depicted as a function of time, for different coupling strengths. The vertical dashed line represents the instant in which the coupling is turned on and the horizontal one marks the zero.
series. First, we calculate the average membrane potential \(\bar{x}\) of all oscillators in the network. Then, we compute how much each oscillator deviates from \(\bar{x}\). Thus, the synchronization error is computed as
\[Err=\left\langle\frac{\sum_{i=1}^{N}|x_{i}(t)-\bar{x}(t)|}{N}\right\rangle_{t}. \tag{30}\]
Hence, \(Err=0\) in the case of total synchronization, where \(x_{i}=x_{j}\), \(\forall\ (i,j)\in[1,N]\). While for unsynchronized systems, \(Err\) may assume large values.
When both electrical and chemical coupling between nodes are considered, we use the topologies presented in Fig. 9. The adjacency matrices are not symmetric (\(A\neq A^{T}\)), and all the networks are heterogeneous, meaning that the nodes have a different number of connections. The simulated time evolution of the membrane potentials of all nodes for each network topology is also shown in Fig. 9 on the right side.
|
2308.02439 | A large language model-assisted education tool to provide feedback on
open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | 2023-07-25T19:49:55Z | http://arxiv.org/abs/2308.02439v1 | # A large language model-assisted education tool to provide feedback on open-ended responses
###### Abstract
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
language models Automated learning assessment Automated grading Education
## Introduction
Open-ended questions -- questions that require students to produce multi-word, nontrivial responses -- are a popular assessment tool in educational environments because they offer students the chance to explore their understanding of learning material. Such questions provide valuable insight into students' grasp of complex concepts and their problem-solving approaches. However, grading open-ended questions can be time-consuming, subjective, and -- especially in the case of large class sizes -- prone to attentional errors. These factors create a critical bottleneck in precision education.
Large Language Models (LLMs) present an opportunity to automate and promote equity in learning assessments, providing rapid valuable feedback to students while reducing the burden on instructors. We developed a tool that automatically assesses students' responses to open-ended questions by evaluating their responses against a set of instructor-defined criteria. To use our tool, the instructor poses a question along with optional grading criteria. Students respond to these questions, and their answers are relayed to a server. The responses are paired with the grading criteria (which are not revealed to the student), forming a payload for a large language model (LLM). The LLM then generates automated feedback, suggesting areas for improvement to the student.
Here, we describe the technical design of our tool, _FreeText_, and showcase its utility in educational environments spanning topics and complexity. We further outline the implications of our work for teaching complex subjects, and the potential role of large language models in education (**Fig. 1**). We share our source code and a public URL (see _Supplemental Materials_), allowing educators to experiment with _FreeText_ firsthand.
Figure 1: **Sketch comparing grading throughput and quality of feedback to students among various assessment methodologies** The \(y\)-axis represents throughout (i.e., rapidity of feedback generation and number of assignments evaluated per real-world unit-time or cost), and the \(x\)-axis represents feedback quality (a qualitative measure of personalization and detail of feedback given to students). LLMs have the potential to fill a niche among educational tools by striking a balance between quantity and quality, delivering high throughput with feedback quality comparable to human graders. Improving in technology (_I_aster GPU) across, better LLM architectures) will continue to push throughput upward, and improvements in prompt design (or other domain-specific adaptations) will improve the quality of LLM-generated feedback.
## Related Work
Automated grading is a longstanding pursuit in the field of education technology. Early automated grading tools focused on'solvable' tasks like math or programming assignments, where grading generally relies on unit tests or direct output comparisons Hollingsworth (1960); Ureell II and Wallace (2019); Orr and Russell (2021); Messer et al. (2023). These approaches often overlook less easily-quantified but nonetheless critical indicators of learning and understanding, such as design quality, code maintainability, or potential areas of student confusion. Modern tools, like AutoGrader, which provides real-time grading for programming exercises, remain narrowly focused on output correctness and do not sufficiently account for documentation or maintainability Liu et al. (2019).
Assessing students' understanding from natural language responses, however, presents different challenges and has seen significant evolution. Early Automated Short Answer Grading (ASAG) models employed statistical or domain-specific neural network approaches Heilman and Madnani (2013); Riordan et al. (2017); Sung et al. (2019). In recent years, LLMs have been shown to outperform domain-specific language models Radford et al. (2019); Mizumoto et al. (2019); Brown et al. (2020); Chung et al. (2022). LLMs facilitate grading of open-ended assignment responses, without the need for task-specific fine-tuning Cao (2023); Mizumoto and Eguchi (2023); Yoon (2023). However, Kortemeyer (2023) revealed that while LLMs like GPT-4 could be useful for preliminary grading of introductory physics assignments, they fell short for natural-language responses required in comprehensive exam grading. Further, while LLMs like GitHub Copilot streamline the process of code generation and review, they can fall short on more nuanced programming tasks and open-ended evaluation Finnie-Ansley et al. (2022). Thus, in their current state, LLMs should be treated as a useful but fallible tool, with final assessments still in the hands of (human) instructors.
It is also important to consider how students perceive AI graders and how automated graders are deployed to educational settings Burrows et al. (2015); Saha et al. (2019); Zhu et al. (2022). Many comment on the socio-technical dynamics of automated grading, including the potential for introduction of machine bias (e.g., Hsu et al. (2021)). The use of NLP for short answer grading is not a trivial task and has been set as an evaluation challenge in its own right Dzikovska et al. (2013).
To address the evolving needs of grading open-ended responses, our framework proposes four key enhancements. First, it is specifically designed for open-ended questions, which are not typically well-served by the rubric-based grading of most ed-tech tools. Second, our system leverages LLMs to deliver rapid, personalized feedback for student responses without explicitly attempting to produce a quantitative grade. Third, our framework introduces a feedback loop to continually improve instructor-provided prompts, question suggestions, and grading criteria. Lastly, our tool integrates with the Jupyter Notebook environment, extensively utilized in fields such as computer science, data science, and statistics.
## Approach
We have designed our tool for use in a variety of educational contexts, ranging from primary school education to graduate courses. _FreeText_ enables educators to integrate open-ended questions into their curriculum without incurring an instructor labor cost. This allows students to gain rapid, individualized, and sophisticated feedback, thereby creating a highly effective learning loop that can enhance the absorption of course materials. It guides students in refining their responses, enhancing their understanding and application of concepts in each iteration. This feedback is generated by a large language model (LLM), which circumvents the attentional errors often made by human graders, particularly when assessing a large volume of assignments. The LLM is capable of delivering intricate responses to students swiftly, as demonstrated by the examples provided in Table 1.
Our software is packaged as a Python library. LLM interactions are handled by the _Guidance_ Python package Microsoft (2023). User interfaces and a JSON HTTP API are supported by FastAPI Lathkar (2023). We support traditional (e.g., JSON files, SQLite) as well as cloud-based data storage drivers. Our server can be run at low financial and computational cost through the combination of serverless deployment (e.g., to AWS Lambda) and serverless databases (e.g., AWS DynamoDB). Student responses are not stored by _FreeText_ infrastructure by default.
Any _Guidance_-compatible LLM may be swapped into the Freetext server. That is, by default we access LLMs through the OpenAI API, but it is easy to swap in locally hosted or fine-tuned models: thus, privileged or sensitive information may be kept to on-premise compute resources, or users may opt to change which API-based LLM is accessed. For example, a more powerful LLM may be selected in cases where course content is particularly complex, or a simpler model may be used for more elementary course content.
One front-end that students can access is a Jupyter Notebook widget, developed using IPyWidgets Kluyver et al. (2016), making it easy to incorporate natural language short-answer questions as part of a notebook-based active-learning environment.
The widget communicates with the backend
Python server described above. The widget is designed to be easily integrated into lecture and homework notebooks, enabling instructors to easily enrich existing teaching materials. A distinctive feature of our system is the intermediary server which equips the large language model with 'held-out' information, such as a rubric for correct responses, accessible only to the LLM and instructor, and not to the student. This establishes the useful informational asymmetry between the evaluator and the student.
To include the widget in a Python environment, the instructor can include the following code:
!pip install freetext_jupyter from freetext_jupyter import FreetextWidget
FreetextWidget( # This ID is generated by the instructor. "07b2c3ef-0f97-46bc-al1e-..." )
When executed in a Jupyter notebook cell, this code will access the HTTP API to replace the widget with the corresponding question text for the student. Upon encountering the widget in a notebook, the student is presented with an open-ended question accompanied by a text box for response input. When they submit their response, the system transmits it to the server for combination with the feedback criteria set by the instructor.
In the next stage, the student response and the pre-defined feedback criteria are bundled into a payload dispatched to a large language model. The LLM processes this payload and produces personalized feedback to the response. This feedback is relayed back to the student with seconds of latency through the web or notebook interface, offering them the immediate opportunity to reflect, amend, and improve their response as desired (**Fig. 2**).
Our tool is designed to be easily deployable and scalable. The _FreeText_ server can be run in resource-constrained or serverless platforms such as AWS Lambda. This allows for easy deployment and scaling, which is particularly important for large-scale projects and massive-scale courses (van Viegen et al., 2021). Our API can also be combined with other existing educational tools in order to capture and store student responses for instructor review.
### Question Design
Instructors can provide a question for students to answer -- either programmatically, by accessing our HTTP API -- or graphically in the browser using the simple web application UI. Instructors can also provide optional assessment criteria -- text like _'make sure the student mentions DNA base pairs in their answer."_
_FreeText_ can use question content to automatically establish grading criteria, or it can use the assessment criteria to improve the text of the question. The latter process works by asking the AI to serve as a student and answer a question while oblivious to the instructor's grading criteria. Then, the answer is automatically evaluated by a separate instantiation of the LLM -- this time, against the instructor criteria. The assessment model determines if the student has been unfairly penalized due to omission of requirements (or a lack of clarity) in the original question text. If so, the question is updated to better encompass the requirements of the grading criteria.
This process of iteratively incorporating assessment criteria is subtly different from simply including the criteria in the question text: For example, if the question text is, _"What is the Rosetta Stone?"_ and the criteria include, _"Mention why the Ptolemaic dynasty created the Rosetta Stone"_, a _bad_ question update would be to explicitly ask about the Egyptian political system, as this gives the student more information than the instructor originally intended. A _better_ question update would be _"Explain what the Rosetta Stone is and the context of its creation,"_ because this nudges the student to discuss the right material but does not give any new information.
### Question Presentation
There are two built-in methods to present questions to students: the first is a simple web API, which can be used standalone, coupled with response-collection tools, or embedded within other web applications. The second is a Jupyter Notebook widget that can be embedded in tutorial coding notebooks.
The JSON web API endpoints may be accessed directly by application code, or students can access a simple web user interface. This interface comprises a question display and a textbox for student responses (see _Supplemental Materials_). Feedback to students is rendered beneath the response box upon answer submission, and students may reuse the same page to re-submit amended answers.
The Jupyter Notebook widget is designed to make it easy for instructors to include open-ended questions in their assignments and subject the grading of student responses to custom grading criteria. This flexibility makes it easy for instructors to tailor the tool to their specific needs and teaching style.
### Feedback to Students
Our tool provides two types of feedback to students. The first is a holistic text response that provides feedback on the entire answer as a whole. The second is span-bound feedback (referring to a specific substring of the response) that can be used to highlight specific parts of the text that are erroneous or otherwise need
student attention. For example, if a student's answer is correct but they misattribute a quote, the _FreeText_ server could highlight the attribution specifically to give feedback. The type of feedback returned can be specified by the instructor during question creation.
## 3 Discussion
Here we introduced _FreeText_, a framework capable of defining questions, collecting student responses, transmitting these responses alongside instructor expectations to a large language model (LLM), and generating rapid and personalized feedback for the students. Notably, the entirety of the student-facing workflow can be encapsulated within a Jupyter notebook, facilitating real-time enhancement of students' understanding of the course material. _FreeText_ is not confined to a web application and Jupyter notebooks, or the academic subjects mentioned above. The _FreeText_ Server can integrate with any application that consumes a JSON HTTP API, expanding its potential to a wider range of educational settings.
Our system's broad applicability becomes evident when considering diverse learning models, such as the pod-based approach adopted by the online course Neuromatch Academy (van Viegen et al., 2021) in the field of computational neuroscience. In such settings, small student groups or 'pods' collaboratively tackle assignments and projects. Teaching Assistants, tasked with providing feedback, can benefit from our tool, as it can streamline grading processes, reducing potential for attentional errors and freeing up instructors to deliver more personalized guidance to students.
Fully automated student evaluation is challenging both from a technical perspective and from a human perspective, and thus _FreeText_ is designed not to fully automate grading, but to serve as a useful tool benefiting both students and instructors. _FreeText_ benefits students by providing rapid and personalized feedback on short-answer questions. _FreeText_ benefits instructors by helping them to design better questions and grading criteria, by providing first-pass material for learning assessments, and by alleviating some of the burden of providing individualized instruction in large classes. LLMs in general, and _FreeText_ specifically, are not a replacement human instructors, but they can nonetheless fill a niche among education technologies.
LLMs undoubtedly hold immense power and potential. However, it is crucial to have an in-depth discussion about their ethical implications, especially in education. A key issue to consider is the potential biases that LLMs can introduce. These biases could unintentionally touch on sensitive subjects or unintentionally overlook marginalized groups. Instructors have a role to play by carefully designing their questions and assessment criteria. Further, students should be made aware of the nature of the system they are interacting with and its potential to make mistakes or act on internalized biases (Hsu et al., 2021). On the other hand, automated systems such as _FreeText_ present an opportunity to reduce instructors' unconscious biases by evaluating all students' responses equally and without any explicit identification.
Furthermore, we must consider the broader dynamics of the AI ecosystem. The realm of LLMs is not limited to the offerings of large AI conglomerates like OpenAI. A burgeoning industry of alternative LLMs, both from smaller commercial entities and open-source initiatives (Anthropic, 2023; Taori et al., 2023; Touvron et al., 2023; Wolf et al., 2020), is flourishing. Our
Figure 2: **A sequence diagram illustrating the flow of information within the FreeText system.****A**. First, an instructor formulates a question by supplying a student-facing question (“Question”) along with grading criteria for the LLM to evaluate student responses. In return, the educator obtains a unique identifier from the database, instrumental in retrieving the question text in the following step. **B**. Equipped with a unique Question identifier, a student provides an answer to the educator’s query (“Response”). The API receives this request, pairing the Response with a Prompt based upon the educator’s question and criteria, and directs them towards a large language model for evaluation. **C**. A screenshot of the _FreeText_ Jupyter widget integrated into an interactive code notebook.
framework is designed to be model-agnostic and can be readily adapted to integrate these alternative models.
Reliance solely on models from a single entity such as OpenAI raises two significant concerns. First, it centralizes the concentration of AI development resources and power, thereby exacerbating the already pronounced inequalities in the global AI landscape. Second, it can lead to a homogenization of the knowledge and perspectives propagated by AI models, potentially resulting in a limited and biased worldview. _FreeText_ is therefore _deliberately_ agnostic to the underlying LLM model and technologies.
We intend for our tool to enrich and expand students' educational experience, particularly in large-scale or resource-constrained course settings where detailed human intervention may be limited. Ongoing work includes the careful critique and evaluation of _FreeText_ outputs by expert instructors, taking advantage of upcoming opportunities to apply this technology in a large class setting.
Embracing both technical as well as human diversity helps mitigate many of the concerns raised above and enriches the AI ecosystem. A broad range of perspectives stalls the monopolization of AI technology and fosters a more balanced, equitable, and robust AI landscape. This viewpoint aligns with our belief in the need for broad and diverse human inputs, both in the creation of AI models and in their applications in society.
## Supplemental Materials
Full-resolution versions of all images and tables from this publication are available at [https://llm4edu.experiments.kordinglab.com/paper](https://llm4edu.experiments.kordinglab.com/paper).
The FreeText server will be hosted temporarily for public use at [https://llm4edu.experiments.kordinglab.com/app](https://llm4edu.experiments.kordinglab.com/app), with an interactive example assignment available at [https://llm4edu.experiments.kordinglab.com/app/assignments/1393754a-d80f-474d-bff7-bifec36cdbb7](https://llm4edu.experiments.kordinglab.com/app/assignments/1393754a-d80f-474d-bff7-bifec36cdbb7). Educators may contact us at the correspondence email of this preprint for a token, which is required to create new questions on our public instance.
Our Jupyter Notebook Widget is available on GitHub at [https://github.com/KordingLab/fretext-jupyter](https://github.com/KordingLab/fretext-jupyter), and is powered by the FreeText Server, which can be found at [https://github.com/KordingLab/llm4teach-fretext-server](https://github.com/KordingLab/llm4teach-fretext-server).
## Acknowledgements
Research in this publication was supported by the National Institutes of Health under award number UC2-NS128361. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
|
2310.17518 | Existence and uniqueness results to a quasilinear singular Lane-Emden
Neumann system | We establish the existence and uniqueness of solutions for quasilinear
singular Lane-Emden type systems subjected to Neumann boundary conditions. The
approach is chiefly based on sub-supersolutions method. | Nouredine Medjoudj, Abdelkrim Moussaoui | 2023-10-26T16:12:40Z | http://arxiv.org/abs/2310.17518v1 | # Existence and uniqueness results to a quasilinear singular Lane-Emden Neumann system
###### Abstract.
We establish the existence and uniqueness of solutions for quasilinear singular Lane-Emden type systems subjected to Neumann boundary conditions. The approach is chiefly based on sub-supersolutions method.
Key words and phrases:Singularity, Lane-Emden system, Neumann boundary conditions, sub-supersolutions, uniqueness 2010 Mathematics Subject Classification: 35J75, 35J62, 35J92
## 1. Introduction
Let \(\Omega\) be a bounded domain in \(\mathbb{R}^{N}\) (\(N\geq 2\)) having a smooth boundary \(\partial\Omega.\) Given \(1<p_{i}<N\) for \(i=1,2,\) we consider the following quasilinear Lane-Emden type system
\[\text{(P)}\qquad\left\{\begin{array}{ll}-\Delta_{p_{1}}u+|u|^{p_{1}-2}u=u^{ \alpha_{1}}+v^{\beta_{1}}&\text{in }\Omega,\\ -\Delta_{p_{2}}v+|v|^{p_{2}-2}v=u^{\alpha_{2}}+v^{\beta_{2}}&\text{in }\Omega,\\ u,v>0&\text{in }\Omega,\\ \frac{\partial u}{\partial\eta}=\frac{\partial v}{\partial\eta}=0&\text{on } \partial\Omega,\end{array}\right.\]
where \(\eta\) is the unit outer normal to \(\partial\Omega,\) while \(\Delta_{p_{i}}\) denotes the \(p_{i}\)-Laplace operator, namely \(\Delta_{p_{i}}:=\text{div}(|\nabla w|^{p_{i}-2}\nabla w),\)\(\forall\,w\in W^{1,p_{i}}(\Omega).\) We consider system (P) in a singular case assuming that the exponents verify the condition
\[-1<\alpha_{1},\beta_{2}<0,\ \ -1<\beta_{1}<p_{1}-1\ \ \text{and}\ \ -1<\alpha_{2}<p_{2}-1. \tag{1.1}\]
A solution \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) of (P) is understood in the weak sense, that is,
\[\left\{\begin{array}{ll}\int_{\Omega}|\nabla u|^{p_{1}-2}\nabla u\nabla \varphi\,\mathrm{d}x+\int_{\Omega}|u|^{p_{1}-2}u\varphi\,\mathrm{d}x=\int_{ \Omega}(u^{\alpha_{1}}+v^{\beta_{1}})\varphi\,\mathrm{d}x\\ \int_{\Omega}|\nabla v|^{p_{2}-2}\nabla v\nabla\psi\,\mathrm{d}x+\int_{ \Omega}|v|^{p_{2}-2}v\,\psi\,\mathrm{d}x=\int_{\Omega}(u^{\alpha_{2}}+v^{ \beta_{2}})\,\psi\,\mathrm{d}x\end{array}\right. \tag{1.2}\]
for all \((\varphi,\psi)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega).\)
System (P) is a natural extension and generalization of the celebrated Lane-Emden equation
\[\Delta w+w^{\gamma}=0\ \text{in }\Omega, \tag{1.3}\]
subjected to Neumann boundary conditions. It is introduced by Homer Lane [21], who was interested in computing both the temperature and the density
of mass on the surface of the sun. The Lane-Emden equation (1.3) has been the focus of a huge number of works which we shall not discuss merely mentioning some of them [2, 3, 12]. It is involved in wide range of phenomena in mathematical physics and chemistry, specifically in the areas of conformal geometry, thermal explosion, isothermal gas spheres and thermionic currents [31]. In astrophysics, it describes the behavior of the density of a gas sphere in hydrostatic equilibrium. The index \(\gamma,\) called the polytropic index, is related to the ratio \(r\) of the specific heats of the gas throught \(\gamma=\frac{1}{r-1}.\) It is used to determine the structure of the interior of polytropic stars, which are subject to the influence of their own gravitational field [5]. The case \(\gamma<0\) in (1.3) is highly challenging as it brings out a singularity at the origin. It has attracted considerable interest in recent years. We mention [8] where it is shown that problem (1.3) subject to Dirichlet boundary conditions admits a unique positive solution \(u\) in \(\mathcal{C}^{2}(\Omega)\cap\mathcal{C}(\overline{\Omega})\). In addition, this solution \(u\) belongs to \(\mathcal{C}^{2}(\Omega)\cap\mathcal{C}^{1}(\overline{\Omega})\) once \(-1<\gamma<0.\) When the quasilinear \(p\)-Laplacian operator is involved in singular problem (1.3), [15] provides a unique solution in \(\mathcal{C}^{1,\tau}(\overline{\Omega}),\)\(\tau\in(0,1),\) for \(-1<\gamma<0.\)
Singularities are an important feature in the study of the problem (P). They occur near the origin under assumption (1.1) on nonlinearities. This fact represents a serious difficulty to overcome especially since a very marked singularity character is considered for (P), resulting from (1.1) when all exponents are negative. Actually, singularities are involved in a wide range of important elliptic problems which have been studied extensively in recent years, see for instance [1, 8, 10, 11, 13, 14, 15, 17, 19, 24, 25, 26, 28, 27] and there references. The semilinear case, that is when \(p_{1}=p_{2}=2,\) has been widely investigated, especially in the context of the Gierer Meinhardt system (see, e.g., [27] and the references therein). However, as far as we know, [16] is the only paper where Neumann boundary conditions is considered for singular quasilinear systems. We emphasize that system (P) cannot be incorporated neither in Gierer-Meinhardt system addressed in [27] nor in the convective system studied in [16] even if gradient terms involved are canceled.
In the aforementioned papers, two complementary structures are separately discussed. In [1, 10, 11, 14, 19, 26], the systems examined are cooperative while in [13, 28, 24, 25], they are competitive. For system (P), these both structures are closely related to the sign of exponents \(\alpha_{2}\) and \(\beta_{1}.\) Namely, (P) is cooperative if \(\min\{\alpha_{2},\beta_{1}\}>0\) whereas for opposite strict inequality, (P) is competitive. It should be noted that, in the present paper, these both complementary structures for the system (P) are handled simultaneously without referring to them despite their important structural disparity that makes the right hand side in (P) behaving in a drastically different way.
Motivated by the aforementioned facts, our main concern is the question of existence of solutions for system (P). By adequate truncation and
owing to Schauder's fixed point theorem, we first develop two new sub-supersolutions Theorems for a general singular Neumann Systems involving \(p\)-Laplacian operator (cf. Theorems 2.1 and 2.2, section 2). These results address respectively the case of bounded and unbounded nonlinearities, restricted to the rectangle formed by the sub-supersolution pair. Thence, the nature and properties of the sub-supersolutions are meaningfully impacted which, henceforth, can be constructed on a wider choice of functions than specified in the literature. Namely, this enable to consider subsolutions subject to homogeneous Dirichlet boundary conditions, what was not possible in [16] under assumption (H) stated therein. It is worth noting that, unlike to what has been stated in [27], sub-supersolutions cannot have both zero traces on the boundary \(\partial\Omega\). This would lead to a solution for problem (P) with zero trace condition on \(\partial\Omega\) and therefore, according to [17, Lemma 3.1], its normal derivative would be nontrivial, which is absurd. Another important issue being addressed by Theorems 2.1 and 2.2 concern the property sign of normal derivative on \(\partial\Omega\) of sub-supersolutions. It is established that the condition (2.3) is crucial to handle quasilinear Neumann problems via sub-supersolutions method and therefore, in no case could be ignored. Furthermore, in the aforementioned Theorems, we mention that no sign condition is required on the right-hand side nonlinearities. Hence, they can be used for large classes of quasilinear singular problems, including those discussed in [16, 27].
In the prospect of applying Theorems 2.1 and 2.2, pairs of sub-supersolutions for system (P) are constructed by exploiting spectral properties of the \(p\)-Laplacian operator as well as properties of torsion problems, both subject to Dirichlet or Neumann boundary conditions (cf. section 4). A suitable adjustment of constants is also required. Moreover, by making some specific and necessary adjustments, it is quite possible to construct a pair of sub-supersolution in the spirit of [16] which verifies assumption (2.3). Nevertheless, although their new form is appropriate, it no longer leads to an infinite sequence of solutions, as it is stated in [16]. This seems to be unfeasible under assumption (2.3).
The sub-supersolutions pairs establish a location of solutions of (P) provided by Theorems 3.1 and 3.2 (cf. section 3). We show that their regularity property depend on the behavior of the subsolution near the boundary \(\partial\Omega\). Precisely, when the subsolution has a zero trace condition on \(\partial\Omega\), solutions \((u,v)\) of (P) are bounded in \(L^{\infty}(\Omega)\times L^{\infty}(\Omega)\), whereas in the complementary case, \((u,v)\) are bounded in \(\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\). Note that the absence of \(\mathcal{C}^{1}\)-bound for solutions in the previous case is rather due to the fact that we have not been able to find in the literature an equivalent of the regularity theorem for singular Dirichlet problems [17, Lemma 3.1] to the case of Neumann boundary conditions. Regularity results for singular Neumann problems remains an open question.
A second main objective of our work is to provide a criterion ensuring the uniqueness of a solution to problem (P). In this direction, the uniqueness
result is established for \(\mathcal{C}^{1}\)-bound solutions. The proof is based on the adaption of argument by Krasnoselskii [18] where properties of sub-supersolutions are crucial. In this part of our work, we assume all exponents in (1.1) negative, subject to some additional restrictions as described in our result (cf. section 3). It is worth noting that the previous argument does not apply to \(L^{\infty}\)- bound solutions simply because the obtained supersolutions are not comparable to the distance function \(d(x)\) in \(\Omega\).
The rest of this article is organized as follows. Section 2 contains the existence theorems involving sub-supersolutions. Section 3 presents abstract existence and uniqueness results. Section 4 deals with the existence and uniqueness of solutions for problem (P).
## 2. Sub-super-solution theorems
The Sobolev spaces \(W^{1,p_{i}}(\Omega)\) and \(W^{1,p_{i}}_{0}(\Omega)\) will be equipped with the norm
\[\|w\|_{1,p_{i}}:=\left(\|w\|_{p_{i}}^{p_{i}}+\|\nabla w\|_{p_{i}}^{p_{i}} \right)^{\frac{1}{p_{i}}},\quad w\in W^{1,p_{i}}(\Omega),\]
\[\|w\|_{1,p_{i}}:=\|\nabla w\|_{p_{i}},\quad w\in W^{1,p_{i}}_{0}(\Omega),\]
where, as usual,
\[\|w\|_{p_{i}}:=\left\{\begin{array}{ll}\left(\int_{\Omega}|w(x)|^{p_{i}} \mathrm{d}x\right)^{\frac{1}{p_{i}}}&\mbox{ if }p_{i}<+\infty,\\ &\\ \mathop{ess\,\sup}_{x\in\Omega}|w(x)|&\mbox{ otherwise.}\end{array}\right.\]
We denote by \((W^{1,p_{i}}(\Omega))^{*}\) the topological dual space of \(W^{1,p_{i}}(\Omega)\). Moreover,
\[W^{1,p_{i}}_{+}(\Omega):=\{w\in W^{1,p_{i}}(\Omega):w\geq 0\},\quad W^{1,p_{i}} _{b}(\Omega):=W^{1,p_{i}}(\Omega)\cap L^{\infty}(\Omega),\]
and
\[W^{1,p_{i}}_{0,b}(\Omega):=W^{1,p_{i}}_{0}(\Omega)\cap L^{\infty}(\Omega)\]
We also utilize
\[\mathcal{C}^{1,\tau}_{+}(\overline{\Omega}) : =\{w\in\mathcal{C}^{1,\tau}(\overline{\Omega}):w\geq 0\mbox{ for all }x\in\overline{\Omega}\},\] \[int\mathcal{C}^{1,\tau}_{+}(\overline{\Omega}) : =\{w\in\mathcal{C}^{1,\tau}_{+}(\overline{\Omega}):w>0\mbox{ for all }x\in\overline{\Omega}\},\mbox{ for }\tau\in(0,1).\]
In what follows, we set \(r^{\pm}:=\max\{\pm r,0\}\) and we denote by \(\gamma_{0}\) the unique continuous linear map \(\gamma_{0}:W^{1,p_{i}}(\Omega)\to L^{p_{i}}(\partial\Omega)\) known as the trace map such that \(\gamma_{0}(u)=u_{/\partial\Omega}\), for all \(u\in W^{1,p}(\Omega)\) and verifies the property
\[\gamma_{0}(u^{+})=\gamma_{0}(u)^{+}\ \mbox{ for all }u\in W^{1,p_{i}}(\Omega), \tag{2.1}\]
(see, e.g., [23]).
Hereafter, we denote by \(d(x)\) the distance from a point \(x\in\overline{\Omega}\) to the boundary \(\partial\Omega\), where \(\overline{\Omega}=\Omega\cup\partial\Omega\) is the closure of \(\Omega\subset\mathbb{R}^{N}\). For \(1<r<N\) and \(-r<s\leq 0\), it is known that
\[\left(\int_{\Omega}d(x)^{s}|u(x)|^{r}\mathrm{d}x\right)^{\frac{1}{r}}\leq C\| u\|_{1,r}\quad\forall\,u\in W^{1,r}(\Omega),\]
with suitable \(C>0\); see [29, Theorem 19.9, case (19.29)]. Accordingly, by Holder's inequality, if \(-1<\beta\leq 0\) then
\[\int_{\Omega}|d(x)^{\beta}u(x)|\,\mathrm{d}x\leq|\Omega|^{\frac{1}{r^{\prime}}} \left(\int_{\Omega}d(x)^{\beta r}|u(x)|^{r}\mathrm{d}x\right)^{\frac{1}{r}} \leq C|\Omega|^{\frac{1}{r^{\prime}}}\|u\|_{1,r},\ \ u\in W^{1,r}(\Omega). \tag{2.2}\]
Finally, we say that \(j:\Omega\times\mathbb{R}^{2}\to\mathbb{R}\) is a Caratheodory function provided
* \(x\mapsto j(x,s,t)\) is measurable for every \((s,t)\in\mathbb{R}^{2}\), and
* \((s,t)\mapsto j(x,s,t)\) is continuous for almost all \(x\in\Omega\).
This section investigates the existence of solutions to system
\[(\mathrm{P}_{f_{1},f_{2}})\qquad\left\{\begin{array}{ll}-\Delta_{p_{1}}u+|u |^{p_{1}-2}u=f_{1}(x,u,v)&\mbox{in }\Omega,\\ -\Delta_{p_{2}}v+|v|^{p_{2}-2}v=f_{2}(x,u,v)&\mbox{in }\Omega,\\ \frac{\partial u}{\partial\eta}=\frac{\partial v}{\partial\eta}=0&\mbox{on } \partial\Omega,\end{array}\right.\]
where \(f_{i}:\Omega\times\mathbb{R}^{2}\to\mathbb{R}\) satisfy Caratheodory's conditions. The following assumptions will be posited.
* With appropriate \((\underline{u},\underline{v}),(\overline{u},\overline{v})\in W^{1,p_{1}}_{b}( \Omega)\times W^{1,p_{2}}_{b}(\Omega)\) such that (2.3) \[\max\{\frac{\partial u}{\partial\eta},\frac{\partial v}{\partial\eta}\}\leq 0 \leq\min\{\frac{\partial\overline{u}}{\partial\eta},\frac{\partial\overline{v} }{\partial\eta}\}\mbox{ on }\partial\Omega,\] one has \(\underline{u}\leq\overline{u}\), \(\underline{v}\leq\overline{v}\), as well as (2.4) \[\left\{\begin{array}{ll}\int_{\Omega}|\nabla\underline{u}|^{p_{1}-2}\nabla \underline{u}\nabla\varphi\,\mathrm{d}x+\int_{\Omega}|\underline{u}|^{p_{1}- 2}\underline{u}\varphi\,\mathrm{d}x\\ -\int_{\partial\Omega}|\nabla\underline{u}|^{p_{1}-2}\frac{\partial u}{ \partial\eta}\gamma_{0}(\varphi)\,\,\mathrm{d}s\leq\int_{\Omega}f_{1}(\cdot, \underline{u},v)\varphi\,\mathrm{d}x,\\ \int_{\Omega}|\nabla\underline{v}|^{p_{2}-2}\nabla\underline{v}\nabla\psi\, \mathrm{d}x+\int_{\Omega}|\underline{v}|^{p_{2}-2}\underline{v}\,\psi\, \mathrm{d}x\\ -\int_{\partial\Omega}|\nabla\underline{v}|^{p_{2}-2}\frac{\partial\underline {v}}{\partial\eta}\gamma_{0}(\psi)\,\,\mathrm{d}s\leq\int_{\Omega}f_{2}( \cdot,u,\underline{v})\psi\,\mathrm{d}x,\end{array}\right.\] (2.5) \[\left\{\begin{array}{ll}\int_{\Omega}|\nabla\overline{u}|^{p_{1}-2}\nabla \overline{u}\,\nabla\varphi\,\mathrm{d}x+\int_{\Omega}|\overline{u}|^{p_{1}- 2}\overline{u}\,\varphi\,\mathrm{d}x\\ -\int_{\partial\Omega}|\nabla\overline{u}|^{p_{1}-2}\frac{\partial\overline{u }}{\partial\eta}\gamma_{0}(\varphi)\,\,\mathrm{d}s\geq\int_{\Omega}f_{1}( \cdot,\overline{u},v)\varphi\,\mathrm{d}x,\\ \int_{\Omega}|\nabla\overline{v}|^{p_{2}-2}\nabla\overline{v}\,\nabla\psi\, \mathrm{d}x+\int_{\Omega}|\overline{v}|^{p_{2}-2}\overline{v}\,\psi\, \mathrm{d}x\\ -\int_{\partial\Omega}|\nabla\overline{v}|^{p_{2}-2}\frac{\partial\overline{v }}{\partial\eta}\gamma_{0}(\psi)\,\,\mathrm{d}s\geq\int_{\Omega}f_{2}(\cdot,u,\overline{v})\psi\,\mathrm{d}x\end{array}\right.\] for all \((\varphi,\psi)\in W^{1,p_{1}}_{+}(\Omega)\times W^{1,p_{2}}_{+}(\Omega)\), \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) such that \((u,v)\in[\underline{u},\overline{u}]\times[\underline{v},\overline{v}]\).
* For appropriate \(C>0\) and \(\gamma\in(-1,0)\) one has \[\max|f_{i}(x,s,t)|\leq Cd(x)^{\gamma}\quad\mbox{in}\quad\Omega\times[ \underline{u},\overline{u}]\times[\underline{v},\overline{v}].\]
Note that under \((\mathrm{H}_{2})\) and by virtue of the Hardy-Sobolev inequality type in (2.2), the integrals involving \(f_{1}\) and \(f_{2}\) in (2.4) and (2.5) take sense.
**Theorem 2.1**.: _Suppose \((\mathrm{H}_{1})\)-\((\mathrm{H}_{2})\) hold true. Then, problem \((\mathrm{P}_{f_{1},f_{2}})\) possesses a solution \((u,v)\in W^{1,p_{1}}_{b}(\Omega)\times W^{1,p_{2}}_{b}(\Omega)\) such that_
\[\underline{u}\leq u\leq\overline{u}\quad\mbox{and}\quad\underline{v}\leq v \leq\overline{v}. \tag{2.6}\]
_Moreover, \(\frac{\partial u}{\partial\eta}=\frac{\partial v}{\partial\eta}=0\) on \(\partial\Omega\)._
Proof.: Given \((z_{1},z_{2})\in L^{p_{1}}(\Omega)\times L^{p_{2}}(\Omega)\), we define
\[\mathrm{T}_{1}(z_{1}):=\left\{\begin{array}{ll}\underline{u}&\text{when }z_{1} \leq\underline{u},\\ z_{1}&\text{if }\underline{u}\leq z_{1}\leq\overline{u},\\ \overline{u}&\text{otherwise},\end{array}\right.\quad\mathrm{T}_{2}(z_{2}):= \left\{\begin{array}{ll}\underline{v}&\text{when }z_{2}\leq\underline{v},\\ z_{2}&\text{if }\underline{v}\leq z_{2}\leq\overline{v},\\ \overline{v}&\text{otherwise}.\end{array}\right. \tag{2.7}\]
Assumption (\(\mathrm{H}_{2}\)) together with Hardy-Sobolev type inequality (2.2) we infer that
\[f_{i}(x,\mathrm{T}_{1}(z_{1}),\mathrm{T}_{2}(z_{2}))\in\left(W^{1,p_{i}}( \Omega)\right)^{*},\text{ for }i=1,2.\]
Then, from Minty-Browder Theorem (see, e.g., [4, Theorem V.15]), it follows that the auxiliary problem
\[\left\{\begin{array}{ll}-\Delta_{p_{1}}u+|u|^{p_{1}-2}u=f_{1}(x,\mathrm{T}_ {1}(z_{1}),\mathrm{T}_{2}(z_{2}))&\text{in }\Omega,\\ -\Delta_{p_{2}}v+|v|^{p_{2}-2}v=f_{2}(x,\mathrm{T}_{1}(z_{1}),\mathrm{T}_{2}( z_{2}))&\text{in }\Omega,\\ \frac{\partial u}{\partial\eta}=\frac{\partial v}{\partial\eta}=0&\text{on } \partial\Omega,\end{array}\right. \tag{2.8}\]
admits a unique solution \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\).
Let us introduce the operator
\[\begin{array}{ccc}\mathcal{T}:&L^{p_{1}}(\Omega)\times L^{p_{2}}(\Omega)& \rightarrow&W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\\ &(z_{1},z_{2})&\mapsto&(u,v).\end{array}\]
We note from (2.8) that any fixed point of \(\mathcal{T}\) within \([\underline{u},\overline{u}]\times[\underline{v},\overline{v}]\) coincides with the weak solution of (\(\mathrm{P}_{f_{1},f_{2}}\)).
Let us show that \(\mathcal{T}\) is continuous. Let \((z_{1,n},z_{2,n})\rightarrow(z_{1},z_{2})\) in \(L^{p_{1}}(\Omega)\times L^{p_{2}}(\Omega)\) for all \(n\). Denote \((u_{n},v_{n})=\mathcal{T}(z_{1,n},z_{2,n})\), which reads as
\[\begin{array}{c}\int_{\Omega}|\nabla u_{n}|^{p_{1}-2}\nabla u_{n}\nabla \varphi_{1}\,\mathrm{d}x+\int_{\Omega}|u_{n}|^{p_{1}-2}u_{n}\varphi_{1}\, \mathrm{d}x\\ =\int_{\Omega}f_{1}(x,\mathrm{T}_{1}(z_{1,n}),\mathrm{T}_{2}(z_{2,n}))\varphi_ {1}\,\mathrm{d}x\end{array} \tag{2.9}\]
and
\[\begin{array}{c}\int_{\Omega}|\nabla v_{n}|^{p_{2}-2}\nabla v_{n}\nabla \varphi_{2}\,\mathrm{d}x+\int_{\Omega}|v_{n}|^{p_{2}-2}v_{n}\varphi_{2}\, \mathrm{d}x\\ =\int_{\Omega}f_{2}(x,\mathrm{T}_{1}(z_{1,n}),\mathrm{T}_{2}(z_{2,n}))\varphi_ {2}\,\mathrm{d}x\end{array} \tag{2.10}\]
for all \(\varphi_{i}\in W^{1,p_{i}}(\Omega),\ i=1,2\). Inserting \((\varphi_{1},\varphi_{2})=(u_{n},v_{n})\) in (2.9) and (2.10), using (\(\mathrm{H}_{2}\)) and (2.2), we get
\[\|u_{n}\|_{1,p_{1}}^{p_{1}}=\int_{\Omega}f_{1}(x,\mathrm{T}_{1}(z_{1,n}), \mathrm{T}_{2}(z_{2,n}))u_{n}\,\mathrm{d}x\leq C\int_{\Omega}d(x)^{\gamma}u_{n }\,\mathrm{d}x \tag{2.11}\]
and
\[\|v_{n}\|_{1,p_{2}}^{p_{2}}=\int_{\Omega}f_{2}(x,\mathrm{T}_{1}(z_{1,n}), \mathrm{T}_{2}(z_{2,n}))v_{n}\,\mathrm{d}x\leq C\int_{\Omega}d(x)^{\gamma}v_{n }\,\mathrm{d}x. \tag{2.12}\]
Since \(-1<\gamma<0\), by virtue of the Hardy-Sobolev inequality type in (2.2), the last integrals in (2.11) and (2.12) are finite and it holds
\[\|u_{n}\|_{1,p_{1}}^{p_{1}}\leq C_{1}\,\|u_{n}\|_{1,p_{1}} \tag{2.13}\]
and
\[\|v_{n}\|_{1,p_{2}}^{p_{2}}\leq C_{1}\,\|v_{n}\|_{1,p_{2}}\,, \tag{2.14}\]
for a certain \(C_{1}>0\) independent of \(n\). Thus, \(\{u_{n}\}\) and \(\{v_{n}\}\) are bounded in \(W^{1,p_{1}}(\Omega)\) and \(W^{1,p_{2}}(\Omega)\), respectively. So, passing to relabeled subsequences, we get
\[(u_{n},v_{n})\rightharpoonup(u,v)\;\;\text{in}\;W^{1,p_{1}}(\Omega)\times W^{1, p_{2}}(\Omega), \tag{2.15}\]
for certain \((u,v)\) in \(W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\). Setting \(\varphi_{1}=u_{n}-u\) in (2.9) we find that
\[\begin{array}{l}\int_{\Omega}|\nabla u_{n}|^{p_{1}-2}\nabla u_{n}\nabla(u_{n }-u)\,\mathrm{d}x+\int_{\Omega}|u_{n}|^{p_{1}-2}u_{n}(u_{n}-u)\,\mathrm{d}x\\ =\int_{\Omega}f_{1}(x,\mathrm{T}_{1}(z_{1,n}),\mathrm{T}_{2}(z_{2,n}))(u_{n}-u )\,\,\mathrm{d}x\end{array}\]
Note that \((\mathrm{H}_{2})\) as well as (2.2) ensure that
\[f_{1}(x,\mathrm{T}_{1}(z_{1,n}),\mathrm{T}_{2}(z_{2,n}))(u_{n}-u)\in L^{1}( \Omega). \tag{2.16}\]
Thus, Fatou's Lemma implies
\[\begin{array}{l}\lim_{n\to\infty}\sup\int_{\Omega}f_{1}(x,\mathrm{T}_{1}(z_ {1,n}),\mathrm{T}_{2}(z_{2,n}))(u_{n}-u)\,\,dx\\ \leq\int_{\Omega}\lim_{n\to\infty}\sup\left(f_{1}(x,\mathrm{T}_{1}(z_{1,n}), \mathrm{T}_{2}(z_{2,n}))(u_{n}-u)\right)\,\,dx\to 0,\end{array}\]
showing that
\[\lim_{n\to\infty}\sup\left\langle-\Delta_{p_{1}}u_{n}+|u_{n}|^{p_{1}-2}u_{n}, u_{n}-u\right\rangle\leq 0.\]
Likewise, we prove that
\[\lim_{n\to\infty}\sup\left\langle-\Delta_{p_{2}}v_{n}+|v_{n}|^{p_{2}-2}v_{n}, v_{n}-v\right\rangle\leq 0.\]
Then, the \(S_{+}\)-property of \(-\Delta_{p_{i}}\) on \(W^{1,p_{i}}(\Omega)\) (see, e.g., [23, Proposition 2.72]) along with (2.15) implies
\[(u_{n},v_{n})\to(u,v)\;\text{in}\;W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega). \tag{2.17}\]
Through (2.9), (2.10) and the invariance of \(L^{p_{1}}(\Omega)\times L^{p_{2}}(\Omega)\) by \(\mathcal{T}\), we infer that \((u,v)=\mathcal{T}(z_{1},z_{2})\), as desired.
Let us verify that \(\mathcal{T}(L^{p_{1}}(\Omega)\times L^{p_{2}}(\Omega))\) is a relatively compact subset. If \((u_{n},v_{n}):=\mathcal{T}(y_{1,n},y_{2,n})\), \(n\in\mathbb{N}\), (2.9) and (2.10) can be written. Hence, the previous argument yields a pair \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) fulfilling (2.17), possibly along a subsequence.
We are thus in a position to apply Schauder's fixed point theorem to the map \(\mathcal{T}\), which establishes the existence of \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) satisfying \((u,v)=\mathcal{T}(u,v).\) Due to [7, Theorem 3], one has
\[\frac{\partial u}{\partial\eta}=\frac{\partial v}{\partial\eta}=0\;\;\text{ on}\;\partial\Omega.\]
Hence, \((u,v)\) is a solution of \((\mathrm{P}_{f_{1},f_{2}})\). Let us show that (2.6) is fulfilled. Put \(\zeta=(\underline{u}-u)^{+}\) and suppose \(\zeta\neq 0\). Then, from \((\mathrm{H}_{1})\), (2.7) and (2.8), we get
\[\begin{array}{l}\int_{\{u<\underline{u}\}}|\nabla u|^{p_{1}-2}\nabla u\nabla \zeta\,\,\mathrm{d}x+\int_{\{u<\underline{u}\}}|u|^{p_{1}-2}u\zeta\,\,\mathrm{d}x \\ =\int_{\Omega}|\nabla u|^{p_{1}-2}\nabla u\nabla\zeta\,\,\mathrm{d}x+\int_{ \Omega}|u|^{p_{1}-2}u\zeta\,\,\mathrm{d}x=\int_{\Omega}f_{1}(x,\mathrm{T}_{1} (u),\mathrm{T}_{2}(v))\zeta\,\,\mathrm{d}x\\ =\int_{\{u<\underline{u}\}}f_{1}(x,\mathrm{T}_{1}(u),\mathrm{T}_{2}(v))\zeta\, \,\mathrm{d}x=\int_{\{u<\underline{u}\}}f_{1}(x,\underline{u},\mathrm{T}_{2}( v))\zeta\,\,\mathrm{d}x\\ \geq\int_{\{u<\underline{u}\}}|\nabla\underline{u}|^{p_{1}-2}\nabla\underline{u} \nabla\zeta\,\,\mathrm{d}x+\int_{\{u<\underline{u}\}}|\underline{u}|^{p_{1}-2 }\underline{u}\zeta\,\,\mathrm{d}x-\int_{\partial\Omega}|\nabla\underline{u}|^{ p_{1}-2}\frac{\partial u}{\partial\eta}\gamma_{0}(\zeta)\,\,\mathrm{d}s.\end{array}\]
which by (H\({}_{1}\)) and (2.1), implies that
\[\begin{array}{l}\int_{\{u<\underline{u}\}}(|\nabla\underline{u}|^{p_{1}-2} \nabla\underline{u}-|\nabla u|^{p_{1}-2}\nabla u)\nabla\zeta\ \mathrm{d}x+\int_{\{u<\underline{u}\}}(|\underline{u}|^{p_{1}-2} \underline{u}-|u|^{p_{1}-2}u)\zeta\ \mathrm{d}x\\ \leq\int_{\partial\Omega}|\nabla\underline{u}|^{p_{1}-2}\frac{ \partial u}{\partial\eta}\gamma_{0}(\zeta)\ \mathrm{d}s\leq 0,\end{array}\]
a contradiction. Hence \(u\geq\underline{u}\) in \(\Omega\). Arguing similarly, set \(\hat{\zeta}=(u-\overline{u})^{+}\) and assume that \(\hat{\zeta}\neq 0.\) Then
\[\begin{array}{l}\int_{\{u>\overline{u}\}}|\nabla u|^{p_{1}-2} \nabla u\nabla\hat{\zeta}\ \mathrm{d}x+\int_{\{u>\overline{u}\}}|u|^{p_{1}-2}u\hat{\zeta}\ \mathrm{d}x\\ =\int_{\Omega}|\nabla u|^{p_{1}-2}\nabla u\nabla\hat{\zeta}\ \mathrm{d}x+\int_{ \Omega}|u|^{p_{1}-2}u\hat{\zeta}\ \mathrm{d}x=\int_{\Omega}f_{1}(x,\mathrm{T}_{1}(u), \mathrm{T}_{2}(v))\hat{\zeta}\ \mathrm{d}x\\ =\int_{\{u>\overline{u}\}}f_{1}(x,\mathrm{T}_{1}(u),\mathrm{T}_{2}(v))\hat{ \zeta}\ \mathrm{d}x=\int_{\{u>\overline{u}\}}f_{1}(x,\overline{u},\mathrm{T}_{2}(v)) \hat{\zeta}\ \mathrm{d}x\\ \leq\int_{\{u>\overline{u}\}}|\nabla\overline{u}|^{p_{1}-2}\nabla\overline{u} \nabla\hat{\zeta}\ \mathrm{d}x+\int_{\{u>\overline{u}\}}|\overline{u}|^{p_{1}-2} \overline{u}\hat{\zeta}\ \mathrm{d}x-\int_{\partial\Omega}|\nabla\overline{u}|^{p_{1}-2} \frac{\partial\overline{u}}{\partial\eta}\gamma_{0}(\hat{\zeta})\ \mathrm{d}s\end{array}\]
which leads to
\[\begin{array}{l}\int_{\{u>\overline{u}\}}(|\nabla u|^{p_{1}-2} \nabla u-|\nabla\overline{u}|^{p_{1}-2}\nabla\overline{u})\nabla\hat{\zeta}\ \mathrm{d}x+\int_{\{u>\overline{u}\}}(|u|^{p_{1}-2}u-|\overline{u}|^{p_{1}-2} \overline{u})\hat{\zeta}\ \mathrm{d}x\\ \leq-\int_{\partial\Omega}|\nabla\overline{u}|^{p_{1}-2}\frac{ \partial\overline{u}}{\partial\eta}\gamma_{0}(\hat{\zeta})\ \mathrm{d}s\leq 0,\end{array}\]
a contradiction. Thus, we have \(u\leq\overline{u}\) in \(\Omega\). A quite similar argument provides that \(\underline{v}\leq v\leq\overline{v}\) in \(\Omega\). Therefore, since \((\underline{u},\underline{v}),(\overline{u},\overline{v})\in L^{\infty}(\Omega )\times L^{\infty}(\Omega)\), we conclude that \((u,v)\in W^{1,p_{1}}_{b}(\Omega)\times W^{1,p_{2}}_{b}(\Omega)\). This completes the proof.
Instead of (H\({}_{2}\)), if we assume that \(f_{1}\) and \(f_{2}\) are bounded in \(\Omega\times[\underline{u},\overline{u}]\times[\underline{v},\overline{v}]\), the conclusion of Theorem 2.1 remains true but assigning more regularity to the solution.
* There exists a constant \(M>0\) such that \[|f_{i}(x,u,v)|\leq M\ \mbox{in}\ \Omega\times[\underline{u},\overline{u}] \times[\underline{v},\overline{v}],\ \mbox{for}\ i=1,2.\]
**Theorem 2.2**.: _Suppose \((\mathrm{H}_{1})\) and \((\mathrm{H}_{3})\) hold true. Then, problem \((\mathrm{P}_{f_{1},f_{2}})\) possesses a solution \((u,v)\in\mathcal{C}^{1,\tau}(\overline{\Omega})\times\mathcal{C}^{1,\tau}( \overline{\Omega})\) with suitable \(\tau\in(0,1)\) such that_
\[\underline{u}\leq u\leq\overline{u}\quad\mbox{and}\quad\underline{v}\leq v \leq\overline{v}. \tag{2.18}\]
_Moreover, \(\frac{\partial u}{\partial\eta}=\frac{\partial v}{\partial\eta}=0\) on \(\partial\Omega\)._
Proof.: The proof is similar in the spirit to the one of Theorem 2.1. Given \((z_{1},z_{2})\in\mathcal{C}(\overline{\Omega})\times\mathcal{C}(\overline{ \Omega})\), we introduce the auxiliary system
\[\left\{\begin{array}{ll}-\Delta_{p_{1}}u+|u|^{p_{1}-2}u=f_{1}(x,\mathrm{T}_{ 1}(z_{1}),\mathrm{T}_{2}(z_{2}))&\mbox{in}\ \Omega,\\ -\Delta_{p_{2}}v+|v|^{p_{2}-2}v=f_{2}(x,\mathrm{T}_{1}(z_{1}),\mathrm{T}_{2}(z _{2}))&\mbox{in}\ \Omega,\\ \frac{\partial u}{\partial\eta}=\frac{\partial v}{\partial\eta}=0&\mbox{on}\ \partial\Omega,\end{array}\right. \tag{2.19}\]
where the operators \(\mathrm{T}_{1}\) and \(\mathrm{T}_{2}\) are defined by (2.7). Notice that \((\mathrm{H}_{3})\) together with Minty-Browder Theorem (see, e.g., [4, Theorem V.15]) ensure that (2.19) admits a unique solution \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega).\) Let us introduce the operator
\[\begin{array}{cccc}\mathcal{T}:&\mathcal{C}(\overline{\Omega})\times \mathcal{C}(\overline{\Omega})&\rightarrow&\mathcal{C}^{1}(\overline{\Omega}) \times\mathcal{C}^{1}(\overline{\Omega})\\ &(z_{1},z_{2})&\mapsto&(u,v).\end{array}\]
Observe from (2.19) that any fixed point of \(\mathcal{T}\) within \([\underline{u},\overline{u}]\times[\underline{v},\overline{v}]\) coincides with the weak solution of \((\mathrm{P}_{f_{1},f_{2}})\).
By \((\mathrm{H}_{3})\) and according to the regularity result [22], it follows that \((u,v)\in\mathcal{C}^{1,\tau}(\overline{\Omega})\times\mathcal{C}^{1,\tau}( \overline{\Omega})\) and
\[\|u\|_{\mathcal{C}^{1,\tau}(\overline{\Omega})},\|v\|_{\mathcal{C}^{1,\tau}( \overline{\Omega})}\leq L_{0}, \tag{2.20}\]
for some constant \(L_{0}>0\) independent of \(u\) and \(v\). Then, the compactness of the embedding \(\mathcal{C}^{1,\tau}(\overline{\Omega})\subset\mathcal{C}^{1}(\overline{\Omega})\) implies that \(\mathcal{T}(\mathcal{C}(\overline{\Omega})\times\mathcal{C}(\overline{\Omega}))\) is a relatively compact subset of \(\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\). This proves that \(\mathcal{T}\) is compact.
Next, we show the continuity of \(\mathcal{T}\) with respect to the topology of \(\mathcal{C}(\overline{\Omega})\times\mathcal{C}(\overline{\Omega})\). Let \((z_{1,n},z_{2,n})\to(z_{1},z_{2})\) in \(\mathcal{C}(\overline{\Omega})\times\mathcal{C}(\overline{\Omega})\) for all \(n\) and denote \((u_{n},v_{n})=\mathcal{T}(z_{1,n},z_{2,n})\). Repeating the previous argument in the proof of Theorem 2.1 we obtain
\[(u_{n},v_{n})\to(u,v)\text{ in }W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}( \Omega),\]
for certain \((u,v)\) in \(W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) and \((u,v)=\mathcal{T}(z_{1},z_{2})\). On the basis of [22], the sequence \(\{(u_{n},v_{n})\}\) is bounded in \(\mathcal{C}^{1,\tau}(\overline{\Omega})\times\mathcal{C}^{1,\tau}(\overline{ \Omega})\) for certain \(\tau\in(0,1)\). Then, through the compact embedding \(\mathcal{C}^{1,\tau}(\overline{\Omega})\subset\mathcal{C}^{1}(\overline{ \Omega})\), along a relabeled subsequence, there holds \((u_{n},v_{n})\to(u,v)\) in \(\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\), showing that \(\mathcal{T}\) is continuous.
We are thus in a position to apply Schauder's fixed point theorem to the map \(\mathcal{T}\), which establishes the existence of \((u,v)\in\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{ \Omega})\) satisfying \((u,v)=\mathcal{T}(u,v).\) By \((\mathrm{H}_{3})\) together with [22], we infer that \((u,v)\in\mathcal{C}^{1,\tau}(\overline{\Omega})\times\mathcal{C}^{1,\tau}( \overline{\Omega})\), \(\tau\in(0,1)\). The rest of the proof runs as the one of Theorem 2.1.
_Remark 2.3_.: Theorems 2.1 and 2.2 remain valid if we replace Neumann boundary conditions with Dirichlet ones.
## 3. Abstract results
**Theorem 3.1**.: _For a constant \(\Lambda>0\), let \((\underline{u}_{\Lambda},\underline{v}_{\Lambda})\) and \((\overline{u}_{\Lambda},\overline{v}_{\Lambda})\) be sub-supersolutions pairs of problem \((\mathrm{P})\). Assume (1.1) holds and suppose there exists a constant \(\rho>0\) such that_
\[\underline{u}_{\Lambda},\underline{v}_{\Lambda}>\rho\text{ a.e. in }\overline{\Omega}. \tag{3.1}\]
_Then, problem \((\mathrm{P})\) admits a solution \((u,v)\in int\mathcal{C}^{1,\tau}_{+}(\overline{\Omega})\times int\mathcal{C} ^{1,\tau}_{+}(\overline{\Omega}),\) for certain \(\tau\in(0,1)\), within \([\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_{ \Lambda},\overline{v}_{\Lambda}]\). Moreover, if \(\alpha_{2},\beta_{1}\in(-1,0)\) and_
\[\max\{\frac{-\gamma_{1}\alpha_{i}}{p_{1}-1},\frac{-\gamma_{2}\beta_{i}}{p_{2}- 1}\}<p_{i}-1,\ i=1,2, \tag{3.2}\]
_with_
\[\gamma_{i}=\max\{-\alpha_{i},-\beta_{i}\},\]
_then, \((u,v)\) is unique._
Proof.: By (1.1), (3.1) and for all \((u,v)\in[\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_{ \Lambda},\overline{v}_{\Lambda}]\), we have
\[u^{\alpha_{1}}+v^{\beta_{1}}\leq\left\{\begin{array}{ll}\underline{u}_{\Lambda }^{\alpha_{1}}+\overline{v}_{\Lambda}^{\beta_{1}}&\text{ if }\beta_{1}>0\\ \underline{u}_{\Lambda}^{\alpha_{1}}+\underline{v}_{\Lambda}^{\beta_{1}}& \text{ if }\beta_{1}<0\end{array}\right.\leq\left\{\begin{array}{ll}\rho^{\alpha_{1}}+ \|\overline{v}_{\Lambda}\|_{\infty}^{\beta_{1}}&\text{ if }\beta_{1}>0\\ \rho^{\alpha_{1}}+\rho^{\beta_{1}}&\text{ if }\beta_{1}<0,\end{array}\right.\]
as well as
\[u^{\alpha_{2}}+v^{\beta_{2}}\leq\left\{\begin{array}{ll}\overline{u}_{\Lambda }^{\alpha_{2}}+\underline{v}_{\Lambda}^{\beta_{2}}&\text{ if }\alpha_{2}>0\\ \underline{u}_{\Lambda}^{\alpha_{2}}+\underline{v}_{\Lambda}^{\beta_{2}}& \text{ if }\alpha_{2}<0\end{array}\right.\leq\left\{\begin{array}{ll}\| \overline{u}_{\Lambda}\|_{\infty}^{\alpha_{2}}+\rho^{\beta_{2}}&\text{ if }\alpha_{2}>0\\ \rho^{\alpha_{2}}+\rho^{\beta_{2}}&\text{ if }\alpha_{2}<0.\end{array}\right.\]
Then, thanks to Theorem 2.2, we conclude that (P) possesses a solution \((u,v)\in\mathcal{C}^{1,\tau}(\overline{\Omega})\times\mathcal{C}^{1,\tau}( \overline{\Omega})\), with suitable \(\tau\in(0,1)\), within \([\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_{\Lambda },\overline{v}_{\Lambda}]\). Due to (3.1), we deduce that \((u,v)\in int\mathcal{C}^{1,\tau}_{+}(\overline{\Omega})\times int\mathcal{C}^{ 1,\tau}_{+}(\overline{\Omega})\).
We proceed to show that \((u,v)\) is a unique solution in \([\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_{\Lambda },\overline{v}_{\Lambda}]\) for \(\alpha_{2},\beta_{1}\in(-1,0)\). To this end, let \((u_{1},v_{1})\) and \((u_{2},v_{2})\) be two distinct positive solutions of (P) within \([\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_{\Lambda },\overline{v}_{\Lambda}]\). Set
\[\tau=\sup\{c\in\mathbb{R}_{+},\;\;cu_{2}\leq u_{1}\;\text{ and }cv_{2}\leq v_{1} \;\text{ in }\Omega\}.\]
Then \(0<\tau<\infty\) because
\[\min\{\frac{\rho}{||u_{2}||_{\infty}},\frac{\rho}{||v_{2}||_{\infty}}\}\leq \tau\leq\max\{\frac{||u_{1}||_{\infty}}{\rho},\frac{||v_{1}||_{\infty}}{\rho}\} \tag{3.3}\]
If we manage to show that \(\tau\geq 1\), we are done as this entails \(u_{2}\leq u_{1}\) and \(v_{2}\leq v_{1}\) in \(\Omega\) and thus, by interchanging the roles of \((u_{1},v_{1})\) and \((u_{2},v_{2})\) we get \(u_{2}\geq u_{1}\) and \(v_{2}\geq v_{1}\) in \(\Omega\). By contradiction, suppose that \(0<\tau<1\). Then, by (1.1) and since \(\alpha_{2},\beta_{1}<0\), we infer that
\[\min\{\tau^{-\alpha_{i}},\tau^{-\beta_{i}}\}\geq\tau^{\gamma_{i}}\;\text{ and }\;\;\min\{\tau^{-\frac{\gamma_{1}\alpha_{i}}{p_{1}-1}},\tau^{-\frac{\gamma_{2} \beta_{i}}{p_{2}-1}}\}\geq\tau^{\hat{\gamma}_{i}},\]
with
\[\hat{\gamma}_{i}=\max\{\frac{-\gamma_{1}\alpha_{i}}{p_{1}-1},\frac{-\gamma_{2} \beta_{i}}{p_{2}-1}\}>0,\text{for }i=1,2.\]
A direct computations show that
\[\begin{array}{l}-\Delta_{p_{1}}u_{2}+|u_{2}|^{p_{1}-2}u_{2}=u_{2}^{\alpha_{1} }+v_{2}^{\beta_{1}}=\frac{(\tau u_{2})^{\alpha_{1}}}{\tau^{\alpha_{1}}}+ \frac{(\tau v_{2})^{\beta_{1}}}{\tau^{\beta_{1}}}\\ \geq\tau^{\gamma_{1}}\left((\tau u_{2})^{\alpha_{1}}+(\tau v_{2})^{\beta_{1}} \right)\geq\tau^{\gamma_{1}}(u_{1}^{\alpha_{1}}+v_{1}^{\beta_{1}})\\ =\tau^{\gamma_{1}}\left(-\Delta_{p_{1}}u_{1}+|u_{1}|^{p_{1}-2}u_{1}\right)\\ =-\Delta_{p_{1}}(\tau^{\frac{\gamma_{1}}{p_{1}-1}}u_{1})+|\tau^{\frac{\gamma_{ 1}}{p_{1}-1}}u_{1}|^{p_{1}-2}(\tau^{\frac{\gamma_{1}}{p_{1}-1}}u_{1})\end{array}\]
and similarly
\[\begin{array}{l}-\Delta_{p_{2}}v_{2}+|v_{2}|^{p_{2}-2}v_{2}=u_{2}^{\alpha_{2} }+v_{2}^{\beta_{2}}=\frac{(\tau u_{2})^{\alpha_{2}}}{\tau^{\alpha_{2}}}+\frac{ (\tau v_{2})^{\beta_{2}}}{\tau^{\beta_{2}}}\\ \geq\tau^{\gamma_{2}}\left((\tau u_{2})^{\alpha_{2}}+(\tau v_{2})^{\beta_{2}} \right)\geq\tau^{\gamma_{2}}(u_{1}^{\alpha_{2}}+v_{1}^{\beta_{2}})\\ =-\Delta_{p_{2}}(\tau^{\frac{\gamma_{2}}{p_{2}-1}}v_{1})+|\tau^{\frac{\gamma_{ 2}}{p_{2}-1}}v_{1}|^{p_{2}-2}(\tau^{\frac{\gamma_{2}}{p_{2}-1}}v_{1}).\end{array}\]
The weak comparison principle (see [30, Lemma 3.2]) yields
\[u_{2}\geq\tau^{\frac{\gamma_{1}}{p_{1}-1}}u_{1}\;\text{ and }\;v_{2}\geq\tau^{\frac{ \gamma_{2}}{p_{2}-1}}v_{1}\;\text{ in }\Omega. \tag{3.4}\]
Using (3.4) in the equations for \((u_{1},v_{1})\), we get
\[-\Delta_{p_{1}}u_{1}+|u_{1}|^{p_{1}-2}u_{1}=u_{1}^{\alpha_{1}}+v_{1} ^{\beta_{1}}=\frac{(\tau^{\frac{\gamma_{1}}{p_{1}-1}}u_{1})^{\alpha_{1}}}{\tau^ {\frac{\gamma_{1}}{p_{1}-1}}}+\frac{(\tau^{\frac{\gamma_{2}}{p_{2}-1}}v_{1})^{ \beta_{1}}}{\tau^{\frac{\gamma_{2}}{p_{2}-1}}}\] \[\geq\tau^{\hat{\gamma}_{1}}(u_{2}^{\alpha_{1}}+v_{2}^{\beta_{1}}) =\tau^{\hat{\gamma}_{1}}\left(-\Delta_{p_{1}}u_{2}+|u_{2}|^{p_{1}-2}u_{2}\right)\] \[=-\Delta_{p_{1}}(\tau^{\frac{\hat{\gamma}_{1}}{p_{1}-1}}u_{2})+| \tau^{\frac{\hat{\gamma}_{1}}{p_{1}-1}}u_{2}|^{p_{1}-2}(\tau^{\frac{\hat{ \gamma}_{1}}{p_{1}-1}}u_{2}).\]
and
\[-\Delta_{p_{2}}v_{1}+|v_{1}|^{p_{2}-2}v_{1}=u_{1}^{\alpha_{2}}+v _{1}^{\beta_{2}}=\frac{(\tau^{\frac{\gamma_{1}}{p_{1}-1}}u_{1})^{\alpha_{2}}} {\tau^{\frac{\gamma_{2}}{p_{1}-1}}}+\frac{\frac{(\tau^{\frac{\gamma_{2}}{p_{2 }-1}}}{p_{2}-1}v_{1})^{\beta_{2}}}{\tau^{\frac{\gamma_{2}}{p_{2}-1}}}\] \[\geq\tau^{\hat{\gamma}_{2}}(u_{2}^{\alpha_{2}}+v_{2}^{\beta_{2}}) =-\Delta_{p_{2}}(\tau^{\frac{\hat{\gamma}_{2}}{p_{2}-1}}v_{2})+|\tau^{\frac{ \hat{\gamma}_{2}}{p_{2}-1}}v_{2}|^{p_{2}-2}(\tau^{\frac{\hat{\gamma}_{2}}{p_{2 }-1}}v_{2})\text{ in }\Omega.\]
Owing to [30, Lemma 3.2] we derive
\[u_{1}\geq\tau^{\frac{\hat{\gamma}_{1}}{p_{1}-1}}u_{2}\ \ \text{and}\ \ v_{1}\geq\tau^{\frac{\hat{\gamma}_{2}}{p_{2}-1}}v_{2}\ \ \text{in }\Omega.\]
In view of (3.2), we deduce that
\[1>\tau^{\frac{\hat{\gamma}_{1}}{p_{1}-1}}>\tau\ \ \text{and}\ \ 1>\tau^{\frac{\hat{ \gamma}_{2}}{p_{2}-1}}>\tau,\]
a contradiction with the definition of \(\tau\). Thus, \(\tau\geq 1\) and therefore, \((u_{1},v_{1})=(u_{2},v_{2})\), completing the proof of the theorem.
The existence result in the case of zero trace subsolutions on the boundary \(\partial\Omega\) is formulated as follows.
**Theorem 3.2**.: _For a constant \(\Lambda>0\), let \((\underline{u}_{\Lambda},\underline{v}_{\Lambda})\) and \((\overline{u}_{\Lambda},\overline{v}_{\Lambda})\) be sub-supersolutions pairs of problem_ (P) _and assume that assumption (1.1) holds. If \((\underline{u}_{\Lambda},\underline{v}_{\Lambda})\in W^{1,p_{1}}_{0,b}(\Omega) \times W^{1,p_{2}}_{0,b}(\Omega)\) and there exists a constant \(c>0\) such that_
\[\underline{u}_{\Lambda},\underline{v}_{\Lambda}\geq cd(x)\text{ a.e. in }\Omega, \tag{3.5}\]
_then, problem_ (P) _admits a solution \((u,v)\in W^{1,p_{1}}_{b}(\Omega)\times W^{1,p_{2}}_{b}(\Omega)\) within \([\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_{\Lambda },\overline{v}_{\Lambda}]\)._
Proof.: By (1.1) and (3.5), for all \((u,v)\in[\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_ {\Lambda},\overline{v}_{\Lambda}]\), we have
\[u^{\alpha_{1}}+v^{\beta_{1}} \leq \left\{\begin{array}{ll}\underline{u}_{\Lambda}^{\alpha_{1}}+ \overline{v}_{\Lambda}^{\beta_{1}}&\text{ if }\beta_{1}>0\\ \underline{u}_{\Lambda}^{\alpha_{1}}+\underline{v}_{\Lambda}^{\beta_{1}}& \text{ if }\beta_{1}<0\end{array}\right.\] \[\leq \left\{\begin{array}{ll}(cd(x))^{\alpha_{1}}+\|\overline{v}_{ \Lambda}\|_{\infty}^{\beta_{1}}&\text{ if }\beta_{1}>0\\ (cd(x))^{\alpha_{1}}+(cd(x))^{\beta_{1}}&\text{ if }\beta_{1}<0\end{array}\right.\] \[\leq \left\{\begin{array}{ll}(c^{\alpha_{1}}+d(x)^{-\alpha_{1}}\,\| \overline{v}_{\Lambda}\|_{\infty}^{\beta_{1}})d(x)^{\alpha_{1}}&\text{ if }\beta_{1}>0\\ (c^{\alpha_{1}}+c^{\beta_{1}})\max\{d(x))^{\alpha_{1}},d(x))^{\beta_{1}}\}& \text{ if }\beta_{1}<0\end{array}\right.\] \[\leq C_{0}\left\{\begin{array}{ll}d(x)^{\alpha_{1}}&\text{ if }\beta_{1}>0\\ \max\{d(x)^{\alpha_{1}},d(x)^{\beta_{1}}\}&\text{ if }\beta_{1}<0\end{array}\right.,\text{ in }\Omega\]
and similarly
\[u^{\alpha_{2}}+v^{\beta_{2}} \leq \left\{\begin{array}{ll}\overline{u}_{\Lambda}^{\alpha_{2}}+ \underline{v}_{\Lambda}^{\beta_{2}}&\mbox{ if }\alpha_{2}>0\\ \underline{u}_{\Lambda}^{\alpha_{2}}+\underline{v}_{\Lambda}^{\beta_{2}}& \mbox{ if }\alpha_{2}<0\end{array}\right.\leq\left\{\begin{array}{ll}\| \overline{u}_{\Lambda}\|_{\infty}^{\alpha_{2}}+(cd(x))^{\beta_{2}}&\mbox{ if }\alpha_{2}>0\\ (cd(x))^{\alpha_{2}}+(cd(x))^{\beta_{2}}&\mbox{ if }\alpha_{2}<0.\end{array}\right.\] \[\leq \tilde{C}_{0}\left\{\begin{array}{ll}d(x)^{\beta_{2}}&\mbox{ if } \alpha_{2}>0\\ \max\{d(x)^{\alpha_{2}},d(x)^{\beta_{2}}\}&\mbox{ if }\alpha_{2}<0\end{array} \right.\mbox{, in }\Omega,\]
for certain constants \(C_{0},\tilde{C}_{0}>0\). Then, thanks to Theorem 2.1, we conclude that there exists a solution \((u,v)\in W_{b}^{1,p_{1}}(\Omega)\times W_{b}^{1,p_{2}}(\Omega)\) of problem (P) within \([\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_{\Lambda },\overline{v}_{\Lambda}]\). This proves the theorem.
_Remark 3.3_.: It is worth noting that the supersolution \((\overline{u}_{\Lambda},\overline{v}_{\Lambda})\) in Theorem 3.2 should not belong to \(W_{0}^{1,p_{1}}(\Omega)\times W_{0}^{1,p_{2}}(\Omega)\) because if so, the solution \((u,v)\) would be of zero trace on the boundary \(\partial\Omega.\) Hence, [17, Lemma 3.1] ensures that \((u,v)\in\mathcal{C}^{1,\tau}(\overline{\Omega})\times\mathcal{C}^{1,\tau}( \overline{\Omega})\) which, by Hopf's Lemma (see, e.g., [1]), implies that \(\frac{\partial\overline{u}}{\partial\eta},\frac{\partial\overline{v}}{ \partial\eta}<0\) on \(\partial\Omega,\) absurd.
_Remark 3.4_.: The argument used to show the uniqueness result of Theorem 3.1 is not applicable in the context of Theorem 3.2. Being impossible to construct supersolutions for problem (P) behaving like the distance function \(d(x)\) (see Remark 3.3), then it would not be possible to get an estimate of type (3.3).
## 4. Existence and uniqueness results
Our goal is to construct sub- and super-solution pairs of (P). With this aim, consider the following nonlinear Dirichlet and Neumann eigenvalue problems
\[\left\{\begin{array}{ll}-\Delta_{p_{i}}\phi_{1,p_{i}}+|\phi_{1,p_{i}}|^{p_{ i}-2}\phi_{1,p_{i}}=\lambda_{1,p_{i}}|\phi_{1,p_{i}}|^{p_{i}-2}\phi_{1,p_{i}} \mbox{ in }\Omega\\ \phi_{1,p_{i}}=0\mbox{ on }\partial\Omega,\ i=1,2,\end{array}\right. \tag{4.1}\]
\[\left\{\begin{array}{ll}-\Delta_{p_{i}}\hat{\phi}_{1,p_{i}}+|\hat{\phi}_{1,p _{i}}|^{p_{i}-2}\hat{\phi}_{1,p_{i}}=\hat{\lambda}_{1,p_{i}}|\hat{\phi}_{1,p_ {i}}|^{p_{i}-2}\hat{\phi}_{1,p_{i}}\mbox{ in }\Omega\\ \frac{\partial\hat{\phi}_{1,p_{i}}}{\partial\eta}=0\mbox{ on }\partial\Omega,\ i=1,2, \end{array}\right. \tag{4.2}\]
where \(\phi_{1,p_{i}}\in\mathcal{C}^{1}_{+}(\overline{\Omega})\) and \(\hat{\phi}_{1,p_{i}}\in int\mathcal{C}^{1}_{+}(\overline{\Omega})\) are the eigenfunctions corresponding to the first eigenvalues \(\lambda_{1,p_{i}},\hat{\lambda}_{1,p_{i}}>0,\) respectively (see [23]). Recall that
\[\phi_{1,p_{i}}\geq c_{0}d(x)\mbox{ in }\Omega,\ \mbox{ and }\ \frac{\partial\phi_{1,p_{i}}}{\partial\eta}<0\mbox{ on }\partial\Omega, \tag{4.3}\]
for a constant \(c_{0}>0,\) and there exists a constant \(\mu>0\) such that
\[\hat{\phi}_{1,p_{i}}(x)>\mu\mbox{ for all }x\in\overline{\Omega}. \tag{4.4}\]
Consider the homogeneous Dirichlet and Neumann problems
\[\left\{\begin{array}{ll}-\Delta_{p_{i}}y_{i}+|y_{i}|^{p_{i}-2}y_{i}=1\mbox{ in }\Omega,\\ y_{i}=0\mbox{ on }\partial\Omega,\ i=1,2\end{array}\right. \tag{4.5}\]
\[\left\{\begin{array}{l}-\Delta_{p_{i}}\hat{y}_{i}+|\hat{y}_{i}|^{p_{i}-2}\hat{y }_{i}=1\ \mbox{in}\ \Omega\\ \frac{\partial\hat{y}_{i}}{\partial\eta}=0\ \mbox{on}\ \partial\Omega,\ i=1,2, \end{array}\right. \tag{4.6}\]
which admit unique solutions \(y_{i}\in\mathcal{C}^{1,\tau}(\overline{\Omega})\) and \(\hat{y}_{i}\in int\mathcal{C}^{1}_{+}(\overline{\Omega})\) satisfying
\[\frac{d}{c}\leq y_{i}\leq cd\ \mbox{in}\ \Omega,\ \ \frac{\partial y_{i}}{ \partial\eta}<0\ \mbox{on}\ \partial\Omega, \tag{4.7}\]
\[\frac{\hat{\phi}_{1,p_{i}}}{\hat{c}}\leq\hat{y}_{i}\leq\hat{c}\hat{\phi}_{1,p_ {i}}\ \mbox{in}\ \Omega, \tag{4.8}\]
for some constants \(c,\hat{c}>1\) and \(L>0\) (see [9, 30, 32] ).
### \(C^{1}\)- bound solutions
**Theorem 4.1**.: _Assume (1.1) is satisfied. Then, if \(\Lambda>0\) is big enough, problem_ (P) _admits a solution \((u,v)\in int\mathcal{C}^{1,\tau}_{+}(\overline{\Omega})\times int\mathcal{C}^ {1,\tau}_{+}(\overline{\Omega}),\) for certain \(\tau\in(0,1)\), such that_
\[(u,v)\in[\Lambda^{-1}\hat{\phi}_{1,p_{1}},\Lambda\hat{y}_{1}]\times[\Lambda^{ -1}\hat{\phi}_{1,p_{2}},\Lambda\hat{y}_{2}]. \tag{4.9}\]
_which is unique once \(\alpha_{2},\beta_{1}\in(-1,0)\) and assumption (3.2) is fulfilled._
Proof.: For a constant \(\Lambda>0\) which will be specified later, let us show that \(\Lambda(\hat{y}_{1},\hat{y}_{2})\) satisfy (2.5). With this aim, pick \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) within \([\Lambda^{-1}\hat{\phi}_{1,p_{1}},\Lambda\hat{y}_{1}]\times[\Lambda^{-1}\hat{ \phi}_{1,p_{2}},\Lambda\hat{y}_{2}]\). From (4.8) and (1.1), if \(\beta_{1},\alpha_{2}>0\), we get
\[\begin{array}{l}(\Lambda\hat{y}_{1})^{\alpha_{1}}+(\Lambda\hat{y}_{2})^{ \beta_{1}}\leq(\Lambda\frac{\hat{\phi}_{1,p_{1}}}{\hat{c}})^{\alpha_{1}}+( \Lambda\hat{c}\hat{\phi}_{1,p_{2}})^{\beta_{1}}\\ \leq\hat{c}^{-\alpha_{1}}+(\Lambda\hat{c}||\hat{\phi}_{1,p_{2}}||_{\infty})^{ \beta_{1}}\leq\Lambda^{\beta_{1}}(1+(\hat{c}||\hat{\phi}_{1,p_{2}}||_{\infty} )^{\beta_{1}})\ \ \mbox{in}\ \Omega,\end{array} \tag{4.10}\]
\[\begin{array}{l}(\Lambda\hat{y}_{1})^{\alpha_{2}}+(\Lambda\hat{y}_{2})^{ \beta_{2}}\leq(\Lambda\hat{c}\hat{\phi}_{1,p_{1}})^{\alpha_{2}}+(\Lambda\frac{ \hat{\phi}_{1,p_{2}}}{\hat{c}})^{\beta_{2}}\\ \leq(\Lambda\hat{c}||\hat{\phi}_{1,p_{1}}||_{\infty})^{\alpha_{2}}+\hat{c}^{- \beta_{2}}\leq\Lambda^{\alpha_{2}}(1+(\hat{c}||\hat{\phi}_{1,p_{1}}||_{\infty} )^{\alpha_{2}})\ \ \mbox{in}\ \Omega,\end{array} \tag{4.11}\]
while if \(\beta_{1},\alpha_{2}<0\), we obtain
\[\begin{array}{l}(\Lambda\hat{y}_{1})^{\alpha_{1}}+(\Lambda^{-1}\hat{\phi}_{1,p_{2}})^{\beta_{1}}\leq(\Lambda\frac{\hat{\phi}_{1,p_{1}}}{\hat{c}})^{\alpha_ {1}}+(\Lambda^{-1}\mu)^{\beta_{1}}\\ \leq\hat{c}^{-\alpha_{1}}+(\Lambda^{-1}\mu)^{\beta_{1}}\leq\Lambda^{-\beta_{1}} (1+\mu^{\beta_{1}})\ \ \mbox{in}\ \Omega,\end{array} \tag{4.12}\]
\[\begin{array}{l}(\Lambda^{-1}\hat{\phi}_{1,p_{1}})^{\alpha_{2}}+(\Lambda\hat{ y}_{2})^{\beta_{2}}\leq(\Lambda^{-1}\mu)^{\alpha_{2}}+(\Lambda\frac{\hat{\phi}_{1,p_{2}}} {\hat{c}})^{\beta_{2}}\\ \leq(\Lambda^{-1}\mu)^{\alpha_{2}}+\hat{c}^{-\beta_{2}}\leq\Lambda^{-\alpha_{2} }(\mu^{\alpha_{2}}+1)\ \ \mbox{in}\ \Omega,\end{array} \tag{4.13}\]
provided \(\Lambda>0\) is sufficiently large. By (4.6) one has
\[-\Delta_{p_{i}}(\Lambda\hat{y}_{i})+|\Lambda\hat{y}_{i}|^{p_{i}-2}(\Lambda\hat{y }_{i})=\Lambda^{p_{i}-1}\ \mbox{in}\ \Omega,\ \mbox{for}\ i=1,2. \tag{4.14}\]
Then, in view of (1.1), gathering (4.10)-(4.14) together imply
\[-\Delta_{p_{1}}(\Lambda\hat{y}_{1})+|\Lambda\hat{y}_{1}|^{p_{1}-2}( \Lambda\hat{y}_{1})\] \[\geq \left\{\begin{array}{ll}(\Lambda\hat{y}_{1})^{\alpha_{1}}+( \Lambda\hat{y}_{2})^{\beta_{1}}&\mbox{if}\ \beta_{1}>0\\ (\Lambda\hat{y}_{1})^{\alpha_{1}}+(\Lambda^{-1}\hat{\phi}_{1,p_{2}})^{\beta_{1} }&\mbox{if}\ \beta_{1}<0\end{array}\right.\] \[\geq (\Lambda\hat{y}_{1})^{\alpha_{1}}+v^{\beta_{1}}\ \ \mbox{in}\ \Omega \tag{4.15}\]
and
\[-\Delta_{p_{2}}(\Lambda\hat{y}_{2})+|\Lambda\hat{y}_{2}|^{p_{2}-2}( \Lambda\hat{y}_{2})\] \[\geq \left\{\begin{array}{ll}(\Lambda\hat{y}_{1})^{\alpha_{2}}+( \Lambda\hat{y}_{2})^{\beta_{2}}&\mbox{if $\alpha_{2}>0$}\\ (\Lambda^{-1}\hat{\phi}_{1,p_{1}})^{\alpha_{2}}+(\Lambda\hat{y}_{2})^{\beta_{ 2}}&\mbox{if $\alpha_{2}<0$}\end{array}\right.\] \[\geq u^{\alpha_{1}}+(\Lambda\hat{y}_{2})^{\beta_{2}}\ \mbox{ in $\Omega$}, \tag{4.17}\]
for \(\Lambda>1\) large enough. According to (4.6), (2.3) is fulfilled for \((\overline{u},\overline{v}):=\Lambda(\hat{y}_{1},\hat{y}_{2})\) and therefore, in view of (4.15)-(4.17), (2.5) too.
We claim that (2.4) holds for \((\underline{u},\underline{v}):=\Lambda^{-1}(\hat{\phi}_{1,p_{1}},\hat{\phi}_{ 1,p_{2}})\). From (4.2), (4.8) and (1.1), we have
\[(\Lambda^{-1}\hat{\phi}_{1,p_{1}})^{\alpha_{1}}+v^{\beta_{1}}\geq( \Lambda^{-1}\hat{\phi}_{1,p_{1}})^{\alpha_{1}}\geq(\Lambda^{-1}||\hat{\phi}_{ 1,p_{1}}||_{\infty})^{\alpha_{1}}\ \mbox{ in $\Omega$}, \tag{4.20}\] \[u^{\alpha_{2}}+(\Lambda^{-1}\hat{\phi}_{1,p_{2}})^{\beta_{2}}\geq (\Lambda^{-1}\hat{\phi}_{1,p_{2}})^{\beta_{2}}\geq(\Lambda^{-1}||\hat{\phi}_{ 1,p_{2}}||_{\infty})^{\beta_{2}}\ \mbox{ in $\Omega$}, \tag{4.19}\]
as well as
\[\begin{array}{ll}-\Delta_{p_{i}}(\Lambda^{-1}\hat{\phi}_{1,p_{i}})+|\Lambda^ {-1}\hat{\phi}_{1,p_{i}}|^{p_{i}-2}(\Lambda^{-1}\hat{\phi}_{1,p_{i}})=\hat{ \lambda}_{1,p_{i}}(\Lambda^{-1}\hat{\phi}_{1,p_{i}})^{p_{i}-1}\\ \leq\hat{\lambda}_{1,p_{i}}(\Lambda^{-1}||\hat{\phi}_{1,p_{1}}||_{\infty})^{ p_{i}-1}\mbox{ in $\Omega$},\mbox{ for $i=1,2$}.\end{array} \tag{4.21}\]
Then, gathering (4.19)-(4.21) together, since (2.3) is fulfilled (in view of (4.2)), we deduce that (2.4) is achieved for \(\Lambda>0\) large enough. Consequently, considering that functions \(\Lambda^{-1}\hat{\phi}_{1,p_{1}}\) and \(\Lambda^{-1}\hat{\phi}_{1,p_{2}}\) fulfill (3.1), Theorem 3.1 ensures the existence of a positive solution \((u,v)\in int\mathcal{C}_{+}^{1,\tau}(\overline{\Omega})\times int\mathcal{C}_ {+}^{1,\tau}(\overline{\Omega})\), verifying (4.9). Moreover, \((u,v)\) is unique once \(\alpha_{2},\beta_{1}\in(-1,0)\) and fulfill assumption (3.2) in Theorem 3.1. This ends the proof.
By making some specific and necessary adjustments, it is quite possible to construct a pair of sub-supersolution in the spirit of [16]. However, it should be noted that the sub-supersolution produced in [16] does not fulfill the required assumptions (2.4)-(2.5), in particular the condition (2.3) which in no case could be ignored.
Let \(\Lambda>0\) be a constant so large such that
\[\Lambda>\max_{i=1,2}\{2(1+\frac{3}{(3^{p_{i}-1}-2^{p_{i}-1})^{\frac{1}{p_{i}-1 }}}+||\hat{\phi}_{1,p_{i}}||_{\infty}+\|y_{i}\|_{\infty})\}. \tag{4.22}\]
Define
\[\underline{u}_{\Lambda}:=\Lambda^{-1}(\Lambda-\hat{\phi}_{1,p_{1}}),\ \ \ \underline{v}_{\Lambda}:=\Lambda^{-1}(\Lambda-\hat{\phi}_{1,p_{2}}), \tag{4.23}\]
\[\overline{u}_{\Lambda}:=\Lambda(\Lambda-y_{1}),\ \ \ \overline{v}_{\Lambda}:=\Lambda( \Lambda-y_{2}). \tag{4.24}\]
Obviously, \(\underline{u}_{\Lambda}\leq\overline{u}_{\Lambda}\) and \(\underline{v}_{\Lambda}\leq\overline{v}_{\Lambda}\). Moreover, via (4.5) and (4.2) we have
\[\frac{\partial\underline{u}_{\Lambda}}{\partial\eta}=-\Lambda\frac{\partial \hat{\phi}_{1,p_{1}}}{\partial\eta}=0,\ \ \frac{\partial\underline{v}_{\Lambda}}{\partial\eta}=-\Lambda\frac{\partial\hat{ \phi}_{1,p_{1}}}{\partial\eta}=0\ \ \mbox{on}\ \ \partial\Omega \tag{4.25}\]
and
\[\frac{\partial\overline{u}_{\Lambda}}{\partial\eta}=-\Lambda\frac{\partial y_{1 }}{\partial\eta}>0,\ \ \ \frac{\partial\overline{v}_{\Lambda}}{\partial\eta}=-\Lambda\frac{\partial y_{2}}{ \partial\eta}>0\ \ \ \mbox{on}\ \ \partial\Omega, \tag{4.26}\]
showing that \((\underline{u}_{\Lambda},\underline{v}_{\Lambda})\) and \((\overline{u}_{\Lambda},\overline{v}_{\Lambda})\) fulfill assumption (2.3).
**Theorem 4.2**.: _Assume (1.1) holds. Then, for \(\Lambda>0\) is big enough, problem_ (P) _admits a solution \((u,v)\in int\mathcal{C}^{1,\tau}_{+}(\overline{\Omega})\times int\mathcal{C}^{ 1,\tau}_{+}(\overline{\Omega}),\)\(\tau\in(0,1)\), within \([\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_{\Lambda},\overline{u}_{\Lambda}]\), which is unique once \(\alpha_{2},\beta_{1}\in(-1,0)\) and assumption (3.2) is fulfilled._
Proof.: Let \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) within \([\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_{\Lambda },\overline{v}_{\Lambda}]\). From (4.22) observe that
\[\Lambda-||\hat{\phi}_{1,p_{i}}||_{\infty}>\frac{\Lambda}{2}\text{ and }\Lambda- \|y_{i}\|_{\infty}>\frac{\Lambda}{2},\text{ for }i=1,2. \tag{4.27}\]
Moreover, on the assumption that
\[\Lambda>\frac{6}{(3^{p_{i}-1}-2^{p_{i}-1})^{\frac{1}{p_{i}-1}}} \tag{4.28}\]
we derive that
\[(\frac{\Lambda}{2})^{p_{i}-1}-(\frac{\Lambda}{3})^{p_{i}-1}>1,\text{ for }i=1,2. \tag{4.29}\]
By (4.27), (4.24) and (4.29), a direct computations give
\[\begin{array}{l}-\Delta_{p_{1}}\overline{u}_{\Lambda}+|\overline{u}_{\Lambda }|^{p_{1}-2}\overline{u}_{\Lambda}=\Delta_{p_{1}}(\Lambda y_{1})+\overline{u} _{\Lambda}^{p_{1}-1}\\ =\Lambda^{p_{1}-1}(-1+y_{1}^{p_{1}-1}+(\Lambda-y_{1})^{p_{1}-1})\\ \geq\Lambda^{p_{1}-1}(-1+(\Lambda-\|y_{1}\|_{\infty})^{p_{1}-1})\\ \geq\Lambda^{p_{1}-1}(-1+(\frac{\Lambda}{2})^{p_{1}-1})\\ \geq\Lambda^{p_{1}-1}(\frac{\Lambda}{3})^{p_{1}-1}\geq(\frac{\Lambda^{2}}{3})^ {p_{1}-1}\text{ in }\Omega.\end{array} \tag{4.30}\]
If \(\beta_{1}>0\), it follows from (4.27), (4.24) and (1.1) that
\[\begin{array}{l}\overline{u}_{\Lambda}^{\alpha_{1}}+v^{\beta_{1}}\leq \overline{u}_{\Lambda}^{\alpha_{1}}+\overline{v}_{\Lambda}^{\beta_{1}}=\Lambda ^{\alpha_{1}}(\Lambda-y_{1})^{\alpha_{1}}+\Lambda^{\beta_{1}}(\Lambda-y_{2})^ {\beta_{1}}\\ \leq\Lambda^{\alpha_{1}}(\Lambda-\|y_{1}\|_{\infty})^{\alpha_{1}}+\Lambda^{2 \beta_{1}}\leq 1+\Lambda^{2\beta_{1}}\text{ in }\Omega,\end{array} \tag{4.31}\]
while, if \(\beta_{1}<0\), we get
\[\begin{array}{l}\overline{u}_{\Lambda}^{\alpha_{1}}+v^{\beta_{1}}\leq \overline{u}_{\Lambda}^{\alpha_{1}}+\underline{v}_{\Lambda}^{\beta_{1}}=\Lambda ^{\alpha_{1}}(\Lambda-y_{1})^{\alpha_{1}}+\Lambda^{-\beta_{1}}(\Lambda-\hat{ \phi}_{1,p_{2}})^{\beta_{1}}\\ \leq\Lambda^{\alpha_{1}}(\Lambda-\|y_{1}\|_{\infty})^{\alpha_{1}}+\Lambda^{- \beta_{1}}(\Lambda-||\hat{\phi}_{1,p_{2}}||_{\infty})^{\beta_{1}}\\ \leq 1+2^{-\beta_{1}}\Lambda^{-\beta_{1}}\Lambda^{\beta_{1}}=1+2^{-\beta_{1}} \text{ in }\Omega.\end{array} \tag{4.32}\]
Then, for \(\Lambda>0\) large enough, (4.30)-(4.32) result in
\[-\Delta_{p_{1}}\overline{u}_{\Lambda}+|\overline{u}_{\Lambda}|^{p_{1}-2} \overline{u}_{\Lambda}\geq\overline{u}_{\Lambda}^{\alpha_{1}}+v^{\beta_{1}} \text{ in }\Omega,\]
for all \(v\in[\underline{v}_{\Lambda},\overline{v}_{\Lambda}]\). Similar arguments apply to the second equation in (P) lead to
\[-\Delta_{p_{2}}\overline{v}_{\Lambda}+|\overline{v}_{\Lambda}|^{p_{2}-2} \overline{v}_{\Lambda}\geq u^{\alpha_{2}}+\overline{v}_{\Lambda}^{\beta_{1}} \text{ in }\Omega,\]
for all \(u\in[\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\). Bearing in mind (4.26), it follows that \((\overline{u}_{\Lambda},\overline{v}_{\Lambda})\) in (4.24) verifies assumption (2.5). Consequently, \((\overline{u}_{\Lambda},\overline{v}_{\Lambda})\) is a supersolution of (P).
Next, we prove that \((\underline{u}_{\Lambda},\underline{v}_{\Lambda})\) in (4.23) is a subsolution of (P). It is important to mention that the eigenvalues in (4.2) verify \(\hat{\lambda}_{1,p_{1}},\hat{\lambda}_{1,p_{2}}>1\). Hence, using (1.1), (4.2) and (4.27), we obtain
\[\begin{array}{l}-\Delta_{p_{1}}\underline{u}_{\Lambda}+|\underline{u}_{ \Lambda}|^{p_{1}-2}\underline{u}_{\Lambda}=\Delta_{p_{1}}(\Lambda^{-1}\hat{ \phi}_{1,p_{1}})+\underline{u}_{\Lambda}^{p_{1}-1}\\ =\Lambda^{-(p_{1}-1)}(1-\hat{\lambda}_{1,p_{1}})\hat{\phi}_{1,p_{1}}^{p_{1}-1 }+\Lambda^{-(p_{1}-1)}(\Lambda-\hat{\phi}_{1,p_{1}})^{p_{1}-1}\\ \leq\Lambda^{-(p_{1}-1)}(\Lambda-\hat{\phi}_{1,p_{1}})^{p_{1}-1}\leq\Lambda^{- (p_{1}-1)}(\Lambda-||\hat{\phi}_{1,p_{1}}||_{\infty})^{p_{1}-1}\\ \leq(\frac{\Lambda-||\hat{\phi}_{1,p_{1}}||_{\infty}}{\Lambda})^{p_{1}-1}\leq 1 \text{ in }\Omega\end{array} \tag{4.33}\]
and
\[\begin{array}{l}-\Delta_{p_{2}}\underline{v}_{\Lambda}+|\underline{v}_{ \Lambda}|^{p_{2}-2}\underline{v}_{\Lambda}=\Delta_{p_{2}}(\Lambda^{-1}\hat{ \phi}_{1,p_{2}})+\underline{v}_{\Lambda}^{p_{2}-1}\\ =\Lambda^{-(p_{2}-1)}(1-\hat{\lambda}_{1,p_{2}})\hat{\phi}_{1,p_{2}}^{p_{2}-1 }+\Lambda^{-(p_{2}-1)}(\Lambda-\hat{\phi}_{1,p_{2}})^{p_{2}-1}\\ \leq\Lambda^{-(p_{2}-1)}(\Lambda-\hat{\phi}_{1,p_{2}})^{p_{2}-1}\leq\Lambda^{- (p_{2}-1)}(\Lambda-||\hat{\phi}_{1,p_{2}}||_{\infty})^{p_{2}-1}\\ \leq(\frac{\Lambda-||\hat{\phi}_{1,p_{2}}||_{\infty}}{\Lambda})^{p_{2}-1}\leq 1 \text{ in }\Omega\end{array} \tag{4.34}\]
as well as
\[\underline{u}_{\Lambda}^{\alpha_{1}}+v^{\beta_{1}}\geq\underline{u}_{\Lambda }^{\alpha_{1}}=\Lambda^{-\alpha_{1}}(\Lambda-\hat{\phi}_{1,p_{1}})^{\alpha_{1 }}\geq\Lambda^{-\alpha_{1}}\Lambda^{\alpha_{1}}=1\text{ in }\Omega, \tag{4.35}\]
and
\[u_{\Lambda}^{\alpha_{2}}+\underline{v}_{\Lambda}^{\beta_{2}}\geq\underline{v }_{\Lambda}^{\beta_{2}}=\Lambda^{-\beta_{2}}(\Lambda-\hat{\phi}_{1,p_{2}})^{ \beta_{2}}\geq\Lambda^{-\beta_{2}}\Lambda^{\beta_{2}}=1\text{ in }\Omega, \tag{4.36}\]
for all \((u,v)\in[\underline{u}_{\Lambda},\overline{u}_{\Lambda}]\times[\underline{v}_ {\Lambda},\overline{v}_{\Lambda}]\). Then, in view of (4.25), we deduce from (4.33)-(4.36) that \((\underline{u}_{\Lambda},\underline{v}_{\Lambda})\) in (4.23) fulfill (2.4) and therefore, \((\underline{u}_{\Lambda},\underline{v}_{\Lambda})\) is a subsolution of (P). Using (4.27), we derive that
\[\underline{u}_{\Lambda}\geq\Lambda^{-1}(\Lambda-\hat{\phi}_{1,p_{1}})\geq \Lambda^{-1}(\Lambda-||\hat{\phi}_{1,p_{1}}||_{\infty})\geq\Lambda^{-1}(\tfrac {\Lambda}{2})=\tfrac{1}{2}\]
and
\[\underline{v}_{\Lambda}\geq\Lambda^{-1}(\Lambda-\hat{\phi}_{1,p_{2}})\geq \Lambda^{-1}(\Lambda-||\hat{\phi}_{1,p_{2}}||_{\infty})\geq\Lambda^{-1}(\tfrac {\Lambda}{2})=\tfrac{1}{2}.\]
Consequently, (3.1) is verified and thence, Theorem 3.1 guarantees the existence of a positive solution \((u,v)\in int\mathcal{C}_{+}^{1,\tau}(\overline{\Omega})\times int\mathcal{C}_{ +}^{1,\tau}(\overline{\Omega})\), verifying (4.9), which is unique once \(\alpha_{2},\beta_{1}\in(-1,0)\) and assumption (3.2) is fulfilled. This completes the proof.
### \(L^{\infty}\)- bound solutions
In this part, we focus on subsolutions with zero trace condition on the boundary \(\partial\Omega\). Combined with (1.1), this type of subsolutions bring to the forefront the singularity at the origin, generated by negative exponents, which, until now, has been rather discreet due to the strict positivity of the sub-supersolutions constructed. Obviously, this will create additional difficulties in getting sub-supersolutions, especially when the exponents are assumed all negative. Moreover, singularities combined with zero trace subsolutions on the boundary \(\partial\Omega\) constitute a significant barrier to get regular solutions. At this point, note that we were not able to find in the literature the Neumann counterpart of the regularity result for Dirichlet singular problems in [17, Lemma 3.1]. This question remains open.
**Theorem 4.3**.: _Assume (1.1) is fulfilled with \(\alpha_{2},\beta_{1}>0\). Then, problem_ (P) _admits a solution \((u,v)\in W^{1,p_{1}}_{b}(\Omega)\times W^{1,p_{2}}_{b}(\Omega)\) such that_
\[(u,v)\in[\Lambda^{-1}\phi_{1,p_{1}},\Lambda\hat{y}_{1}]\times[\Lambda^{-1}\phi_ {1,p_{2}},\Lambda\hat{y}_{2}], \tag{4.37}\]
_provided \(\Lambda>1\) is big enough._
Proof.: First, putting \((\underline{u},\underline{v}):=(\Lambda^{-1}\phi_{1,p_{1}},\Lambda^{-1}\phi_ {1,p_{2}})\) note that (4.3) implies that (2.3) is satisfied. Let \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) within \([\Lambda^{-1}\phi_{1,p_{1}},\Lambda\hat{y}_{1}]\times[\Lambda^{-1}\phi_{1,p_{2 }},\Lambda\hat{y}_{2}]\). In view of (4.1) and (4.6), we get
\[\begin{array}{l}-\Delta_{p_{i}}\left(\Lambda^{-1}\phi_{1,p_{i}}\right)+| \Lambda^{-1}\phi_{1,p_{i}}|^{p_{i}-2}\left(\Lambda^{-1}\phi_{1,p_{i}}\right)= \lambda_{1,p_{i}}(\Lambda^{-1}\phi_{1,p_{i}})^{p_{i}-1}\\ \leq\lambda_{1,p_{i}}(\Lambda^{-1}||\phi_{1,p_{1}}||_{\infty})^{p_{i}-1}\ \ \mbox{in $\Omega$}\end{array} \tag{4.38}\]
and
\[-\Delta_{p_{i}}\left(\Lambda\hat{y}_{i}\right)+|\Lambda\hat{y}_{i}|^{p_{i}-2} \left(\Lambda\hat{y}_{i}\right)=\Lambda^{p_{i}-1}\ \ \mbox{in $\Omega$, \ for $i=1,2$.} \tag{4.39}\]
By (1.1) one has
\[(\Lambda^{-1}\phi_{1,p_{1}})^{\alpha_{1}}+v^{\beta_{1}}\geq( \Lambda^{-1}\phi_{1,p_{1}})^{\alpha_{1}}\geq(\Lambda^{-1}||\phi_{1,p_{1}}||_{ \infty})^{\alpha_{1}}\ \ \mbox{in $\Omega$,} \tag{4.41}\] \[u^{\alpha_{2}}+(\Lambda^{-1}\phi_{1,p_{2}})^{\beta_{2}}\geq( \Lambda^{-1}||\phi_{1,p_{2}}||_{\infty})^{\beta_{2}}\ \ \mbox{in $\Omega$,} \tag{4.40}\]
while, from (4.8) and (4.4), it hold
\[\begin{array}{l}(\Lambda\hat{y}_{1})^{\alpha_{1}}+v^{\beta_{1}}\leq( \Lambda\hat{y}_{1})^{\alpha_{1}}+(\Lambda\hat{y}_{2})^{\beta_{1}}\leq(\Lambda \hat{\frac{\phi_{1,p_{1}}}{c}})^{\alpha_{1}}+(\Lambda\hat{c}\hat{\phi}_{1,p_{ 2}})^{\beta_{1}}\\ \leq(\Lambda\frac{\mu}{c})^{\alpha_{1}}+(\Lambda\hat{c}||\hat{\phi}_{1,p_{2}}|| _{\infty})^{\beta_{1}}\leq\Lambda^{\beta_{1}}((\frac{\mu}{c})^{\alpha_{1}}+( \hat{c}||\hat{\phi}_{1,p_{2}}||_{\infty})^{\beta_{1}})\ \ \mbox{in $\Omega$}\end{array} \tag{4.42}\]
and
\[\begin{array}{l}u^{\alpha_{2}}+(\Lambda\hat{y}_{2})^{\beta_{2}}\leq( \Lambda\hat{y}_{1})^{\alpha_{2}}+(\Lambda\hat{y}_{2})^{\beta_{2}}\leq( \Lambda\hat{c}\hat{\phi}_{1,p_{1}})^{\alpha_{2}}+(\Lambda\frac{\hat{\phi}_{1,p_ {2}}}{c})^{\beta_{2}}\\ \leq(\Lambda\hat{c}||\hat{\phi}_{1,p_{1}}||_{\infty})^{\alpha_{2}}+(\Lambda \frac{\mu}{c})^{\beta_{2}}\leq\Lambda^{\alpha_{2}}((\hat{c}||\hat{\phi}_{1,p_ {1}}||_{\infty})^{\alpha_{2}}+(\frac{\mu}{c})^{\beta_{2}})\ \ \mbox{in $\Omega$.}\end{array} \tag{4.43}\]
Then, gathering (4.38)-(4.43) together, we conclude that (2.4) and (2.5) are achieved for \(\Lambda>0\) large enough. Hence, \(\Lambda^{-1}(\phi_{1,p_{1}},\phi_{1,p_{2}})\) and \(\Lambda(\hat{y}_{1},\hat{y}_{2})\) are a sub-supersolutions of problem (P). By (4.3), assumption (3.5) is fulfilled. Consequently, owing to Theorem 3.2, there exists a solution \((u,v)\in W^{1,p_{1}}_{b}(\Omega)\times W^{1,p_{2}}_{b}(\Omega)\) of system (P) verifying (4.37).
To deal with the case when all exponents are negative in (1.1), we slightly modify the location area of potential solutions by moving the upper limits of the rectangle formed through sub-supersolutions. To do so, assume \(-1<\alpha_{2},\beta_{1}<0\) and let \(\hat{z}_{1},\hat{z}_{2}\in W^{1,p_{i}}_{b}(\Omega)\) be the unique solutions of Neumann Problems
\[-\Delta_{p_{1}}\hat{z}_{1}+|\hat{z}_{1}|^{p_{1}-2}\hat{z}_{1}=d(x)^{\beta_{1}} \ \mbox{in $\Omega$, }\ \frac{\partial\hat{z}_{1}}{\partial\eta}=0\ \mbox{on $\partial\Omega$,} \tag{4.44}\]
\[-\Delta_{p_{2}}\hat{z}_{2}+|\hat{z}_{2}|^{p_{2}-2}\hat{z}_{2}=d(x)^{\alpha_{2}} \ \mbox{in $\Omega$, }\ \frac{\partial\hat{z}_{2}}{\partial\eta}=0\ \mbox{on $\partial\Omega$.} \tag{4.45}\]
Note that, the Hardy-Sobolev type inequality (2.2) guarantees that the right-hand sides of (4.44) and (4.45) belong to \(W^{-1,p_{1}}(\Omega)^{*}\) and \(W^{-1,p_{2}}(\Omega)^{*}\), respectively. Consequently, Minty-Browder theorem (see, e.g., [4, Theorem
V.15]) implies the existence and uniqueness of \(\hat{z}_{i}\in W^{1,p_{i}}(\Omega)\) in (4.44) and (4.45), \(i=1,2\). Moreover, by weak comparison principle (see [30]), it is readily seen that there is a constant \(c_{1}>0\) such that
\[\hat{z}_{i}\geq c_{1}\hat{\phi}_{1,p_{i}}\text{ in }\Omega,\text{ for }i=1,2. \tag{4.46}\]
Next, we provide the \(L^{\infty}\)-bound of \(\hat{z}_{i}\).
**Lemma 4.4**.: _Under assumption_
\[0>\beta_{1},\alpha_{2}>\frac{-1}{N}, \tag{4.47}\]
_solutions \(\hat{z}_{1}\in W^{1,p_{1}}(\Omega)\) and \(\hat{z}_{2}\in W^{1,p_{2}}(\Omega)\) of problems (4.44) and (4.45) are bounded in \(L^{\infty}(\Omega)\)._
Proof.: We only show the \(L^{\infty}\)-bound of \(\hat{z}_{1}\) in (4.44) because that of \(\hat{z}_{2}\) in (4.45) can be justified similarly. Inspired by [10, Lemma 2], for each \(k\in\mathbb{N}\), set
\[A_{k}=\left\{x\in\Omega:\hat{z}_{1}(x)>k\right\}.\]
It is readily seen that \(|A_{k}|\to 0\), as well as,
\[\|d(x)^{\beta_{1}}\|_{L^{N}(A_{k})}\to 0\ \text{ as }k\to\infty, \tag{4.48}\]
because \(\hat{z}_{1}\in L^{1}(\Omega)\) and \(d(x)^{\beta_{1}}\in L^{N}(\Omega)\) (due to (4.47)).
Test with \((\hat{z}_{1}-k)^{+}\) in (4.44), it results in
\[\int_{\Omega}|\nabla\hat{z}_{1}|^{p_{1}-2}\nabla\hat{z}_{1}\nabla(\hat{z}_{1}- k)^{+}\mathrm{d}x+\int_{\Omega}\hat{z}_{1}^{p_{1}-1}(\hat{z}_{1}-k)^{+}\,\mathrm{d}x= \int_{\Omega}d(x)^{\beta_{1}}(\hat{z}_{1}-k)^{+}\mathrm{d}x.\]
Thus
\[\begin{array}{l}\int_{\Omega}|\nabla\hat{z}_{1}|^{p_{1}-2}\nabla\hat{z}_{1} \nabla(\hat{z}_{1}-k)^{+}\mathrm{d}x+\int_{\Omega}\hat{z}_{1}^{p_{1}-1}(\hat{ z}_{1}-k)^{+}\,\mathrm{d}x\\ \geq\int_{\Omega}|\nabla\hat{z}_{1}|^{p_{1}-2}\nabla\hat{z}_{1}\nabla(\hat{z}_ {1}-k)^{+}\mathrm{d}x+\int_{\Omega}((\hat{z}_{1}-k)^{+})^{p_{1}-1}(\hat{z}_{1 }-k)^{+}\,\mathrm{d}x\\ \geq\int_{A_{k}}|\nabla\hat{z}_{1}|^{p_{1}}\mathrm{d}x+\int_{A_{k}}(\hat{z}_{1 }-k)^{p_{1}}\,\mathrm{d}x\end{array} \tag{4.49}\]
while, by Holder inequality together with Sobolev embedding \(W^{1,1}(\Omega)\hookrightarrow L^{\frac{N}{N-1}}(\Omega)\), one has
\[\begin{array}{l}\int_{\Omega}d(x)^{\beta_{1}}(\hat{z}_{1}-k)^{+}\mathrm{d}x =\int_{A_{k}}d(x)^{\beta_{1}}(\hat{z}_{1}-k)^{+}\mathrm{d}x\\ \leq\|d^{\beta_{1}}\|_{L^{N}(A_{k})}\|(\hat{z}_{1}-k)^{+}\|_{L^{\frac{N}{N-1}} (A_{k})}\leq\|d^{\beta_{1}}\|_{L^{N}(A_{k})}\|(\hat{z}_{1}-k)^{+}\|_{L^{\frac{N }{N-1}}(\Omega)}\\ \leq c_{p_{1}}\|d^{\beta_{1}}\|_{L^{N}(A_{k})}\|(\hat{z}_{1}-k)^{+}\|_{W^{1,1 }(\Omega)}=c_{p_{1}}\|d^{\beta_{1}}\|_{L^{N}(A_{k})}\|\hat{z}_{1}-k\|_{W^{1,1} (A_{k})}.\end{array} \tag{4.50}\]
Therefore, from (4.49)-(4.50), it turn out that
\[\int_{A_{k}}|\nabla\hat{z}_{1}|^{p_{1}}\mathrm{d}x+\int_{A_{k}}(\hat{z}_{1}-k) ^{p_{1}}\,\mathrm{d}x\leq c_{p_{1}}\|d^{\beta_{1}}\|_{L^{N}(A_{k})}\int_{A_{k }}(|\nabla\hat{z}_{1}|+(\hat{z}_{1}-k))\mathrm{d}x. \tag{4.51}\]
Young's inequality yields
\[\int_{A_{k}}(|\nabla\hat{z}_{1}|+(\hat{z}_{1}-k))\mathrm{d}x\leq\int_{A_{k}}| \nabla\hat{z}_{1}|^{p_{1}}\mathrm{d}x+\int_{A_{k}}(\hat{z}_{1}-k)^{p_{1}} \mathrm{d}x+2|A_{k}|, \tag{4.52}\]
which, combined with (4.51), immediately leads to
\[\begin{array}{l}\int_{A_{k}}|\nabla\hat{z}_{1}|^{p_{1}}\mathrm{d}x+\int_{A_{k}}( \hat{z}_{1}-k)^{p_{1}}\,\mathrm{d}x\\ \leq c_{p_{1}}\|d^{\beta_{1}}\|_{L^{N}(A_{k})}\left(\int_{A_{k}}|\nabla\hat{z}_ {1}|^{p_{1}}\mathrm{d}x+\int_{A_{k}}(\hat{z}_{1}-k)^{p_{1}}\mathrm{d}x+2|A_{k} |\right).\end{array}\]
Thereby, for \(k\) large enough, the limit (4.48) implies
\[\int_{A_{k}}|\nabla\hat{z}_{1}|^{p_{1}}\mathrm{d}x+\int_{A_{k}}(\hat{z}_{1}-k) ^{p_{1}}\,\mathrm{d}x\leq C_{1}|A_{k}|.\]
Replacing the last inequality in (4.52), we achieve
\[\int_{A_{k}}(|\nabla\hat{z}_{1}|+(\hat{z}_{1}-k))\mathrm{d}x\leq C_{2}|A_{k}|.\]
Now, once again applying Holder inequality and the above Sobolev embedding, it follows that
\[\int_{A_{k}}(\hat{z}_{1}-k)\mathrm{d}x\leq|A_{k}|^{\frac{1}{N}}\|(\hat{z}_{1} -k)\|_{L^{\frac{N}{N-1}}(A_{k})}\leq C|A_{k}|^{\frac{1}{N}}\int_{A_{k}}(|\nabla \hat{z}_{1}|+(\hat{z}_{1}-k))\mathrm{d}x\]
and so,
\[\int_{A_{k}}(\hat{z}_{1}-k)\mathrm{d}x\leq C_{3}|A_{k}|^{1+\frac{1}{N}}.\]
Then, owing to [20, Lemma 5.1, Chaper 2], we conclude that there is \(\mathrm{K}>0\), independent of \(\hat{z}_{1}\), such that
\[\hat{z}_{1}(x)\leq\mathrm{K}\text{ a.e. in }\Omega,\]
showing that \(\hat{z}_{1}\in L^{\infty}(\Omega)\). Similar argument produces that \(\hat{z}_{2}\in L^{\infty}(\Omega)\) in (4.45). This ends the proof.
**Theorem 4.5**.: _Assume (1.1) is fulfilled with_
\[\max\{-1,-(p_{2}-1)\}<\alpha_{2}<0\ \text{ and }\ \max\{-1,-(p_{1}-1)\}<\beta_{1}<0. \tag{4.53}\]
_Then, problem_ (P) _admits a solution \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) verifying_
\[(u,v)\in[\Lambda^{-1}\phi_{1,p_{1}},\Lambda\hat{z}_{1}]\times[\Lambda^{-1} \phi_{1,p_{2}},\Lambda\hat{z}_{2}], \tag{4.54}\]
_provided \(\Lambda>0\) is big enough. Moreover, if (4.47) is fullfield, then \((u,v)\in W^{1,p_{1}}_{b}(\Omega)\times W^{1,p_{2}}_{b}(\Omega)\)._
Proof.: It is readily seen that inequalities in (4.38), (4.40) and (4.41) hold true even for \(-1<\alpha_{2},\beta_{1}<0\). Then, \(\Lambda^{-1}(\phi_{1,p_{1}},\phi_{1,p_{2}})\) is a subsolution of system (P) satisfying (3.5). Let us show that \(\Lambda(\hat{z}_{1},\hat{z}_{2})\) is a supersolution of (P). By (4.46), (4.4) and (4.3), we get
\[\begin{array}{l}(\Lambda\hat{z}_{1})^{\alpha_{1}}+v^{\beta_{1}}\leq( \Lambda\hat{z}_{1})^{\alpha_{1}}+(\Lambda^{-1}\phi_{1,p_{2}})^{\beta_{1}}\\ \leq(\Lambda c_{1}\hat{\phi}_{1,p_{1}})^{\alpha_{1}}+(\Lambda^{-1}\phi_{1,p_{ 2}})^{\beta_{1}}\\ \leq 1+(\Lambda^{-1}c_{0}d(x))^{\beta_{1}}\\ \leq((\Lambda^{-1}d(x))^{-\beta_{1}}+c_{0}^{\beta_{1}})(\Lambda^{-1}d(x))^{ \beta_{1}}\\ \leq(1+c_{0}^{\beta_{1}})(\Lambda^{-1}d(x))^{\beta_{1}}\text{ in }\Omega,\end{array} \tag{4.55}\]
and similarly
\[\begin{array}{l}u^{\alpha_{2}}+(\Lambda\hat{z}_{2})^{\beta_{2}}\leq(\Lambda^{-1} \phi_{1,p_{1}})^{\alpha_{2}}+(\Lambda\hat{z}_{2})^{\beta_{2}}\\ \leq(\Lambda^{-1}\phi_{1,p_{1}})^{\alpha_{2}}+(\Lambda c_{1}\hat{\phi}_{1,p_{2} })^{\beta_{2}}\\ \leq(\Lambda^{-1}c_{0}d(x))^{\alpha_{2}}+1\\ \leq(c_{0}^{\alpha_{2}}+(\Lambda^{-1}d(x))^{-\alpha_{2}})(\Lambda^{-1}d(x))^{ \alpha_{2}}\\ \leq(c_{0}^{\alpha_{2}}+1)(\Lambda^{-1}d(x))^{\alpha_{2}}\text{ in }\Omega,\end{array} \tag{4.56}\]
for all \((u,v)\in[\Lambda^{-1}\phi_{1,p_{1}},\Lambda\hat{z}_{1}]\times[\Lambda^{-1}\phi _{1,p_{2}},\Lambda\hat{z}_{2}]\), provided that \(\Lambda>0\) is large. On the other hand, (4.44) and (4.45) imply
\[-\Delta_{p_{1}}(\Lambda\hat{z}_{1})+|\Lambda\hat{z}_{1}|^{p_{1}-2}(\Lambda \hat{z}_{1})=\Lambda^{p_{1}-1}d(x)^{\beta_{1}}\text{ in }\Omega \tag{4.57}\]
and
\[-\Delta_{p_{2}}(\Lambda\hat{z}_{2})+|\Lambda\hat{z}_{2}|^{p_{2}-2}(\Lambda \hat{z}_{2})=\Lambda^{p_{2}-1}d(x)^{\alpha_{2}}\text{ in }\Omega. \tag{4.58}\]
Then, it turns out from (4.55)-(4.58) and (4.53) that (2.5) is fulfilled for \(\Lambda>0\) large enough. Hence, \(\Lambda(\hat{z}_{1},\hat{z}_{2})\) is a supersolution of problem (P). Consequently, owing to Theorem 3.2, there exists a solution \((u,v)\in W^{1,p_{1}}(\Omega)\times W^{1,p_{2}}(\Omega)\) of system (P) verifying (4.54). Moreover, according to Lemma 4.4 and (4.54), we infer that \((u,v)\in W^{1,p_{1}}_{b}(\Omega)\times W^{1,p_{2}}_{b}(\Omega)\) once (4.47) is fulfilled. This ends the proof.
|
2303.17420 | Global existence and optimal time-decay rates of the compressible
Navier-Stokes-Euler system | In this paper, we consider the Cauchy problem of the multi-dimensional
compressible Navier-Stokes-Euler system for two-phase flow motion, which
consists of the isentropic compressible Navier-Stokes equations and the
isothermal compressible Euler equations coupled with each other through a
relaxation drag force. We first establish the local existence and uniqueness of
the strong solution for general initial data in a critical homogeneous Besov
space, and then prove the global existence of the solution if the initial data
is a small perturbation of the equilibrium state. Moreover, under the
additional condition that the low-frequency part of the initial perturbation
also belongs to another Besov space with lower regularity, we obtain the
optimal time-decay rates of the global solution toward the equilibrium state.
These results imply that the relaxation drag force and the viscosity
dissipation affect regularity properties and long time behaviors of solutions
for the compressible Navier-Stokes-Euler system. | Hai-Liang Li, Ling-Yun Shou | 2023-03-30T14:40:21Z | http://arxiv.org/abs/2303.17420v3 | # Global existence and optimal time-decay rates of the compressible Navier-Stokes-Euler system
###### Abstract
In this paper, we consider the Cauchy problem of the multi-dimensional compressible Navier-Stokes-Euler system for two-phase flow motion, which consists of the isentropic compressible Navier-Stokes equations and the isothermal compressible Euler equations coupled with each other through a relaxation drag force. We first establish the local existence and uniqueness of the strong solution for general initial data in a critical homogeneous Besov space, and then prove the global existence of the solution if the initial data is a small perturbation of the equilibrium state. Moreover, under the additional condition that the low-frequency part of the initial perturbation also belongs to another Besov space with lower regularity, we obtain the optimal time-decay rates of the global solution toward the equilibrium state. These results imply that the relaxation drag force and the viscosity dissipation affect regularity properties and long time behaviors of solutions for the compressible Navier-Stokes-Euler system.
**Key words:** Two-phase flow, Navier-Stokes equations, Euler equations, critical regularity, global existence, optimal time-decay rates
Introduction
We consider the coupled compressible Navier-Stokes-Euler (NS-Euler) system for two-phase flow motion in \(\mathbb{R}^{d}\) (\(d\geq 2\)) as follows:
\[\begin{cases}\partial_{t}\rho+\operatorname{div}\left(\rho u\right)=0,\\ \partial_{t}(\rho u)+\operatorname{div}\left(\rho u\otimes u\right)+\nabla P( \rho)=\mu\Delta u+(\mu+\lambda)\nabla\mathrm{div}\,u-\kappa n(u-w),\\ \partial_{t}n+\operatorname{div}\left(nw\right)=0,\\ \partial_{t}(nw)+\operatorname{div}\left(nw\otimes w\right)+\nabla n=\kappa n (u-w),\qquad x\in\mathbb{R}^{d},\quad t>0,\end{cases} \tag{1.1}\]
with the initial data
\[(\rho,u,n,w)(x,0)=(\rho_{0},u_{0},n_{0},w_{0})(x)\to(\bar{\rho},0,\bar{n},0), \qquad|x|\to\infty, \tag{1.2}\]
where \((\bar{\rho},0,\bar{n},0)\) (\(\bar{\rho},\bar{n}>0\)) is the constant state. The unknowns are the densities \(\rho=\rho(x,t)\geq 0,n=n(x,t)\geq 0\) and the velocities \(u=u(x,t)\in\mathbb{R}^{d},w=w(x,t)\in\mathbb{R}^{d}\). Furthermore, the pressure function \(P(\rho)\in C^{\infty}(\mathbb{R}_{+})\) satisfies \(P^{\prime}(\rho)>0\) for \(\rho>0\), the drag force coefficient \(\kappa>0\) is a constant, and the viscosity coefficients \(\mu\) and \(\lambda\) satisfy
\[\mu>0,\qquad 2\mu+\lambda>0.\]
The NS-Euler system (1.1) was derived in [9, 12] as a hydrodynamic limit of the compressible Navier-Stokes-Vlasov-Fokker-Planck model describing the dynamics of small particles dispersed in a fluid, such as the sedimentation of suspensions, sprays, combustion [24, 34, 43], and so on.
There has been much important progress made recently on the analysis of two-phase flow models, refer to [2, 3, 4, 5, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 44, 45] and references therein. However, for the NS-Euler system (1.1), there are few important results to our knowledge on the well-posedness and asymptotic behaviors of solutions, cf. [9, 12, 36, 45]. If the initial data is a small perturbation of the equilibrium state in the three-dimensional Sobolev space \(H^{s}\) (\(s\geq 3\)), Choi [9] established the global existence and uniqueness of strong solutions for (1.1) in both the whole space and the periodic domain, and further got the exponential time stability in the periodic case. Choi and Jung [12] showed the global existence and uniqueness of solutions to (1.1) near the equilibrium state in a three-dimensional bounded domain. In addition, Wu, Zhang and Zou [45] and Tang and Zhang [36] obtained the optimal algebraic time-decay rates of global solutions for (1.1) with Sobolev regularity in the three-dimensional whole space if the initial data further belongs to \(L^{1}\). For the pressureless NS-Euler system (i.e., without the pressure term \(\nabla n\) in (1.1)\({}_{4}\)), the global dynamics of strong solutions near the equilibrium state was studied in [10], and the finite-time blow-up phenomena of classical solutions was investigated in [11]. When the density-dependent viscosity term \(\operatorname{div}\left(n\mathbb{D}w\right)\) in (1.1)\({}_{4}\) is taken into account, which can be derived from the Champman-Enskog expansion of compressible Navier-Stokes-Vlasov-Fokker-Planck model in [29], the existence and nonlinear stability of steady-states to the inflow/outflow problem in the half line are proved in [27, 28].
However, there are no any results about the NS-Euler system (1.1) in critical spaces. The main purpose in this paper is to study the well-posedness and optimal time-decay rates of solutions to the multi-dimensional Cauchy problem (1.1)-(1.2) in the critical regularity framework. To be more precise, it is proved that for general initial data in the \(L^{2}\)-type critical Besov space, a unique strong solution to the Cauchy problem (1.1)-(1.2) exists locally in time, which is shown to be a global one when the initial data is near the constant equilibrium state. In addition, the optimal algebra time-decay rates of the global solution to its equilibrium state are obtained under the additional mild assumption that the low-frequency part of the initial data is either bounded or small in another Besov space with lower regularity.
Without loss of generality, set
\[P^{\prime}(\bar{\rho})=\kappa=\bar{\rho}=\bar{n}=\mu=1,\qquad\lambda=-1.\]
Define the perturbation
\[a:=\rho-1,\qquad a_{0}:=\rho_{0}-1,\qquad b:=\log n,\qquad b_{0}=\log n_{0}.\]
Then, the Cauchy problem (1.1)-(1.2) can be reformulated into
\[\begin{cases}\partial_{t}a+\operatorname{div}u=-\text{div}\,(au),\\ \partial_{t}u+\nabla a-\Delta u+u-w=-u\cdot\nabla u+G,\\ \partial_{t}b+\operatorname{div}w=-w\cdot\nabla b,\\ \partial_{t}w+\nabla b+w-u=-w\cdot\nabla w,\qquad x\in\mathbb{R}^{d},\quad t >0,\\ (a,u,b,w)(x,0)=(a_{0},u_{0},b_{0},w_{0})(x)\to(0,0,0,0),\quad|x|\to\infty,\end{cases} \tag{1.3}\]
where \(G\) is the nonlinear term
\[G:=g(a)\nabla a+f(a)\Delta u+h(a,b)(u-w), \tag{1.4}\]
with
\[g(a):=-\frac{P^{\prime}(1+a)}{1+a}+1,\quad f(a):=-\frac{a}{a+1},\quad h(a,b):=( e^{b}-1)\frac{a}{a+1}+\frac{a}{a+1}-e^{b}+1.\]
First, we have the local existence and uniqueness of the strong solution to the Cauchy problem (1.3) for general initial data in the critical Besov space as follows:
**Theorem 1.1**.: _Assume that the initial data \((a_{0},u_{0},b_{0},w_{0})\) satisfies_
\[a_{0}\in\dot{B}^{\frac{d}{2}}_{2,1},\quad\inf_{x\in\mathbb{R}^{d}}(1+a_{0})(x )>0,\quad\ u_{0}\in\dot{B}^{\frac{d}{2}-1}_{2,1},\quad\ (b_{0},w_{0})\in\dot{B}^{\frac{d}{2}-1}_{2,1}\cap\dot{B}^{\frac{d}{2}+1}_{2,1}. \tag{1.5}\]
_Then, there exists a time \(T>0\) such that the Cauchy problem (1.3) admits a unique strong solution \((a,u,b,w)\) satisfying for \(t\in[0,T]\) that_
\[\begin{cases}a\in\mathcal{C}([0,T];\dot{B}^{\frac{d}{2}-1}_{2,1}),\quad\inf_{(x,t)\in\mathbb{R}^{d}\times[0,T]}(1+a)(x,t)>0\\ u\in\mathcal{C}([0,T];\dot{B}^{\frac{d}{2}-1}_{2,1})\cap L^{1}(0,T;\dot{B}^{ \frac{d}{2}+1}_{2,1}),\\ (b,w)\in\mathcal{C}([0,T];\dot{B}^{\frac{d}{2}-1}_{2,1}\cap\dot{B}^{\frac{d}{2} +1}_{2,1}).\end{cases} \tag{1.6}\]
_In addition to (1.5), if assume \(a_{0}\in\dot{B}^{\frac{d}{2}-1}_{2,1}\), then \(a\in\mathcal{C}([0,T];\dot{B}^{\frac{d}{2}-1}_{2,1})\) holds._
Then, we establish the global existence of the strong solution to the Cauchy problem (1.3) for the initial data close to the equilibrium state below:
**Theorem 1.2**.: _For any \(d\geq 2\), there exists a constant \(\varepsilon_{0}>0\) such that if the initial data \((a_{0},u_{0},b_{0},w_{0})\) satisfies \(a_{0}\in\dot{B}^{\frac{d}{2}-1}_{2,1}\cap\dot{B}^{\frac{d}{2}}_{2,1}\), \(u_{0}\in\dot{B}^{\frac{d}{2}-1}_{2,1}\), \((b_{0},w_{0})\in\dot{B}^{\frac{d}{2}-1}_{2,1}\cap\dot{B}^{\frac{d}{2}+1}_{2, 1}\) and_
\[\mathcal{X}_{0}:=\|(a_{0},u_{0},b_{0},w_{0})^{\ell}\|_{\dot{B}^{\frac{d}{2}-1 }_{2,1}}+\|a^{h}_{0}\|_{\dot{B}^{\frac{d}{2}}_{2,1}}+\|u^{h}_{0}\|_{\dot{B}^{ \frac{d}{2}-1}_{2,1}}+\|(b_{0},w_{0})^{h}\|_{\dot{B}^{\frac{d}{2}+1}_{2,1}} \leq\varepsilon_{0}, \tag{1.7}\]
_then the Cauchy problem (1.3) admits a unique global strong solution \((a,u,b,w)\), which satisfies_
\[\begin{cases}a^{\ell}\in\mathcal{C}_{b}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}-1 }_{2,1})\cap L^{1}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}+1}_{2,1}),&\quad a^{h} \in\mathcal{C}_{b}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}}_{2,1})\cap L^{1}( \mathbb{R}^{+};\dot{B}^{\frac{d}{2}}_{2,1}),\\ u^{\ell}\in\mathcal{C}_{b}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}-1}_{2,1})\cap L ^{1}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}+1}_{2,1}),&\quad u^{h}\in\mathcal{C} _{b}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}-1}_{2,1})\cap L^{1}(\mathbb{R}^{+}; \dot{B}^{\frac{d}{2}+1}_{2,1}),\\ b^{\ell}\in\mathcal{C}_{b}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}-1}_{2,1})\cap L ^{1}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}+1}_{2,1}),&\quad b^{h}\in\mathcal{C} _{b}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}+1}_{2,1})\cap L^{1}(\mathbb{R}^{+}; \dot{B}^{\frac{d}{2}+1}_{2,1}),\\ w^{\ell}\in\mathcal{C}_{b}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}-1}_{2,1}) \cap L^{1}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}+1}_{2,1}),&\quad w^{h}\in \mathcal{C}_{b}(\mathbb{R}^{+};\dot{B}^{\frac{d}{2}+1}_{2,1})\cap L^{1}( \mathbb{R}^{+};\dot{B}^{\frac{d}{2}+1}_{2,1}),\\ (u-w)^{\ell}\in L^{1}(\mathbb{R}_{+};\dot{B}^{\frac{d}{2}}_{2,1})\cap L^{2}( \mathbb{R}^{+};\dot{B}^{\frac{d}{2}-1}_{2,1}),\end{cases} \tag{1.8}\]
_and_
\[\begin{split}&\|(a,u,b,w)\|^{\ell}_{L^{\infty}_{t}(\dot{B}^{ \frac{d}{2}-1}_{2,1})}+\|(a,u,b,w)\|^{\ell}_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+1}_ {2,1})}+\|u-w\|^{\ell}_{L^{1}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}+\|u-w\|^{\ell} _{L^{2}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}\\ &\quad+\|a\|^{h}_{L^{\infty}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}+\|u \|^{h}_{L^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}+\|(b,w)\|^{h}_{L^{ \infty}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}+\|a\|^{h}_{L^{1}_{t}(\dot{B}^{\frac {d}{2}}_{2,1})}+\|(u,b,w)\|^{h}_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}\\ &\leq C\mathcal{X}_{0},\qquad t>0,\end{split} \tag{1.9}\]
_for \(C>0\) a constant independent of time._
For Besov spaces, the readers can refer to Definitions 4.1-4.2 in Appendix.
**Remark 1.1**.: _The regularity \(L^{1}(\mathbb{R}_{+};\dot{B}^{\frac{d}{2}+1}_{2,1})\) of the velocity \(w\) in (1.8) comes essentially from the coupling of the relaxation drag force term on the relative velocity \(u-w\) and the viscosity dissipation on the velocity \(u\). Furthermore, due to the influences of the relaxation drag force term, the regularity \(L^{1}(\mathbb{R}_{+},\dot{B}^{\frac{d}{2}}_{2,1})\cap L^{2}(\mathbb{R}_{+},\dot{B} ^{\frac{d}{2}-1}_{2,1})\) of the relative velocity \(u-w\) in (1.8) is stronger than the regularity \(L^{1}(\mathbb{R}_{+},\dot{B}^{\frac{d}{2}+1}_{2,1})\cap L^{2}(\mathbb{R}_{+}, \dot{B}^{\frac{d}{2}}_{2,1})\) of the solution \((a,u,b,w)\) for low frequencies._
Moreover, we have the optimal time-decay rates of the global solution to the Cauchy problem (1.3) if the low-frequency part of the initial data is further bounded in \(\dot{B}^{\sigma_{0}}_{2,\infty}\) for \(\sigma_{0}\in[-\frac{d}{2},\frac{d}{2}-1)\) as follows:
**Theorem 1.3**.: _For any \(d\geq 2\), let the assumptions of Theorem 1.2 hold, and \((a,u,b,w)\) be the corresponding global strong solution to the Cauchy problem (1.3) given by Theorem 1.2. If the initial data \((a_{0},u_{0},b_{0},w_{0})\) further satisfies its low-frequency part_
\[(a_{0},u_{0},b_{0},w_{0})^{\ell}\in\dot{B}^{\sigma_{0}}_{2,\infty}\qquad\text{ for}\quad\sigma_{0}\in[-\frac{d}{2},\frac{d}{2}-1), \tag{1.10}\]
_then it holds for any \(t\geq 1\) that_
\[\begin{cases}\|(a,u,b,w)(t)\|^{\ell}_{\dot{B}^{\sigma_{0}}_{2,1}}\leq C\delta_ {0}(1+t)^{-\frac{1}{2}(\sigma-\sigma_{0})},\qquad\quad\sigma\in(\sigma_{0}, \frac{d}{2}-1],\\ \|a(t)\|^{h}_{\dot{B}^{\frac{d}{2}}_{2,1}}+\|(u,b,w)(t)\|^{h}_{\dot{B}^{\frac{ d}{2}+1}_{2,1}}\leq C\delta_{0}(1+t)^{-\frac{1}{2}(\frac{d}{2}-1-\sigma_{0})},\\ \|(u-w)(t)\|^{\ell}_{\dot{B}^{\sigma_{0}}_{2,\infty}}\leq C\delta_{0}(1+t)^{- \sigma_{*}},\end{cases} \tag{1.11}\]
_with a constant \(C>0\) independent of time, \(\sigma_{*}:=\min\{\frac{1}{2},\frac{1}{2}(\frac{d}{2}-1-\sigma_{0})\}>0\) and_
\[\delta_{0}:=\|(a_{0},u_{0},b_{0},w_{0})^{\ell}\|_{\dot{B}^{\sigma_{0}}_{2, \infty}}+\|a^{h}_{0}\|_{\dot{B}^{\frac{d}{2}}_{2,1}}+\|u^{h}_{0}\|_{\dot{B}^{ \frac{d}{2}-1}_{2,1}}+\|(b_{0},w_{0})^{h}\|_{\dot{B}^{\frac{d}{2}+1}_{2,1}}. \tag{1.12}\]
_Furthermore, for \(d\geq 3\) and \(\sigma_{0}\in[-\frac{d}{2},\frac{d}{2}-2)\), the relative velocity \(u-w\) satisfies_
\[\|(u-w)(t)\|^{\ell}_{\dot{B}^{\sigma}_{2,1}}\leq C\delta_{0}(1+t)^{-\frac{1}{ 2}(1+\sigma-\sigma_{0})},\qquad\sigma\in(\sigma_{0},\frac{d}{2}-2]. \tag{1.13}\]
**Remark 1.2**.: _Theorem 1.3 implies that the solution \((a,u,b,w)\) to the Cauchy problem (1.3) decays at the same rate \((1+t)^{-\frac{1}{2}(\sigma-\sigma_{0})}\) in \(\dot{B}^{\sigma}_{2,1}\) as the solution of the heat equation with initial data in \(\dot{B}^{\sigma_{0}}_{2,\infty}\); however, due to the dissipation effect of the relaxation drag force, the relative velocity \(u-w\) decays at the faster rate \((1+t)^{-\frac{1}{2}(1+\sigma-\sigma_{0})}\) in \(\dot{B}^{\sigma}_{2,1}\)._
When the low-frequency part of the initial data is suitably small in \(\dot{B}^{\sigma_{0}}_{2,\infty}\) for \(\sigma_{0}\in[-\frac{d}{2},\frac{d}{2}-1)\), we can also prove the optimal time-decay rates of the global solution to the Cauchy problem (1.3).
**Theorem 1.4**.: _For any \(d\geq 2\), let the assumptions of Theorem 1.2 hold, and \((a,u,b,w)\) be the global strong solution to the Cauchy problem (1.3) given by Theorem 1.2. There exists a constant \(\varepsilon_{1}>0\) such that if the initial data \((a_{0},u_{0},b_{0},w_{0})\) further satisfies_
\[\|(a_{0},u_{0},b_{0},w_{0})^{\ell}\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}\leq \varepsilon_{1}\qquad\text{for}\quad\sigma_{0}\in[-\frac{d}{2},\frac{d}{2}-1), \tag{1.14}\]
_then it holds for any \(t\geq 1\) that_
\[\begin{cases}\|(a,u,b,w)(t)\|^{\ell}_{\dot{B}^{\sigma}_{2,1}}\leq C\delta_{0}( 1+t)^{-\frac{1}{2}(\sigma-\sigma_{0})},\qquad\sigma\in(\sigma_{0},\frac{d}{2} +1],\\ \|a(t)\|^{h}_{\dot{B}^{\frac{d}{2}}_{2,1}}+\|(u,b,w)(t)\|^{h}_{\dot{B}^{\frac{d} {2}+1}_{2,1}}\leq C\delta_{0}(1+t)^{-\frac{1}{2}(d+1-2\sigma_{0}-2\varepsilon)},\\ \|(u-w)(t)\|^{\ell}_{\dot{B}^{\sigma_{0}}_{2,\infty}}\leq C\delta_{0}(1+t)^{- \frac{1}{2}},\\ \|(u-w)(t)\|^{\ell}_{\dot{B}^{\sigma}_{2,1}}\leq C\delta_{0}(1+t)^{-\frac{1}{ 2}(1+\sigma-\sigma_{0})},\qquad\sigma\in(\sigma_{0},\frac{d}{2}],\end{cases} \tag{1.15}\]
_where \(\delta_{0}\) is denoted by (1.12), \(C>0\) is a constant independent of time, and \(\varepsilon\in(0,1]\) is any small constant._
**Remark 1.3**.: _Compared with the time-decay rates of the solution \((a,u,b,w)\) to the Cauchy problem (1.3) in Theorem 1.3, it is shown in Theorem 1.4 that the low-frequency part \((a,u,b,w)^{\ell}\) has the optimal time-decay rate \((1+t)^{-\frac{1}{2}(\sigma-\sigma_{0})}\) in \(\dot{B}^{\sigma}_{2,1}\) for higher regularity indexes \(\sigma\in(\frac{d}{2}-1,\frac{d}{2}+1]\), the high-frequency part \((a,u,b,w)^{h}\) decays at the faster rate \((1+t)^{-\frac{1}{2}(d+1-2\sigma_{0}-2\varepsilon)}\) in the same Besov space, and furthermore the relative velocity \(u-w\) decays at the rate \((1+t)^{-\frac{1}{2}(1+\sigma-\sigma_{0})}\) in \(\dot{B}^{\sigma}_{2,1}\) for higher regularity indexes \(\sigma\in(\frac{d}{2}-2,\frac{d}{2}]\) without the restrictions \(d\geq 3\) and \(\sigma_{0}<\frac{d}{2}-2\)._
**Remark 1.4**.: _By (1.15) and interpolation arguments, for \(\Lambda:=(-\Delta)^{\frac{1}{2}}\), \(p\geq 2\) and \(t\geq 1\), the following optimal \(L^{p}\) time-decay rates hold\(:\)_
\[\left\{\begin{aligned} &\|\Lambda^{\sigma}a(t)\|_{L^{p}}\lesssim(1+t)^{- \frac{1}{2}(\sigma+\frac{d}{2}-\frac{d}{p}-\sigma_{0})},&\sigma+ \frac{d}{2}-\frac{d}{p}\in(\sigma_{0},\frac{d}{2}],\\ &\|\Lambda^{\sigma}(u,b,w)(t)\|_{L^{p}}\lesssim(1+t)^{-\frac{1}{2 }(\sigma+\frac{d}{2}-\frac{d}{p}-\sigma_{0})},&\sigma+\frac{d}{2} -\frac{d}{p}\in(\sigma_{0},\frac{d}{2}],\\ &\|\Lambda^{\sigma}(u-w)(t)\|_{L^{p}}\lesssim(1+t)^{-\frac{1}{2}( 1+\sigma+\frac{d}{2}-\frac{d}{p}-\sigma_{0})},&\sigma+\frac{d}{2} -\frac{d}{p}\in(\sigma_{0},\frac{d}{2}].\end{aligned}\right. \tag{1.16}\]
We would like to mention that the important progress has been obtained about the well-posedness and optimal time-decay rates of solutions to the Cauchy problem for the isentropic compressible Navier-Stokes equations in the \(L^{2}\)-type or \(L^{p}\)-type critical Besov spaces, refer to [6, 8, 15, 16, 19, 23, 37, 41] and references therein. Complete overviews on Fourier analysis methods for the compressible Navier-Stokes equations are presented in [1, 18].
Meanwhile, for the Cauchy problem of the compressible Euler equations with damping, the global existence and optimal time-decay rates of small classical solutions with critical regularity were investigated in either the inhomogeneous Besov spaces [38, 39, 40] or the homogeneous setting [13, 14].
We explain the main ideas to prove above Theorems 1.2-1.4 about the global existence and optimal time-decay rates of the strong solution to the Cauchy problem (1.3) in the framework of critical Besov spaces. The NS-Euler model (1.1) (i.e., (1.3)\({}_{1}\)-(1.3)\({}_{4}\)) can be viewed as a coupled system of the compressible Navier-Stokes equations (1.1)\({}_{1}\)-(1.1)\({}_{2}\) and the compressible Euler equations (1.1)\({}_{3}\)-(1.1)\({}_{4}\) through the drag force source terms \(n(w-u)\) and \(n(u-w)\), respectively. However, in order to derive the regularity \(L^{1}(\mathbb{R}_{+},\dot{B}^{\frac{d}{2}+1}_{2,1})\) for the velocity \(u\) in (1.1)\({}_{1}\)-(1.1)\({}_{2}\) as that made in [15] for the compressible Navier-Stokes equations, we require the regularity \(L^{1}(\mathbb{R}_{+},\dot{B}^{\frac{d}{2}-1}_{2,1})\) for \(w\) in the source term \(n(w-u)\). Meanwhile, to get the regularity \(L^{1}(\mathbb{R}_{+},\dot{B}^{\frac{d}{2}}_{2,1}\cap\dot{B}^{\frac{d}{2}+1}_{ 2,1})\) for the velocity \(w\) in (1.1)\({}_{3}\)-(1.1)\({}_{4}\) by similar arguments used in [14] for the compressible Euler equations with damping, we need the regularity \(L^{1}(\mathbb{R}_{+},\dot{B}^{\frac{d}{2}-1}_{2,1}\cap\dot{B}^{\frac{d}{2}+1}_ {2,1})\) for \(u\) in the source term \(n(u-w)\). Unfortunately, the regularity \(L^{1}(\mathbb{R}_{+},\dot{B}^{\frac{d}{2}-1}_{2,1})\) required for the velocity \(w\) in (1.1)\({}_{1}\)-(1.1)\({}_{2}\) is not consistent with the critical regularity \(L^{1}(\mathbb{R}_{+},\dot{B}^{\frac{d}{2}}_{2,1}\cap\dot{B}^{\frac{d}{2}+1}_{ 2,1})\) for \(w\) in (1.1)\({}_{3}\)-(1.1)\({}_{4}\), and
neither is for the velocity \(u\). In addition, since the NS-Euler model (1.1) does not satisfy the well-known "Shizuta-Kawashima" condition in the study of hyperbolic-parabolic composite systems (cf. [14, 25, 35, 39]) due to the drag force term, it is not obvious to analyze the dissipative structures of (1.1). These cause essential difficulties to enclose the uniform-in-time a-priori estimates of the local solution in the critical Besov space and extend it globally in time.
To overcome these difficulties, we combine the linear part of (1.3)\({}_{1}\)-(1.3)\({}_{4}\) together in frequency localization and make full use of the dissipative properties of the relative velocity \(u-w\) and the velocity \(u\) to establish the estimates of the velocity \(w\) as follows:
\[\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}^{2}+2^{2j}\|\dot{\Delta}_{j}u\|_{L^{2}}^{2} \gtrsim\begin{cases}2^{2j}\|\dot{\Delta}_{j}w\|_{L^{2}}^{2},&\quad\text{if $j \leq 0$},\\ \|\dot{\Delta}_{j}w\|_{L^{2}}^{2},&\quad\text{if $j\geq-1$}.\end{cases} \tag{1.17}\]
With the help of (1.17), we derive both the \(\widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}-1})\cap L_{t}^{1}(\dot{ B}_{2,1}^{\frac{d}{2}+1})\)-estimate of \((a,u,b,w)\) for low frequencies and the \(\widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}-1})\cap L_{t}^{1}(\dot{ B}_{2,1}^{\frac{d}{2}-1})\)-estimate of \((\nabla a,u,b,w)\) for high frequencies. In addition, employing the \(L_{t}^{1}(\dot{B}_{2,1}^{\frac{d}{2}+1})\)-estimate of \(u\) for high frequencies obtained by the viscosity term and treating (1.1)\({}_{3}\)-(1.1)\({}_{4}\) as the damped compressible Euler system with the given force \(u\), we establish the higher order \(\widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}+1})\cap L_{t}^{1}(\dot{ B}_{2,1}^{\frac{d}{2}+1})\)-estimates of \((b,w)\).
However, due to the difficulty caused by the nonlinear term \(h(a,b)(u-w)\) in (1.3)\({}_{2}\), the regularities estimates of the solution \((a,u,b,w)\) are not enough to enclose the a-priori estimates. To overcome this difficulty, we observe that the relative velocity \(u-w\) satisfies
\[\partial_{t}(u-w)+2(u-w)=-\nabla a+\Delta u+\nabla b-u\cdot\nabla u+w\cdot \nabla w+G. \tag{1.18}\]
Employing the estimates of \((a,u,b,w)\) and (1.18), we further obtain the \(L_{t}^{1}(\dot{B}_{2,1}^{\frac{d}{2}})\cap\widetilde{L}_{t}^{2}(\dot{B}_{2,1 }^{\frac{d}{2}-1})\)-estimate of the relative velocity \(u-w\), which is stronger than the \(L_{t}^{1}(\dot{B}_{2,1}^{\frac{d}{2}+1})\cap\widetilde{L}_{t}^{2}(\dot{B}_{2, 1}^{\frac{d}{2}})\)-estimate of \((a,u,b,w)\) for low frequencies. Combing the above estimates of \((a,u,b,w)\) and \(u-w\) together, we enclose the a-priori estimates of the solution \((a,u,b,w)\) to the Cauchy problem (1.3) (refer to Lemmas 2.2 and 2.5).
If \((a_{0},u_{0},b_{0},w_{0})^{\ell}\) is further bounded in \(\dot{B}_{2,\infty}^{\sigma_{0}}\) for \(\sigma_{0}\in[-\frac{d}{2},\frac{d}{2}-1)\), motivated by the interesting works [21, 37], we establish different time-weighted energy estimates to derive the optimal time-decay rates of the solution \((a,u,b,w)\) in (1.11)\({}_{1}\)-(1.11)\({}_{2}\) (refer to Lemma 3.1-3.3), and furthermore take advantage of the damped equation (1.18) to get the faster time-decay rates in (1.11)\({}_{3}\) and (1.13). When \(\|(a_{0},u_{0},b_{0},w_{0})^{\ell}\|_{\dot{B}_{2,\infty}^{\sigma_{0}}}\) is sufficiently small, we also show more time-decay rates of \((a,u,b,w)\) and \(u-w\) in (1.15) (refer to Lemmas 3.4-3.6) in the spirit of [19, 41]. It should be emphasized that the rate \((1+t)^{-\frac{1}{2}}\) of the relative velocity \(u-w\) in \(\dot{B}_{2,\infty}^{\sigma_{0}}\) is the key point to derive the rate \((1+t)^{-\frac{1}{2}(\frac{d}{2}+1-\sigma_{0})}\) of the nonlinear term \(\|h(a,b)(u-w)\|_{\dot{B}_{2,\infty}^{\sigma_{0}}}\) in (3.56) and enclose the energy estimates.
The rest of the paper is organized as follows. In Section 2, we prove Theorems 1.1-1.2 and establish the a-priori estimates of the solution to the Cauchy problem (1.3). In Section 3, we carry out the proofs
of Theorems 1.3-1.4 on the optimal time-decay rates of the global solution. In Section 4, we present some notations of Besov spaces and recall related analysis tools used in this paper.
## 2 Global existence
### Local existence
First, we give a brief proof of Theorem 1.1 on the local existence and uniqueness of the strong solution to the Cauchy problem (1.3).
_Proof of Theorem 1.1:_ For any integer \(n\geq 1\), let \(L_{n}^{2}\) be the set of \(L^{2}\) functions spectrally supported in the annulus \(\mathcal{C}_{n}:=\{\xi\in\mathbb{R}^{d}\mid\frac{1}{n}\leq|\xi|\leq n\}\), and denote the Friedrichs projectors \(\dot{\mathbb{E}}_{n}\) by \(\dot{\mathbb{E}}_{n}f:=\mathcal{F}^{-1}(\mathbf{1}_{\mathcal{C}_{n}}\mathcal{F }f)\) for any \(f\in L^{2}(\mathbb{R}^{d})\). Let \(u_{L}\) be the global solution of the linear problem
\[\begin{cases}\partial_{t}u_{L}-\Delta u_{L}=0,\quad x\in\mathbb{R}^{d},\quad t >0,\\ u_{L}(x,0)=u_{0}(x),\quad x\in\mathbb{R}^{d}.\end{cases} \tag{2.1}\]
To approximate \((\frac{1}{1+a}-1,u-u_{L},b,w)\), we aim to solve the following problems for \(n\geq 1\):
\[\begin{cases}\partial_{t}a_{*}^{n}+\dot{\mathbb{E}}_{n}\big{(}(u_{*}^{n}+u_{L} ^{n})\cdot\nabla a_{*}^{n}\big{)}=-\dot{\mathbb{E}}_{n}\big{(}(1+a_{*}^{n}) \mathrm{div}\,(u_{*}^{n}+u_{L}^{n})\big{)},\\ \partial_{t}u_{*}^{n}+\dot{\mathbb{E}}_{n}\big{(}(u_{*}^{n}+u_{L}^{n})\cdot \nabla u_{*}^{n}+u_{*}^{n}\cdot\nabla u_{L}^{n}-(1+a_{*}^{n})\Delta u_{*}^{n} \big{)}\\ \quad=\dot{\mathbb{E}}_{n}\big{(}-a_{*}^{n}\widetilde{u}_{L}^{n}-u_{L}^{n} \cdot\nabla u_{L}^{n}+\nabla\int_{0}^{a_{*}}P^{\prime}(\frac{1}{1+s})\frac{1} {1+s}ds+e^{b^{n}}(1+a_{*}^{n})(w^{n}-u_{*}^{n}-u_{L}^{n})\big{)},\\ \partial_{t}b^{n}+\dot{\mathbb{E}}_{n}\big{(}w^{n}\cdot\nabla b^{n}\big{)}+ \mathrm{div}\,w^{n}=0,\\ \partial_{t}w^{n}+\dot{\mathbb{E}}_{n}\big{(}w^{n}\cdot\nabla w^{n}\big{)}+ \nabla b^{n}+w^{n}=u_{*}^{n}+u_{L}^{n},\quad x\in\mathbb{R}^{d},\quad t>0,\\ (a_{*}^{n},u_{*}^{n},b^{n},w^{n})(x,0)=(a_{*0}^{n},0,b_{0}^{n},w_{0}^{n})(x): =(\dot{\mathbb{E}}_{n}(\frac{1}{1+a_{0}}-1),0,\dot{\mathbb{E}}_{n}b_{0},\dot{ \mathbb{E}}_{n}w_{0})(x),\quad x\in\mathbb{R}^{d},\end{cases} \tag{2.2}\]
with \(u_{L}^{n}:=\dot{\mathbb{E}}_{n}u_{L}\). Since all the Sobolev norms are equivalent in (2.2) due to the Bernstein inequality, it is easy to verify that for \(n\geq n_{0}\) with a sufficiently large integer \(n_{0}>1\), \(a_{*0}^{n}\) satisfies \(\inf_{x\in\mathbb{R}^{d}}(1+a_{0*}^{n})(x)>0\), and (2.2) is an ordinary differential system in \(L_{n}^{2}\) and locally Lipschitz with respect to the variable \((a^{n},u^{n},b^{n},w^{n})\). Hence, by the Cauchy-Lipschitz theorem (cf. [1], Page 124), there is a maximal time \(T_{*}^{n}>0\) such that the unique solution \((a_{*}^{n},u_{*}^{n},b^{n},w^{n})\in\mathcal{C}([0,T_{*}^{n});L_{n}^{2})\) to the problem (2.2) exists, and \(1+a_{*}^{n}\) is strictly bounded away from zero for any \(n\geq n_{0}\).
By direct computations on (2.2)-(2.1), we can prove that there exists a time \(T\in(0,T_{*}^{n})\) and a small
constant \(\eta>0\) independent of \(n\geq n_{0}\) such that the following uniform estimates hold:
\[\begin{cases}\|u_{L}^{n}\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+1_{*}}_{2,1})}\leq\eta ^{2},\qquad\|u_{L}^{n}\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1_{*} }_{2,1})}\leq 2\|u_{0}\|_{\dot{B}^{\frac{d}{2}-1_{*}}_{2,1}},\\ \|u_{*}^{n}\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1_{*}}_{2,1})}+ \|u_{*}^{n}\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+1_{*}}_{2,1})}\leq\eta,\\ \|a_{*}^{n}\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}\leq 2 \|\frac{1}{1+a_{0}}-1\|_{\dot{B}^{\frac{d}{2}}_{2,1}}+1,\\ \|(b^{n},w^{n})\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1_{*}}_{2,1 })\cap\dot{B}^{\frac{d}{2}+1_{*}}_{2,1}}\leq 2\|(b_{0},w_{0})\|_{\dot{B}^{ \frac{d}{2}-1_{*}}_{2,1}\cap\dot{B}^{\frac{d}{2}+1}_{*,1}}.\end{cases} \tag{2.3}\]
Indeed, the proofs of \(\eqref{eq:2.3}_{1}\)-\(\eqref{eq:2.3}_{3}\) are based on the estimates for transport equations and parabolic equations with variable coefficients established in [16]. And by similar arguments as used in [38, 39], we can have \(\eqref{eq:2.3}_{4}\) by estimating the damped compressible Euler equations \(\eqref{eq:2.2}_{3}\)-\(\eqref{eq:2.2}_{4}\) with given forces \(u_{L}^{n}\) and \(u_{*}^{n}\) satisfying \(\eqref{eq:2.3}_{1}\)-\(\eqref{eq:2.3}_{2}\). Here we omit the details for brevity.
According to (2.2)-(2.3), the Aubin-Lions lemma and the cantor diagonal argument, there is a limit \((a_{*},u_{*},b,w)\) such that as \(n\to\infty\), up to a subsequence, the approximate sequence \((a_{*}^{n},u_{*}^{n},b^{n},w^{n})\) converges to \((a_{*},u_{*},b,w)\) weakly in \(L^{2}(0,T;\dot{H}^{\frac{d}{2}})\) and strongly in \(L^{2}(0,T;\dot{H}^{\frac{d}{2}-\varepsilon}_{loc})\) for any \(\varepsilon>0\), which implies that \((a,u,b,w)(x,t)\) with \(a:=\frac{1}{1+a_{*}}-1\) and \(u:=u_{*}+u_{L}\) solves (1.3) in the sense of distributions. Then, we can conclude by the uniform estimates (2.3) and the Fatou property that for \(t\in[0,T]\), \((a,u,b,w)(x,t)\) satisfies the properties (1.6) and therefore is indeed a strong solution to the Cauchy problem (1.3). By similar arguments as used in [16] with some modifications, we are able to prove the uniqueness of solutions in \(\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})\cap\widetilde{L}^{ \infty}_{t}(\dot{B}^{\frac{d}{2}-2}_{2,1})\times\widetilde{L}^{\infty}_{t}( \dot{B}^{\frac{d}{2}-2}_{2,1})\times\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{ d}{2}-2}_{2,1})\) for \(d\geq 3\) and \(\widetilde{L}^{\infty}_{t}(\dot{B}^{0}_{2,\infty})\cap\widetilde{L}^{\infty}_ {t}(\dot{B}^{-1}_{2,\infty})\times\widetilde{L}^{\infty}_{t}(\dot{B}^{-1}_{2, \infty})\) for \(d=2\). The details are omitted here. If additionally \(a_{0}\in\dot{B}^{\frac{d}{2}-1}_{2,1}\) holds, then \(a\in\mathcal{C}([0,T];\dot{B}^{\frac{d}{2}-1}_{2,1})\) follows from (1.6) and standard estimates for the transport equation (1.3)\({}_{1}\). The proof of Theorem 1.1 is completed.
### The a-priori estimates
In order to prove the global existence of the solution to the Cauchy problem (1.3), we need to establish the uniform-in-time a-priori estimates below.
**Proposition 2.1**.: _For given time \(T>0\), suppose that the strong solution \((a,u,b,w)\) to the Cauchy problem (1.3) satisfies for \(t\in(0,T)\) that_
\[\begin{split}\mathcal{X}(t)&:=\|(a,u,b,w)\|_{ \widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{\ell}+\|(a,u,b,w) \|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}^{\ell}+\|u-w\|_{L^{1}_{t}(\dot{B }^{\frac{d}{2}}_{2,1})}^{\ell}+\|u-w\|_{\widetilde{L}^{2}_{t}(\dot{B}^{\frac{d} {2}-1}_{2,1})}^{\ell}\\ &\quad+\|a\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}}_{ 2,1})}^{h}+\|u\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{h} +\|(b,w)\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}+1_{*}}_{2,1})}^{h} +\|a\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}^{h}+\|(u,b,w)\|_{L^{1}_{t}(\dot {B}^{\frac{d}{2}+1}_{2,1})}^{h}\\ &\leq 2C_{0}\mathcal{X}_{0},\qquad t\in(0,T),\end{split} \tag{2.4}\]
_where \(C_{0}>1\) is a constant independent of the time \(T>0\), and \(\mathcal{X}_{0}\) is defined by (1.7). There exist a
small constant \(\varepsilon_{0}>0\) such that if \(\mathcal{X}_{0}\leq\varepsilon_{0}\), then it holds_
\[\mathcal{X}(t)\leq C_{0}\mathcal{X}_{0},\qquad t\in(0,T). \tag{2.5}\]
The proof of Proposition 2.1 consists of Lemmas 2.1-2.5 below.
_Proof of Theorem 1.2:_ Let the assumptions of Theorem 1.2 hold. According to Theorem 1.1, there exists a time \(T_{0}>0\) such that the Cauchy problem (1.3) has a unique strong solution \((a,u,b,w)(x,t)\) for \(t\in(0,T_{0}]\) satisfying (1.6). By virtue of Proposition 2.1, the solution \((a,u,b,w)\) indeed satisfies \(\sup_{t\in(0,T_{0}]}\mathcal{X}(t)\leq C_{0}\mathcal{X}_{0}\), and therefore by the standard continuity arguments, we can extend the solution \((a,u,b,w)\) globally in time and verify that \((a,u,b,w)\) satisfies the properties (1.8)-(1.9).
### Low-frequency analysis
In this subsection, we establish the a-priori estimates of solutions to the Cauchy problem (1.3) in the low-frequency region \(\{\xi\in\mathbb{R}^{d}\ |\ |\xi|\leq\frac{8}{3}\}\). Note that (2.4) implies
\[\sup_{(x,t)\in\mathbb{R}^{d}\times(0,T)}|a(x,t)|\leq\frac{1}{2}\Rightarrow \frac{1}{2}\leq\rho=1+a\leq\frac{3}{2},\quad\text{if }\mathcal{X}_{0}<<1. \tag{2.6}\]
The property (2.6) will be used to handle the nonlinear terms \(f(a)\), \(g(a)\) and \(h(a,b)\) in (1.4) by virtue of the composition estimates (4.6). For any \(j\in\mathbb{Z}\), applying the operator \(\dot{\Delta}_{j}\) to (1.3)\({}_{1}\)-(1.3)\({}_{2}\), we get
\[\begin{cases}\partial_{t}\dot{\Delta}_{j}a+\operatorname{div}\dot{\Delta}_{j} u=-\operatorname{div}\dot{\Delta}_{j}(au),\\ \partial_{t}\dot{\Delta}_{j}u+\nabla\dot{\Delta}_{j}a-\Delta\dot{\Delta}_{j}u +\dot{\Delta}_{j}(u-w)=-\dot{\Delta}_{j}(u\cdot\nabla u)+\dot{\Delta}_{j}G, \\ \partial_{t}\dot{\Delta}_{j}b+\operatorname{div}\dot{\Delta}_{j}w=-\dot{\Delta }_{j}(w\cdot\nabla b),\\ \partial_{t}\dot{\Delta}_{j}w+\nabla\dot{\Delta}_{j}b+\dot{\Delta}_{j}(w-u)=- \dot{\Delta}_{j}(w\cdot\nabla w).\end{cases} \tag{2.7}\]
First, we derive a low-frequency Lyapunov type inequality of (2.7).
**Lemma 2.1**.: _Let \((a,u,b,w)\) be any strong solution to the Cauchy problem (1.3). Then, it holds for any \(j\leq 0\) that_
\[\begin{split}&\frac{d}{dt}\mathcal{E}_{1,j}(t)+\mathcal{D}_{1,j }(t)\\ &\lesssim\big{(}\|\dot{\Delta}_{j}(2^{j}au,u\cdot\nabla u,w \cdot\nabla b,w\cdot\nabla w)\|_{L^{2}}+\|\dot{\Delta}_{j}G\|_{L^{2}}\big{)} \|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}},\end{split} \tag{2.8}\]
_where \(\mathcal{E}_{1,j}(t)\) and \(\mathcal{D}_{1,j}(t)\) are defined by_
\[\begin{cases}\mathcal{E}_{1,j}(t):=\frac{1}{2}\|\dot{\Delta}_{j}(a,u,b,w)\|_ {L^{2}}^{2}+\eta_{1}\big{(}\dot{\Delta}_{j}u\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}+\eta_{1}\big{(}\dot{\Delta}_{j}w\ |\ \nabla\dot{\Delta}_{j}b\big{)}_{L^{2}},\\ \mathcal{D}_{1,j}(t):=\|\nabla\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\|\dot{\Delta}_ {j}(u-w)\|_{L^{2}}^{2}\\ \qquad\qquad+\eta_{1}\Big{(}\|\nabla\dot{\Delta}_{j}a\|_{L^{2}}^{2}-\| \operatorname{div}\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}(u-w) -\Delta\dot{\Delta}_{j}u\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\Big{)}\\ \qquad\qquad+\eta_{1}\Big{(}\|\nabla\dot{\Delta}_{j}b\|_{L^{2}}^{2}-\| \operatorname{div}\dot{\Delta}_{j}w\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}(w-u) \ |\ \nabla\dot{\Delta}_{j}b\big{)}_{L^{2}}\Big{)},\end{cases} \tag{2.9}\]
_with constant \(\eta_{1}\in(0,1)\) to be determined later._
**Proof.** Taking the \(L^{2}\) inner product of \((\ref{2.7})_{1}\) and \((\ref{2.7})_{2}\) with \(\dot{\Delta}_{j}a\) and \(\dot{\Delta}_{j}u\), respectively, we have
\[\begin{split}&\frac{1}{2}\frac{d}{dt}\|\dot{\Delta}_{j}(a,u)\|_{L^{ 2}}^{2}+\|\nabla\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}(u-w) \ |\ \dot{\Delta}_{j}u\big{)}_{L^{2}}\\ &\leq\big{(}\dot{\Delta}_{j}(au)\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{ L^{2}}-\big{(}\dot{\Delta}_{j}(u\cdot\nabla u+G)\ |\ \dot{\Delta}_{j}u\big{)}_{L^{2}}.\end{split} \tag{2.10}\]
By \((\ref{2.7})_{3}\)-\((\ref{2.7})_{4}\), one deduces after a direct computation that
\[\frac{1}{2}\frac{d}{dt}\|\dot{\Delta}_{j}(b,w)\|_{L^{2}}^{2}+\big{(}\dot{ \Delta}_{j}(w-u)\ |\ \dot{\Delta}_{j}w\big{)}_{L^{2}}=-\big{(}\dot{\Delta}_{j}(w\cdot\nabla b)\ |\ \dot{\Delta}_{j}b\big{)}_{L^{2}}-\big{(}\dot{\Delta}_{j}(w\cdot\nabla w)\ |\ \dot{\Delta}_{j}w\big{)}_{L^{2}}. \tag{2.11}\]
The combination of \((\ref{2.10})\)-\((\ref{2.11})\) leads to
\[\begin{split}&\frac{1}{2}\frac{d}{dt}\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^ {2}}^{2}+\|\nabla\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\|\dot{\Delta}_{j}(u-w)\|_{L ^{2}}^{2}\\ &\quad\leq\|\dot{\Delta}_{j}(au)\|_{L^{2}}\|\nabla\dot{\Delta}_{ j}a\|_{L^{2}}+\big{(}\|\dot{\Delta}_{j}(u\cdot\nabla u,w\cdot\nabla b,w\cdot \nabla w)\|_{L^{2}}+\|\dot{\Delta}_{j}G\|_{L^{2}}\big{)}\|\dot{\Delta}_{j}(u, b,w)\|_{L^{2}},\end{split} \tag{2.12}\]
In order to obtain the dissipation of \(a\) and \(b\), we make use of \((\ref{2.7})_{1}\)-\((\ref{2.7})_{2}\) to have
\[\begin{split}&\frac{d}{dt}\big{(}\dot{\Delta}_{j}u\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}+\|\nabla\dot{\Delta}_{j}a\|_{L^{2}}^{2}- \|\text{div}\,\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}(u-w)- \Delta\dot{\Delta}_{j}u\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\\ &\leq\big{(}\|\text{div}\,\dot{\Delta}_{j}(u\cdot\nabla u)\|_{L^{ 2}}+\|\nabla\text{div}\,\dot{\Delta}_{j}(au)\|_{L^{2}}+\|\text{div}\,\dot{ \Delta}_{j}G\|_{L^{2}}\big{)}\|\dot{\Delta}_{j}(a,u)\|_{L^{2}},\end{split} \tag{2.13}\]
and
\[\begin{split}&\frac{d}{dt}\big{(}\dot{\Delta}_{j}w\ |\ \nabla\dot{\Delta}_{j}b\big{)}_{L^{2}}+\|\nabla\dot{\Delta}_{j}b\|_{L^{2}}^{2}- \|\text{div}\,\dot{\Delta}_{j}w\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}(w-u)\ |\ \dot{ \Delta}_{j}b\big{)}_{L^{2}}\\ &\quad\leq\|\text{div}\,\dot{\Delta}_{j}(w\cdot\nabla b,w\cdot \nabla w)\|_{L^{2}}\|\dot{\Delta}_{j}(b,w)\|_{L^{2}}.\end{split} \tag{2.14}\]
According to \((\ref{2.12})\)-\((\ref{2.14})\) and the Bernstein inequality, \((\ref{2.8})\) follows.
Then, we have the following low-frequency estimates of solutions to the Cauchy problem \((\ref{1.3})\).
**Lemma 2.2**.: _Let \(T>0\) be any given time, and \((a,u,b,w)\) be any strong solution to the Cauchy problem \((\ref{1.3})\) for \(t\in(0,T)\). Then it holds_
\[\begin{split}&\|(a,u,b,w)\|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{ 2,1}^{\frac{d}{2}-1})}^{\ell}+\|(a,u,b,w)\|_{L_{1}^{\ell}(\dot{B}_{2,1}^{\frac{d }{2}+1})}^{\ell}\\ &\quad+\|u-w\|_{L_{1}^{\ell}(\dot{B}_{2,1}^{\frac{d}{2}})}^{ \ell}+\|u-w\|_{\widetilde{L}_{t}^{2}(\dot{B}_{2,1}^{\frac{d}{2}-1})}^{\ell}\\ &\lesssim\mathcal{X}_{0}+\mathcal{X}^{2}(t),\quad t\in(0,T),\end{split} \tag{2.15}\]
_where \(\mathcal{X}_{0}\) and \(\mathcal{X}(t)\) are defined though \((\ref{2.7})\) and \((\ref{2.4})\), respectively._
**Proof.** Recall that \(\mathcal{E}_{1,j}(t)\) and \(\mathcal{D}_{1,j}(t)\) given by \((\ref{2.9})\) satisfy the Lyapunov type inequality \((\ref{2.8})\). One can show for any \(j\leq 0\) that
\[(\frac{1}{2}-\frac{4}{3}\eta_{1})\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}}^{2}\leq \mathcal{E}_{1,j}(t)\leq(\frac{1}{2}+\frac{4}{3}\eta_{1})\|\dot{\Delta}_{j}(a,u,b, w)\|_{L^{2}}^{2}, \tag{2.16}\]
\[\begin{split}\mathcal{D}_{1,j}(t)&\geq\frac{9}{16}2^{2j} \|\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}^{2}+\eta_{ 1}(\frac{9}{32}\|\dot{\Delta}_{j}(a,b)\|_{L^{2}}^{2}\\ &\qquad-C2^{2j}\|\dot{\Delta}_{j}u\|_{L^{2}}^{2}-C\|\dot{\Delta}_ {j}w\|_{L^{2}}^{2}-C\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}^{2})\\ &\geq(\frac{9}{16}2^{2j}-C\eta_{1})\|\dot{\Delta}_{j}u\|_{L^{2}}^ {2}+(1-C\eta_{1})\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}^{2}\\ &\quad-C\eta_{1}\|\dot{\Delta}_{j}w\|_{L^{2}}^{2}+\frac{9\eta_{1 }}{32}\|\dot{\Delta}_{j}(a,b)\|_{L^{2}}^{2},\end{split} \tag{2.17}\]
where \(C>1\) denotes a sufficiently large constant independent of time. Choosing a sufficiently small constant \(\eta_{1}\in(0,1)\), we deduce by (2.16)-(2.17) for any \(j\leq 0\) that
\[\mathcal{E}_{1,j}(t)\sim\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}}^{2}, \tag{2.18}\]
and
\[\begin{split}\mathcal{D}_{1,j}(t)&\gtrsim 2^{2j}\| \dot{\Delta}_{j}u\|_{L^{2}}^{2}+\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}^{2}-\eta_{1} \|\dot{\Delta}_{j}w\|_{L^{2}}^{2}+\eta_{1}\|\dot{\Delta}_{j}(a,b)\|_{L^{2}}^{2 }\\ &\gtrsim 2^{2j}\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}}^{2},\end{split} \tag{2.19}\]
where in the last inequality one has used the key fact
\[\|\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}^{2}\geq \frac{1}{2}\|\dot{\Delta}_{j}w\|_{L^{2}}^{2}. \tag{2.20}\]
By (2.8) and (2.18)-(2.19), the following inequality holds:
\[\begin{split}&\frac{d}{dt}\mathcal{E}_{1,j}(t)+2^{2j}\mathcal{ E}_{1,j}(t)\\ &\lesssim\big{(}\|\dot{\Delta}_{j}(2^{j}au,u\cdot\nabla u,w\cdot \nabla b,w\cdot\nabla w)\|_{L^{2}}+\|\dot{\Delta}_{j}G\|_{L^{2}}\big{)}\sqrt{ \mathcal{E}_{1,j}(t)},\quad j\leq 0.\end{split} \tag{2.21}\]
Thence we divide (2.21) by \(\big{(}\mathcal{E}_{1,j}(t)+\varepsilon_{*}^{2}\big{)}^{\frac{1}{2}}\) for \(\varepsilon_{*}>0\), integrate the resulting inequality over \([0,t]\) and then take the limit as \(\varepsilon_{*}\to 0\) to obtain
\[\begin{split}&\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}}+2^{2j}\int_{0}^ {t}\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}}d\tau\\ &\lesssim\|\dot{\Delta}_{j}(a_{0},u_{0},b_{0},w_{0})\|_{L^{2}} \\ &\quad+\int_{0}^{t}\big{(}\|\dot{\Delta}_{j}(2^{j}au,u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{L^{2}}+\|\dot{\Delta}_{j}G\|_{L^{2}}\big{)} d\tau.\end{split} \tag{2.22}\]
Multiplying (2.22) by \(2^{j(\frac{d}{2}-1)}\), taking the supremum on \([0,t]\) and then summing over \(j\leq 0\), we have
\[\begin{split}&\|(a,u,b,w)\|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{2, 1}^{\frac{d}{2}-1})}^{\ell}+\|(a,u,b,w)\|_{L_{1}^{1}(\dot{B}_{2,1}^{\frac{d}{2} +1})}^{\ell}\\ &\quad\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{\dot{B}_{2,1}^{\frac{ d}{2}-1}}^{\ell}+\|au\|_{L_{1}^{1}(\dot{B}_{2,1}^{\frac{d}{2}})}^{\ell}+\|(u\cdot \nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{L_{1}^{1}(\dot{B}_{2,1}^{\frac{d}{2} -1})}^{\ell}+\|G\|_{L_{1}^{1}(\dot{B}_{2,1}^{\frac{d}{2}-1})}^{\ell}.\end{split} \tag{2.23}\]
From (4.1) and the definition of \(\mathcal{X}(t)\), it is easy to check that
\[\begin{split}&\left\{\begin{split}&\|a\|_{\widetilde{L}_{t}^{ \infty}(\dot{B}_{2,1}^{\frac{d}{2}-1}\cap\dot{B}_{2,1}^{\frac{d}{2}})}+\|u \|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}-1})}+\|(b,w)\|_{ \widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}-1}\cap\dot{B}_{2,1}^{ \frac{d}{2}+1})}\lesssim\mathcal{X}(t),\\ &\|(u,b,w)\|_{L_{1}^{1}(\dot{B}_{2,1}^{\frac{d}{2}+1})}+\|(a,u,b, w)\|_{\widetilde{L}_{t}^{2}(\dot{B}_{2,1}^{\frac{d}{2}})}+\|u-w\|_{\widetilde{L}_{t}^{ 2}(\dot{B}_{2,1}^{\frac{d}{2}-1})}\lesssim\mathcal{X}(t).\end{split}\right.\end{split} \tag{2.24}\]
To simplify calculations, we will employ the estimates (2.24) frequently to control the nonlinear terms on the right-hand side of (2.23). It follows by (2.24)\({}_{2}\) and (4.3) that
\[\|au\|_{L^{1}_{t}(B^{\frac{d}{2}}_{2,1})}\lesssim\|a\|_{\widetilde{L}^{2}_{t}(B^ {\frac{d}{2}}_{2,1})}\|u\|_{\widetilde{L}^{2}_{t}(B^{\frac{d}{2}}_{2,1})} \lesssim\mathcal{X}^{2}(t). \tag{2.25}\]
Due to (2.24)\({}_{2}\) and (4.4), it also holds
\[\|(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{L^{1}_{t}(B^{\frac{d}{2}-1 }_{2,1})}\lesssim\|(u,b,w)\|^{2}_{\widetilde{L}^{2}_{t}(B^{\frac{d}{2}}_{2,1} )}\lesssim\mathcal{X}^{2}(t). \tag{2.26}\]
To handle the nonlinear term \(G\), we obtain from (2.4), (2.6) and the continuity of composition functions in Lemma 4.4 that
\[\|(g(a),f(a))\|_{B^{\frac{d}{2}}_{2,1}}\lesssim\|a\|_{\dot{B}^{\frac{d}{2}}_{2, 1}},\quad\|h(a,b)\|_{\dot{B}^{\frac{d}{2}}_{2,1}}\lesssim\|a\|_{\dot{B}^{\frac {d}{2}}_{2,1}}\|b\|_{\dot{B}^{\frac{d}{2}}_{2,1}}+\|a\|_{\dot{B}^{\frac{d}{2}}_ {2,1}}+\|b\|_{\dot{B}^{\frac{d}{2}}_{2,1}}\lesssim\|(a,b)\|_{\dot{B}^{\frac{d}{ 2}}_{2,1}},\]
which together with (2.24) and (4.3)-(4.4) yields
\[\begin{split}\|G\|_{L^{1}_{t}(B^{\frac{d}{2}-1}_{2,1})}& \lesssim\|a\|^{2}_{\widetilde{L}^{2}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}+\|a\|_{ \widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}\|u\|_{L^{1}_{t}(\dot{ B}^{\frac{d}{2}+1}_{2,1})}\\ &+\|(a,b)\|_{\widetilde{L}^{1}_{t}(B^{\frac{d}{2}}_{2,1})}\|u-w \|_{\widetilde{L}^{2}_{t}(B^{\frac{d}{2}-1}_{2,1})}\\ &\lesssim\mathcal{X}^{2}(t).\end{split} \tag{2.27}\]
We substitute (2.25)-(2.27) into (2.23) to get
\[\begin{split}&\|(a,u,b,w)\|^{\ell}_{\widetilde{L}^{\infty}_{t}( \dot{B}^{\frac{d}{2}-1}_{2,1})}+\|(a,u,b,w)\|^{\ell}_{L^{1}_{t}(\dot{B}^{\frac {d}{2}+1}_{2,1})}\\ &\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|^{\ell}_{\dot{B}^{\frac{d} {2}-1}_{2,1}}+\mathcal{X}^{2}(t).\end{split} \tag{2.28}\]
In addition, applying the operator \(\dot{\Delta}_{j}\) to (1.18), taking the \(L^{2}\)-inner product of the resulting equation with \(\dot{\Delta}_{j}(u-w)\) and then using the Bernstein inequality, we derive
\[\begin{split}&\frac{d}{dt}\|\dot{\Delta}_{j}(u-w)\|^{2}_{L^{2}}+ \|\dot{\Delta}_{j}(u-w)\|^{2}_{L^{2}}\\ &\lesssim\big{(}2^{j}\|\dot{\Delta}_{j}(a,u,b)\|_{L^{2}}+\|\dot{ \Delta}_{j}(u\cdot\nabla u,w\cdot\nabla w)\|_{L^{2}}+\|\dot{\Delta}_{j}G\|_{L ^{2}}\big{)}\|\dot{\Delta}_{j}(u-w)\|_{L^{2}},\quad j\leq 0,\end{split} \tag{2.29}\]
which gives rise to
\[\begin{split}&\|\dot{\Delta}_{j}(u-w)\|^{2}_{L^{2}}+\int_{0}^{t} \|\dot{\Delta}_{j}(u-w)\|^{2}_{L^{2}}d\tau\\ &\lesssim\|\dot{\Delta}_{j}(u_{0},w_{0})\|^{2}_{L^{2}}+2^{j}\| \dot{\Delta}_{j}(a,u,b)\|_{L^{2}_{t}(L^{2})}\|\dot{\Delta}_{j}(u-w)\|_{L^{2}_{ t}(L^{2})}\\ &\quad+\big{(}\|\dot{\Delta}_{j}(u\cdot\nabla u,w\cdot\nabla w) \|_{L^{1}_{t}(L^{2})}+\|\dot{\Delta}_{j}G\|_{L^{1}_{t}(L^{2})}\big{)}\|\dot{ \Delta}_{j}(u,w)\|_{L^{\infty}_{t}(L^{2})}\\ &\leq C\big{(}\|\dot{\Delta}_{j}(u_{0},w_{0})\|^{2}_{L^{2}}+\| \dot{\Delta}_{j}(u,w)\|^{2}_{L^{\infty}_{t}(L^{2})}+2^{2j}\|\dot{\Delta}_{j}(a, u,b)\|^{2}_{L^{2}_{t}(L^{2})}\big{)}\\ &\quad+C\big{(}\|\dot{\Delta}_{j}(u\cdot\nabla u,w\cdot\nabla w) \|^{2}_{L^{1}_{t}(L^{2})}+\|\dot{\Delta}_{j}G\|^{2}_{L^{1}_{t}(L^{2})}\big{)}+ \frac{1}{2}\int_{0}^{t}\|\dot{\Delta}_{j}(u-w)\|^{2}_{L^{2}}d\tau,\quad j\leq 0. \end{split} \tag{2.30}\]
Multiplying (2.30) by \(2^{j(\frac{d}{2}-1)}\) and thence summing it over \(j\leq 0\), we have by (2.26)-(2.28) and (4.2) that
\[\begin{split}\|u-w\|^{\ell}_{\widetilde{L}_{t}^{2}(\hat{B}_{2,1} ^{\frac{d}{2}-1})}&\lesssim\|(u_{0},w_{0})\|^{\ell}_{\hat{B}_{2,1 }^{\frac{d}{2}-1}}+\|(u,w)\|^{\ell}_{\widetilde{L}_{t}^{\infty}(\hat{B}_{2,1} ^{\frac{d}{2}-1})}+\|(a,u,b)\|^{\ell}_{\widetilde{L}_{t}^{2}(\hat{B}_{2,1}^{ \frac{d}{2}})}\\ &\quad+\|(u\cdot\nabla u,w\cdot\nabla w)\|^{\ell}_{L_{1}^{1}( \hat{B}_{2,1}^{\frac{d}{2}-1})}+\|G\|^{\ell}_{L_{t}^{1}(\hat{B}_{2,1}^{\frac{ d}{2}-1})}\\ &\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|^{\ell}_{\hat{B}_{2,1}^{ \frac{d}{2}-1}}+\mathcal{X}^{2}(t).\end{split} \tag{2.31}\]
Note that the inequality (2.29) also implies for \(j\leq 0\) that
\[\begin{split}\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}+\int_{0}^{t}\| \dot{\Delta}_{j}(u-w)\|_{L^{2}}d\tau\\ &\quad\lesssim 2^{-j}\|\dot{\Delta}_{j}(u_{0},w_{0})\|_{L^{2}}+ \int_{0}^{t}\big{(}2^{j}\|\dot{\Delta}_{j}(a,u,b)\|_{L^{2}}+2^{-j}\|\dot{ \Delta}_{j}(u\cdot\nabla u,w\cdot\nabla w)\|_{L^{2}}+2^{-j}\|\dot{\Delta}_{j} G\|_{L^{2}}\big{)}d\tau.\end{split}\]
Therefore, it holds that
\[\begin{split}\|u&-w\|^{\ell}_{L_{1}^{1}(\hat{B}_{2,1 }^{\frac{d}{2}})}\\ &\lesssim\|(u_{0},w_{0})\|^{\ell}_{\hat{B}_{2,1}^{\frac{d}{2}-1} }+\|(a,u,b)\|^{\ell}_{L_{t}^{1}(\hat{B}_{2,1}^{\frac{d}{2}+1})}+\|(u\cdot \nabla u,w\cdot\nabla w)\|^{\ell}_{L_{t}^{1}(\hat{B}_{2,1}^{\frac{d}{2}-1})}+ \|G\|^{\ell}_{L_{t}^{1}(\hat{B}_{2,1}^{\frac{d}{2}-1})}\\ &\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|^{\ell}_{\hat{B}_{2,1}^{ \frac{d}{2}-1}}+\mathcal{X}^{2}(t).\end{split} \tag{2.32}\]
By (2.28) and (2.31)-(2.32), we prove (2.15).
### High-frequency analysis
In this subsection, we estimate solutions to the Cauchy problem (1.3) in the high-frequency region \(\{\xi\in\mathbb{R}^{d}\ |\ |\xi|\geq\frac{3}{8}\}\). To this end, we show a high-frequency Lyapunov type inequality of (2.7).
**Lemma 2.3**.: _Let \((a,u,b,w)\) be any strong solution to the Cauchy problem (1.3). Then, it holds for any \(j\geq-1\) that_
\[\begin{split}&\frac{d}{dt}\mathcal{E}_{2,j}(t)+\mathcal{D}_{2,j}(t) \\ &\lesssim\big{(}2^{j}\|\dot{\Delta}_{j}(au)\|_{L^{2}}+\|\dot{ \Delta}_{j}(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{L^{2}}+\|\dot{ \Delta}_{j}G\|_{L^{2}}+\|\mathrm{div}\,u\|_{L^{\infty}}\|\nabla\dot{\Delta}_{j }a\|_{L^{2}}\\ &\quad+\|\nabla\dot{\Delta}_{j}(a\mathrm{div}\,u)\|_{L^{2}}+\|[u \cdot\nabla,\dot{\Delta}_{j}]u\|_{L^{2}}+\sum_{k=1}^{d}\|[u\cdot\nabla,\partial _{k}\dot{\Delta}_{j}]a\|_{L^{2}})\|\dot{\Delta}_{j}(a,\nabla a,u,b,w)\|_{L^{2}},\end{split} \tag{2.33}\]
_where \(\mathcal{E}_{2,j}(t)\) and \(\mathcal{D}_{2,j}(t)\) are defined by_
\[\begin{split}\begin{cases}\mathcal{E}_{2,j}(t)&:=\frac{1} {2}\|\dot{\Delta}_{j}(a,u,b,w)\|^{2}_{L^{2}}+\eta_{2}\Big{(}\frac{1}{2}\|\nabla \dot{\Delta}_{j}a\|^{2}_{L^{2}}+\big{(}\dot{\Delta}_{j}u\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\Big{)}\\ &\quad+\eta_{2}2^{-2j}\Big{(}\dot{\Delta}_{j}w\ |\ \nabla\dot{\Delta}_{j}b \Big{)}_{L^{2}},\\ \mathcal{D}_{2,j}(t)&:=\|\nabla\dot{\Delta}_{j}u\|^{2}_{L ^{2}}+\|\dot{\Delta}_{j}(u-w)\|^{2}_{L^{2}}\\ &\quad+\eta_{2}\big{(}\|\nabla\dot{\Delta}_{j}a\|^{2}_{L^{2}}-\| \mathrm{div}\,\dot{\Delta}_{j}u\|^{2}_{L^{2}}+\big{(}\dot{\Delta}_{j}(u-w)\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\Big{)}\\ &\quad+\eta_{2}2^{-2j}\Big{(}\|\nabla\dot{\Delta}_{j}b\|^{2}_{L ^{2}}-\|\mathrm{div}\,\dot{\Delta}_{j}w\|^{2}_{L^{2}}+\big{(}\dot{\Delta}_{j}(w- u)\ |\ \nabla\dot{\Delta}_{j}b\big{)}_{L^{2}}\Big{)},\end{cases}\end{split} \tag{2.34}\]
_with \(\eta_{2}\in(0,1)\) a constant to be determined._
**Proof.** One can show from (2.7)\({}_{1}\)-(2.7)\({}_{2}\) that
\[\begin{split}&\frac{d}{dt}\Big{(}\frac{1}{2}\|\nabla\dot{\Delta}_{j }a\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}u\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\Big{)}\\ &\quad+\|\nabla\dot{\Delta}_{j}a\|_{L^{2}}^{2}-\|\mathrm{div}\, \dot{\Delta}_{j}u\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}(u-w)\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\\ &=-\big{(}\nabla\mathrm{div}\,\dot{\Delta}_{j}(au)\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}-\big{(}\dot{\Delta}_{j}(u\cdot\nabla u) \ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\\ &\quad+\big{(}\nabla\mathrm{div}\,\dot{\Delta}_{j}(au)\ |\ \dot{\Delta}_{j}u\big{)}_{L^{2}}+\big{(}\dot{\Delta}_{j}G\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}.\end{split} \tag{2.35}\]
Thence we decompose the following two nonlinearities as
\[\begin{cases}\partial_{k}\mathrm{div}\,\dot{\Delta}_{j}(au)=-[u\cdot\nabla, \partial_{k}\dot{\Delta}_{j}]a+u\cdot\nabla\partial_{k}\dot{\Delta}_{j}a+ \partial_{k}\dot{\Delta}_{j}(a\mathrm{div}\,u),\quad k=1,...,d,\\ \dot{\Delta}_{j}(u\cdot\nabla u)=-[u\cdot\nabla,\dot{\Delta}_{j}]u+u \cdot\nabla\dot{\Delta}_{j}u,\end{cases}\]
so that it holds
\[\begin{split}&\big{|}\big{(}\nabla\mathrm{div}\,\dot{\Delta}_{j }(au)\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\big{|}\\ &\quad=\big{|}-\sum_{k=1}^{d}\big{(}[u\cdot\nabla,\partial_{k} \dot{\Delta}_{j}]a\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}-\frac{1}{2}\big{(}\mathrm{div}\,u \nabla\Delta_{j}a\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}+\big{(}\nabla\dot{\Delta}_{j}(a \mathrm{div}\,u)\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\big{|}\\ &\quad\leq\big{(}\sum_{k=1}^{d}\|[u\cdot\nabla,\partial_{k} \dot{\Delta}_{j}]a\|_{L^{2}}+\frac{1}{2}\|\mathrm{div}\,u\|_{L^{\infty}}\| \nabla\dot{\Delta}_{j}a\|_{L^{2}}+\|\nabla\dot{\Delta}_{j}(a\mathrm{div}\,u) \|_{L^{2}}\big{)}\|\nabla\dot{\Delta}_{j}a\|_{L^{2}},\end{split} \tag{2.36}\]
and
\[\begin{split}&\big{|}-\big{(}\dot{\Delta}_{j}(u\cdot\nabla u) \ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}+\big{(}\nabla\mathrm{div}\,\dot{\Delta}_{ j}(au)\ |\ \dot{\Delta}_{j}u\big{)}_{L^{2}}\big{|}\\ &\quad=\big{|}\big{(}u\cdot\nabla\dot{\Delta}_{j}u\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}+\big{(}u\cdot\nabla\dot{\Delta}_{j} \nabla a\ |\ \dot{\Delta}_{j}u\big{)}_{L^{2}}+\big{(}\nabla\dot{\Delta}_{j}(a\mathrm{div}\,u) \ |\ \dot{\Delta}_{j}u\big{)}_{L^{2}}\\ &\quad\quad-\big{(}[u\cdot\nabla,\dot{\Delta}_{j}]u\ |\ \nabla\dot{\Delta}_{j}a \big{)}_{L^{2}}-\sum_{k=1}^{d}\big{(}[u\cdot\nabla,\partial_{k}\dot{\Delta}_{ j}]a\ |\ \dot{\Delta}_{j}u\big{)}_{L^{2}}\big{|}\\ &\quad\leq\big{(}\|\mathrm{div}\,u\|_{L^{\infty}}\|\dot{\Delta}_{ j}u\|_{L^{2}}+\|\nabla\dot{\Delta}_{j}(a\mathrm{div}\,u)\|_{L^{2}}+\|[u\cdot \nabla\dot{\Delta}_{j}]u\|_{L^{2}}\|\nabla\dot{\Delta}_{j}a\|_{L^{2}}\\ &\quad\quad+\sum_{k=1}^{d}\|[u\cdot\nabla,\partial_{k}\dot{\Delta }_{j}]a\|_{L^{2}})\|\dot{\Delta}_{j}(\nabla a,u)\|_{L^{2}}.\end{split} \tag{2.37}\]
The combination of (2.35)-(2.37) gives rise to
\[\begin{split}&\frac{d}{dt}\big{(}\frac{1}{2}\|\nabla\dot{\Delta}_{ j}a\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}u\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\big{)}+\|\nabla\dot{\Delta}_{j}a\|_{L^{2}}^{ 2}-\|\mathrm{div}\,\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}(u-w )\ |\ \nabla\dot{\Delta}_{j}a\big{)}_{L^{2}}\\ &\lesssim\big{(}\|\dot{\Delta}_{j}G\|_{L^{2}}+\|\mathrm{div}\,u\|_{ L^{\infty}}\|\nabla\dot{\Delta}_{j}a\|_{L^{2}}+\|\nabla\dot{\Delta}_{j}(a \mathrm{div}\,u)\|_{L^{2}}+\|[u\cdot\nabla,\dot{\Delta}_{j}]u\|_{L^{2}}\\ &\quad+\sum_{k=1}^{d}\|[u\cdot\nabla,\partial_{k}\dot{\Delta}_{j}]a \|_{L^{2}})\|\dot{\Delta}_{j}(u,\nabla a)\|_{L^{2}}.\end{split} \tag{2.38}\]
By virtue of (2.12), (2.14), (2.38), the Bernstein inequality and the fact \(2^{-j}\leq 2\), (2.33) holds.
Furthermore, the following Lyapunov-type inequality is used to derive the higher order estimates for \((b,w)\) in (1.3)\({}_{3}\)-(1.3)\({}_{4}\).
**Lemma 2.4**.: _Let \((a,u,b,w)\) be any strong solution to the Cauchy problem \(\eqref{eq:1.3}\). Then, it holds for any \(j\geq-1\) that_
\[\begin{split}&\frac{d}{dt}\mathcal{E}_{3,j}(t)+\mathcal{D}_{3,j}(t) \\ &\lesssim\|\dot{\Delta}_{j}u\|_{L^{2}}\sqrt{\mathcal{E}_{3,j}(t)} \\ &\quad+\big{(}\|\mathrm{div}\,w\|_{L^{\infty}}\|\dot{\Delta}_{j}(b,w)\|_{L^{2}}+2^{-j}\|\dot{\Delta}_{j}(w\cdot\nabla b,w\cdot\nabla w)\|_{L^{2} }\\ &\quad+\|[w\cdot\nabla,\dot{\Delta}_{j}](b,w)\|_{L^{2}}\big{)} \sqrt{\mathcal{E}_{3,j}(t)},\end{split} \tag{2.39}\]
_where \(\mathcal{E}_{3,j}(t)\) and \(\mathcal{D}_{3,j}(t)\) are defined by_
\[\begin{cases}\mathcal{E}_{3,j}(t):=\frac{1}{2}\|\dot{\Delta}_{j}(b,w)\|_{L^{ 2}}^{2}+\eta_{3}2^{-2j}\big{(}\dot{\Delta}_{j}w\ |\ \nabla\dot{\Delta}_{j}b\big{)}_{L^{2}},\\ \mathcal{D}_{3,j}(t):=\|\dot{\Delta}_{j}w\|_{L^{2}}^{2}+\eta_{3}2^{-2j} \Big{(}\|\nabla\dot{\Delta}_{j}b\|_{L^{2}}^{2}-\|\mathrm{div}\,\dot{\Delta}_{j }w\|_{L^{2}}^{2}+\big{(}\dot{\Delta}_{j}(w-u)\ |\ \dot{\Delta}_{j}b\big{)}_{L^{2}}\Big{)},\end{cases} \tag{2.40}\]
_for a constant \(\eta_{3}\in(0,1)\) to be chosen._
**Proof.** According to \(\eqref{eq:1.3}\)-\(\eqref{eq:1.3}_{4}\), we have
\[\begin{split}&\frac{1}{2}\frac{d}{dt}\|\dot{\Delta}_{j}(b,w)\|_{L ^{2}}^{2}+\|\dot{\Delta}_{j}w\|_{L^{2}}^{2}\\ &\quad\leq\big{(}\|\dot{\Delta}_{j}u\|_{L^{2}}+\frac{1}{2}\| \mathrm{div}\,w\|_{L^{\infty}}\|\dot{\Delta}_{j}(b,w)\|_{L^{2}}+\|[w\cdot \nabla,\dot{\Delta}_{j}](b,w)\|_{L^{2}}\big{)}\|\dot{\Delta}_{j}(b,w)\|_{L^{2}},\end{split}\]
which together with \(\eqref{eq:1.3}\) leads to \(\eqref{eq:1.3}\).
Finally, we are ready to establish the expected high-frequency estimates of solutions to the Cauchy problem \(\eqref{eq:1.3}\).
**Lemma 2.5**.: _Let \(T>0\) be any given time, and \((a,u,b,w)\) be any strong solution to the Cauchy problem \(\eqref{eq:1.3}\) for \(t\in(0,T)\) satisfying \(\eqref{eq:1.9}\) and \(\eqref{eq:1.6}\). Then it holds_
\[\begin{split}&\|a\|_{\widetilde{L}_{t}^{\infty}(B_{2,1}^{\frac{d}{ 4}})}^{h}+\|u\|_{\widetilde{L}_{t}^{\infty}(B_{2,1}^{\frac{d}{4}-1})}^{h}+\|(b,w)\|_{\widetilde{L}_{t}^{\infty}(B_{2,1}^{\frac{d}{4}+1})}^{h}\\ &\qquad+\|a\|_{L_{1}^{1}(B_{2,1}^{\frac{d}{4}})}^{h}+\|(u,b,w)\|_ {L_{1}^{1}(B_{2,1}^{\frac{d}{4}+1})}^{h}\\ &\lesssim\mathcal{X}_{0}+\mathcal{X}^{2}(t),\quad t\in(0,T),\end{split} \tag{2.41}\]
_where \(\mathcal{X}_{0}\) and \(\mathcal{X}(t)\) are defined though \(\eqref{eq:1.7}\) and \(\eqref{eq:2.4}\), respectively._
**Proof.** We recall that the Lyapunov type inequality \(\eqref{eq:1.3}\) holds for \(\mathcal{E}_{2,j}(t)\) and \(\mathcal{D}_{2,j}(t)\) given by \(\eqref{eq:2.34}\). It is easy to verify for any \(j\geq-1\) that
\[(\frac{1}{2}-\eta_{2})\|\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\frac{\eta_{2}}{4}\| \nabla\dot{\Delta}_{j}a\|_{L^{2}}^{2}\leq\mathcal{E}_{2,j}(t)\leq(\frac{1}{2}+ \eta_{2})\|\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\frac{3\eta_{2}}{4}\|\nabla\dot{ \Delta}_{j}a\|_{L^{2}}^{2}, \tag{2.42}\]
and
\[\begin{split}\mathcal{D}_{2,j}(t)&\geq\frac{9}{16}2^{2j} \|\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}^{2}\\ &\quad+\eta_{2}\big{(}\frac{1}{2}\|\nabla\dot{\Delta}_{j}a\|_{L^{ 2}}^{2}-\frac{64}{9}2^{2j}\|\dot{\Delta}_{j}u\|_{L^{2}}^{2}-\frac{1}{2}\|\dot{ \Delta}_{j}(u-w)\|_{L^{2}}^{2}\big{)}\\ &\quad+\eta_{2}2^{-2j}\big{(}\frac{9}{32}2^{2j}\|\dot{\Delta}_{j}b \|_{L^{2}}^{2}-\frac{64}{9}2^{2j}\|\dot{\Delta}_{j}w\|_{L^{2}}^{2}-\frac{1}{2} \|\dot{\Delta}_{j}(w-u)\|_{L^{2}}^{2}\big{)}\\ &\geq(\frac{9}{64}-\frac{64}{9}\eta_{2})2^{2j}\|\dot{\Delta}_{j}u \|_{L^{2}}^{2}+(1-\frac{5}{2}\eta_{2})\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}^{2}- \frac{64}{9}\eta_{2}\|\dot{\Delta}_{j}w\|_{L^{2}}\\ &\quad+\frac{\eta_{2}}{2}\|\nabla\dot{\Delta}_{j}a\|_{L^{2}}^{2}+ \frac{9\eta_{2}}{32}\|\dot{\Delta}_{j}b\|_{L^{2}}^{2}.\end{split} \tag{2.43}\]
By (2.20) and (2.42)-(2.43), for any \(j\geq-1\), one can choose a sufficiently small constant \(\eta_{2}\in(0,1)\) so that we have
\[\mathcal{E}_{2,j}(t)\sim\|\dot{\Delta}_{j}(a,\nabla a,u,b,w)\|_{L^{2}}^{2}, \tag{2.44}\]
and
\[\begin{split}\mathcal{D}_{2,j}(t)&\gtrsim\|\dot{ \Delta}_{j}u\|_{L^{2}}^{2}+\|\dot{\Delta}_{j}(u-w)\|_{L^{2}}^{2}-\eta_{2}\| \dot{\Delta}_{j}w\|_{L^{2}}^{2}\\ &\quad+\eta_{2}\|\nabla\dot{\Delta}_{j}a\|_{L^{2}}^{2}+\eta_{2}\| \dot{\Delta}_{j}b\|_{L^{2}}^{2}\\ &\gtrsim\|\dot{\Delta}_{j}(a,\nabla a,u,b,w)\|_{L^{2}}^{2},\end{split} \tag{2.45}\]
where in the last inequality one has used (2.20). Combining (2.33) and (2.44)-(2.45) together, we show
\[\begin{split}&\frac{d}{dt}\mathcal{E}_{2,j}(t)+\mathcal{E}_{2,j}(t) \\ &\lesssim\big{(}2^{j}\|\dot{\Delta}_{j}(au)\|_{L^{2}}+\|\dot{ \Delta}_{j}(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{L^{2}}+\|\dot{ \Delta}_{j}G\|_{L^{2}}+\|\mathrm{div}\,u\|_{L^{\infty}}\|\nabla\dot{\Delta}_{j }a\|_{L^{2}}\\ &\quad+\|\nabla\dot{\Delta}_{j}(a\mathrm{div}\,u)\|_{L^{2}}+\|[u \cdot\nabla,\dot{\Delta}_{j}]u\|_{L^{2}}+\sum_{k=1}^{d}\|[u\cdot\nabla,\partial _{k}\dot{\Delta}_{j}]a\|_{L^{2}}\big{)}\sqrt{\mathcal{E}_{2,j}(t)},\qquad j \geq-1.\end{split} \tag{2.46}\]
By similar arguments as in (2.22)-(2.23), the inequality (2.46) implies
\[\begin{split}&\|(\nabla a,u,b,w)\|_{\widetilde{L}_{t}^{\infty}(B_{2,1}^{\frac{d}{2}-1})}^{h}+\|(\nabla a,u,b,w)\|_{L_{t}^{1}(B_{2,1}^{\frac{d}{2} -1})}^{h}\\ &\quad\lesssim\|(\nabla a_{0},b_{0},u_{0},w_{0})\|_{B_{2,1}^{ \frac{d}{2}-1}}^{h}\\ &\quad+\|au\|_{L_{t}^{1}(B_{2,1}^{\frac{d}{2}})}^{h}+\|(u\cdot \nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{L_{t}^{1}(B_{2,1}^{\frac{d}{2}-1})} ^{h}\\ &\quad+\|G\|_{L_{t}^{1}(B_{2,1}^{\frac{d}{2}-1})}^{h}+\|\mathrm{ div}\,u\|_{L_{t}^{1}(L^{\infty})}\|a\|_{\widetilde{L}_{t}^{\infty}(B_{2,1}^{\frac{d}{2} })}^{h}+\|a\mathrm{div}\,u\|_{L_{t}^{1}(B_{2,1}^{\frac{d}{2}})}^{h}\\ &\quad+\sum_{j\geq-1}2^{j(\frac{d}{2}-1)}\big{(}\|[u\cdot\nabla, \dot{\Delta}_{j}]u\|_{L_{t}^{1}(L^{2})}+\sum_{k=1}^{d}\|[u\cdot\nabla,\partial _{k}\dot{\Delta}_{j}]a\|_{L_{t}^{1}(L^{2})}\big{)}.\end{split} \tag{2.47}\]
To estimate the right-hand side of (2.47), it holds by (2.24) that
\[\begin{split}&\|a\mathrm{div}\,u\|_{L_{t}^{1}(B_{2,1}^{\frac{d}{2} })}+\|\mathrm{div}\,u\|_{L_{t}^{1}(L^{\infty})}\|a\|_{\widetilde{L}_{t}^{\infty }(B_{2,1}^{\frac{d}{2}})}^{h}\\ &\quad\lesssim\|a\|_{\widetilde{L}_{t}^{\infty}(B_{2,1}^{\frac{d}{2 }})}\|u\|_{L_{t}^{1}(B_{2,1}^{\frac{d}{2}+1})}\lesssim\mathcal{X}^{2}(t).\end{split} \tag{2.48}\]
Making use of the commutator estimates (4.7)-(4.8), we also have
\[\left\{\begin{aligned} \sum_{j\in\mathbb{Z}}2^{j(\frac{d}{2}-1)}& \|[u\cdot\nabla,\dot{\Delta}_{j}]u\|_{L^{1}_{t}(L^{2})}+\sum_{j\in \mathbb{Z}}2^{j(\frac{d}{2}-1)}\sum_{k=1}^{d}\|[u\cdot\nabla,\partial_{k}\dot {\Delta}_{j}]a\|_{L^{1}_{t}(L^{2})}\\ &\lesssim\|u\|_{L^{1}_{t}(B^{\frac{d}{2}-1}_{2,1})}\|(a,u)\|_{ \widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}\lesssim\mathcal{X} ^{2}(t).\end{aligned}\right. \tag{2.49}\]
By (2.25)-(2.27) and (2.47)-(2.49), there holds
\[\begin{aligned} &\|a\|^{h}_{\widetilde{L}^{\infty}_{t}(B^{ \frac{d}{2}}_{2,1})}+\|(u,b,w)\|^{h}_{\widetilde{L}^{\infty}_{t}(\dot{B}^{ \frac{d}{2}-1}_{2,1})}+\|a\|^{h}_{L^{1}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}+\| (u,b,w)\|^{h}_{L^{1}_{t}(B^{\frac{d}{2}-1}_{2,1})}\\ &\lesssim\|a_{0}\|^{h}_{B^{\frac{d}{2}}_{2,1}}+\|(u_{0},b_{0},w_{ 0})\|^{h}_{B^{\frac{d}{2}-1}_{2,1}}+\mathcal{X}^{2}(t).\end{aligned} \tag{2.50}\]
Then applying Lemma 4.6 to the heat equation
\[\partial_{t}u-\Delta u=-\nabla a+w-u-u\cdot\nabla u+G, \tag{2.51}\]
we gain
\[\begin{aligned} &\|u\|^{h}_{\widetilde{L}^{\infty}_{t}(B^{ \frac{d}{2}-1}_{2,1})}+\|u\|^{h}_{L^{1}_{t}(B^{\frac{d}{2}+1}_{2,1})}\\ &\lesssim\|u_{0}\|^{h}_{B^{\frac{d}{2}-1}_{2,1}}+\|a\|^{h}_{L^{ 1}_{t}(B^{\frac{d}{2}}_{2,1})}+\|(u,w)\|^{h}_{L^{1}_{t}(B^{\frac{d}{2}-1}_{2,1 })}\\ &\quad+\|u\cdot\nabla u\|^{h}_{L^{1}_{t}(B^{\frac{d}{2}-1}_{2,1}) }+\|G\|^{h}_{L^{1}_{t}(B^{\frac{d}{2}-1}_{2,1})}\\ &\lesssim\|a_{0}\|^{h}_{B^{\frac{d}{2}}_{2,1}}+\|(u_{0},b_{0},w_{ 0})\|^{h}_{B^{\frac{d}{2}-1}_{2,1}}+\mathcal{X}^{2}(t),\end{aligned} \tag{2.52}\]
where in the last inequality one has used (2.26)-(2.27) and (2.50).
Next, we are going to obtain the \(\widetilde{L}^{\infty}_{t}(B^{\frac{d}{2}+1}_{2,1})\cap L^{1}_{t}(B^{\frac{d} {2}+1}_{2,1})\)-estimates of \((b,w)\). Let \(\mathcal{E}_{3,j}(t)\) and \(\mathcal{D}_{3,j}(t)\) be defined by (2.40). For any \(j\geq-1\), it is easy to verify that
\[(\frac{1}{2}-2\eta_{3})\|\dot{\Delta}_{j}(b,w)\|^{2}_{L^{2}}\leq\mathcal{E}_{ 3,j}(t)\leq(\frac{1}{2}+2\eta_{3})\|\dot{\Delta}_{j}(b,w)\|^{2}_{L^{2}}, \tag{2.53}\]
and
\[\mathcal{D}_{3,j}(t)\geq(1-C\eta_{3})\|\dot{\Delta}_{j}w\|^{2}_{L^{2}}+\frac{9 }{32}\eta_{3}\|\dot{\Delta}_{j}b\|^{2}_{L^{2}}-4\eta_{3}\|\dot{\Delta}_{j}u\|_{ L^{2}}\|\dot{\Delta}_{j}b\|_{L^{2}}, \tag{2.54}\]
where \(C>0\) is a constant independent of time. Choosing a suitably small constant \(\eta_{3}\in(0,1)\), for any \(j\geq-1\), we get by (2.39) and (2.53)-(2.54) that
\[\begin{aligned} &\frac{d}{dt}\mathcal{E}_{3,j}(t)+\mathcal{E}_{3,j}(t) \\ &\lesssim\big{(}\|\dot{\Delta}_{j}u\|_{L^{2}}+\|\mathrm{div}\,w \|_{L^{\infty}}\|\dot{\Delta}_{j}(b,w)\|_{L^{2}}\\ &\quad\quad+2^{-j}\|\dot{\Delta}_{j}(w\cdot\nabla b,w\cdot\nabla w )\|_{L^{2}}+\|[w\cdot\nabla,\dot{\Delta}_{j}](b,w)\|_{L^{2}})\sqrt{\mathcal{E }_{3,j}(t)}.\end{aligned} \tag{2.55}\]
With the help of (2.24), (2.52), (2.55), (4.7) and
\[\|(w\cdot\nabla b,w\cdot\nabla w)\|_{L^{1}_{t}(B^{\frac{d}{2}}_{2,1})}\lesssim \|w\|_{\widetilde{L}^{\infty}_{t}(B^{\frac{d}{2}}_{2,1})}\|(w,b)\|_{L^{1}_{t}(B^{ \frac{d}{2}+1}_{2,1})}\lesssim\mathcal{X}^{2}(t),\]
it holds
\[\begin{split}&\|(b,w)\|_{\widetilde{L}_{t}^{\infty}(B_{2,1}^{\frac{d }{2}+1})}^{h}+\|(b,w)\|_{L^{1}_{t}(B_{2,1}^{\frac{d}{2}+1})}^{h}\\ &\lesssim\|(b_{0},w_{0})\|_{B_{2,1}^{\frac{d}{2}+1}}^{h}+\|u\|_{L ^{1}_{t}(B_{2,1}^{\frac{d}{2}+1})}^{h}+\|\mathrm{div}\,w\|_{L^{\infty}_{t}(L^{ \infty})}\|(b,w)\|_{L^{1}_{t}(B_{2,1}^{\frac{d}{2}+1})}^{h}\\ &\quad+\|(w\cdot\nabla b,w\cdot\nabla w)\|_{L^{1}_{t}(B_{2,1}^{ \frac{d}{2}})}^{h}+\sum_{j\geq-1}2^{j(\frac{d}{2}+1)}\|[w\cdot\nabla,\dot{ \Delta}_{j}](b,w)\|_{L^{2}}\\ &\lesssim\|a_{0}\|_{B_{2,1}^{\frac{d}{2}}}^{h}+\|u_{0}\|_{B_{2,1} ^{\frac{d}{2}-1}}^{h}+\|(b_{0},w_{0})\|_{B_{2,1}^{\frac{d}{2}+1}}^{h}+\mathcal{ X}^{2}(t).\end{split} \tag{2.56}\]
The combination of (2.50), (2.52) and (2.56) leads to (2.41). The proof of Lemma 2.5 is completed.
## 3 Optimal time-decay rates
### The proof of Theorem 1.3
In this subsection, we show Theorem 1.3 on the optimal time-decay rates of the strong solution to the Cauchy problem (1.3) in the case that \(\|(a_{0},u_{0},b_{0},w_{0})^{\ell}\|_{\dot{B}_{2,\infty}^{\sigma_{0}}}\) is bounded.
First, in addition to (1.9), we have the following low-frequency estimates.
**Lemma 3.1**.: _Let \((a,u,b,w)\) be the global solution to the Cauchy problem (1.3) given by Theorem 1.2. Then, under the assumptions of Theorem 1.3, the following inequality holds:_
\[\begin{split}\mathcal{X}_{L,\sigma_{0}}(t):&=\|(a,u,b,w)\|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{2,\infty}^{\sigma_{0}})}^{\ell}+ \|(a,u,b,w)\|_{\widetilde{L}_{1}^{1}(\dot{B}_{2,\infty}^{\sigma_{0}+2})}^{\ell }\\ &\quad+\|u-w\|_{\widetilde{L}_{1}^{1}(\dot{B}_{2,\infty}^{\sigma_ {0}+1})}^{\ell}+\|u-w\|_{\widetilde{L}_{t}^{2}(\dot{B}_{2,\infty}^{\sigma_{0}} )}^{\ell}\leq C\delta_{0},\qquad t>0,\end{split} \tag{3.1}\]
_where \(\delta_{0}\) is defined by (1.12), and \(C>0\) is a constant independent of time._
**Proof.** Multiplying (2.22) by \(2^{\sigma_{0}j}\) and taking the supremum on both \([0,t]\) and \(j\leq 0\), we get
\[\begin{split}&\|(a,u,b,w)\|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{2,\infty}^{\sigma_{0}})}^{\ell}+\|(a,u,b,w)\|_{\widetilde{L}_{t}^{1}(\dot{B}_{2,\infty}^{\sigma_{0}+2})}^{\ell}\\ &\quad\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{B_{2,\infty}^{\sigma _{0}}}^{\ell}+\|au\|_{\widetilde{L}_{t}^{1}(\dot{B}_{2,\infty}^{\sigma_{0}+1}) }^{\ell}+\|(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{\widetilde{L}_{t} ^{1}(\dot{B}_{2,\infty}^{\sigma_{0}})}^{\ell}+\|G\|_{\widetilde{L}_{t}^{1}( \dot{B}_{2,\infty}^{\sigma_{0}})}^{\ell}.\end{split} \tag{3.2}\]
Arguing similarly as in Lemma 2.2, we deduce by (2.6), (4.1), (4.5) and Lemma 4.4 that
\[\begin{split}&\|(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{ \widetilde{L}_{t}^{1}(\dot{B}_{2,\infty}^{\sigma_{0}})}^{\ell}\\ &\quad\lesssim\|(u,w)\|_{\widetilde{L}_{t}^{2}(\dot{B}_{2,1}^{ \frac{d}{2}})}\|(u,b,w)\|_{\widetilde{L}_{t}^{2}(\dot{B}_{2,\infty}^{\sigma_{0 }+1})}\lesssim\mathcal{X}(t)\big{(}\mathcal{X}_{L,\sigma_{0}}(t)+\mathcal{X}(t )\big{)}.\end{split} \tag{3.3}\]
and
\[\begin{split}&\|G\|_{\widetilde{L}_{t}^{1}(\dot{B}_{2,\infty}^{ \sigma_{0}})}\lesssim\|a\|_{\widetilde{L}_{t}^{2}(\dot{B}_{2,1}^{\frac{d}{2}})} \|a\|_{\widetilde{L}_{t}^{2}(\dot{B}_{2,\infty}^{\sigma_{0}+1})}+\|a\|_{ \widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}})}\|u\|_{\widetilde{L} _{t}^{1}(\dot{B}_{2,\infty}^{\sigma_{0}+2})}\\ &\quad+\big{(}\|a\|_{\widetilde{L}_{t}^{2}(\dot{B}_{2,1}^{\frac{d} {2}})}\|b\|_{\widetilde{L}_{t}^{2}(\dot{B}_{2,1}^{\frac{d}{2}})}+\|a\|_{ \widetilde{L}_{t}^{2}(\dot{B}_{2,1}^{\frac{d}{2}})}+\|b\|_{\widetilde{L}_{t} ^{2}(\dot{B}_{2,1}^{\frac{d}{2}})}\big{)}\|u-w\|_{\widetilde{L}_{t}^{2}(\dot{B} _{2,\infty}^{\sigma_{0}})}\\ &\quad\lesssim\mathcal{X}(t)\big{(}\mathcal{X}_{L,\sigma_{0}}(t)+ \mathcal{X}(t)\big{)}.\end{split} \tag{3.4}\]
Substituting the estimates (3.3)-(3.4) into (3.2), we have
\[\|(a,u,b,w)\|_{\widetilde{L}^{\infty}_{t}(\mathring{B}^{\sigma_{0}}_{2,\infty})}^ {\ell}+\|(a,u,b,w)\|_{\widetilde{L}^{1}_{t}(\mathring{B}^{\sigma_{0}+2}_{2, \infty})}^{\ell}\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{\mathring{B}^{\sigma_{ 0}}_{2,\infty}}^{\ell}+\mathcal{X}(t)\big{(}\mathcal{X}_{L,\sigma_{0}}(t)+ \mathcal{X}(t)\big{)}. \tag{3.5}\]
Similarly to (2.29)-(2.32), one can have
\[\begin{split}&\|u-w\|_{\widetilde{L}^{1}_{t}(\mathring{B}^{ \sigma_{0}+1}_{2,\infty})}^{\ell}+\|u-w\|_{\widetilde{L}^{2}_{t}(\mathring{B}^ {\sigma_{0}}_{2,\infty})}^{\ell}\\ &\qquad\lesssim\|(u_{0},w_{0})\|_{\mathring{B}^{\sigma_{0}}_{2, \infty}}^{\ell}+\|(a,u,b)\|_{\widetilde{L}^{1}_{t}(\mathring{B}^{\sigma_{0}+2} _{2,\infty})}^{\ell}+\|(u\cdot\nabla u,w\cdot\nabla w)\|_{\widetilde{L}^{1}_{t }(\mathring{B}^{\sigma_{0}+2}_{2,\infty})}^{\ell}+\|G\|_{\widetilde{L}^{1}_{t }(\mathring{B}^{\sigma_{0}+2}_{2,\infty})}^{\ell}\\ &\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{\mathring{B}^{\sigma_{0} }_{2,\infty}}^{\ell}+\mathcal{X}(t)\big{(}\mathcal{X}_{L,\sigma_{0}}(t)+ \mathcal{X}(t)\big{)}.\end{split} \tag{3.6}\]
Thus, it follows by (3.3)-(3.6) that
\[\mathcal{X}_{L,\sigma_{0}}(t)\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{\mathring {B}^{\sigma_{0}}_{2,\infty}}^{\ell}+\mathcal{X}(t)\big{(}\mathcal{X}_{L, \sigma_{0}}(t)+\mathcal{X}(t)\big{)}.\]
Making use of (1.9), \(\mathcal{X}(t)\lesssim\mathcal{X}_{0}<<1\) and \(\|(a_{0},u_{0},b_{0},w_{0})\|_{\mathring{B}^{\sigma_{0}}_{2,\infty}}^{\ell}+ \mathcal{X}_{0}\sim\delta_{0}\), we prove (3.1). The proof of Proposition 3.1 is completed.
Next, we introduce a new time-weighted energy functional
\[\begin{split}\mathcal{X}_{\theta}(t)&:=\|\tau^{ \theta}(a,u,b,w)\|_{\widetilde{L}^{\infty}_{t}(\mathring{B}^{\frac{d}{2}-1}_{ 2,1})}^{\ell}+\|\tau^{\theta}(a,u,b,w)\|_{L^{1}_{t}(\mathring{B}^{\frac{d}{2} +1}_{2,1})}^{\ell}\\ &\qquad+\|\tau^{\theta}(u-w)\|_{L^{1}_{t}(\mathring{B}^{\frac{d} {2}}_{2,1})}^{\ell}+\|\tau^{\theta}(u-w)\|_{\widetilde{L}^{2}_{t}(\mathring{B }^{\frac{d}{2}-1}_{2,1})}^{\ell}\\ &\qquad+\|\tau^{\theta}(\nabla a,u)\|_{\widetilde{L}^{\infty}_{ t}(\mathring{B}^{\frac{d}{2}-1}_{2,1})}^{h}+\|\tau^{\theta}(b,w)\|_{\widetilde{L}^{ \infty}_{t}(\mathring{B}^{\frac{d}{2}+1}_{2,1})}^{h}+\|\tau^{\theta}a\|_{L^ {1}_{t}(\mathring{B}^{\frac{d}{2}}_{2,1})}^{h}+\|\tau^{\theta}(u,b,w)\|_{L^{1 }_{t}(\mathring{B}^{\frac{d}{2}+1}_{2,1})}^{h}.\end{split} \tag{3.7}\]
We have the time-weighted estimates of the solution \((a,u,b,w)\) to the Cauchy problem (1.3) for low frequencies.
**Lemma 3.2**.: _Let \((a,u,b,w)\) be the global solution to the Cauchy problem (1.3) given by Theorem 1.2. Then, under the assumptions of Theorem 1.3, for \(-\frac{d}{2}\leq\sigma_{0}<\frac{d}{2}-1\) and \(\theta>\frac{1}{2}(\frac{d}{2}+1-\sigma_{0})\), it holds_
\[\begin{split}&\|\tau^{\theta}(a,u,b,w)\|_{\widetilde{L}^{\infty}_{t }(\mathring{B}^{\frac{d}{2}-1}_{2,1})}^{\ell}+\|\tau^{\theta}(a,u,b,w)\|_{L^{1 }_{t}(\mathring{B}^{\frac{d}{2}+1}_{2,1})}^{\ell}+\|\tau^{\theta}(u-w)\|_{ \widetilde{L}^{1}_{t}(\mathring{B}^{\frac{d}{2}-1}_{2,1})}^{\ell}\\ &\qquad\lesssim\frac{\mathcal{X}(t)+\mathcal{X}_{L,\sigma_{0}}(t )}{\zeta}t^{\theta-\frac{1}{2}(\frac{d}{2}-1-\sigma_{0})}+\big{(}\zeta+ \mathcal{X}(t)\big{)}\mathcal{X}_{\theta}(t),\quad t>0,\end{split} \tag{3.8}\]
_where \(\mathcal{X}(t)\), \(\mathcal{X}_{L,\sigma_{0}}(t)\) and \(\mathcal{X}_{\theta}(t)\) are defined by (2.4), (3.1) and (3.7), respectively, and \(\zeta>0\) is a constant to be determined later._
**Proof.** We recall that \(\mathcal{E}_{1,j}(t)\) given by (2.9) satisfies the Lyapunov type inequality (2.21). Multiplying (2.21) by \(t^{\theta}\) and using the fact \(t^{\theta}\frac{d}{dt}\mathcal{E}_{1,j}(t)\)=\(\frac{d}{dt}\big{(}t^{\theta}\mathcal{E}_{1,j}(t)\big{)}-\theta t^{\theta-1} \mathcal{E}_{1,j}(t)\), we obtain
\[\begin{split}\frac{d}{dt}&\big{(}t^{\theta}\mathcal{E}_ {1,j}(t)\big{)}+t^{\theta}2^{2j}\mathcal{E}_{1,j}(t)\\ &\qquad\lesssim t^{\theta-1}\mathcal{E}_{1,j}(t)+t^{\theta}\big{(} 2^{j}\|\mathring{\Delta}_{j}(au)\|_{L^{2}}+\|\mathring{\Delta}_{j}(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{L^{2}}+\|\mathring{\Delta}_{j}G\|_{L^{2}} \big{)}\sqrt{\mathcal{E}_{1,j}(t)},\quad j\leq 0,\end{split}\]
which together with (2.18)-(2.19) and \(t^{\theta-1}\sqrt{\mathcal{E}_{1,j}(t)}\Big{|}_{t=0}=0\) yields for any \(j\leq 0\) that
\[t^{\theta}\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}}+2^{2j}\int_{0}^{t} \tau^{\theta}\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}}d\tau \tag{3.9}\] \[\qquad\lesssim\int_{0}^{t}\tau^{\theta-1}\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}}d\tau\] \[\qquad+\int_{0}^{t}\tau^{\theta}\big{(}2^{j}\|\dot{\Delta}_{j}(au) \|_{L^{2}}+\|\dot{\Delta}_{j}(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_ {L^{2}}+\|\dot{\Delta}_{j}G\|_{L^{2}}\big{)}d\tau.\]
Then we multiply (3.9) by \(2^{j(\frac{d}{2}-1)}\), take the supremum on \([0,t]\) and then sum over \(j\leq 0\) to have
\[\|\tau^{\theta}(a,u,b,w)\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{ \frac{d}{2}-1}_{2,1})}^{\ell}+\|\tau^{\theta}(a,u,b,w)\|_{L^{1}_{t}(\dot{B}^{ \frac{d}{2}+1}_{2,1})}^{\ell} \tag{3.10}\] \[\qquad\lesssim\int_{0}^{t}\tau^{\theta-1}\|(a,u,b,w)\|_{\dot{B}^ {\frac{d}{2}-1}_{2,1}}^{\ell}d\tau+\|\tau^{\theta}au\|_{L^{1}_{t}(\dot{B}^{ \frac{d}{2}}_{2,1})}^{\ell}\] \[\qquad\qquad+\|\tau^{\theta}(u\cdot\nabla u,w\cdot\nabla b,w\cdot \nabla w)\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{\ell}+\|\tau^{\theta}G \|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{\ell}.\]
To control the first term on the right-hand side of (3.10), we deduce from (4.1)-(4.2) that
\[\int_{0}^{t}\tau^{\theta-1}\|(a,u,b,w)^{\ell}\|_{\dot{B}^{\frac{ d}{2}-1}_{2,1}}d\tau \tag{3.11}\] \[\qquad\lesssim\int_{0}^{t}\tau^{\theta-1}\|(a,u,b,w)^{\ell}\|_{ \dot{B}^{\sigma_{0}}_{2,\infty}}^{1-\eta_{0}}\|(a,u,b,w)^{\ell}\|_{\dot{B}^{ \frac{d}{2}+1}_{2,\infty}}^{\eta_{0}}d\tau\] \[\qquad\lesssim\Big{(}\int_{0}^{t}\tau^{\theta-\frac{1}{1-\eta_{0 }}}d\tau\Big{)}^{1-\eta_{0}}\|(a,u,b,w)^{\ell}\|_{\widetilde{L}^{\infty}_{t}( \dot{B}^{\sigma_{0}}_{2,\infty})}^{1-\eta_{0}}\|\tau^{\theta}(a,u,b,w)^{\ell} \|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,\infty})}^{\eta_{0}}\] \[\qquad\lesssim\Big{(}t^{(\theta-\theta_{0})}\|(a,u,b,w)\|_{ \widetilde{L}^{\infty}_{t}(\dot{B}^{\sigma_{0}}_{2,\infty})}^{\ell}\Big{)}^{( 1-\eta_{0})}\Big{(}\|\tau^{\theta}(a,u,b,w)\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+ 1}_{2,1})}^{\ell}\Big{)}^{\eta_{0}},\]
for the constant \(\eta_{0}\in(0,1)\) given by
\[\frac{d}{2}-1=\eta_{0}(\frac{d}{2}+1)+\sigma_{0}(1-\eta_{0}). \tag{3.12}\]
Taking the advantage of (4.1) and the dissipative properties of \((a,u,b,w)\) for high frequencies, it is easy to verify that
\[\int_{0}^{t}\tau^{\theta-1}\|a^{h}\|_{\dot{B}^{\frac{d}{2}-1}_{2,1 }}d\tau \tag{3.13}\] \[\qquad\lesssim\Big{(}\int_{0}^{t}\tau^{(\theta-1-\theta\eta_{0}) \frac{1}{1-\eta_{0}}}d\tau\Big{)}^{1-\eta_{0}}\Big{(}\|a\|_{\widetilde{L}^{ \infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{h}\Big{)}^{1-\eta_{0}}\Big{(}\| \tau^{\theta}a\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{h}\Big{)}^{\eta_{ 0}}\] \[\qquad\lesssim\Big{(}t^{(\theta-\theta_{0})}\|a\|_{\widetilde{L}^{ \infty}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}^{h}\Big{)}^{1-\eta_{0}}\Big{(}\| \tau^{\theta}a\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}^{h}\Big{)}^{\eta_{0}},\]
and
\[\int_{0}^{t}\tau^{\theta-1}\|u^{h}\|_{\dot{B}^{\frac{d}{2}-1}_{2,1 }}d\tau \tag{3.14}\] \[\qquad\lesssim\Big{(}\int_{0}^{t}\tau^{(\theta-1-\theta_{0})\frac{ 1}{1-\eta_{0}}}d\tau\Big{)}^{1-\eta_{0}}\Big{(}\|u\|_{\widetilde{L}^{\infty}_{t }(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{h}\Big{)}^{1-\eta_{0}}\Big{(}\|\tau^{\theta} u\|_{\widetilde{L}^{1}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{h}\Big{)}^{ \eta_{0}}\] \[\qquad\lesssim\Big{(}t^{(\theta-\theta_{0})}\|u\|_{\widetilde{L}^{ \infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{h}\Big{)}^{1-\eta_{0}}\Big{(}\| \tau^{\theta}u\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}^{h}\Big{)}^{\eta_{0}}.\]
Similarly, one has
\[\begin{split}&\int_{0}^{t}\tau^{\theta-1}\|(b,w)^{h}\|_{\dot{B}^{ \frac{d}{2}-1}_{2,1}}d\tau\\ &\quad\lesssim\Big{(}t^{(\theta-\theta_{0})}\|(b,w)\|^{h}_{\widetilde {L}^{\alpha}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}\Big{)}^{1-\eta_{0}}\Big{(}\| \tau^{\theta}(b,w)\|^{h}_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}\Big{)}^{ \eta_{0}}.\end{split} \tag{3.15}\]
By (3.11)-(3.15) and Young's inequality, we have for any constant \(\zeta>0\) that
\[\begin{split}&\int_{0}^{t}\tau^{\theta-1}\|(a,u,b,w)\|^{\ell}_{ \dot{B}^{\frac{d}{2}-1}_{2,1}}d\tau\\ &\quad\lesssim\int_{0}^{t}\tau^{\theta-1}\big{(}\|(a,u,b,w)^{ \ell}\|_{\dot{B}^{\frac{d}{2}-1}_{2,1}}+\|(a,u,b,w)^{h}\|_{\dot{B}^{\frac{d}{2 }-1}_{2,1}}\big{)}d\tau\\ &\quad\lesssim\frac{\mathcal{X}(t)+\mathcal{X}_{L,\sigma_{0}}(t)} {\zeta}t^{\theta-\frac{1}{2}(\frac{d}{2}-1-\sigma_{0})}+\zeta\mathcal{X}_{ \theta}(t).\end{split} \tag{3.16}\]
By similar arguments as used in Lemma 2.2, the nonlinearities on the right-hand side of (3.10) can be estimated by
\[\begin{split}&\|\tau^{\theta}au\|_{L^{1}_{t}(\dot{B}^{\frac{d}{ 2}}_{2,1})}+\|\tau^{\theta}(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{L^ {1}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}\\ &\quad\lesssim\|(a,u,w)\|_{\widetilde{L}^{2}_{t}(\dot{B}^{\frac {d}{2}}_{2,1})}\|\tau^{\theta}(u,b,w)\|_{\widetilde{L}^{2}_{t}(\dot{B}^{\frac{ d}{2}}_{2,1})}\lesssim\mathcal{X}(t)\mathcal{X}_{\theta}(t),\end{split} \tag{3.17}\]
and
\[\begin{split}&\|\tau^{\theta}G\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}- 1}_{2,1})}\\ &\quad\lesssim\|a\|_{\widetilde{L}^{2}_{t}(\dot{B}^{\frac{d}{2} }_{2,1})}\|\tau^{\theta}a\|_{\widetilde{L}^{2}_{t}(\dot{B}^{\frac{d}{2}}_{2,1 })}+\|a\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}\|\tau^{ \theta}u\|_{L^{1}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}\\ &\quad+\|\tau^{\theta}(a,b)\|_{\widetilde{L}^{2}_{t}(\dot{B}^{ \frac{d}{2}}_{2,1})}\|u-w\|_{\widetilde{L}^{2}_{t}(\dot{B}^{\frac{d}{2}-1}_{2, 1})}\lesssim\mathcal{X}(t)\mathcal{X}_{\theta}(t).\end{split} \tag{3.18}\]
Substituting (3.16)-(3.18) into (3.10) and using Young's inequality, we get for any \(\zeta>0\) that
\[\begin{split}&\|\tau^{\theta}(a,u,b,w)\|^{\ell}_{\widetilde{L}^{ \infty}(\dot{B}^{\frac{d}{2}-1}_{2,1})}+\|\tau^{\theta}(a,u,b,w)\|^{\ell}_{L^{1 }_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}\\ &\quad\lesssim\frac{\mathcal{X}(t)+\mathcal{X}_{L,\sigma_{0}}(t)} {\zeta}t^{\theta-\frac{1}{2}(\frac{d}{2}-1-\sigma_{0})}+\big{(}\zeta+\mathcal{ X}(t)\big{)}\mathcal{X}_{\theta}(t).\end{split} \tag{3.19}\]
In addition, we multiply (2.29) by \(t^{2\theta}\) to have
\[\begin{split}&\frac{d}{dt}\big{(}t^{2\theta}\|\dot{\Delta}_{j}(u-w)\|^{ 2}_{L^{2}}\big{)}+t^{2\theta}\|\dot{\Delta}_{j}(u-w)\|^{2}_{L^{2}}\\ &\lesssim t^{\theta-1}\|\dot{\Delta}_{j}(u,w)\|_{L^{2}}t^{\theta} \|\dot{\Delta}_{j}(u-w)\|_{L^{2}}\\ &\quad+t^{\theta}\big{(}2^{j}\|\dot{\Delta}_{j}(a,u,b)\|_{L^{2}}+ \|\dot{\Delta}_{j}(u\cdot\nabla u,w\cdot\nabla w)\|_{L^{2}}+\|\dot{\Delta}_{j}G \|_{L^{2}}\big{)}t^{\theta}\|\dot{\Delta}_{j}(u-w)\|_{L^{2}},\quad j\leq 0,\end{split} \tag{3.20}\]
Similarly to (2.30)-(2.32), one obtains by (3.16) and (3.19)-(3.20) that
\[\begin{split}&\|\tau^{\theta}(u-w)\|_{\widetilde{L}_{t}^{\infty}( \dot{B}_{2,1}^{\frac{d}{2}-1})}^{\ell}+\|\tau^{\theta}(u-w)\|_{L_{t}^{1}(\dot {B}_{2,1}^{\frac{d}{2}})}^{\ell}\\ &\lesssim\int_{0}^{t}\tau^{\theta-1}\|(u,w)\|_{\dot{B}_{2,1}^{ \frac{d}{2}-1}}^{\ell}d\tau\\ &\quad+\|\tau^{\theta}(a,u,b,w)\|_{\widetilde{L}_{t}^{\infty}( \dot{B}_{2,1}^{\frac{d}{2}-1})}^{\ell}+\|\tau^{\theta}(a,u,b)\|_{\widetilde{L} _{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}})}^{\ell}\\ &\quad+\|\tau^{\theta}(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w )\|_{L_{t}^{1}(\dot{B}_{2,1}^{\frac{d}{2}-1})}^{\ell}+\|\tau^{\theta}G\|_{L_ {t}^{1}(\dot{B}_{2,1}^{\frac{d}{2}-1})}^{\ell}\\ &\lesssim\frac{\mathcal{X}(t)+\mathcal{X}_{L,\sigma_{0}}(t)}{ \zeta}t^{\theta-\frac{1}{2}(\frac{d}{2}-1-\sigma_{0})}+\big{(}\zeta+\mathcal{ X}(t)\big{)}\mathcal{X}_{\theta}(t).\end{split} \tag{3.21}\]
By (3.19) and (3.21), we prove (3.8). The proof of Lemma 3.2 is completed.
Then, we show the time-weighted estimates of the solution \((a,u,b,w)\) to the Cauchy problem (1.3) for high frequencies.
**Lemma 3.3**.: _Let \((a,u,b,w)\) be the global solution to the Cauchy problem (1.3) given by Theorem 1.2. Then, under the assumptions of Theorem 1.3, for \(-\frac{d}{2}\leq\sigma_{0}<\frac{d}{2}-1\) and \(\theta>\frac{1}{2}(\frac{d}{2}+1-\sigma_{0})\), it holds_
\[\begin{split}&\|\tau^{\theta}a\|_{\widetilde{L}_{t}^{\infty}( \dot{B}_{2,1}^{\frac{d}{2}})}^{h}+\|\tau^{\theta}u\|_{\widetilde{L}_{t}^{\infty }(\dot{B}_{2,1}^{\frac{d}{2}-1})}^{h}+\|\tau^{\theta}(b,w)\|_{\widetilde{L}_{ t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}+1})}^{h}\\ &\quad+\|\tau^{\theta}a\|_{L_{t}^{1}(\dot{B}_{2,1}^{\frac{d}{2}}) }^{h}+\|\tau^{\theta}(u,b,w)\|_{L_{t}^{1}(\dot{B}_{2,1}^{\frac{d}{2}+1})}^{h} \\ &\lesssim\frac{\mathcal{X}(t)}{\zeta}t^{\theta-\frac{1}{2}(\frac{ d}{2}-1-\sigma_{0})}+\big{(}\zeta+\mathcal{X}(t)\big{)}\mathcal{X}_{\theta}(t), \quad t>0,\end{split} \tag{3.22}\]
_where \(\zeta>0\) is a constant to be determined later, and \(\mathcal{X}(t)\) and \(\mathcal{X}_{\theta}(t)\) are defined by (2.4) and (3.7), respectively._
**Proof.** Let \(\mathcal{E}_{2,j}(t)\) be denoted by (2.34). For any \(j\geq-1\), we show after multiplying the Lyapunov type inequality (2.46) by \(t^{\theta}\) that
\[\begin{split}&\frac{d}{dt}\big{(}t^{\theta}\mathcal{E}_{2,j}(t) \big{)}+t^{\theta}\mathcal{E}_{2,j}(t)\\ &\lesssim t^{\theta-1}\mathcal{E}_{2,j}(t)+t^{\theta}\big{(}2^{j} \|\dot{\Delta}_{j}(au)\|_{L^{2}}+\|\dot{\Delta}_{j}(u\cdot\nabla u,w\cdot \nabla b,w\cdot\nabla w)\|_{L^{2}}+\|\dot{\Delta}_{j}G\|_{L^{2}}\\ &\quad+\|\mathrm{div}\,u\|_{L^{\infty}}\|\nabla\dot{\Delta}_{j}a \|_{L^{2}}+\|\nabla\dot{\Delta}_{j}(a\mathrm{div}\,u)\|_{L^{2}}+\|[u\cdot \nabla,\dot{\Delta}_{j}]u\|_{L^{2}}+\sum_{k=1}^{d}\|[u\cdot\nabla,\partial_{k} \dot{\Delta}_{j}]a\|_{L^{2}})\sqrt{\mathcal{E}_{2,j}(t)}.\end{split}\]
Then we multiply (2.51) by \(t^{\theta}\) to get
\[\partial_{t}(t^{\theta}u)-\Delta(t^{\theta}u)=\theta t^{\theta-1}u-\nabla(t^{ \theta}a)+t^{\theta}w-t^{\theta}u-t^{\theta}u\cdot\nabla u+t^{\theta}G, \tag{3.27}\]
with the initial data \(t^{\theta}u|_{t=0}=0\). By virtue of Lemma 4.6 for (3.27), we have
\[\|\tau^{\theta}u\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}+1}_{2,1})}^{h} \lesssim\int_{0}^{t}\tau^{\theta-1}\|u\|_{\hat{B}^{\frac{d}{2}-1}_ {2,1}}^{h}d\tau+\|\tau^{\theta}a\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}}_{2,1})}^{h} +\|\tau^{\theta}(w,u)\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}-1}_{2,1})}^{h}\] \[\quad+\|\tau^{\theta}(u\cdot\nabla u)\|_{L^{1}_{t}(\hat{B}^{ \frac{d}{2}-1}_{2,1})}^{h}+\|\tau^{\theta}G\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}- 1}_{2,1})}^{h},\]
which together with (3.13), (3.17)-(3.18), (3.24) and (3.26) gives rise to
\[\|\tau^{\theta}u\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}+1}_{2,1})}^{h}\lesssim \frac{\mathcal{X}(t)}{\zeta}t^{\theta-\frac{1}{2}(\frac{d}{2}-1-\sigma_{0})}+ \zeta\mathcal{X}_{\theta}(t)+\mathcal{X}(t)\mathcal{X}_{\theta}(t). \tag{3.28}\]
Finally, we multiply the inequality ((2.39)) by \(t^{\theta}\) to obtain
\[\begin{split}&\frac{d}{dt}\big{(}t^{\theta}\mathcal{E}_{3,j}(t) \big{)}+t^{\theta}\mathcal{E}_{3,j}(t)\\ &\lesssim t^{\theta-1}\mathcal{E}_{3,j}(t)\\ &\quad+t^{\theta}\Big{(}\|\dot{\Delta}_{j}u\|_{L^{2}}+\|\mathrm{ div}\,w\|_{L^{\infty}}\|\dot{\Delta}_{j}(b,w)\|_{L^{2}}+2^{-j}\|\dot{\Delta}_{j}(w \cdot\nabla b,w\cdot\nabla w)\|_{L^{2}}\\ &\quad+\|[w\cdot\nabla,\dot{\Delta}_{j}](b,w)\|_{L^{2}}\Big{)} \sqrt{\mathcal{E}_{3,j}(t)},\qquad j\geq-1.\end{split} \tag{3.29}\]
Therefore, the above inequality (3.29) as well as (2.53) implies
\[\begin{split}&\|\tau^{\theta}(b,w)\|_{\tilde{L}^{\infty}_{t}( \hat{B}^{\frac{d}{2}+1}_{2,1})}^{h}+\|\tau^{\theta}(b,w)\|_{L^{1}_{t}(\hat{B}^ {\frac{d}{2}+1}_{2,1})}^{h}\\ &\quad\lesssim\int_{0}^{t}\tau^{\theta-1}\|(b,w)\|_{\hat{B}^{ \frac{d}{2}+1}_{2,1}}^{h}d\tau+\|\tau^{\theta}u\|_{L^{1}_{t}(\hat{B}^{\frac{d }{2}+1}_{2,1})}^{h}\\ &\quad+\|\mathrm{div}\,w\|_{L^{\infty}_{t}(L^{\infty})}\|\tau^{ \theta}(b,w)\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}+1}_{2,1})}^{h}+\|\tau^{\theta} (w\cdot\nabla b,w\cdot\nabla w)\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}}_{2,1})}^{h }\\ &\quad+\int_{0}^{t}\sum_{j\geq-1}2^{j(\frac{d}{2}+1)}\|[w\cdot \nabla,\dot{\Delta}_{j}]\tau^{\theta}(b,w)\|_{L^{2}}d\tau.\end{split} \tag{3.30}\]
The right-hand side of (3.30) can be controlled below. As in (3.11), one can get
\[\int_{0}^{t}\tau^{\theta-1}\|(b,w)\|_{\hat{B}^{\frac{d}{2}+1}_{2,1}}^{h}d\tau \lesssim\frac{\mathcal{X}(t)}{\zeta}t^{\theta-\frac{1}{2}(\frac{d}{2}-1- \sigma_{0})}+\zeta\mathcal{X}_{\theta}(t). \tag{3.31}\]
Due to (4.4), we get
\[\begin{split}&\|\mathrm{div}\,w\|_{L^{\infty}_{t}(L^{\infty})}\| \tau^{\theta}(b,w)\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}+1}_{2,1})}^{h}+\|\tau^{ \theta}(w\cdot\nabla b,w\cdot\nabla w)\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}}_{2,1 })}\\ &\quad\lesssim\|w\|_{\tilde{L}^{\infty}_{t}(\hat{B}^{\frac{d}{2} +1}_{2,1})}\|\tau^{\theta}(b,w)\|_{L^{1}_{t}(\hat{B}^{\frac{d}{2}+1}_{2,1})} \lesssim\mathcal{X}(t)\mathcal{X}_{\theta}(t).\end{split} \tag{3.32}\]
By (4.7)\({}_{1}\), it also holds
\[\int_{0}^{t}\sum_{j\in\mathbb{Z}}2^{j(\frac{d}{2}+1)}\|[w\cdot\nabla,\dot{ \Delta}_{j}]\tau^{\theta}(b,w)\|_{L^{2}}d\tau\lesssim\|w\|_{L^{1}_{t}(\hat{B}^ {\frac{d}{2}+1}_{2,1})}\|\tau^{\theta}(b,w)\|_{\tilde{L}^{\infty}_{t}(\hat{B}^ {\frac{d}{2}+1}_{2,1})}\lesssim\mathcal{X}(t)\mathcal{X}_{\theta}(t). \tag{3.33}\]
Thence it follows by (3.28)-(3.33) that
\[\begin{split}&\|\tau^{\theta}(b,w)\|_{\tilde{L}^{\infty}_{t}( \hat{B}^{\frac{d}{2}+1}_{2,1})}^{h}+\|\tau^{\theta}(b,w)\|_{L^{1}_{t}(\hat{B}^{ \frac{d}{2}+1}_{2,1})}^{h}\\ &\quad\lesssim\frac{\mathcal{X}(t)}{\zeta}t^{\theta-\frac{1}{2}( \frac{d}{2}-1-\sigma_{0})}+\big{(}\zeta+\mathcal{X}(t)\big{)}\mathcal{X}_{ \theta}(t).\end{split} \tag{3.34}\]
The combination of (3.26), (3.28) and (3.34) leads to (3.22), and the proof of Lemma 3.3 is completed.
_Proof of Theorem 1.3:_ Let the assumptions of Theorem 1.3 hold, and \((a,u,b,w)\) be the global solution to the Cauchy problem (1.3) given by Theorem 1.2. By (1.9), (3.1) and (3.7), there holds
\[\begin{cases}\|(a,u,b,w)(t)\|_{\dot{B}_{2,1}^{\frac{d}{2}-1}}^{\ell}\lesssim \mathcal{X}(t)<<1,\\ \|a(t)\|_{\dot{B}_{2,1}^{\frac{d}{2}}}^{h}+\|u(t)\|_{\dot{B}_{2,1}^{\frac{d}{2 }-1}}^{h}+\|(b,w)(t)\|_{\dot{B}_{2,1}^{\frac{d}{2}+1}}^{h}\lesssim\mathcal{X}( t)<<1,\\ \|(a,u,b,w)(t)\|_{\dot{B}_{2,\infty}^{\sigma_{0}}}^{\ell}\lesssim\mathcal{X}_{L,\sigma_{0}}(t)\lesssim\delta_{0},\\ t^{\theta}\|(a,u,b,w)(t)\|_{\dot{B}_{2,1}^{\frac{d}{2}-1}}^{\ell}\lesssim \mathcal{X}_{\theta}(t),\\ t^{\theta}\big{(}\|a(t)\|_{\dot{B}_{2,1}^{\frac{d}{2}}}^{h}+\|u(t)\|_{\dot{B} _{2,1}^{\frac{d}{2}-1}}^{h}+\|(b,w)(t)\|_{\dot{B}_{2,1}^{\frac{d}{2}+1}}^{h} \big{)}\lesssim\mathcal{X}_{\theta}(t),\end{cases} \tag{3.35}\]
where \(\mathcal{X}(t)\), \(\mathcal{X}_{L,\sigma_{0}}(t)\) and \(\mathcal{X}_{\theta}(t)\) are defined by (2.4), (3.1) and (3.7), respectively.
For any \(\theta>\frac{1}{2}(\frac{d}{2}+1-\sigma_{0})>1\), we obtain by Lemmas 3.2-3.3 that
\[\mathcal{X}_{\theta}(t)\lesssim\frac{\mathcal{X}(t)+\mathcal{X}_{L,\sigma_{0}} (t)}{\zeta}t^{\theta-\frac{1}{2}(\frac{d}{2}-1-\sigma_{0})}+\big{(}\zeta+ \mathcal{X}(t)\big{)}\mathcal{X}_{\theta}(t),\quad t>0. \tag{3.36}\]
Choosing a suitably small constant \(\zeta>0\) in (3.36) and employing \(\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq
Then it follows from \(\eqref{eq:3.35}_{2}\), \(\eqref{eq:3.38}\) and the interpolation inequality (4.2) that
\[\begin{split}&\|(a,u,b,w)^{\ell}(t)\|_{\dot{B}^{\sigma}_{2,1}}\\ &\lesssim\|(a,u,b,w)^{\ell}(t)\|_{\dot{B}^{\sigma_{0}}_{2,\infty} }^{\frac{\dot{\sigma}-1-\sigma}{\dot{\sigma}-1-\sigma_{0}}}\|(a,u,b,w)^{\ell} (t)\|_{\dot{B}^{\frac{\sigma-\sigma_{0}}{2}-1-\sigma_{0}}_{2,1}}^{\frac{\sigma -\sigma_{0}}{2}}\lesssim\delta_{0}(1+t)^{-\frac{1}{2}(\sigma-\sigma_{0})}, \qquad\sigma\in(\sigma_{0},\frac{d}{2}-1).\end{split} \tag{3.40}\]
By (3.38)-(3.40), the optimal time-decay estimates in \(\eqref{eq:1.11}_{1}\)-\(\eqref{eq:1.11}_{2}\) hold.
Furthermore, we show that the relative velocity \(u-w\) satisfies the faster time-decay rate in \(\eqref{eq:1.11}_{3}\). The equation (1.18) can be re-written as
\[\begin{split} u-w&=e^{-2t}(u_{0}-w_{0})\\ &\quad+\int_{0}^{t}e^{-2(\tau-t)}\big{(}-\nabla a+\Delta u+ \nabla b-u\cdot\nabla u+w\cdot\nabla w+G\big{)}d\tau.\end{split} \tag{3.41}\]
We take the low-frequency \(\dot{B}^{\sigma_{0}}_{2,\infty}\)-norm of (3.41) and make use of (1.11) to get
\[\begin{split}&\|(u-w)(t)\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}^{ \ell}\\ &\lesssim e^{-2t}\|(u_{0},w_{0})\|_{\dot{B}^{\sigma_{0}}_{2,\infty }}^{\ell}\\ &\quad+\int_{0}^{t}e^{-2(t-\tau)}\big{(}\|(a,u,b)\|_{\dot{B}^{ \sigma_{0}+1}_{2,\infty}}^{\ell}+\|(u\cdot\nabla u,w\cdot\nabla w)\|_{\dot{B}^ {\sigma_{0}}_{2,\infty}}^{\ell}+\|G\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}^{\ell }\big{)}d\tau,\quad t>0.\end{split} \tag{3.42}\]
Employing (1.11), we have
\[\int_{0}^{t}e^{-2(t-\tau)}\|(a,u,b)\|_{\dot{B}^{\sigma_{0}+1}_{2,\infty}}^{ \ell}d\tau\lesssim(1+t)^{-\min\{\frac{1}{2},\frac{1}{2}(\frac{d}{2}-1-\sigma_{ 0})\}}. \tag{3.43}\]
In addition, it holds by (1.9), (3.1), (3.38) and (4.5) that
\[\begin{split}&\int_{0}^{t}e^{-2(t-\tau)}\|(u\cdot\nabla u,w \cdot\nabla w)\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}^{\ell}+\|G\|_{\dot{B}^{ \sigma_{0}}_{2,\infty}}^{\ell}\big{)}d\tau\\ &\lesssim\int_{0}^{t}e^{-2(t-\tau)}\big{(}\|(a,u,w)\|_{\dot{B}^{ \frac{d}{2}}_{2,1}}\|(a,u,w)\|_{\dot{B}^{\sigma_{0}+1}_{2,1}}+\|(a,b)\|_{\dot{ B}^{\frac{d}{2}}_{2,1}}\|u-w\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}\big{)}d\tau\\ &\lesssim\Big{(}\int_{0}^{t}e^{-4(t-\tau)}\|(a,u,w)\|_{\dot{B}^{ \sigma_{0}+1}_{2,1}}^{2}d\tau\Big{)}^{\frac{1}{2}}\|(a,u,w)\|_{\dot{\widetilde {L}}^{2}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}\\ &\quad+\Big{(}\int_{0}^{t}e^{-4(t-\tau)}\big{(}\|(a,b)\|_{\dot{B} ^{\frac{d}{2}}_{2,1}}^{\ell}+\|a\|_{\dot{B}^{\frac{d}{2}}_{2,1}}^{h}+\|b\|_{ \dot{B}^{\frac{d}{2}+1}_{2,1}}^{h}\big{)}^{2}d\tau\Big{)}^{\frac{1}{2}}\|u-w \|_{\widetilde{L}^{2}_{t}(\dot{B}^{\sigma_{0}}_{2,\infty})}\\ &\lesssim(1+t)^{-\min\{\frac{1}{2},\frac{1}{2}(\frac{d}{2}-1- \sigma_{0})\}}.\end{split} \tag{3.44}\]
By (3.42)-(3.44), \(\eqref{eq:1.11}_{3}\) follows. Since (1.13) can be proved in a similar way, we omit the details. The proof of Theorem 1.3 is completed.
### The proof of Theorem 1.4
In this subsection, we prove Theorem 1.4 on the time-decay estimates of the global solution to the Cauchy problem (1.3) in the case that \(\|(a_{0},u_{0},b_{0},w_{0})^{\ell}\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}\) is sufficiently small. In what follows, we
need to use the following elementary inequality frequently:
\[\int_{0}^{t}\langle t-\tau\rangle^{-\frac{1}{2}(\sigma-\sigma_{0})}\langle\tau \rangle^{-\sigma_{1}}d\tau\lesssim\langle t\rangle^{-\frac{1}{2}(\sigma-\sigma_{ 0})},\qquad 0\leq\frac{1}{2}(\sigma-\sigma_{0})\leq\sigma_{1},\quad\sigma_{1}>1. \tag{3.45}\]
Define the time-weighted energy functional
\[\begin{split}\mathcal{Z}(t):&=\sup_{\sigma\in[ \sigma_{0}+\varepsilon,\frac{d}{2}+1]}\|\langle\tau\rangle^{\frac{1}{2}( \sigma-\sigma_{0})}(a,u,b,w)\|_{L^{\infty}_{t}(\dot{B}^{\sigma}_{2,1})}^{ \ell}\\ &\quad+\|\langle\tau\rangle^{\frac{1}{2}}(u-w)\|_{L^{\infty}_{t} (\dot{B}^{\sigma_{0}}_{2,\infty})}^{\ell}+\sup_{\sigma\in[\sigma_{0}+ \varepsilon,\frac{d}{2}]}\|\langle\tau\rangle^{\frac{1}{2}(1+\sigma-\sigma_{ 0})}(u-w)\|_{L^{\infty}_{t}(\dot{B}^{\sigma}_{2,1})}^{\ell}\\ &\quad+\|\langle\tau\rangle^{\alpha}a\|_{\widetilde{L}^{\infty} _{t}(\dot{B}^{\frac{d}{2}}_{2,1})}^{h}+\|\langle\tau\rangle^{\alpha}u\|_{ \widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{h}+\|\tau^{\alpha} u\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}^{h}+\| \langle\tau\rangle^{\alpha}(b,w)\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac {d}{2}+1}_{2,1})}^{h},\end{split} \tag{3.46}\]
with \(\langle t\rangle:=(1+t^{2})^{\frac{1}{2}}\) and \(\alpha:=\frac{1}{2}(d+1-2\sigma_{0}-2\varepsilon)\) for a sufficiently small constant \(\varepsilon\in(0,1]\).
First, we have the time-weighted estimates of the solution \((a,u,b,w)\) to the Cauchy problem (1.3) for low frequencies.
**Lemma 3.4**.: _Let \((a,u,b,w)\) be the global solution to the Cauchy problem (1.3) given by Theorem 1.2. Then, under the assumptions of Theorem 1.4, it holds_
\[\begin{split}&\sup_{\sigma\in[\sigma_{0}+\varepsilon,\frac{d}{2}+1 ]}\|\langle\tau\rangle^{\frac{1}{2}(\sigma-\sigma_{0})}(a,u,b,w)\|_{L^{\infty }_{t}(\dot{B}^{\sigma}_{2,1})}^{\ell}\\ &\qquad\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{\dot{B}^{\sigma_{0 }}_{2,\infty}}^{\ell}+\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t),\quad t>0,\end{split} \tag{3.47}\]
_where \(\mathcal{X}(t)\) and \(\mathcal{Z}(t)\) are defined by (2.4) and (3.46), respectively._
**Proof.** Applying the Gronwall inequality to (2.21), we get
\[\begin{split}&\|\dot{\Delta}_{j}(a,u,b,w)\|_{L^{2}}\\ &\qquad\lesssim e^{-2^{2j}t}\|\dot{\Delta}_{j}(a_{0},u_{0},b_{0}, w_{0})\|_{L^{2}}\\ &\qquad+\int_{0}^{t}e^{-2^{2j}(t-\tau)}\big{(}\|\dot{\Delta}_{j}( 2^{j}(au),u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{L^{2}}+\|\dot{ \Delta}_{j}G\|_{L^{2}}\big{)}d\tau,\end{split} \tag{3.48}\]
which implies for any \(\sigma>\sigma_{0}\) that
\[\begin{split}&\|(a,u,b,w)\|_{\dot{B}^{\sigma}_{2,1}}^{\ell}\\ &\qquad\lesssim\langle t\rangle^{-\frac{1}{2}(\sigma-\sigma_{0})} \|(a_{0},u_{0},b_{0},w_{0})\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}^{\ell}\\ &\qquad+\int_{0}^{t}\langle t-\tau\rangle^{-\frac{1}{2}(\sigma- \sigma_{0})}\big{(}\|au\|_{\dot{B}^{\sigma_{0}+1}_{2,\infty}}^{\ell}+\|(u \cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{\dot{B}^{\sigma_{0}}_{2,\infty }}^{\ell}\\ &\qquad+\|g(a)\nabla a\|_{\dot{B}^{\sigma}_{2,\infty}}^{\ell}+\|f (a)\Delta u\|_{\dot{B}^{\sigma}_{2,\infty}}^{\ell}+\|h(a,b)(u-w)\|_{\dot{B}^{ \sigma}_{2,\infty}}^{\ell}\big{)}d\tau.\end{split} \tag{3.49}\]
To control the first nonlinear term on the right-hand side of (3.49), we consider the cases \(t\leq 2\) and \(t\geq 2\)
separately. For the case \(t\leq 2\), we make use of (4.1), (4.5) and \(\langle t\rangle\sim 1\) to obtain
\[\begin{split}&\int_{0}^{t}\langle t-\tau\rangle^{-\frac{1}{2}( \sigma-\sigma_{0})}\|au\|_{\dot{B}^{\sigma_{0}+1}_{2,\infty}}d\tau\\ &\quad\lesssim\|a\|_{L^{\infty}_{t}(B^{\frac{d}{2}}_{2,1})}\big{(} \|u\|_{L^{\infty}_{t}(B^{\sigma_{0}+1}_{2,1})}^{\ell}+\|u\|_{L^{1}_{t}(B^{ \frac{d}{2}+1}_{2,1})}^{h}\big{)}\\ &\quad\lesssim\big{(}\mathcal{X}(t)\mathcal{Z}(t)+\mathcal{X}^{2 }(t)\big{)}\langle t\rangle^{-\frac{1}{2}(\sigma-\sigma_{0})}.\end{split} \tag{3.50}\]
For the case \(t\geq 2\), we split the integration into two parts:
\[\begin{split}\int_{0}^{t}\langle t-\tau\rangle^{-\frac{1}{2}( \sigma-\sigma_{0})}\|au\|_{\dot{B}^{\sigma_{0}+1}_{2,\infty}}d\tau& =\Big{(}\int_{0}^{1}+\int_{1}^{t}\Big{)}\langle t-\tau\rangle^{- \frac{1}{2}(\sigma-\sigma_{0})}\|au\|_{\dot{B}^{\sigma_{0}+1}_{2,\infty}}d \tau.\end{split}\]
Owing to (4.5) and the fact \(\langle t-\tau\rangle\sim\langle t\rangle\) for any \(\tau\in[0,1]\), we have
\[\begin{split}\int_{0}^{1}\langle t-\tau\rangle^{-\frac{1}{2}( \sigma-\sigma_{0})}\|au\|_{\dot{B}^{\sigma_{0}+1}_{2,\infty}}d\tau& \lesssim\langle t\rangle^{-\frac{1}{2}(\sigma-\sigma_{0})}\int_{ 0}^{1}\|a\|_{\dot{B}^{\frac{d}{2}}_{2,1}}\|u\|_{\dot{B}^{\sigma_{0}+1}_{2, \infty}}d\tau\\ &\lesssim\big{(}\mathcal{X}(t)\mathcal{Z}(t)+\mathcal{X}^{2}(t) \big{)}\langle t\rangle^{-\frac{1}{2}(\sigma-\sigma_{0})}.\end{split} \tag{3.51}\]
On the other hand, since it holds for \(\sigma_{0}\in[-\frac{d}{2},\frac{d}{2}-1)\) and \(\sigma\in(\sigma_{0},\frac{d}{2}+1]\) that
\[\frac{1}{2}(\frac{d}{2}+1-\sigma_{0})>1,\qquad 0\leq\frac{1}{2}(\sigma- \sigma_{0})\leq\frac{1}{2}(\frac{d}{2}+1-\sigma_{0}),\]
we use (3.45), (4.1), (4.5), \(au=a^{\ell}u^{\ell}+a^{h}u^{\ell}+a^{\ell}u^{h}+a^{h}u^{h}\) and \(\tau^{-1}\lesssim\langle\tau\rangle^{-1}\) for \(\tau\geq 1\) to get
\[\begin{split}&\int_{1}^{t}\langle t-\tau\rangle^{-\frac{1}{2}( \sigma-\sigma_{0})}\|au\|_{\dot{B}^{\sigma_{0}+1}_{2,\infty}}d\tau\\ &\quad\lesssim\int_{1}^{t}\langle t-\tau\rangle^{-\frac{1}{2}( \sigma-\sigma_{0})}\big{(}\|a\|_{\dot{B}^{\frac{d}{2}}_{2,1}}^{\ell}\|u\|_{ \dot{B}^{\sigma_{0}+1}_{2,1}}^{\ell}+\|a\|_{\dot{B}^{\frac{d}{2}}_{2,1}}^{h} \|u\|_{\dot{B}^{\sigma_{0}+1}_{2,1}}^{\ell}\\ &\quad\quad+\|a\|_{\dot{B}^{\frac{d}{2}}_{2,1}}^{\ell}\|u\|_{ \dot{B}^{\frac{d}{2}+1}_{2,1}}^{h}+\|a\|_{\dot{B}^{\frac{d}{2}}_{2,1}}^{h}\|u \|_{\dot{B}^{\frac{d}{2}+1}_{2,1}}^{h}d\tau\\ &\quad\lesssim\mathcal{Z}^{2}(t)\int_{0}^{t}\langle t-\tau \rangle^{-\frac{1}{2}(\sigma-\sigma_{0})}\langle\tau\rangle^{-\frac{1}{2}( \frac{d}{2}+1-\sigma_{0})}d\tau\\ &\quad\lesssim\mathcal{Z}^{2}(t)\langle t\rangle^{-\frac{1}{2}( \sigma-\sigma_{0})},\quad t\geq 2,\end{split} \tag{3.52}\]
which together with (3.50)-(3.51) implies for any \(\sigma\in(\sigma_{0},\frac{d}{2}+1]\) that
\[\int_{0}^{t}\langle t-\tau\rangle^{-\frac{1}{2}(\sigma-\sigma_{0})}\|au\|_{ \dot{B}^{\sigma_{0}+1}_{2,\infty}}d\tau\lesssim\big{(}\mathcal{X}^{2}(t)+ \mathcal{Z}^{2}(t)\big{)}\langle t\rangle^{-\frac{1}{2}(\sigma-\sigma_{0})}, \quad t>0, \tag{3.53}\]
Similarly, one can show
\[\begin{split}&\int_{0}^{t}\langle t-\tau\rangle^{-\frac{1}{2}( \sigma-\sigma_{0})}\big{(}\|(u\cdot\nabla u,w\cdot\nabla b,w\cdot\nabla w)\|_{ \dot{B}^{\sigma_{0}}_{2,\infty}}+\|g(a)\nabla a\|_{\dot{B}^{\sigma}_{2,\infty }}+\|f(a)\Delta u\|_{\dot{B}^{\sigma}_{2,\infty}}\big{)}d\tau\\ &\quad\lesssim\big{(}\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t) \big{)}\langle t\rangle^{-\frac{1}{2}(\sigma-\sigma_{0})},\quad t>0.\end{split} \tag{3.54}\]
Due to the lack of one spatial derivative, the estimate of the last nonlinear term in (3.49) is different from usual nonlinearities of compressible Navier-Stokes system [19, 41]. To overcome this difficulty, according t
to (4.5)-(4.6) and \(\mathcal{X}(t)\lesssim\mathcal{X}_{0}<<1\), we have
\[\begin{split}&\int_{0}^{t}\langle t-\tau\rangle^{-\frac{1}{2}( \sigma-\sigma_{0})}\|h(a,b)(u-w)\|_{\dot{B}^{\sigma}_{2,\infty}}d\tau\\ &\qquad\lesssim\int_{0}^{t}\langle t-\tau\rangle^{-\frac{1}{2}( \sigma-\sigma_{0})}\|(a,b)\|_{\dot{B}^{\frac{d}{2}}_{2,1}}\|u-w\|_{\dot{B}^{ \sigma_{0}}_{2,\infty}}d\tau\lesssim\mathcal{Z}^{2}(t)\langle t\rangle^{- \frac{1}{2}(\sigma-\sigma_{0})},\end{split} \tag{3.55}\]
where in the last inequality one has used the key fact
\[\begin{split}&\|(a,b)\|_{\dot{B}^{\frac{d}{2}}_{2,1}}\|u-w\|_{ \dot{B}^{\sigma_{0}}_{2,\infty}}\\ &\qquad\lesssim\big{(}\|(a,b)\|_{\dot{B}^{\frac{d}{2}}_{2,1}}^{ \ell}+\|a\|_{\dot{B}^{\frac{d}{2}}_{2,1}}^{h}+\|b\|_{\dot{B}^{\frac{d}{2}}_{2, 1}}^{h}\big{)}\big{(}\|u-w\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}^{\ell}+\|u\|_{ \dot{B}^{\frac{d}{2}-1}_{2,1}}^{h}+\|w\|_{\dot{B}^{\frac{d}{2}+1}_{2,1}}^{h} \big{)}\\ &\qquad\lesssim\mathcal{Z}^{2}(t)\langle t\rangle^{-\frac{1}{2}( \frac{d}{2}+1-\sigma_{0})}.\end{split} \tag{3.56}\]
Combining (3.53)-(3.55) together, we obtain (3.47).
Next, we prove the following time-weighted estimates of the solution \((a,u,b,w)\) to the Cauchy problem (1.3) for low frequencies.
**Lemma 3.5**.: _Let \((a,u,b,w)\) be the global solution to the Cauchy problem (1.3) given by Theorem 1.2. Then, under the assumptions of Theorem 1.4, it holds_
\[\begin{split}&\|\langle\tau\rangle^{\alpha}a\|_{\widetilde{L} ^{\infty}_{t}(\dot{B}^{\frac{d}{2}}_{2,1})}^{h}+\|\langle\tau\rangle^{\alpha}u \|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{h}\\ &\qquad+\|\tau^{\alpha}u\|_{\widetilde{L}^{\infty}_{t}(\dot{B}^{ \frac{d}{2}+1}_{2,1})}^{h}+\|\langle\tau\rangle^{\alpha}(b,w)\|_{\widetilde{L }^{\infty}_{t}(\dot{B}^{\frac{d}{2}+1}_{2,1})}^{h}\\ &\qquad\lesssim\|a_{0}\|_{\dot{B}^{\frac{d}{2}}_{2,1}}^{h}+\|u_{0 }\|_{\dot{B}^{\frac{d}{2}-1}_{2,1}}^{h}+\|(b_{0},w_{0})\|_{\dot{B}^{\frac{d}{2 }+1}_{2,1}}^{h}+\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t),\quad t>0,\end{split} \tag{3.57}\]
_where \(\mathcal{X}(t)\) and \(\mathcal{Z}(t)\) are defined by (2.4) and (3.46), respectively, and \(\alpha\) is given by \(\alpha:=\frac{1}{2}(d+1-2\sigma_{0}-2\varepsilon)\) for a sufficiently small constant \(\varepsilon\in(0,1]\)._
**Proof.** First, it follows from (2.46) for any \(j\geq-1\) that
\[\|\dot{\Delta}_{j}(\nabla a,u,b,w)\|_{L^{2}}\lesssim e^{-t}\|\dot{\Delta}_{j} (\nabla a_{0},u_{0},b_{0},w_{0})\|_{L^{2}}+\int_{0}^{t}e^{-(t-\omega)}\sum_{i= 1}^{7}I_{i,j}d\omega,\]
where \(I_{i,j}\)\((i=1,...,7)\) are given by
\[\begin{split}&\begin{cases}I_{1,j}:=2^{j}\|\dot{\Delta}_{j}(au) \|_{L^{2}},\ \ I_{2,j}:=\|\dot{\Delta}_{j}(u\cdot\nabla u)\|_{L^{2}},\ \ I_{3,j}:=\|\dot{\Delta}_{j}G\|_{L^{2}},\ \ I_{4,j}:=\| \mathrm{div}\,u\|_{L^{\infty}}\|\nabla\dot{\Delta}_{j}a\|_{L^{2}},\\ & I_{5,j}:=\|\nabla\dot{\Delta}_{j}(\nabla a\mathrm{div}\,u)\|_{L^{2 }},\ \ I_{6,j}:=\|[u\cdot\nabla,\dot{\Delta}_{j}]u\|_{L^{2}},\ \ I_{7,j}=\sum_{k=1}^{d}\|[u\cdot\nabla, \partial_{k}\dot{\Delta}_{j}]a\|_{L^{2}}.\end{cases}\end{split}\]
Therefore, one has
\[\begin{split}&\|\langle\tau\rangle^{\alpha}(\nabla a,u,b,w)\|_{ \widetilde{L}^{\infty}_{t}(\dot{B}^{\frac{d}{2}-1}_{2,1})}^{h}\\ &\qquad\lesssim\|(\nabla a_{0},u_{0},b_{0},w_{0})\|_{\dot{B}^{ \frac{d}{2}-1}_{2,1}}^{h}+\sum_{j\geq-1}\sup_{\tau\in[0,t]}\langle\tau\rangle^{ \alpha}\int_{0}^{\tau}e^{-(\tau-\omega)}2^{j(\frac{d}{2}-1)}\sum_{i=1}^{7}I_{i,j}d\omega.\end{split} \tag{3.58}\]
To control the nonlinear terms of (3.58), we may consider two cases \(t\leq 2\) and \(t\geq 2\) and split the integration over \([0,t]\) into \([0,1]\) and \([1,t]\) for \(t\geq 2\), respectively. One can show
\[\sum_{j\geq-1}\sup_{\tau\in[0,t]}\langle\tau\rangle^{\alpha}\int_ {0}^{\tau}e^{-(\tau-\omega)}2^{j(\frac{d}{2}-1)}(I_{1,j}+I_{2,j})d\omega\] \[\qquad\lesssim\int_{0}^{t}\big{(}\|au\|^{h}_{\tilde{B}^{\frac{d}{ 2}}_{2,1}}+\|u\cdot\nabla u\|^{h}_{\tilde{B}^{\frac{d}{2}-1}_{2,1}}\big{)}d \tau\lesssim\|(a,u)\|_{L^{2}_{t}(\tilde{B}^{\frac{d}{2}}_{2,1})}\|u\|_{L^{2}_ {t}(\tilde{B}^{\frac{d}{2}}_{2,1})}\lesssim\mathcal{X}^{2}(t),\qquad t\leq 2.\]
By direct computations, it also holds
\[\sum_{j\geq-1}\sup_{\tau\in[2,t]}\langle\tau\rangle^{\alpha}\int_{0}^{1}e^{-( \tau-\omega)}(I_{1,j}+I_{2,j})d\omega\lesssim\mathcal{X}^{2}(1),\qquad t\geq 2.\]
We turn to estimates the integration on \([1,t]\) of the first and second nonlinear terms on the right-hand side of (3.58) for \(t\geq 2\). Due to (4.1), (4.3) and
\[\begin{cases}\|\langle\tau\rangle^{\frac{1}{2}(\frac{d}{2}-\sigma_{0}+\varepsilon )}(a,u)^{\ell}\|_{\widetilde{L}^{\infty}_{t}(\tilde{B}^{\frac{d}{2}}_{2,1})} \lesssim\|\langle\tau\rangle^{\frac{1}{2}(\frac{d}{2}-\sigma_{0}+\varepsilon )}(a,u)\|^{\ell}_{L^{\infty}_{t}(\tilde{B}^{\frac{d}{2}-\varepsilon}_{2,1})} \lesssim\mathcal{Z}(t),\\ \|\langle\tau\rangle^{\frac{1}{2}(\frac{d}{2}+1-\sigma_{0}+\varepsilon)}(a,u) ^{\ell}\|_{\widetilde{L}^{\infty}_{t}(\tilde{B}^{\frac{d}{2}+1}_{2,1})} \lesssim\|\langle\tau\rangle^{\frac{1}{2}(\frac{d}{2}+1-\sigma_{0}+\varepsilon )}(a,u)\|^{\ell}_{L^{\infty}_{t}(\tilde{B}^{\frac{d}{2}+1-\varepsilon}_{2,1})} \lesssim\mathcal{Z}(t),\end{cases} \tag{3.59}\]
one gets
\[\begin{split}&\|\tau^{\alpha}a^{\ell}u^{\ell}\|^{h}_{\widetilde{L }^{\infty}_{t}(\tilde{B}^{\frac{d}{2}}_{2,1})}+\|\tau^{\alpha}u^{\ell}\cdot \nabla u^{\ell}\|^{h}_{\widetilde{L}^{\infty}_{t}(\tilde{B}^{\frac{d}{2}-1}_{2, 1})}\\ &\lesssim\|\tau^{\alpha}a^{\ell}u^{\ell}\|^{h}_{\widetilde{L}^{ \infty}_{t}(\tilde{B}^{\frac{d}{2}+1}_{2,1})}+\|\tau^{\alpha}u^{\ell}\cdot \nabla u^{\ell}\|^{h}_{\widetilde{L}^{\infty}_{t}(\tilde{B}^{\frac{d}{2}+1}_{2, 1})}\\ &\lesssim\|\langle\tau\rangle^{\frac{1}{2}(\frac{d}{2}+1-\sigma_{0}+ \varepsilon)}(a,u)^{\ell}\|_{\tilde{B}^{\frac{d}{2}+1}_{2,1}}\|\langle\tau \rangle^{\frac{1}{2}(\frac{d}{2}-\sigma_{0}+\varepsilon)}(a,u)^{\ell}\|_{ \tilde{B}^{\frac{d}{2}}_{2,1}}\lesssim\mathcal{Z}^{2}(t).\end{split} \tag{3.60}\]
By (4.3)-(4.4), there holds
\[\begin{split}&\|\tau^{\alpha}a^{h}u^{\ell}\|^{h}_{\widetilde{L} ^{\infty}_{t}(\tilde{B}^{\frac{d}{2}}_{2,1})}+\|\tau^{\alpha}au^{h}\|^{h}_{ \widetilde{L}^{\infty}_{t}(\tilde{B}^{\frac{d}{2}}_{2,1})}\\ &\lesssim\|\langle\tau\rangle^{\alpha}a\|^{h}_{\widetilde{L}^{ \infty}_{t}(\tilde{B}^{\frac{d}{2}}_{2,1})}\|^{\ell}_{\widetilde{L}^{\infty}_ {t}(\tilde{B}^{\frac{d}{2}-1}_{2,1})}+\|a\|_{\widetilde{L}^{\infty}_{t}( \tilde{B}^{\frac{d}{2}}_{2,1})}\|\tau^{\alpha}u\|^{h}_{\widetilde{L}^{\infty }_{t}(\tilde{B}^{\frac{d}{2}+1}_{2,1})}\\ &\lesssim\mathcal{Z}(t)\mathcal{X}(t),\end{split} \tag{3.61}\]
and
\[\begin{split}&\|\tau^{\alpha}u^{h}\cdot\nabla u^{\ell}\|^{h}_{ \widetilde{L}^{\infty}_{t}(\tilde{B}^{\frac{d}{2}-1}_{2,1})}+\|\tau^{\alpha}u \cdot\nabla u^{h}\|^{h}_{\widetilde{L}^{\infty}_{t}(\tilde{B}^{\frac{d}{2}-1}_{2,1})}\\ &\lesssim\|\tau^{\alpha}u\|^{h}_{\widetilde{L}^{\infty}_{t}( \tilde{B}^{\frac{d}{2}}_{2,1})}\|u\|^{\ell}_{\widetilde{L}^{\infty}_{t}(\tilde{B }^{\frac{d}{2}-1}_{2,1})}+\|\tau^{\alpha}u\|^{h}_{\widetilde{L}^{\infty}_{t}( \tilde{B}^{\frac{d}{2}+1}_{2,1})}\|u\|_{\widetilde{L}^{\infty}_{t}(\tilde{B}^{ \frac{d}{2}-1}_{2,1})}\\ &\lesssim\mathcal{Z}(t)\mathcal{X}(t).\end{split} \tag{3.62}\]
For \(t\geq 2\) and the integration on \([1,t]\), one deduces by (3.60)-(3.62) that
\[\begin{split}&\sum_{j\geq-1}\sup_{\tau\in[2,t]}\langle\tau \rangle^{\alpha}\int_{1}^{\tau}e^{-(\tau-\omega)}(I_{1,j}+I_{2,j})d\omega\\ &\qquad\lesssim\big{(}\|\tau^{\alpha}au\|^{h}_{\widetilde{L}^{ \infty}_{t}(\tilde{B}^{\frac{d}{2}}_{2,1})}+\|\tau^{\alpha}u\cdot\nabla u\|^{h}_{ \widetilde{L}^{\infty}_{t}(\tilde{B}^{\frac{d}{2}}_{2,1})}\big{)}\sup_{\tau\in[2,t ]}\langle\tau\rangle^{\alpha}\int_{1}^{\tau}e^{-(\tau-\omega)}\omega^{-\alpha}d\omega \\ &\qquad\lesssim\mathcal{X}^{2}(t)+\mathcal{X}(t)\mathcal{Z}(t). \end{split}\]
Therefore, we have
\[\sum_{j\geq-1}\sup_{\tau\in[0,t]}\langle\tau\rangle^{\alpha}\int_{0}^{t}e^{-(\tau- \omega)}(I_{1,j}+I_{2,j})d\tau\lesssim\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t), \quad t>0. \tag{3.63}\]
By similar arguments as in (3.63), one can show
\[\sum_{j\geq-1}\sup_{\tau\in[0,t]}\langle\tau\rangle^{\alpha}\int_{0}^{t}e^{-( \tau-\omega)}\sum_{i=3}^{7}I_{i,j}d\tau\lesssim\mathcal{X}^{2}(t)+\mathcal{Z}^ {2}(t),\quad t>0. \tag{3.64}\]
Substituting (3.63)-(3.64) into (3.58), we obtain
\[\|\langle\tau\rangle^{\alpha}(\nabla a,u,b,w)\|_{\widetilde{L}_{t}^{\infty}( \dot{B}_{2,1}^{\frac{d}{2}-1})}^{h}\lesssim\|(\nabla a_{0},u_{0},b_{0},w_{0}) \|_{\dot{B}_{2,1}^{\frac{d}{2}-1}}^{h}+\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t). \tag{3.65}\]
Then, we show the higher-order time-decay estimate of \(u\). Employing Lemma 4.6 for (3.27) with \(\theta=\alpha>1\), we gain
\[\|\tau^{\alpha}u\|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{ \frac{d}{2}+1})}^{h} \tag{3.66}\] \[\qquad\lesssim\|\tau^{\alpha-1}u\|_{\widetilde{L}_{t}^{\infty}( \dot{B}_{2,1}^{\frac{d}{2}-1})}^{h}+\|\tau^{\alpha}a\|_{\widetilde{L}_{t}^{ \infty}(\dot{B}_{2,1}^{\frac{d}{2}})}^{h}+\|\tau^{\alpha}(u,w)\|_{\widetilde{L }_{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}-1})}^{h}\] \[\qquad+\|\tau^{\alpha}u\cdot\nabla u\|_{\widetilde{L}_{t}^{\infty }(\dot{B}_{2,1}^{\frac{d}{2}-1})}^{h}+\|\tau^{\alpha}G\|_{\widetilde{L}_{t}^{ \infty}(\dot{B}_{2,1}^{\frac{d}{2}-1})}^{h}.\]
It is easy to verify that
\[\|\tau^{\alpha-1}u\|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}-1 })}^{h}\lesssim\|\langle\tau\rangle^{\alpha}u\|_{\widetilde{L}_{t}^{\infty}( \dot{B}_{2,1}^{\frac{d}{2}-1})}^{h}. \tag{3.67}\]
Similarly to (3.60)-(3.62), one has
\[\|\tau^{\alpha}u\cdot\nabla u\|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{ \frac{d}{2}-1})}^{h}+\|\tau^{\alpha}G\|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{ 2,1}^{\frac{d}{2}-1})}^{h}\lesssim\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t). \tag{3.68}\]
Combining (3.65)-(3.68) together, we get
\[\|\tau^{\alpha}u\|_{\widetilde{L}_{t}^{\infty}(\dot{B}_{2,1}^{\frac{d}{2}+1})} ^{h}\lesssim\|(\nabla a_{0},u_{0},b_{0},w_{0})\|_{\dot{B}_{2,1}^{\frac{d}{2}-1 }}^{h}+\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t). \tag{3.69}\]
Furthermore, to establish the higher order time-weighted estimate of \((b,w)\), one has by (2.55) that
\[\|\langle\tau\rangle^{\alpha}(b,w)\|_{\widetilde{L}_{t}^{\infty} (\dot{B}_{2,1}^{\frac{d}{2}+1})}^{h}\] \[\qquad+2^{-j}\|\dot{\Delta}_{j}(w\cdot\nabla b,w\cdot\nabla w)\|_ {L^{2}}+\|[w\cdot\nabla,\dot{\Delta}_{j}](b,w)\|_{L^{2}}\big{)}d\omega.\]
Note that it holds
\[\sum_{j\geq-1}\sup_{\tau\in[0,t]}\langle\tau\rangle^{\alpha}\int_{0}^{\tau}e^{ -(\tau-\omega)}2^{j(\frac{d}{2}+1)}\|\dot{\Delta}_{j}u\|_{L^{2}}d\omega\lesssim \|u\|_{L^{1}_{t}(\dot{B}_{2,1}^{\frac{d}{2}+1})}^{h},\qquad t\leq 2,\]
and
\[\sum_{j\geq-1}\sup_{\tau\in[0,t]}\langle\tau\rangle^{\alpha}\int_{0}^{ \tau}e^{-(\tau-\omega)}2^{j(\frac{d}{2}+1)}\|\hat{\Delta}_{j}u\|_{L^{2}}d\omega\] \[\qquad\lesssim\sum_{j\geq-1}\Big{(}\sup_{\tau\in[0,1]}\langle\tau \rangle^{\alpha}\int_{0}^{\tau}+\sup_{\tau\in[1,t]}\langle\tau\rangle^{\alpha} \int_{0}^{1}+\sup_{\tau\in[1,t]}\langle\tau\rangle^{\alpha}\int_{1}^{\tau} \Big{)}e^{-(\tau-\omega)}2^{j(\frac{d}{2}+1)}\|\hat{\Delta}_{j}u\|_{L^{2}}d\omega\] \[\qquad\lesssim\|u\|_{L^{1}_{1}(B^{\frac{d}{2}+1}_{2,1})}^{h}+\| \tau^{\alpha}u\|_{L^{1}_{t}(B^{\frac{d}{2}+1}_{2,1})}^{h},\qquad t\geq 2,\]
from which and (3.69) we get
\[\sum_{j\geq-1}\sup_{\tau\in[0,t]}\langle\tau\rangle^{\alpha}\int_{0}^{\tau}e ^{-(\tau-\omega)}2^{j(\frac{d}{2}+1)}\|\hat{\Delta}_{j}u\|_{L^{2}}d\omega \lesssim\|(\nabla a_{0},u_{0},b_{0},w_{0})\|_{B^{\frac{d}{2}-1}_{2,1}}^{h}+ \mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t).\]
By (4.3) and (4.7), one can obtain after direct computations that
\[\sum_{j\geq-1}\sup_{\tau\in[0,t]}\langle\tau\rangle^{\alpha}\int_ {0}^{\tau}e^{-(\tau-\omega)}2^{j(\frac{d}{2}+1)}\big{(}\|\mathrm{div}\,w\|_{L ^{\infty}}\|\hat{\Delta}_{j}(b,w)\|_{L^{2}}\] \[\qquad\quad+2^{-j}\|\hat{\Delta}_{j}(w\cdot\nabla b,w\cdot\nabla w )\|_{L^{2}}+\|[w\cdot\nabla,\hat{\Delta}_{j}](b,w)\|_{L^{2}}\big{)}d\omega \lesssim\mathcal{Z}^{2}(t)+\mathcal{X}(t)\mathcal{Z}(t).\]
Thus, we prove
\[\|\langle\tau\rangle^{\alpha}(b,w)\|_{\tilde{L}^{\infty}_{t}( \tilde{B}^{\frac{d}{2}+1}_{2,1})}^{h} \tag{3.70}\] \[\qquad\lesssim\|a_{0}\|_{B^{\frac{d}{2}}_{2,1}}^{h}+\|u_{0}\|_{ B^{\frac{d}{2}-1}_{2,1}}^{h}+\|(b_{0},w_{0})\|_{B^{\frac{d}{2}+1}_{2,1}}^{h}+ \mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t).\]
The combination of (3.65) and (3.69)-(3.70) leads to (3.57) and completes the proof of Lemma 3.5.
Finally, we need the additional time-weighted estimates of the relative velocity \(u-w\) to control the nonlinear term \(h(a,b)(u-w)\) and enclose the expected time-weighted estimates.
**Lemma 3.6**.: _Let \((a,u,b,w)\) be the global solution to the Cauchy problem (1.3) given by Theorem 1.2. Then, under the assumptions of Theorem 1.4, it holds_
\[\|\langle\tau\rangle^{\frac{1}{2}}(u-w)\|_{L^{\infty}_{t}(B^{ \sigma_{0}}_{2,\infty})}^{\ell}+\sup_{\sigma\in[\sigma_{0}+\varepsilon,\frac{ d}{2}]}\|\langle\tau\rangle^{\frac{1}{2}(\sigma-\sigma_{0})}(u-w)\|_{L^{ \infty}_{t}(\tilde{B}^{\sigma}_{2,1})}^{\ell} \tag{3.71}\] \[\qquad\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{B^{\sigma_{0}}_{2, \infty}}^{\ell}+\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t),\quad t>0,\]
_where \(\mathcal{X}(t)\) and \(\mathcal{Z}(t)\) are defined by (2.4) and (3.46), respectively, and \(\varepsilon\in(0,1)\) is any suitably small constant._
**Proof.** Taking the low-frequency \(\dot{B}^{\sigma_{0}}_{2,\infty}\)-norm of (3.41) for low frequnecies, we get
\[\|u-w\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}^{\ell} \tag{3.72}\] \[\qquad\lesssim e^{-2t}\|(u_{0},w_{0})\|_{\dot{B}^{\sigma_{0}}_{2, \infty}}^{\ell}\] \[\qquad+\int_{0}^{t}e^{-2(t-\tau)}\big{(}\|(a,b,u)\|_{\dot{B}^{ \sigma_{0}+1}_{2,\infty}}^{\ell}+\|(u\cdot\nabla u,w\cdot\nabla w)\|_{\dot{B}^ {\sigma_{0}}_{2,\infty}}^{\ell}+\|G\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}^{\ell} \big{)}d\tau.\]
The every term on the right-hand side of (3.72) can be estimated as follows. It holds by (3.47) that
\[\begin{split}&\int_{0}^{t}e^{-2(t-\tau)}\big{(}\|(a,b)\|_{\dot{B} ^{\sigma_{0}+1}_{2,\infty}}^{\ell}+\|u\|_{\dot{B}^{\sigma_{0}+2}_{2,\infty}}^{ \ell}\big{)}d\tau\\ &\lesssim\|\langle\tau\rangle^{\frac{1}{2}}(a,u)\|_{L^{\infty}_{ t}(\dot{B}^{\sigma_{0}+1}_{2,1})}^{\ell}\int_{0}^{t}e^{-2(t-\tau)}\langle\tau \rangle^{-\frac{1}{2}}d\tau\\ &\lesssim\big{(}\|(a_{0},u_{0},b_{0},w_{0})\|_{\dot{B}^{\sigma_{0 }}_{2,\infty}}^{\ell}+\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t)\big{)}\langle t \rangle^{-\frac{1}{2}}.\end{split} \tag{3.73}\]
We derive by (3.47) and (4.5) that
\[\begin{split}&\int_{0}^{t}e^{-2(t-\tau)}\big{(}\|(u\cdot\nabla u,w \cdot\nabla w)\|_{\dot{B}^{\sigma_{0}}_{2,\infty}}^{\ell}+\|G\|_{\dot{B}^{ \sigma_{0}}_{2,\infty}}^{\ell}\big{)}d\tau\\ &\lesssim\big{(}\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t)\big{)} \langle t\rangle^{-\frac{1}{2}(\frac{d}{2}+1-\sigma_{0})}.\end{split} \tag{3.74}\]
The combination of (3.72)-(3.74) gives rise to
\[\|\langle\tau\rangle^{\frac{1}{2}}(u-w)\|_{\dot{L}^{\infty}_{t}(\dot{B}^{ \sigma_{0}}_{2,\infty})}^{\ell}\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{\dot{B} ^{\sigma_{0}}_{2,\infty}}^{\ell}+\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t). \tag{3.75}\]
By a similar argument as used in (3.75), one can have
\[\|\langle\tau\rangle^{\frac{1+\sigma-\sigma_{0}}{2}}(u-w)\|_{L^{\infty}_{t}( \dot{B}^{\sigma}_{2,1})}^{\ell}\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{\dot{B} ^{\sigma_{0}}_{2,\infty}}^{\ell}+\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t),\quad \sigma\in(\sigma_{0},\frac{d}{2}].\]
For brevity, the details are omitted here.
_Proof of Theorem 1.4:_ Assume that the initial data \((a_{0},u_{0},b_{0},w_{0})\) satisfies (1.5) and (1.14), and let \((a,u,b,w)\) be the global solution to the Cauchy problem (1.3) given by Theorem 1.2. In terms of the time-weighted estimated established in Lemmas 3.4-3.6, we have
\[\mathcal{Z}(t)\lesssim\|(a_{0},u_{0},b_{0},w_{0})\|_{\dot{B}^{\sigma_{0}}_{2, \infty}}^{\ell}+\mathcal{X}_{0}+\mathcal{X}^{2}(t)+\mathcal{Z}^{2}(t),\qquad t >0, \tag{3.76}\]
where \(\mathcal{X}(t)\) and \(\mathcal{Z}(t)\) are defined by (2.4) and (3.46), respectively. Due to (3.76) and the fact that \(\delta_{0}\sim\mathcal{X}_{0}+\|(a_{0},u_{0},b_{0},w_{0})\|_{\dot{B}^{\sigma_{0 }}_{2,\infty}}^{\ell}\) is sufficiently small, we conclude \(\mathcal{Z}(t)\lesssim\delta_{0}\) for any \(t>0\). Thus, (1.15) follows.
## 4 Appendix: Littlewood-Paley decomposition and Besov spaces
We explain the notations and technical lemmas used throughout this paper. \(C>0\) and \(c>0\) denote two constants independent of time. \(A\lesssim B(A\gtrsim B)\) means \(A\leq CB\) (\(A\geq CB\)), and \(A\sim B\) stands for \(A\lesssim B\) and \(A\gtrsim B\). For any Banach space \(X\) and the functions \(g,h\in X\), let \(\|(g,h)\|_{X}:=\|g\|_{X}+\|h\|_{X}\). For any \(T>0\) and \(1\leq\varrho\leq\infty\), we denote by \(L^{\varrho}(0,T;X)\) the set of measurable functions \(g:[0,T]\to X\) such that \(t\mapsto\|g(t)\|_{X}\) is in \(L^{\varrho}(0,T)\) and write \(\|\cdot\|_{L^{\varrho}(0,T;X)}:=\|\cdot\|_{L^{\varrho}_{T}(X)}\).
We recall the Littlewood-Paley decomposition, Besov spaces and related analysis tool. The reader can refer to Chapters 2-3 in [1] for the details. Choose a smooth radial non-increasing function \(\chi(\xi)\) compactly supported in \(B(0,\frac{4}{3})\) and satisfying \(\chi(\xi)=1\) in \(B(0,\frac{3}{4})\). Then \(\varphi(\xi):=\chi(\frac{\xi}{2})-\chi(\xi)\) satisfies
\[\sum_{j\in\mathbb{Z}}\varphi(2^{-j}\cdot)=1,\quad\text{Supp }\varphi\subset\{\xi \in\mathbb{R}^{d}\ |\ \frac{3}{4}\leq|\xi|\leq\frac{8}{3}\}.\]
For any \(j\in\mathbb{Z}\), define the homogeneous dyadic blocks \(\dot{\Delta}_{j}\) by
\[\dot{\Delta}_{j}u:=\mathcal{F}^{-1}\big{(}\varphi(2^{-j}\cdot)\mathcal{F}(u) \big{)}=2^{jd}h(2^{j}\cdot)\star u,\qquad h:=\mathcal{F}^{-1}\varphi,\]
where \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) are the Fourier transform and its inverse. Let \(\mathcal{P}\) be the class of all polynomials on \(\mathbb{R}^{d}\) and \(\mathcal{S}^{\prime}_{h}=\mathcal{S}^{\prime}/\mathcal{P}\) stand for the tempered distributions on \(\mathbb{R}^{d}\) modulo polynomials. One can get
\[u=\sum_{j\in\mathbb{Z}}\dot{\Delta}_{j}u\quad\text{in }\mathcal{S}^{\prime}, \quad\forall u\in\mathcal{S}^{\prime}_{h},\qquad\dot{\Delta}_{j}\dot{\Delta}_ {l}u=0,\quad\text{if}\quad|j-l|\geq 2.\]
With the help of those dyadic blocks, we give the definition of homogeneous Besov spaces as follow.
**Definition 4.1**.: _For \(s\in\mathbb{R}\) and \(1\leq p,r\leq\infty\), the homogeneous Besov space \(\dot{B}^{s}_{p,r}\) is defined by_
\[\tilde{B}^{s}_{p,r}:=\big{\{}u\in\mathcal{S}^{\prime}_{h}\ |\ \|u\|_{\dot{B}^{s}_{p,r}}:=\| \{2^{js}\|\dot{\Delta}_{j}u\|_{L^{p}}\}_{j\in\mathbb{Z}}\|_{l^{r}}<\infty\big{\}}.\]
Next, we state a class of mixed space-time Besov spaces introduced by Chemin-Lerner [7].
**Definition 4.2**.: _For \(T>0\), \(s\in\mathbb{R}\) and \(1\leq\varrho,r,q\leq\infty\), the space \(\widetilde{L}^{\varrho}(0,T;\dot{B}^{s}_{p,r})\) is defined as_
\[\widetilde{L}^{\varrho}(0,T;\dot{B}^{s}_{p,r}):=\big{\{}u\in L^{\varrho}(0,T; \mathcal{S}^{\prime}_{h})\ |\ \|u\|_{\widetilde{L}^{\varrho}_{T}(\dot{B}^{s}_{p,r})}:=\|\{2^{js}\|\dot{\Delta} _{j}u\|_{L^{\varrho}_{T}(L^{p})}\}_{j\in\mathbb{Z}}\|_{l^{r}}<\infty\big{\}}.\]
_By the Minkowski inequality, it holds_
\[\|u\|_{\widetilde{L}^{\varrho}_{T}(\dot{B}^{s}_{p,r})}\leq(\geq)\|u\|_{L^{ \varrho}_{T}(\dot{B}^{s}_{p,r})}\quad\text{if }r\geq(\leq)\rho,\]
_where \(\|\cdot\|_{L^{\varrho}_{T}(\dot{B}^{s}_{p,r})}\) is the usual Lebesgue-Besov norm. Moreover, we denote_
\[\mathcal{C}_{b}(\mathbb{R}_{+};\dot{B}^{s}_{p,r}):=\big{\{}u\in\mathcal{C}( \mathbb{R}_{+};\dot{B}^{s}_{p,r})\ |\ \|f\|_{\widetilde{L}^{\infty}(\mathbb{R}_{+};\dot{B}^{s}_{p,r})}<\infty\big{\}}.\]
In order to restrict Besov norms to the low frequency part and the high-frequency part, we often use the following notations for any \(s\in\mathbb{R}\) and \(p\in[1,\infty]\):
\[\begin{cases}\|u\|^{\ell}_{\dot{B}^{s}_{p,r}}:=\|\{2^{js}\|\dot{\Delta}_{j}u\| _{L^{p}}\}_{j\leq 0}\|_{l^{r}},&\|u\|^{h}_{\dot{B}^{s}_{p,r}}:=\|\{2^{js}\|\dot{ \Delta}_{j}u\|_{L^{p}}\}_{j\geq-1}\|_{l^{r}},\\ \|u\|^{\ell}_{\widetilde{L}^{\varrho}_{T}(\dot{B}^{s}_{p,r})}:=\|\{2^{js}\| \dot{\Delta}_{j}u\|_{L^{\varrho}_{T}(L^{p})}\}_{j\leq 0}\|_{l^{r}},&\|u\|^{ \widetilde{L}^{\varrho}_{T}(\dot{B}^{s}_{p,r})}:=\|\{2^{js}\|\dot{\Delta}_{j}u \|_{L^{\varrho}_{T}(L^{p})}\}_{j\geq-1}\|_{l^{r}}.\end{cases}\]
Define
\[u^{\ell}:=\sum_{j\leq-1}\dot{\Delta}_{j}u,\qquad u^{h}:=u-u^{\ell}=\sum_{j \geq 0}\dot{\Delta}_{j}u.\]
It is easy to check for any \(s^{\prime}>0\) that
\[\begin{cases}\|u^{\ell}\|_{\dot{B}^{s}_{p,r}}\lesssim\|u\|^{\ell}_{\dot{B}^{s}_{p,r}}\lesssim\|u\|^{\ell}_{\dot{B}^{s-s^{\prime}}_{p,r}},&\|u^{h}\|_{\dot{B}^{s}_{ p,1}}\lesssim\|u\|^{h}_{\dot{B}^{s}_{p,r}}\lesssim\|u\|^{h}_{\dot{B}^{s+s^{\prime}}_{p,r}},\\ \|u^{\ell}\|_{\dot{\mathcal{L}}^{\varrho}_{T}(\dot{B}^{s}_{p,r})}\lesssim\|u\|^ {\ell}_{\dot{\mathcal{L}}^{\varrho}_{T}(\dot{B}^{s}_{p,r})}\lesssim\|u\|^{\ell} _{\dot{\mathcal{L}}^{\varrho}_{T}(\dot{B}^{s-s^{\prime}}_{p,r})},&\|u^{h}\|_{ \dot{\mathcal{L}}^{\varrho}_{T}(\dot{B}^{s}_{p,r})}\lesssim\|u\|^{h}_{\dot{ \mathcal{L}}^{\varrho}_{T}(\dot{B}^{s}_{p,r})}\lesssim\|u\|^{h}_{\dot{ \mathcal{L}}^{\varrho}_{T}(\dot{B}^{s}_{p,r})}.\end{cases} \tag{4.1}\]
We recall some basic properties of Besov spaces and product estimates which will be used repeatedly in this paper. Remark that all the properties remain true for the Chemin-Lerner type spaces whose time exponent has to behave according to the Holder inequality for the time variable.
The first lemma is the Bernstein inequalities, which in particular implies that \(\dot{\Delta}_{j}u\) is smooth for every \(u\) in any Besov spaces so that we can take direct calculations on linear equations after applying the operator \(\dot{\Delta}_{j}\).
**Lemma 4.1**.: _Let \(0<r<R\), \(1\leq p\leq q\leq\infty\) and \(k\in\mathbb{N}\). For any \(u\in L^{p}\) and \(\lambda>0\), it holds_
\[\begin{cases}\operatorname{Supp}\,\mathcal{F}(u)\subset\{\xi\in\mathbb{R}^{d} \ |\ |\xi|\leq\lambda R\}\Rightarrow\|D^{k}u\|_{L^{q}}\lesssim\lambda^{k+d( \frac{1}{p}-\frac{1}{q})}\|u\|_{L^{p}},\\ \operatorname{Supp}\,\mathcal{F}(u)\subset\{\xi\in\mathbb{R}^{d}\ |\ \lambda r \leq|\xi|\leq\lambda R\}\Rightarrow\|D^{k}u\|_{L^{p}}\sim\lambda^{k}\|u\|_{L^ {p}}.\end{cases}\]
Due to the Bernstein inequalities, the Besov spaces have the following properties.
**Lemma 4.2**.: _The following properties hold:_
* _For_ \(s\in\mathbb{R}\)_,_ \(1\leq p_{1}\leq p_{2}\leq\infty\) _and_ \(1\leq r_{1}\leq r_{2}\leq\infty\)_, it holds_ \[\dot{B}^{s}_{p_{1},r_{1}}\hookrightarrow\dot{B}^{s-d(\frac{1}{p_{1}}-\frac{1}{ p_{2}})}_{p_{2},r_{2}};\]
* _For_ \(1\leq p\leq q\leq\infty\)_, we have the following chain of continuous embedding:_ \[\dot{B}^{0}_{p,1}\hookrightarrow L^{p}\hookrightarrow\dot{B}^{0}_{p,\infty} \hookrightarrow\dot{B}^{\sigma}_{q,\infty},\quad\sigma=-d(\frac{1}{p}-\frac{1} {q})<0;\]
* _If_ \(p<\infty\)_, then_ \(\dot{B}^{\frac{d}{p}}_{p,1}\) _is continuously embedded in the set of continuous functions decaying to 0 at infinity;_
* _The following real interpolation property is satisfied for_ \(1\leq p\leq\infty\)_,_ \(s_{1}<s_{2}\) _and_ \(\theta\in(0,1)\)_:_ \[\|u\|_{\dot{B}^{\theta_{s_{1}+(1-\theta)_{s_{2}}}}_{p,1}}\lesssim\frac{1}{ \theta(1-\theta)(s_{2}-s_{1})}\|u\|^{\theta}_{\dot{B}^{s_{1}}_{p,\infty}}\|u\|^ {1-\theta}_{\dot{B}^{s_{2}}_{p,\infty}},\] (4.2) _which in particular implies for any_ \(\varepsilon>0\) _that_ \[H^{s+\varepsilon}\hookrightarrow\dot{B}^{s}_{2,1}\hookrightarrow\dot{H}^{s};\]
* _Let_ \(\Lambda^{\sigma}\) _be defined by_ \(\Lambda^{\sigma}=(-\Delta)^{\frac{\sigma}{2}}u:=\mathcal{F}^{-}\big{(}|\xi|^{ \sigma}\mathcal{F}(u)\big{)}\) _for_ \(\sigma\in\mathbb{R}\) _and_ \(u\in\dot{S}^{{}^{\prime}}_{h}\)_, then_ \(\Lambda^{\sigma}\) _is an isomorphism from_ \(\dot{B}^{s}_{p,r}\) _to_ \(\dot{B}^{s-\sigma}_{p,r}\)
* _Let_ \(1\leq p_{1},p_{2},r_{1},r_{2}\leq\infty\)_,_ \(s_{1}\in\mathbb{R}\) _and_ \(s_{2}\in\mathbb{R}\) _satisfy_ \[s_{2}<\frac{d}{p_{2}}\quad\text{or}\quad s_{2}=\frac{d}{p_{2}}\text{ and }r_{2}=1.\] _The space_ \(\dot{B}^{s_{1}}_{p_{1},r_{1}}\cap\dot{B}^{s_{2}}_{p_{2},r_{2}}\) _endowed with the norm_ \(\|\cdot\|_{\dot{B}^{s_{1}}_{p_{1},r_{1}}}+\|\cdot\|_{\dot{B}^{s_{2}}_{p_{2},r_{ 2}}}\) _is a Banach space and has the weak compact and Fatou properties_: _If_ \(u_{n}\) _is a uniformly bounded sequence of_ \(\dot{B}^{s_{1}}_{p_{1},r_{1}}\cap\dot{B}^{s_{2}}_{p_{2},r_{2}}\)_, then an element_ \(u\) _of_ \(\dot{B}^{s_{1}}_{p_{1},r_{1}}\cap\dot{B}^{s_{2}}_{p_{2},r_{2}}\) _and a subsequence_ \(u_{n_{k}}\) _exist such that_ \(u_{n_{k}}\to u\) _in_ \(\mathcal{S}^{\prime}\) _and_ \[\|u\|_{\dot{B}^{s_{1}}_{p_{1},r_{1}}\cap\dot{B}^{s_{2}}_{p_{2},r_{2}}}\lesssim \liminf_{n_{k}\to\infty}\|u_{n_{k}}\|_{\dot{B}^{s_{1}}_{p_{1},r_{1}}\cap\dot {B}^{s_{2}}_{p_{2},r_{2}}}.\]
The following Morse-type product estimates in Besov spaces play a fundamental role in the analysis on nonlinear terms:
**Lemma 4.3**.: _The following statements hold:_
* _Let_ \(s>0\) _and_ \(1\leq p,r\leq\infty\)_. Then_ \(\dot{B}^{s}_{p,r}\cap L^{\infty}\) _is a algebra and_ \[\|uv\|_{\dot{B}^{s}_{p,r}}\lesssim\|u\|_{L^{\infty}}\|v\|_{\dot{B}^{s}_{p,r}}+ \|v\|_{L^{\infty}}\|u\|_{\dot{B}^{s}_{p,r}};\] (4.3)
* _Let_ \(s_{1}\)_,_ \(s_{2}\) _and_ \(p\) _satisfy_ \(2\leq p\leq\infty\)_,_ \(s_{1}\leq\frac{d}{p}\)_,_ \(s_{2}\leq\frac{d}{p}\) _and_ \(s_{1}+s_{2}>0\)_. Then we have_ \[\|uv\|_{\dot{B}^{s_{1}+s_{2}-\frac{d}{p}}_{p,1}}\lesssim\|u\|_{\dot{B}^{s_{1}} _{p,1}}\|v\|_{\dot{B}^{s_{2}}_{p,1}};\] (4.4)
* _Assume that_ \(s_{1}\)_,_ \(s_{2}\) _and_ \(p\) _satisfy_ \(2\leq p\leq\infty\)_,_ \(s_{1}\leq\frac{d}{p}\)_,_ \(s_{2}<\frac{d}{p}\) _and_ \(s_{1}+s_{2}\geq 0\)_. Then it holds_ \[\|uv\|_{\dot{B}^{s_{1}+s_{2}-\frac{d}{p}}_{p,\infty}}\lesssim\|u\|_{\dot{B}^{s _{1}}_{p,1}}\|v\|_{\dot{B}^{s_{p}^{2}}_{p,\infty}}.\] (4.5)
We state the following result about the continuity for composition functions:
**Lemma 4.4**.: _Let \(G:I\to\mathbb{R}\) be a smooth function satisfying \(G(0)=0\). For any \(1\leq p\leq\infty\), \(s>0\) and \(g\in\dot{B}^{s}_{2,1}\cap L^{\infty}\), there holds \(G(g)\in\dot{B}^{s}_{p,r}\cap L^{\infty}\) and_
\[\|G(g)\|_{\dot{B}^{s}_{p,r}}\leq C_{g}\|g\|_{\dot{B}^{s}_{p,r}}, \tag{4.6}\]
_where the constant \(C_{g}>0\) depends only on \(\|g\|_{L^{\infty}}\), \(G^{\prime}\), \(s\) and \(d\)._
Finally, the following commutator estimates will be useful to control the nonlinearities in high frequencies:
**Lemma 4.5**.: _Let \(1\leq p\leq\infty\) and \(-\frac{d}{p}-1\leq s\leq 1+\frac{d}{p}\). Then it holds_
\[\sum_{j\in\mathbb{Z}}2^{js}\|[u\cdot\nabla,\dot{\Delta}_{j}]a\|_{L^ {p}}\lesssim\|u\|_{\dot{B}^{\frac{d}{p,1}}_{p,1}}\|a\|_{\dot{B}^{s}_{p,1}}, \tag{4.7}\] \[\sum_{j\in\mathbb{Z}}2^{j(s-1)}\|[u\cdot\nabla,\partial_{k}\dot{ \Delta}_{j}]a\|_{L^{p}}\lesssim\|u\|_{\dot{B}^{\frac{d}{p,1}}_{p,1}}\|a\|_{\dot{B }^{s}_{p,1}},\quad k=1,...,d, \tag{4.8}\]
_with the commutator \([A,B]:=AB-BA\)._
Finally, we need the optimal regularity estimates for the Lame system below.
**Lemma 4.6**.: _Let \(T>0\), \(\mu>0\), \(2\mu+\lambda>0\), \(s\in\mathbb{R}\), \(1\leq p,r\leq\infty\) and \(1\leq\varrho_{2}\leq\varrho_{1}\leq\infty\). Assume that \(u_{0}\in\dot{B}^{s}_{p,r}\) and \(f\in\widetilde{L}^{\rho_{2}}(0,T;\dot{B}^{s-2+\frac{2}{\varrho_{2}}}_{p,r})\) hold. If \(u\) is a solution of_
\[\begin{cases}\partial_{t}u-\mu\Delta u-(\mu+\lambda)\nabla\mathrm{div}\,u=f, \quad x\in\mathbb{R}^{d},\quad t\in(0,T),\\ u(x,0)=u_{0}(x),\quad\quad\quad\quad\quad\quad\quad\quad x\in\mathbb{R}^{d}, \end{cases}\]
_then \(u\) satisfies_
\[\min\{\mu,2\mu+\lambda\}^{\frac{1}{\varrho_{1}}}\|u\|_{\widetilde{L}^{\varrho _{1}}_{T}(\dot{B}^{s+\frac{2}{\varrho_{1}}}_{p,r})}\lesssim\|u_{0}\|_{\dot{B}^ {s}_{p,r}}+\|f\|_{\widetilde{L}^{\rho_{2}}_{T}(\dot{B}^{s-2+\frac{2}{\varrho_ {2}}}_{p,r})}.\]
**Acknowledgments** The authors thank the referees for their valuable suggestions and comments on the manuscript. The second author is grateful to Professor R. Danchin and Dr. T. Crin-Barat for their helpful discussions. The research of the paper is supported by National Natural Science Foundation of China (No.11931010 and 11871047) and by the key research project of Academy for Multidisciplinary Studies, Capital Normal University, and by the Capacity Building for Sci-Tech Innovation-Fundamental Scientific Research Funds (No.007/20530290068).
|
2309.02710 | Improved Outlier Robust Seeding for k-means | The $k$-means is a popular clustering objective, although it is inherently
non-robust and sensitive to outliers. Its popular seeding or initialization
called $k$-means++ uses $D^{2}$ sampling and comes with a provable $O(\log k)$
approximation guarantee \cite{AV2007}. However, in the presence of adversarial
noise or outliers, $D^{2}$ sampling is more likely to pick centers from distant
outliers instead of inlier clusters, and therefore its approximation guarantees
\textit{w.r.t.} $k$-means solution on inliers, does not hold.
Assuming that the outliers constitute a constant fraction of the given data,
we propose a simple variant in the $D^2$ sampling distribution, which makes it
robust to the outliers. Our algorithm runs in $O(ndk)$ time, outputs $O(k)$
clusters, discards marginally more points than the optimal number of outliers,
and comes with a provable $O(1)$ approximation guarantee.
Our algorithm can also be modified to output exactly $k$ clusters instead of
$O(k)$ clusters, while keeping its running time linear in $n$ and $d$. This is
an improvement over previous results for robust $k$-means based on LP
relaxation and rounding \cite{Charikar}, \cite{KrishnaswamyLS18} and
\textit{robust $k$-means++} \cite{DeshpandeKP20}. Our empirical results show
the advantage of our algorithm over $k$-means++~\cite{AV2007}, uniform random
seeding, greedy sampling for $k$ means~\cite{tkmeanspp}, and robust
$k$-means++~\cite{DeshpandeKP20}, on standard real-world and synthetic data
sets used in previous work. Our proposal is easily amenable to scalable,
faster, parallel implementations of $k$-means++ \cite{Bahmani,BachemL017} and
is of independent interest for coreset constructions in the presence of
outliers \cite{feldman2007ptas,langberg2010universal,feldman2011unified}. | Amit Deshpande, Rameshwar Pratap | 2023-09-06T04:46:01Z | http://arxiv.org/abs/2309.02710v1 | # Improved Outlier Robust Seeding for \(k\)-means
###### Abstract
The \(k\)-means is a popular clustering objective, although it is inherently non-robust and sensitive to outliers. Its popular seeding or initialization called \(k\)-means++ uses \(D^{2}\) sampling and comes with a provable \(O(\log k)\) approximation guarantee Arthur and Vassilvitskii (2007). However, in the presence of adversarial noise or outliers, \(D^{2}\) sampling is more likely to pick centers from distant outliers instead of inlier clusters, and therefore its approximation guarantees _w.r.t._\(k\)-means solution on inliers, does not hold.
Assuming that the outliers constitute a constant fraction of the given data, we propose a simple variant in the \(D^{2}\) sampling distribution, which makes it robust to the outliers. Our algorithm runs in \(O(ndk)\) time, outputs \(O(k)\) clusters, discards marginally more points than the optimal number of outliers, and comes with a provable \(O(1)\) approximation guarantee.
Our algorithm can also be modified to output exactly \(k\) clusters instead of \(O(k)\) clusters, while keeping its running time linear in \(n\) and \(d\). This is an improvement over previous results for robust \(k\)-means based on LP relaxation and rounding Charikar et al. (2001), Krishnaswamy et al. (2018) and _robust \(k\)-means++_Deshpande et al. (2020). Our empirical results show the advantage of our algorithm over \(k\)-means++ Arthur and Vassilvitskii (2007), uniform random seeding, greedy sampling for \(k\)-means Bhaskara et al. (2019), and _robust \(k\)_-means++ Deshpande et al. (2020), on standard real-world and synthetic data sets used in previous work. Our proposal is easily amenable to scalable, faster, parallel implementations of \(k\)-means++ Bachem et al. (2018); Bahmani et al. (2012) and is of independent interest for coreset constructions in the presence of outliers Feldman and Langberg (2011); Feldman et al. (2007); Langberg and Schulman (2010).
## 1 Introduction
The \(k\)-means clustering is a popular tool in data analysis and an important objective in statistics, data mining, unsupervised learning, and computational geometry. The objective of the \(k\)-means clustering is to find \(k\) centers that minimize the sum of squared distances of all the points to their nearest centers, respectively. Given a set \(X\subseteq\mathbb{R}^{d}\) of \(n\) data points and an integer \(k>0\), the \(k\)-means objective is to find a set \(C\subseteq\mathbb{R}^{d}\) of \(k\) centers that minimizes
\[\phi_{X}(C)=\sum_{x\in X}\min_{c\in C}\left\|x-c\right\|^{2}. \tag{1}\]
Finding optimal solution of the \(k\)-means objective stated in Equation 1 in NP-Hard Aloise et al. (2009). This problem is NP-Hard even for a restricted instance when all points are in the plane Mahajan et al. (2012). However, several efficient approximation algorithms and
heuristics have been developed to address this. The most popular algorithm for the \(k\)-means remains Lloyd's \(k\)-means method Lloyd (2006), which is a simple, fast heuristic that starts with any given initial solution and iteratively converges to a locally optimal solution.
Although the \(k\)-means problem is well-studied, the algorithms developed for the problem can perform poorly on real-world data. The reason is that the real-world datasets contain outliers, and \(k\)-means objective functions and algorithms are extremely sensitive to outliers. Outliers can drastically change the quality of the clustering solution, and therefore, it is important to consider them while designing algorithms for the \(k\)-means objective. We state the objective function of \(k\)-means with outliers as follows: given a set \(X\subseteq\mathbb{R}^{d}\) of \(n\) points, an integer \(k>0\), and the number of outliers \(z\), the objective of the \(k\)-means with outliers is to find a set \(C\subseteq\mathbb{R}^{d}\) of \(k\) centers that minimizes
\[\rho_{X}(C)=\min_{\begin{subarray}{c}Y\subseteq X\\ |Y|=n-z\end{subarray}}\;\sum_{x\in Y}\min_{c\in C}\left\|x-c\right\|^{2}. \tag{2}\]
_Problem statement:_ _In this work, we aim to design efficient and practical approximation algorithms for \(k\)-means clustering with outliers problem, which solves the optimization problem stated in Equation 2._
\(D^{2}\)**-sampling for \(k\)-means:** The \(k\)-means++ or \(D^{2}\)-sampling Arthur and Vassilvitskii (2007) suggests an adaptive sampling algorithm for \(k\)-means problem (stated in Equation (1)). In this sampling approach, the first point is sampled uniformly at random from the given points, and the sampled point is designated as a cluster center. Then the second point is sampled with probability proportional to its squared distance from the first center, and designated as the second cluster center. In general, in each step, a new point is sampled with probability proportional to its square distance to the nearest cluster center sampled so far. If we sample \(k\) cluster centers following this distribution, the clustering obtained is \(O(\log k)\)-approximation to the global optimum, in expectation. However, a limitation of \(D^{2}\) sampling distribution is that it is extremely sensitive to outliers. Consider a scenario where \(99\%\) of points are well clustered and the remaining \(1\%\) of points are very far away from these clusters. The \(D^{2}\) sampling on this dataset is likely to pick outliers as cluster centers, and the final clustering results obtained is likely to be very far away from the optimal clustering. In this work we propose a simple tweak to \(D^{2}\)-sampling, making it robust to outliers.
## 2 Our results
We propose a simple initialization for \(k\)-means with outlier objective (stated in Equation 2) using a simple modification of \(D^{2}\) sampling. Our algorithm runs in \(O(ndk)\) time and gives \(O(\log k)\) approximation guarantee using \(k\) clusters and \(O(1)\) bi-criteria approximation guarantee using \(O(k)\) clusters. Both these algorithms can be made to output exactly the same number of outliers as the optimal solution, unlike previous algorithms that need to discard extra points than the optimal number of outliers. The pseudocode of our algorithm is presented in Algorithm 1, and the precise statement of our approximation guarantee appears in Theorems 1, 3. In addition, Table 1 shows a detailed comparison of our results with previous work.
We perform extensive experiments to compare our proposal with several baselines and summarise it in Section 5. We use synthetic and real-world datasets for our experimentation.
In synthetic datasets, we synthetically generated inlier clusters and outliers. In real-world datasets, we consider two scenarios: a) where we consider small clusters as outliers and b) where we randomly sample a small fraction of points, and add large Gaussian noise to their features and consider them as outliers. We used the following four baselines for empirical comparison - random seeding, \(k\)-means++ Arthur and Vassilvitskii (2007), Deshpande et al. (2020), and Bhaskara et al. (2019). Random seeding and \(k\)-means++ Arthur and Vassilvitskii (2007) do not solve \(k\)-means with outlier problems. We use them as heuristics, where \(k\) points are sampled using their respective sampling distribution, the farthest few points are marked as outliers, and the remaining points as inliers on which we compute our evaluation metrics (precision/recall/clustering cost). Our empirical findings are as follows: We outperformed on most instances in both the evaluation metric - precision/recall, and the clustering cost (Equation (2)). Note that random and \(k\)-means++ Arthur and Vassilvitskii (2007) are among the fastest. However, their performance on precision/recall and clustering cost metrics are significantly worse on almost all the datasets. The running time of our proposal is observed to be faster than Deshpande et al. (2020), whereas it remains comparable _w.r.t._ Bhaskara et al. (2019). To summarise, our proposal gives both a) theoretical guarantee on the clustering cost _w.r.t._ the optimal solution, and b) strong empirical performance by suggesting a faster and more accurate algorithm for \(k\)-means with outliers problem.
## 3 Related work
Outlier detection is a well-studied problem in data mining and statistics Chandola et al. (2009). Intuitively, outliers are data points far from their nearest neighbours. One classical approach to detecting outliers is via the \(k\)-Nearest Neighbour algorithm. In statistics, Mahalanobis Distance and Minimum Covariance Determinant (MCD) Rousseeuw and van Driessen (1999) are a couple of notable approaches for outliers detection. Our problem statement is completely different from the one mentioned here. These methods focus on detecting outliers related to a distribution/density function, whereas we aim to produce clustering that is near-optimal in terms of an objective that is defined solely on inliers (stated in Equation (2)). Another practical heuristic is to first identify and discard outliers by running these outliers detection algorithms and then run \(k\)-means clustering on the remaining points. However, this heuristic doesn't provide any theoretical guarantee on its clustering cost _w.r.t_ optimal clustering over inliers. In what follows, we state some notable results for \(k\)-means with outliers problem, and we attempt to compare and contrast our results with them as follows.
### Impractical algorithms with theoretical guarantee
Charikar _et. al._Charikar et al. (2001) suggests a clever modification of \(D^{2}\)-sampling Arthur and Vassilvitskii (2007) and gives a 3-approximation for the robust \(k\)-center problem with outliers. For \(k\)-median with outliers, they give a \(4(1+1/\epsilon)\)-approximation in polynomial time that discards at most \((1+\epsilon)\) times more outliers than the optimal solution. Their approximation depends on the number of extra points deleted as outliers, whereas the approximation of our approach is independent of that. Chen Chen (2008) suggest a polynomial time constant factor approximation to \(k\)-means and \(k\)-median with outliers that do not discard extra outliers. However, the algorithms are not designed to be practical.
Krishnaswamy et al. (2018) give (roughly) 53-approximation algorithm for \(k\)-means with \(z\) outliers, and output exactly \(k\) centers and doesn't discard any extra outliers. They use LP relaxation and an iterative rounding approach. Friggstad et al. (2018) give \((1+\epsilon)\)-approximation to \(k\)-means with \(z\) outliers in a fixed dimensional euclidean space. They use local search and output \((1+\epsilon)k\) centers and discard exactly \(z\) outliers. However, both of these algorithms are not designed to be practical. There are few sampling-based robust coreset construction techniques Feldman and Langberg (2011); Feldman et al. (2007) which construct small-size coresets and give \((1+\epsilon)\)-approximation to various robust versions of clustering problems. These algorithms can be considered extensions of sampling-based techniques focusing on getting close to optimal solutions in polynomial time.
Chen Chen (2008) gives an efficient constant factor approximation to \(k\)-means and \(k\)-median with outliers, and their result doesn't discard any extra outliers. However, their algorithm is not designed to be practical compared to \(k\)-means++ and its variants. The algorithm of Kumar et al. Kumar et al. (2005) suggest \((1+\epsilon)\)-approximation algorithm for \(k\)-means with outliers for the datasets when clusters are balanced and their algorithm discards a slightly larger fraction of the outliers than the optimal solution. However, their algorithm is impractical and has an exponential running time in \(k\).
### Practical heuristic without theoretical guarantee
Scalable \(k\)-means++ Bahmani et al. (2012) has empirically observed that their sampling distribution is robust to outliers. Chawla and Gionis (2013) suggests a simple modification of Lloyd's \(k\)-means method to make it robust to outliers. However, these methods do not provide any theoretical justification for their clustering quality.
\begin{table}
\begin{tabular}{|c|l|c|c|l|l|} \hline Result & Approximation guarantee & No. of clusters in the output & No. of outliers discarded & Practical algorithm & Running time \\ \hline This & \(64+\epsilon\) & \((1+c)k\) & \(z\) & Yes & \(O(ndk)\) \\ paper & \(O(\log k)\) & \(k\) & \(z\) & Yes & \(O(ndk)\) \\ \hline Bhaskara et al. (2019) & \((64+\epsilon)\) & \((1+c)k\) & \(\frac{(1+c)(1+\epsilon)z}{(1-\mu)z}\) & Yes & \(\tilde{O}(ndk)\) \\ Bhaskara et al. (2019) & \(O(\log k)\) & \(k\) & \(O(z\log k)\) & Yes & \(\tilde{O}(ndk)\) \\ \hline Deshpande et al. (2020) & 5 & \(kn/z\) & \(O(z)\) & Yes & \(O(ndk)\) \\ \hline Krishnaswamy et al. (2018) & \((53.002+\epsilon)\) & \(k\) & \(z\) & No & \(n^{O(1/\epsilon^{5})}\) \\ Friggstad et al. (2018) & \((1+\epsilon)\) & \(k(1+\epsilon)\) & \(z\) & No & \(kn^{O(d)}\) \\ Chen (2008) & \(O(1)\) & \(k\) & \(z\) & No & \(\text{poly}(n,k)\) \\ Charikar et al. (2001) & \(4(1+1/\epsilon)\) & \(k\) & \((1+\epsilon)z\) & No & \(n^{O(1)}\) \\ \hline \end{tabular}
\end{table}
Table 1: Comparison with related work on \(k\)-means clustering with outliers, where \(n\) is the no. of data points, \(d\) is the dimension, \(k\) is the given no. of clusters, and \(z\) is the given no. of outliers. In Bhaskara et al. (2019), \(c\) is a given parameter, and \(\mu>0\) is an arbitrary constant. Their algorithm crucially requires an initial guess of the optimal clustering cost. Charikar et al. (2001); Chen (2008) are about a closely related problem called \(k\)-median clustering with outliers.
### Practical algorithms with theoretical guarantee
Deshpande _et. al._Deshpande et al. (2020) suggest a bi-criteria \(O(1)\)-approximation algorithm for \(k\)-means with \(z\) outliers problem. They propose a sampling distribution consisting of a uniform and \(D^{2}\) sampling mixture. They show that if \(O(kn/z)\) points are picked using this distribution, the sampled points contain a set of \(k\) points that gives \(O(1)\)-factor approximation while discarding slightly more points than the optimal number of outliers. The advantage of our proposal over Deshpande et al. (2020) is that we do not discard any extra points as outliers.
Bhaskara _et. al._Bhaskara et al. (2019) suggest a bi-criteria approximation algorithm for the problem. They show that thresholding on the \(D^{2}\) sampling distribution makes it robust to outliers. However, their algorithms crucially require an initial guess of the optimal clustering cost over the inlier - the very quantity we want to estimate. With this assumption, they showed the following two results. In the first, their algorithm output \((1+c)k\) centers and obtain \((64+\epsilon)\) approximations by discarding \((1+c)(1+\epsilon)z/c(1-\mu)\) outliers, where \(c\) is some parameter, and \(\mu\) is a constant. In the second result, their algorithm outputs exactly \(k\) centers and obtains \(O(\log k)\) approximations by discarding \(O(z\log k)\) outliers. In both cases, the running time of their algorithm is \(\tilde{O}(ndk)\). Our algorithm compares with their results as follows: Bhaskara et al. (2019) requires a guess of the optimal clustering cost over inliers (\(\rho_{X}(C_{\text{OPT}})\)) to compute their sampling distributions. Note that this is the quantity we want to estimate in \(k\)-means with outlier problems. In contrast, we do not require such a guess of the optimal clustering cost to compute our sampling distributions. Moreover, their algorithm discards more than \(z\) outliers, whereas ours discards only \(z\) points as outliers. We summarise our comparison with the related work in Table 1.
## 4 Seeding algorithm for \(k\)-means with outliers
In this section we present our algorithm - Seeding algorithm for \(k\)-means with outliers and its theoretical guarantee. We first state (recall) some notation for convenience.
We denote \(X\subseteq\mathbb{R}^{d}\) as the set of \(n\) input points, integers \(k,z>0\) denote the number of cluster centers, and the number of outliers, respectively. We denote by
\[\phi_{A}(C)=\sum_{x\in A}\min_{c\in C}\left\|x-c\right\|^{2}, \tag{3}\]
the contribution of points in a subset \(A\subseteq X\). Let \(C_{\text{OPT}}\) be the set of optimal \(k\) centers for the \(k\)-means with outlier problem and let \(Y_{\text{OPT}}\) be the optimal subset of inliers, then
\[\rho_{X}(C_{\text{OPT}})=\phi_{Y_{\text{OPT}}}\left(C_{\text{OPT}}\right). \tag{4}\]
In the optimal solution, each point of \(Y_{\text{OPT}}\) is assigned to its nearest center in \(C_{\text{OPT}}\). This induces a natural partition of the inliers \(Y_{\text{OPT}}\) as \(A_{1}\cup A_{2}\cup\dots\cup A_{k}\) into disjoint subsets, with means \(\mu_{1},\mu_{2},\dots,\mu_{k}\), respectively, while \(X\setminus Y_{\text{OPT}}\) are the outliers. Therefore,
\[\rho_{X}(C_{\text{OPT}})=\phi_{Y_{\text{OPT}}}\left(C_{\text{OPT}}\right)= \sum_{j=1}^{k}\phi_{A_{j}}\left(\left\{\mu_{j}\right\}\right). \tag{5}\]
We present our algorithm - Seeding algorithm for \(k\)-means with outliers in Algorithm 1. Our sampling (stated in Line 5 of Algorithm 1) is a simple modification of \(k\)-means++ sampling distribution which is computed by taking minimum of \(k\)-means++ sampling distribution and the term \(\eta\cdot\rho_{X}(S_{i-1})/z\), where \(\eta\) is a parameter, \(z\) is the number of outliers. Note that the term \(\rho_{X}(S_{i-1})\) is the \(k\)-clustering cost (see Equations (4), (5)) by considering the \(S_{i-1}\) sampled points as cluster centers, and discarding the farthest \(z\) as outliers.
```
1:Input: a set \(X\subseteq\mathbb{R}^{d}\) of \(n\) points, number of outliers \(z\), a parameter \(\eta>0\), and number of iterations \(t\).
2:Output: a set \(S\subseteq X\) of size \(t\).
3: Initialize \(S_{0}\leftarrow\emptyset\).
4:for\(i=1\)to\(t\)do
5: Pick a point \(x\in X\) from the following distribution: \(\mathsf{Pr}\left(\text{picking }x\right)\propto\min\left\{\phi_{\{x\}}(S_{i-1}), \frac{\eta\cdot\rho_{X}(S_{i-1})}{z}\right\}\)
6:\(S_{i}\gets S_{i-1}\cup\{x\}\)
7:\(i\gets i+1\)
8:endfor
9:\(S\gets S_{t}\)
10:return\(S\)
```
**Algorithm 1** Seeding algorithm for \(k\)-means with outliers.
We note that a large part of our analysis is similar to Bhaskara et al. Bhaskara et al. (2019). However, the key difference is that they need an estimate of \(\rho_{X}(C_{\text{OPT}})\) to calculate their sampling probabilities - the optimal clustering cost over inliers, the very quantity which we want to estimate. On the other hand, our sampling scheme does not require any estimate of \(\rho_{X}(C_{\text{OPT}})\) to calculate the sampling probabilities.
Another difference from Bhaskara et al. Bhaskara et al. (2019) is that we can set our parameter \(\eta=1\), which allows us to discard exactly \(z\) points as outliers, the same as what the optimal solution discards. Whereas their algorithm cannot discard exactly \(z\) points as outliers because they need to estimate \(\rho_{X}(C_{\text{OPT}})\) which is required to compute the probability distribution for sampling the points.
**Overview of analysis techniques:** Our algorithm is a simple modification of Arthur and Vassilvitskii (2007), where we perform a thresholding of the \(k\)-means++ sampling distribution. This thresholding is controlled by the parameter \(\eta\) (Line 5 of Algorithm 1) that ensures that a small number of outliers points are sampled. In our analysis, (i) we monitor the number of sampled points, and the inlier clusters determined by them (by marking the farthest \(z\) points as outliers), (ii) the number of so-called wasted iteration due to sampling outliers.We measure these quantities using different potential functions. We show two theoretical guarantees on our algorithm depending on the number of points sampled in Algorithm 1. We state it in the following theorem.
**Theorem 1**: _For any constant parameter \(\eta\geq 1\), Algorithm 1, with probability at least \(\delta\), satisfies the following guarantee:_
* \(\mathsf{E}\left[\rho_{X}(S_{k})\right]=O(\log k)\rho_{X}(C_{\text{OPT}}),\qquad\)_when the number of iterations_ \(t=k\)_._
* \(\rho_{X}(S_{t})\leq\frac{(\eta+64)(1+c)\rho_{X}(C_{\text{OPT}})}{(1-\delta)c}\)_, when the number of iterations_ \(t=(1+c)k\)_._
We start with proving the following lemma.
**Lemma 2**: _Suppose after iteration \(i\) of Algorithm 1, we satisfy the following two conditions:_
\[\rho_{X}(S_{i})>\alpha\rho_{X}(C_{\text{OPT}})\quad\text{and}\quad\sum_{x\in X }\min\left\{\phi_{\{x\}}(S_{i}),\frac{\eta\rho_{X}(S_{i})}{z}\right\}\leq\gamma \rho_{X}(C_{\text{OPT}}).\]
_Then there exists \(Y\subseteq X\) such that \(\phi_{Y}(S_{i})\leq\gamma\rho_{X}(C_{\text{OPT}})\) and \(|X\setminus Y|\leq\gamma z/\alpha\eta\)._
**Proof** Let \(Y=\{x\;:\;\phi_{\{x\}}(S_{i})\leq\eta\rho_{X}(S_{i})/z\}\). Then we get
\[\sum_{x\in X}\min\left\{\phi_{\{x\}}(S_{i}),\frac{\eta\rho_{X}(S_{i})}{z} \right\}=\phi_{Y}(S_{i})+|X\setminus Y|\;\frac{\eta\rho_{X}(S_{i})}{z}\leq \gamma\rho_{X}(C_{\text{OPT}}).\]
Thus, \(\phi_{Y}(S_{i})\leq\gamma\rho_{X}(C_{\text{OPT}})\) and \(|X\setminus Y|\leq\gamma z/\alpha\eta\), using \(\rho_{X}(S_{i})>\alpha\rho_{X}(C_{\text{OPT}})\). \(\blacksquare\)
Suppose only the second condition is satisfied after iteration \(i\), then we can choose any \(\alpha\) and still get that either \(\rho_{X}(S_{i})\leq\alpha\rho_{X}(C_{\text{OPT}})\) or otherwise, using both the conditions in Lemma 2, we have \(\phi_{Y}(S_{i})\leq\gamma\rho_{X}(C_{\text{OPT}})\) for some \(Y\subseteq X\) with \(|X\setminus Y|\leq\gamma z/\alpha\eta\). This implies \(\max\{\alpha,\gamma\}\) approximation guarantee while discarding at most \(\gamma z/\alpha\eta\) points as outliers instead of \(z\). In particular, we can use \(\alpha=\gamma\) and \(\eta=1\) to get \(\alpha\)-approximation while discarding at most \(z\) points as outliers.
We now focus on proving that the second condition is satisfied in expectation. First, we show that after \(t=k\) iterations, we get \(\gamma=O(\log k)\).
**Theorem 3**: _After \(k\) iterations of Algorithm 1, we get_
\[\mathsf{E}\left[\sum_{x\in X}\min\left\{\phi_{\{x\}}(S_{k}),\frac{\eta\rho_{ X}(S_{k})}{z}\right\}\right]=O(\log k)\cdot\rho_{X}(C_{\text{OPT}}).\]
Consider any optimal inlier cluster \(A\). Below we show that if \(\phi_{A}(S_{i})\geq 64\)\(\phi_{A}(\{\mu\})\) then \(\phi_{\{\mu\}}(S_{i})\geq\frac{31}{|A|}\phi_{A}(\{\mu\})\) and there exists a large subset \(B\subseteq A\) such that \(\phi_{\{x\}}(S_{i})\) values for all \(x\in B\) are within a small constant factor of each other. In other words, \(D^{2}\) sampling or sampling w.r.t. \(\phi_{\{x\}}(S_{i})\) has an approximately uniform probability distribution over \(B\).
**Lemma 4**: _Let \(A\subseteq X\) be any subset of points with mean \(\mu\). Suppose \(\phi_{A}(S_{i})\geq 64\)\(\phi_{A}(\{\mu\})\). Then \(S_{i}\) satisfies the following two properties:_
1. \(64\)__\(\phi_{A}(\{\mu\})\leq\phi_{A}(S_{i})\leq\frac{64}{31}\left|A\right|\phi_{\{\mu\}} (S_{i})\)_, therefore,_ \(\phi_{\{\mu\}}(S_{i})\geq\frac{31}{|A|}\phi_{A}(\{\mu\})\)_._
2. _Let_ \(B=\left\{x\in A\;:\;\frac{\phi_{\{\mu\}}(S_{i})}{3}\leq\phi_{\{x\}}(S_{i})\leq \frac{7\;\phi_{\{\mu\}}(S_{i})}{3}\right\}\)_. Then,_ \(B\) _is a reasonably large subset of_ \(A\)_, i.e.,_ \(|B|\geq\frac{25}{31}\left|A\right|\)_._
**Proof** For any \(x\in A\) and any \(s\in S_{i}\), the triangle inequality gives
\[\frac{1}{2}\ \left\|\mu-s\right\|^{2}-\left\|x-\mu\right\|^{2}\leq\left\|x-s \right\|^{2}\leq 2\left\|x-\mu\right\|^{2}+2\left\|\mu-s\right\|^{2}.\]
Thus, \(\frac{1}{2}\ \phi_{\{\mu\}}(S_{i})-\left\|x-\mu\right\|^{2}\leq\phi_{\{x\}}(S_{i}) \leq 2\ \phi_{\{\mu\}}(S_{i})+2\left\|x-\mu\right\|^{2}\). Summing the right-hand inequality over \(x\in A\), we get
\[\phi_{A}(S_{i})\leq 2\left|A\right|\phi_{\{\mu\}}(S_{i})+2\ \phi_{A}(\{\mu\}) \leq 2\left|A\right|\phi_{\{\mu\}}(S_{i})+\frac{1}{32}\ \phi_{A}(S_{i}),\]
which gives the first property \(\phi_{A}(S_{i})\leq\frac{64}{31}\left|A\right|\phi_{\{\mu\}}(S_{i})\). Now let \(B^{\prime}=\{x\in A\ :\ \left\|x-\mu\right\|^{2}\leq\frac{1}{6}\phi_{\{\mu\}}(S_{i})\}\). Then by using triangle inequality for squared norms, we can check that \(B^{\prime}\subseteq B\). Since \(\sum_{x\in A}\left\|x-\mu\right\|^{2}=\phi_{A}(\{\mu\})\), Markov's inequality implies that \(\left|A\setminus B^{\prime}\right|\leq 6\ \phi_{A}(\{\mu\})/\phi_{\{\mu\}}(S_{i})\). Now using the first property \(\phi_{A}(S_{i})\leq\frac{64}{31}\left|A\right|\phi_{\{\mu\}}(S_{i})\) and the assumption \(\phi_{A}(S_{i})\geq 64\ \phi_{A}(\{\mu\})\), we get \(\left|A\setminus B^{\prime}\right|\leq 6\left|A\right|/31\). Since \(B^{\prime}\subseteq B\), we get the second property \(\left|B\right|\geq\frac{25}{31}\left|A\right|\). \(\blacksquare\)
The lemma below shows that in each iteration of Algorithm 1, if we pick a point from an optimal inlier cluster \(A\), then we get a \(64\)-approximation guarantee for it, in expectation.
**Lemma 5**: _For any \(A\subseteq X\) and its mean \(\mu\), the point \(x\) picked by Algorithm 1 in a single iteration satisfies \(\mathsf{E}\left[\phi_{A}(S_{i}\cup\{x\})\mid x\in A\right]\leq 64\ \phi_{A}(\{\mu\})\)._
**Proof** Let \(B=\left\{x\in A\ :\ \frac{\phi_{\{\mu\}}(S_{i})}{3}\leq\phi_{\{x\}}(S_{i}) \leq\frac{7\ \phi_{\{\mu\}}(S_{i})}{3}\right\}\), as defined in Lemma 4. Suppose \(\frac{7}{3}\phi_{\{\mu\}}(S_{i})\leq\eta\rho_{X}(S_{i})/z\). Then for all \(x\in B\), we have \(\min\{\phi_{\{x\}}(S_{i}),\eta\rho_{X}(S_{i})/z\}\geq\frac{1}{3}\phi_{\{\mu\}} (S_{i})\). Hence,
\[\sum_{x\in A}\min\{\phi_{\{x\}}(S_{i}),\eta\rho_{X}(S_{i})/z\} \geq\sum_{x\in B}\min\{\phi_{\{x\}}(S_{i}),\eta\rho_{X}(S_{i})/z\}\] \[\geq\left|B\right|\frac{1}{3}\phi_{\{\mu\}}(S_{i})\] \[\geq\frac{25}{31}\left|A\right|\cdot\frac{1}{3}\cdot\frac{31}{64} \frac{\phi_{A}(S_{i})}{\left|A\right|}\qquad\text{using Lemma \ref{lem:
sampling from Arthur and Vassilvitskii Arthur and Vassilvitskii (2007) stated below.
The following key lemma was used by Arthur and Vassilvitskii Arthur and Vassilvitskii (2007) in their analysis of \(k\)-means++ algorithm that uses \(D^{2}\) sampling.
**Lemma 6**: _For any \(A\subseteq X\) and its mean \(\mu\), the point \(x\) picked by \(D^{2}\) sampling satisfies \(\mathsf{E}\left[\phi_{A}(S_{i}\cup\{x\})\mid x\in A\right]\leq 8\)\(\phi_{A}(\{\mu\})\)._
Proof of Theorem 3 is then similar to that in Theorem 3.1 of Bhaskara et al. Bhaskara et al. (2019). We define \(U_{i}\) as the number of points in optimal inlier clusters untouched by Algorithm 1 until step \(i\). Further, let \(H_{i}\) denote the union of points in the covered (optimal) clusters until step \(i\). Let \(w_{i}\) be the number of wasted iterations, i.e., iterations that pick either outliers \(X\setminus Y_{\text{OPT}}\) or repeat points from already touched inlier clusters \(A_{j}\)'s from the optimal solution \(Y_{\text{OPT}}=A_{1}\cup A_{2}\cup\ldots\cup A_{k}\). We denote \(n_{i}\) as the number of uncovered optimal clusters at iteration \(i\).
For brevity we denote
\[\xi(U_{i},S_{i}):=\sum_{x\in U_{i}}\min\left\{\phi_{\{x\}}(S_{i}),\frac{\eta \cdot\rho_{X}(S_{i})}{z}\right\}.\]
We define a similar potential function as in Bhaskara et al. Bhaskara et al. (2019).
\[\Psi_{i}=\frac{w_{i}\cdot\xi(U_{i},S_{i})}{n_{i}}.\]
For any \(i>0\), we have the following lemma:
**Lemma 7** (Adapted from Lemma \(9\) of Bhaskara et al. (2019)): \[\mathsf{E}\left[\xi(H_{i},S_{i})\right]\leq 64.\rho_{X}(C_{\text{OPT}}).\]
A proof of the above lemma is similar to that of Lemma 9 of Bhaskara et al. (2019), and follows from Lemma 5 and 4 along with the inductive argument. We defer it to the full version of the paper.
**Lemma 8** (Adapted from Lemma \(8\) of Bhaskara et al. (2019)): _Let \(S_{i}\) be the set of sampled points in the \(i\)-th iteration, then we have_
\[\mathsf{E}\left[\Psi_{i+1}-\Psi_{i}|S_{i}\right]\leq\frac{\alpha\cdot\rho_{X} (C_{\text{OPT}})+\xi(H_{i},S_{i})}{k-i}.\]
In Lemma 8, \(\alpha\) is a constant mentioned as in Lemma 2. A proof of Lemma 8 is analogous to the proof of Lemma 8 of Bhaskara et al. (2019). We defer it to the full version of the paper.
We now conclude a proof of Theorem 3. Its proof completes via combining Lemma 7 and 8, and summing over \(0\leq i\leq k-1\).
Further, similar to Bhaskara et al. Bhaskara et al. (2019), we can show the following generalization for \(t=(1+c)k\) iterations.
**Theorem 9**: _For any \(\delta>0\) and parameters \(c,\eta>0\), after \(t=(1+c)k\) iterations of Algorithm 1 we get_
\[\sum_{x\in X}\min\left\{\phi_{\{x\}}(S_{t}),\frac{\eta\rho_{X}(S_{t})}{z}\right\} \leq\frac{(\eta+64)(1+c)\rho_{X}(C_{\text{OPT}})}{(1-\delta)c},\]
_with probability at least \(\delta\)._
## 5 Experiments
Hardware description.We performed our experiments on a machine having the following configuration: CPU: Intel(R) Core(TM) i5-3320M CPU @ 2.70GHz x 4; Memory: 8 GB.
Baseline algorithms:We study the performance of our algorithm (Algorithm 1) to find \(k\) initial cluster centers with the following baselines: (a) \(\text{TKM}++\) (Greedy Sampling for Approximate Clustering in the Presence of Outliers) Bhaskara et al. (2019), (b) \(\text{RKM}++\) (Robust \(k\)-means++ Deshpande et al. (2020)), (c) \(\text{KM}++\) (\(k\)-means++ Arthur and Vassilvitskii (2007)), and (d) random seeding Lloyd (2006). Robust \(k\)-means++ (\(\text{RKM}++\)) Deshpande et al. (2020) uses \((\alpha,1-\alpha)\) mixture of uniform and \(D^{2}\) sampling distribution. For a parameter \(\delta\in(0,1)\), their algorithm samples \(O(k/\delta)\) points, then \(k\) points are picked from this sampled set using _weighted \(k\)-means++_ sampling Bahmani et al. (2012); Deshpande et al. (2020). We use \(\alpha=1/2\) and \(\delta=0.1\) for all the experiments. \(\text{TKM}++\) Bhaskara et al. (2019) requires one parameter - an initial guess of optimal clustering cost - to derive the probability distribution on each data point. In their paper they did not mention any principal way of guessing the clustering cost. For our empirical comparison we used, the cost \(k\)-means++ results as an initial guess. Similar to our algorithm, they also require an error parameter to derive the probability distribution. In order to have a fair comparison, we used the same value of the error parameter \(\beta\) in both the algorithms.
Evaluation metric:In all the baselines, once we sample the \(k\) cluster centers, we mark the farthest \(z\) points as outliers. We then note the clustering cost over inliers considering the \(k\) sampled points as the cluster centers. We note the minimum, maximum, and average values of the clustering cost are over 10 repetitions. We also use _precision_ and _recall_ as our evaluation metric. If \(z^{*}\) is the set of true outliers and \(z\) is the set of outliers reported by the algorithm, then _precision_:=\(|z^{*}\cap z|/|z|\), and _recall_:=\(|z^{*}\cap z|/|z^{*}|\). We also note the seeding time of the baseline algorithms to have a comparison of their efficiency.
### Results on Synthetic Data Sets
Dataset.We generate synthetic dataset in the similar way as used in \(k\)-means++ Arthur and Vassilvitskii (2007). We discuss it as follows. We pick \(k+z\) uniformly random points from a large \(d\)-dimensional hypercube of side length \(s=100\). We use \(k\) points from them as means and pick \(n/k\) points around each of them from a random Gaussian of unit variance. This gives a data set of \(n+z\) points with \(n\) points clustered into \(k\) clusters and the remaining \(z\) as outliers.
Empirical Evaluation.We perform experiments on synthetic datasets with values \(n=1000,d=2,k=20\) and the number of outliers \(25,50,100\), and we summarise our results in Table 2.
Insight.In almost every scenario our algorithm outperforms with respect to random, \(k\)-means++ and \(\text{TKM}++\) in terms of both precision/recall, and clustering cost metric. Our performance was comparable/better on most of the instances with respect to \(\text{RKM}++\). Our algorithm is much faster than \(\text{RKM}++\), and comparable with respect to \(\text{TKM}++\), however it was slightly off with respect to random, \(k\)-means++.
### Results on real world data sets
#### 5.2.1 Small clusters as outliers:
In this setting, we consider a few very small clusters as outliers and try to locate them using our baselines.
Results on Shuttle dataset.Shuttle training data set from UCI Machine Learning Repository Lichman (2013) contains \(43,500\) points. It has \(7\) classes in total. The two smallest classes contain only \(17\) points and we would like to detect these as outliers. We run our baselines on the Shuttle dataset with \(k\in\{5,10,15\}\). We summarise our empirical findings in Table 3.
Results on KDDCup Full dataset:KDDFull Lichman (2013) This dataset is from 1999 kddcup competition and contains instances describing connections of sequences of tcp pack
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline Method & \(z\) & \multicolumn{3}{c}{Precision} & \multicolumn{3}{c}{Recall} & \multicolumn{3}{c}{Cost} & Time (s) \\ & & Max. & Avg. & Med. & Max. & Avg. & Med. & Min. & Avg. & Med. & \\ \hline RAND & 25 & 0.52 & 0.31 & 0.30 & 0.52 & 0.31 & 0.30 & 0.57 & 1.20 & 1.05 & 0.06 \\ \(\text{KM}++\) & 25 & 0.72 & 0.53 & 0.50 & 0.72 & 0.53 & 0.5 & **0.22** & 0.43 & 0.41 & 0.20 \\ \(\text{TKM}++\) & 25 & 0.88 & 0.67 & 0.72 & 0.88 & 0.67 & 0.72 & 0.25 & 0.58 & 0.51 & 0.92 \\ \(\text{RKM}++\) & 25 & 0.88 & 0.82 & **0.84** & 0.88 & 0.82 & **0.84** & 0.25 & 0.29 & **0.30** & 6.5 \\ \hline This work & 25 & **0.92** & **0.88** & **0.84** & **0.92** & **0.88** & **0.84** & **0.22** & **0.26** & 0.32 & 0.90 \\ \hline RAND & 50 & 0.34 & 0.18 & 0.21 & 0.34 & 0.18 & 0.21 & 1.42 & 2.65 & 2.54 & 0.10 \\ \(\text{KM}++\) & 50 & 0.80 & 0.42 & 0.38 & 0.80 & 0.42 & 0.38 & 0.31 & 0.89 & 0.85 & 0.23 \\ \(\text{TKM}++\) & 50 & 0.82 & 0.76 & 0.48 & 0.82 & 0.76 & 0.48 & 0.34 & 0.37 & 0.69 & 0.49 \\ \(\text{RKM}++\) & 50 & 0.88 & 0.82 & 0.83 & 0.88 & 0.82 & 0.83 & **0.18** & **0.19** & **0.18** & 6.5 \\ \hline This work & 50 & **0.92** & **0.88** & **0.84** & **0.92** & **0.88** & **0.84** & 0.22 & 0.26 & 0.32 & 0.60 \\ \hline RAND & 100 & 0.36 & 0.20 & 0.20 & 0.44 & 0.24 & 0.24 & 0.95 & 1.71 & 1.85 & 0.06 \\ \(\text{KM}++\) & 100 & 0.68 & 0.49 & 0.51 & 0.82 & 0.59 & 0.62 & 0.25 & 0.58 & 0.55 & 0.19 \\ \(\text{TKM}++\) & 100 & 0.75 & 0.49 & 0.54 & 0.90 & 0.59 & 0.65 & 0.23 & 0.71 & 0.60 & 0.71 \\ \(\text{RKM}++\) & 100 & **0.81** & **0.78** & **0.77** & **0.98** & **0.94** & **0.93** & 0.32 & 0.42 & 0.41 & 6.5 \\ \hline This work & 100 & 0.77 & 0.76 & 0.74 & 0.93 & 0.91 & 0.89 & **0.18** & **0.22** & **0.21** & 0.93 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Result on Synthetic dataset with \(k=20\) and \(z=25,50,100\) outliers. We mark the farthest \(25,50\) and \(100\) points as outliers, respectively. For \(\text{RKM}++\) we use \(\delta=0.1\) and \(\alpha=1/2\). We used the parameter \(\beta=0.5\) for both \(\text{TKM}++\) and our method. All costs are multiplicative of \(10^{4}\).
ets, and have about 4.9M data points. We only consider the 34 numerical features of this dataset. We also normalize each feature so that it has zero mean and unit standard deviation. There are 23 classes in this dataset, 98.3% points of the dataset belong to 3 classes (normal 19.6%, neptune 21.6%, and smurf 56.8%). We consider the remaining small clusters as outliers, and the number of outliers is 45747. We run our baselines on the KDDFull dataset with \(k=3,5\) and considering above mentioned 45747 points as outliers. We summarise our empirical findings in Table 4.
Insight.For both KDDCup and Shuttle datasets, we noticed that our clustering results in terms of both - cost and precision/recall metric are significantly better than that of \(\text{TKM}++\), \(k\)-means++ and random seeding, whereas it is comparable _w.r.t._\(\text{RKM}++\). The running time of our algorithm is faster than \(\text{RKM}++\) and RAND whereas it is comparable with respect to the other remaining baselines.
#### 5.2.2 Randomly sampling a small fraction of points and adding large Gaussian noise to them
In this setting, we randomly sample a small fraction of points from the datasets, and add large Gaussian noise to their dimensions. We consider these points as outliers and and try to locate them using our baselines.
Results on Skin Segmentation Lichman (2013) and Shuttle dataset:The Skin Segmentation data is constructed over _B, G, R_ color space. _Skin_ and _Non-skin_ data points
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline Method & \(k\) & \multicolumn{3}{c}{Precision} & \multicolumn{3}{c}{Recall} & \multicolumn{3}{c}{Cost} & Time (s) \\ & & Max. & Avg. & Med. & Max. & Avg. & Med. & Min. & Avg. & Med. & \\ \hline RAND & 5 & 0.19 & **0.19** & 0.19 & 0.23 & **0.23** & 0.23 & 3.56 & 3.66 & 3.63 & 1.01 \\ \(\text{KM}++\) & 5 & **0.28** & 0.17 & **0.23** & **0.35** & 0.21 & **0.29** & 1.77 & 2.22 & 2.27 & 2.26 \\ \(\text{TKM}++\) & 5 & 0.23 & 0.17 & **0.23** & 0.29 & 0.21 & **0.29** & 1.70 & 2.26 & 2.23 & 3.42 \\ \(\text{RKM}++\) & 5 & 0.26 & 0.18 & **0.23** & 0.33 & **0.23** & **0.29** & 1.85 & **2.07** & **2.01** & 5.3 \\ \hline This work & 5 & 0.23 & **0.19** & **0.23** & 0.29 & **0.23** & **0.29** & **1.53** & 2.08 & **2.01** & 3.54 \\ \hline RAND & 10 & 0.14 & 0.14 & 0.14 & 0.29 & 0.29 & 0.29 & 1.55 & 1.78 & 1.80 & 1.54 \\ \(\text{KM}++\) & 10 & 0.26 & **0.16** & **0.17** & 0.52 & 0.31 & **0.35** & 0.72 & 0.95 & 0.92 & 3.41 \\ \(\text{TKM}++\) & 10 & 0.26 & 0.14 & 0.11 & 0.52 & 0.29 & 0.23 & 0.73 & 0.96 & 0.90 & 7.35 \\ \(\text{RKM}++\) & 10 & 0.34 & 0.15 & 0.15 & 0.68 & 0.30 & 0.30 & 0.69 & 0.85 & 0.85 & 11.5 \\ \hline This work & 10 & **0.35** & **0.16** & **0.17** & **0.70** & **0.32** & **0.35** & **0.56** & **0.65** & **0.64** & 8.7 \\ \hline RAND & 15 & 0.13 & 0.13 & 0.13 & 0.41 & 0.41 & 0.41 & 0.80 & 0.85 & 0.84 & 2.15 \\ \(\text{KM}++\) & 15 & 0.25 & 0.17 & 0.19 & 0.76 & 0.52 & 0.58 & 0.39 & 0.49 & 0.50 & 5.25 \\ \(\text{TKM}++\) & 15 & 0.25 & 0.17 & 0.19 & 0.76 & 0.52 & 0.58 & 0.36 & 0.48 & 0.48 & 11.20 \\ \(\text{RKM}++\) & 15 & 0.29 & **0.21** & **0.22** & 0.88 & 0.64 & **0.67** & 0.42 & 0.49 & 0.49 & 19.5 \\ \hline This work & 15 & **0.31** & 0.18 & 0.18 & **0.94** & **0.55** & 0.52 & **0.27** & **0.31** & **0.31** & 13.12 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Result on Shuttle dataset considering the two smallest classes as outliers which contain only 17 points. We mark the farthest \(21,34\), and \(51\) points as outliers when the values of \(k\) are \(5,10\), and \(15\), respectively. In both \(\text{TKM}++\) and our algorithm, we used the error parameter \(\beta=0.1\). For \(\text{RKM}++\) we use \(\delta=0.1\) and \(\alpha=1/2\). All costs are multiplicative of \(10^{8}\).
are generated using skin textures from face images of diversity of age, gender, and race people. The dataset consists of 245057 points and each data point has 4 attributes. We sample 2.5% of points and add large Gaussian noise to their features. We summarise our empirical findings in Table 5.
Results on Shuttle dataset:For the Shuttle dataset, we randomly sample 1000 points from the dataset and add large Gaussian noise to their features. We consider them as outliers and try to locate them using our baseline methods. We summarise our empirical findings in Table 6.
Insight.For both Skin and Shuttle datasets, we noticed that our clustering results in terms of precision/recall metric are significantly better than that of \(\text{TKM}++\), \(k\)-means++ and random seeding on the majority of instances. However it is comparable with respect to \(\text{RKM}++\). The running time of our algorithm is faster than \(\text{RKM}++\) and RAND whereas it is comparable with respect to the remaining baselines.
## 6 Concluding remarks and open questions
We suggest an outlier robust seeding for the \(k\)-means clustering problem. Our major contribution lies in developing a simple and intuitive tweak to the \(k\)-means++ sampling algorithm, which makes it robust to outliers. The running time of our method is \(O(ndk)\), and offers \(O(\log k)\) approximation guarantee. We also gave a bi-criteria approximation algorithm, where our algorithm samples slightly more than \(k\) points as cluster centers, which consist of a set of \(k\) points (as inlier cluster centers) that gives \(O(1)\) approximation guarantee. We empirically evaluate our algorithm on synthetic as well as real-world datasets, and show that our proposal outperforms _w.r.t._\(k\)-means++ Arthur and Vassilvitskii (2007), random initialization, and \(\text{TKM}++\) Bhaskara et al. (2019) algorithm in both the metric a) precision/recall and b) \(k\)-means with outliers clustering cost (Equation 2). However the performance of our proposal remains comparable to \(\text{RKM}++\) Deshpande et al. (2020). The
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline Method & \(k\) & \multicolumn{3}{c}{Precision} & \multicolumn{3}{c}{Recall} & \multicolumn{3}{c}{Cost} & \multicolumn{3}{c}{Time (s)} \\ & Max. & Avg. & Med. & Max. & Avg. & Med. & Min. & Avg. & Med. & \\ \hline RAND & 3 & 0.64 & **0.63** & **0.64** & 0.64 & **0.63** & **0.64** & 2.83 & 4.04 & 4.10 & 77.1 \\ \(\text{KM}++\) & 3 & 0.64 & 0.59 & 0.61 & 0.64 & 0.59 & 0.61 & 3.29 & 5.41 & 4.86 & 119.2 \\ \(\text{TKM}++\) & 3 & 0.64 & 0.57 & 0.61 & 0.64 & 0.57 & 0.61 & 2.83 & 5.76 & 5.83 & 211.86 \\ \(\text{RKM}++\) & 3 & 0.64 & 0.62 & 0.61 & 0.64 & 0.62 & 0.61 & **2.8** & **2.8** & **2.8** & 295 \\ \hline This work & 3 & **0.65** & **0.63** & **0.64** & **0.65** & **0.63** & **0.64** & **2.83** & 4.20 & 4.07 & 225.23 \\ \hline RAND & 5 & 0.64 & 0.57 & **0.60** & 0.64 & 0.57 & **0.60** & **2.48** & 4.23 & 2.95 & 100.2 \\ \(\text{KM}++\) & 5 & 0.63 & 0.54 & 0.59 & 0.63 & 0.54 & 0.59 & 2.61 & **3.27** & **2.83** & 292.1 \\ \(\text{TKM}++\) & 5 & 0.63 & 0.45 & 0.44 & 0.63 & 0.45 & 0.44 & 2.98 & 3.85 & 3.63 & 402.3 \\ \(\text{RKM}++\) & 5 & 0.64 & 0.58 & 0.61 & 0.64 & 0.58 & 0.61 & 2.52 & 3.8 & 3.7 & 600.2 \\ \hline This work & 5 & **0.65** & **0.60** & **0.60** & **0.65** & **0.60** & **0.60** & 2.54 & 3.78 & 3.11 & 423.1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Result on \(\text{KDDCup}\) dataset with \(k=3\) and \(k=5\) with 45747 outliers. We used the error parameter \(\beta=0.1\) for both \(\text{TKM}++\) and our algorithm. For \(\text{RKM}++\) we use \(\delta=0.1\) and \(\alpha=1/2\). All costs are multiplicative of \(10^{7}\).
running time of our algorithm is significantly faster than the random seeding and \(\text{RKM}++\) whereas it is noted to be comparable _w.r.t._\(k\)-means++ Arthur and Vassilvitskii (2007), and \(\text{TKM}++\) Bhaskara et al. (2019). Our work leaves the possibility of several open questions: - improving the theoretical bounds of our proposal, and extending it to other class clustering problems with outliers setting. Finally, given our proposal's simplicity, performance, and efficiency, we hope it will be adopted in practice.
|
2302.00275 | Learning Generalized Zero-Shot Learners for Open-Domain Image
Geolocalization | Image geolocalization is the challenging task of predicting the geographic
coordinates of origin for a given photo. It is an unsolved problem relying on
the ability to combine visual clues with general knowledge about the world to
make accurate predictions across geographies. We present
$\href{https://huggingface.co/geolocal/StreetCLIP}{\text{StreetCLIP}}$, a
robust, publicly available foundation model not only achieving state-of-the-art
performance on multiple open-domain image geolocalization benchmarks but also
doing so in a zero-shot setting, outperforming supervised models trained on
more than 4 million images. Our method introduces a meta-learning approach for
generalized zero-shot learning by pretraining CLIP from synthetic captions,
grounding CLIP in a domain of choice. We show that our method effectively
transfers CLIP's generalized zero-shot capabilities to the domain of image
geolocalization, improving in-domain generalized zero-shot performance without
finetuning StreetCLIP on a fixed set of classes. | Lukas Haas, Silas Alberti, Michal Skreta | 2023-02-01T06:44:07Z | http://arxiv.org/abs/2302.00275v1 | # Learning Generalized Zero-Shot Learners for Open-Domain Image Geolocalization
###### Abstract
Image geolocalization is the challenging task of predicting the geographic coordinates of origin for a given photo. It is an unsolved problem relying on the ability to combine visual clues with general knowledge about the world to make accurate predictions across geographies. We present StreetCLIP, a robust, publicly available foundation model not only achieving state-of-the-art performance on multiple open-domain image geolocalization benchmarks but also doing so in a zero-shot setting, outperforming supervised models trained on more than 4 million images. Our method introduces a meta-learning approach for generalized zero-shot learning by pretraining CLIP from synthetic captions, grounding CLIP in a domain of choice. We show that our method effectively transfers CLIP's generalized zero-shot capabilities to the domain of image geolocalization, improving in-domain generalized zero-shot performance without finetuning StreetCLIP on a fixed set of classes.
Contrastive Pretraining Generalized Zero-Shot Zero-Shot Learning Meta-Learning Image Geolocalization Visual Place Recognition Photo Geolocalization CLIP Computer Vision Multi-Modal
## 1 Introduction
Image geolocalization touches many aspects of our lives with applications in search engines and on-device photo tagging serving billions of users every day. By understanding the hidden locational clues in images, entirely new approaches of analyzing the natural and built environment are being opened up with profound implications for a number of fields, ranging from the recognition of weather, season, and climate patterns to rural and urban scene understanding, and improvements in navigation and self-driving car technology. Since the beginning of 2022, image geolocalization has additionally garnered extensive media coverage for becoming an immediate priority of investigative journalists and open source intelligence (OSINT) researchers in their attempt to verify information and to document war atrocities in Ukraine, extracting geolocalization information from social media content.
Despite high academic and public interest, image geolocalization remains an extremely challenging problem. This is because training datasets are geographically sparse, often limited to specific countries, and biased towards urban or rural scenes. The task is further complicated by the fact that geolocalization requires reasoning on multiple levels of geographic granularity (e.g. countries, cities, and neighborhoods) and that the geolocation of an image is a property that can often not be observed directly. Effective image geolocalization thus forces a model to not only distill critical information from subtle visual clues but to also combine these clues with a general understanding of the world, including abstract concepts such as weather patterns, political boundaries, or on what side of the road people are driving on.
Prior works on image geolocalization have employed convolutional neural networks (Weyand et al., 2016; Vo et al., 2017) and more recently transformer models (Pramanick et al., 2022; Wu & Huang, 2022; Luo et al., 2022), but have all struggled to generalize across geographies. In Wu & Huang (2022), the authors note that CLIP (Radford et al.,
2021) has impressive zero-shot capabilities which extend to image geolocalization and that finetuning a model based on CLIP's image embeddings can further increase performance. However, when evaluated on data that is out of distribution, the performance degrades to be worse than the original CLIP model's accuracy (Wu and Huang, 2022). This raises the question of how we can equip models with more robust knowledge, enabling both transfer and zero-shot learning to unseen geographies and similarly structured tasks.
Motivated by this research question, we introduce a domain-specific pretraining method that generates synthetic captions for contrastive learning, enabling the use of natural language to ground CLIP in the context of the image geolocalization task. For a given image classification task, we uses class labels to derive synthetic captions from a domain-specific caption template. Together with the corresponding images, the synthetic captions are then used for an additional pretraining round of CLIP in a contrastive setting. We show that our method is equivalent to generating a new domain-specific generalized zero-shot learner during every batch iteration which learns to distinguish between classes seen and unseen during training. In doing so, we show that our method is a meta-learning approach for generalized zero-shot learning, encouraging models to learn how to synthesize better domain-specific generalized zero-shot learners.
To demonstrate the effectiveness of domain-specific pretraining method via our synthetic caption method, we introduce StreetCLIP, a robust image geolocalization model trained on an original dataset of 1.1 million Street View images. StreetCLIP achieves state-of-the-art (SOTA) performance on the open-domain image geolocalization benchmarks IM2GPS (Hays and Efros, 2008) and IM2GPS3K (Vo et al., 2017), improving the geolocation prediction accuracy for some distance thresholds between 0.3 and 2.4 percentage points. Beyond improving upon SOTA performance, our results are notable because in contrast to prior SOTA methods, StreetCLIP performs inference using zero-shot learning, outperforming supervised models trained on more than 4 million in-distribution images. We make StreetCLIP available to the broader research community by releasing our pretrained model on Hugging Face1.
Footnote 1: StreetCLIP is publicly available under the CC-BY-NC-4.0 license at [https://huggingface.co/geolocal/StreetCLIP](https://huggingface.co/geolocal/StreetCLIP).
Finally, because our domain-specific pretraining method does not rely on any image captioning datasets, it can be extended to any image classification problem conditional on class names being expressible in natural language. This opens the door for further investigation whether our method can also improve CLIP's generalized zero-shot learning capabilities in other domains.
## 2 Related Work
### Image Geolocalization
The task of image geolocalization, also known as visual place recognition (VPR), is a difficult problem due to the sheer diversity of geographies and conditions in which images are taken. Because datasets suitable for geolocational analysis
Figure 1: **StreetCLIP’s Synthetic Caption Pretraining**. We formulate the task of image geolocalization in natural language via synthetic captions at various levels of geographic granularity. For every batch, our model synthesizes a generalized zero-shot learner, thus learning how to zero-shot learn within a specific domain. The figure layout draws on Radford et al. (2021).
are often heavily biased towards certain geographies, prior research has primarily focused on image geolocalization in constrained environments.
Despite the increasing number of publications in the field, the research community has so far failed to clearly distinguish between problem formulations that aim to geolocate images based on a limited, fixed set of classes or geographies (e.g. landmarks or specific cities), and a setting in which a model must reasonably expect that a test set image could have been taken anywhere in the world. For instance, Berton et al. (2022) introduces a benchmark for image geolocalization based on six different datasets, yet five of them evaluate image geolocalization within a single city or suburb, while the last dataset, Mapillary (Warburg et al., 2020), draws on images from a fixed set of 30 cities. While the benchmark introduced by Berton et al. (2022) is an important contribution to the field of image geolocalization, it is limited in its practicality for real-world applications and does not evaluate the transfer-learning or zero-shot capabilities required by planet-scale image geolocalization without strong priors about the distribution of test set images.
To draw a clear distinction between these two problem formulations, we introduce the terms closed-domain and open-domain image geolocalization. The objective of this distinction is to improve the evaluation process of image geolocalization models and to allow for a better specification of their intended use cases.
#### 2.1.1 Closed-Domain Image Geolocalization
Closed-domain image geolocalization (CDIG) is the problem of predicting the location of images either from a fixed set of geolocation classes or within a geographic region such as a selection of cities or countries. Because of the limited availability of comprehensive datasets and computational resources, older related work developed specialized, feature-based approaches constrained to either specific natural environments such as mountain ranges (Baatz et al., 2012; Saurer et al., 2016), deserts (Tzeng et al., 2013), or beaches (Cao et al., 2012), or the built environment of single cities (Zamir & Shah, 2010, 2014). With the advent of deep learning in computer vision, CDIG performance has improved significantly both on street-level (Berton et al., 2022) and broader geographic scales (Suresh et al., 2018). The concurrent work of Wu & Huang (2022) and Luo et al. (2022) builds on top of these approaches, being the first to apply CLIP (Radford et al., 2021) to the problem of CDIG. Wu & Huang (2022) additionally are the first to employ CLIP in a zero-shot setting via linear probing. In contrast to Wu & Huang (2022), our work uses a planet-scale, hierarchical linear probing strategy that enables zero-shot image classification models to perform open-domain image geolocalization.
#### 2.1.2 Open-Domain Image Geolocalization
Open-domain image geolocalization (ODIG) does not restrict the geographic domain of test set images, meaning models have to perform image geolocalization in an unconstrained manner. The first modern attempt at planet-scale image geolocalization is attributed to IM2GPS (Hays & Efros, 2008) with follow-up work by Zamir & Shah (2014) and Vo et al. (2017). All three approaches rely on image retrieval methods from large reference datasets during test time which is slow and computationally intensive.
In 2016, Google researchers released PlaNet (Weyand et al., 2016) that first applied convolutional neural networks (Krizhevsky et al., 2012) in an end-to-end fashion to photo geolocalization. It also first cast the problem as a classification task after De Brebisson et al. (2015) had demonstrated that it was difficult for models to directly predict geographic coordinates. Further work included the incorporation of scene information (Muller-Budack et al., 2018) and the progression to vision transformer architectures (Pramanick et al., 2022) based on the work of Vaswani et al. (2017). While all these approaches achieved impressive results, their supervised classification setups transform ODIG into CDIG problems during training, limiting performance under distribution shifts.
Our method solves both the limitations of retrieval-based and supervised classification methods by being the first work to apply zero-shot learning to the problem of open-domain image geolocalization.
### Learning Under Distribution Shifts
Successful open-domain image geolocalization requires models to be robust to distribution shifts and ideally to also perform well on out-of-distribution data, for example on countries not seen during training. ODIG is thus a perfect environment to evaluate both the robustness and zero-shot capabilities of models with learnings from ODIG extending to other domains. CLIP, introduced by Radford et al. (2021), has been shown to be a robust image classification model with exceptional zero-shot capabilities by employing natural language supervision. A key question in the literature has consequently emerged: how can we transfer CLIP's knowledge to a specific target domain?
In their work on image geolocalization, Wu & Huang (2022) note that finetuning a model based on CLIP's image embeddings is an effective transfer learning approach conditional on test set data being drawn from the same distribution
as used during finetuning. The authors further observe that finetuning hurts performance compared to zero-shot learning with CLIP when evaluating on a test set drawn from a distribution different than their finetuning distribution.
The phenomenon of finetuning deteriorating model robustness to distribution shifts has also been observed in a broader image classification context (Wortsman et al., 2022). Wortsman et al. (2022) address this problem by ensembles the weights of the original zero-shot and finetuned models, achieving large accuracy gains under distribution shifts. In contrast to Wortsman et al. (2022), our approach is capable of not only improving performance under distribution shifts, but also on out-of-distribution data which finetuning methods cannot achieve because of a fixed set of classes.
Related work on near out-of-distribution learning demonstrates that the pretraining procedure of large transformers is responsible for their robustness to distribution shifts. This is because pretraining on different datasets reduces the models' vulnerability to shortcut learning (Geirhos et al., 2020; Fort et al., 2021). Leveraging this insight, Shen et al. (2021) use masked language modeling as an additional pretraining step for CLIP, achieving better generalization and downstream task performance.
Despite the success of contrastive pretraining in creating CLIP, transferring CLIP's knowledge to a target domain via supervised contrastive learning (Khosla et al., 2020) remains largely unexplored, likely due to the lack of domain-specific image-caption datasets (Aruitunian et al., 2021). Our method addresses this limitation via synthetic captions derived from training class labels. While supervised contrastive learning has been shown to produce degenerate representations, mapping all instances of a class to the same point in latent space (Chen et al., 2022), this is not the case for multi-modal models supervised via natural language which have a virtually infinite number of possible classes. Because semantically similar captions can be represented with many different tokens, natural language supervision in CLIP can be understood as a form of label smoothing, preventing the class collapse observed in Chen et al. (2022).
### Generalized Zero-Shot Learning
Taking robustness to domain shifts to the extreme, an ideal property of ODIG models is the ability to correctly classify images from countries, regions, or cities not seen during training, known as zero-shot learning in learning theory. In its simplest form, zero-shot learning is the task of learning a classifier \(f:X\to Y\) such that \(f\) can correctly predict novel values of \(Y\) not seen during training (Palatucci et al., 2009). This framework can be extended to generalized zero-shot learning (GZSL) in which \(Y\) includes both seen and unseen classes during inference.
While the literature on GZSL for multi-modal models is fairly novel, it identifies two important gaps; first, generalized zero-shot learning in the context of CLIP requires further investigation (Pourpanah et al., 2022), and second, most GZSL methods are based on ideal datasets with learnings not translating to real-world datasets (Pourpanah et al., 2022). Our work addresses both of these research gaps by introducing a novel pretraining method for CLIP to improve GZSL while evaluating it on two challenging real-world benchmarks: IM2GPS (Hays and Efros, 2008) and IM2GPS3K (Vo et al., 2017).
## 3 Preliminaries
In this section, we lay out the notation and existing theory behind generalized zero-shot learning and how CLIP's zero-shot capabilities relate to it. This will be necessary to show how our method trains CLIP to learn how to perform generalized zero-shot learning.
### Generalized Zero-Shot Learning
Generalized zero-shot learning is the task of training a classifier to correctly predict both classes which were seen and unseen during training. Formally, let \(Y^{s}=\{y_{1}^{s},\dots,y_{N_{s}}^{s}\}\) be the set of all classes observed during training and \(Y^{u}=\{y_{1}^{u},\dots,y_{N_{u}}^{u}\}\) the set of all unseen classes, with \(Y^{s}\cap Y^{u}=\emptyset\), \(N_{s}\) being the number of seen classes and \(N_{u}\) the number of unseen classes. Further, let \(Y=Y^{s}\cup Y^{u}\) and \(X\) be the set of all possible model inputs and \(X^{tr}\subset X,X^{ts}\subset X\) be the set of training and testing inputs, respectively, with \(X^{tr}\cap X^{ts}=\emptyset\).
For a classification setting, meaning there exist functions \(g^{tr}\) and \(g^{ts}\) which map training and testing examples to their single ground truth class, respectively, we can now define the training and testing datasets for GZSL as follows:
\[\mathcal{D}^{tr}=\{(x_{i},y_{i})|x_{i}\in X^{tr},y_{i}\in Y^{s},g^{tr}(x_{i})= y_{i}\},\]
and
\[\mathcal{D}^{ts}=\{(x_{i},y_{i})|x_{i}\in X^{ts},y_{i}\in Y,g^{ts}(x_{i})=y_{i}\}.\]
In contrast to zero-shot learning which would attempt to learn a classifier \(f_{\text{ZS}}:X\to Y^{u}\), the objective of GZSL is to learn a classifier \(f_{\text{GZS}}:X\to Y\) which correctly classifies examples from \(D^{ts}\) having only access to \(D^{tr}\) during training. Because \(Y^{u}\subset Y\), GZSL can be seen as a generalization of traditional zero-shot learning.
### How CLIP Performs Zero-Shot Learning
CLIP can be abstracted to consist of four distinct functions: a text encoder \(f:X\to\mathcal{X}\), mapping a batch of captions in natural language \(x\in X\) to a batch of representations in the latent space \(\mathcal{X}\), an image encoder \(g:V\to\mathcal{V}\), similarly mapping a batch of images \(v\in V\) to a batch of representation in the latent space \(\mathcal{V}\), and two prediction functions \(h,j:\mathcal{X}\times\mathcal{V}\to\mathbb{R}^{N^{\text{batch}}\times N^{ \text{batch}}}\) mapping \(N^{\text{batch}}\) latent text and image representations to a matrix of probabilities signifying which caption and image representations correspond to each other. The functions \(h\) and \(j\) simply compute the matrix product of their inputs and a softmax, where \(h\) computes the softmax over the dimension of text representations and \(j\) over the image representations.
Radford et al. (2021) notes that during inference, CLIP can be understood to synthesize a zero-shot learner to classify a given image. To show this mathematically, let \(v\in V\) be an image batch consisting only of a single image, and \(x\in X\) be \(N^{\text{ZS}}\) natural language captions from a specific template such as "An image of an {object}". Passing these inputs through CLIP's image and text encoders yields \(\textbf{v}=g(v)\) and \(\textbf{X}=f(x)\) where \(\textbf{v}\in\mathbb{R}^{d\times 1}\) and \(\textbf{X}\in\mathbb{R}^{N^{\text{Av}}\times d}\), with \(d\) being the dimension of CLIP's joint embedding space. Given that \(x\) was derived from a prompting template, each row of **X** encodes the semantics of a specific {object}.
Further, let \(h\) be the prediction function from CLIP's training procedure which computes the softmax over the dimension of text embeddings. If **v** is a single image representation, then \(h\) is a linear classifier parameterized by **X**:
\[h(\textbf{v};\textbf{X})=\frac{\textbf{Xv}}{\sum_{j=1}^{N^{\text{Av}}}\textbf{ X}_{j}^{\top}\textbf{v}}, \tag{1}\]
computing a softmax over the matrix-vector product **Xv**. The output of \(h(\textbf{v})\) is a probability vector over caption representations, each entry corresponding to a specific {object}.
We note that the labels, or {object}s, supplied via the caption template and used to generate the weight matrix **X** can be chosen arbitrarily whether seen or unseen during training. Since **X** fully parameterizes a linear classifier, instantiating \(h(\cdot)\) with seen _and_ unseen labels is equivalent to CLIP synthesizing a generalized zero-shot learner during inference.
## 4 Learning How to Zero-Shot Learn for Open-Domain Image Geolocalization
The mathematical intuition behind CLIP's zero-shot capabilities and gaps in the literature exposes two intriguing questions: how can CLIP learn to synthesize better generalized zero-shot learners, and how can we learn to create domain-specific learners, capable of classifying new classes of images for which no training data was available? We address these two questions using image geolocalization as a comprehensive case study, emphasizing that our method is general and could be applied to other domains.
### Synthetic Caption Domain-Specific Pretraining
We introduce synthetic caption domain-specific pretraining as an additional pretraining step for CLIP both to learn how to train better zero-shot learners as well as to develop domain-specific zero-shot capabilities. In section 3.2, we described how CLIP synthesizes generalized zero-shot learners during inference - synthetic caption domain-specific pretraining extends this paradigm to the training phase, explicitly training CLIP to learn better generalized zero-shot learners.
Building on top of a pretrained CLIP model, our method performs an additional domain-specific pretraining round, depicted in Figure 1. For every training set image, we generate a synthetic image caption by formulating the task of image geolocalization in natural language using the following prompting template:
A Street View photo close to the town of {city} in the region of {region} in {country}.
The tuple {city}, {region}, {country} denotes the location where a specific image was taken and is the class label our model learns to predict. We include multiple levels of geographic granularity in the template for two reasons.
First, city names can be ambiguous, and second, by employing our geographic hierarchy, a model can learn similar representations for cities within the same country or region.
Because we use the same prompting template for all samples during training, our synthetic captions only differ in the semantics of the class labels. This means that for every training batch of size \(N^{\text{batch}}\), CLIP's text encoder synthesizes a weight matrix \(\mathbf{X}\in\mathbb{R}^{N^{\text{batch}}\times d}\) of which every row encodes a class label of a specific sample in the batch, \(d\) being the dimension of CLIP's embeddings space. \(\mathbf{X}\) is thus the weight matrix of a new, domain-specific generalized zero-shot learner for every batch iteration.
To demonstrate how our synthetic caption domain-specific pretraining method optimizes for creating better generalized zero-shot learners, we refer to CLIP's loss function. Radford et al. (2021) derive two individual cross-entropy loss terms by computing the matrix product between the text and vision encoder embeddings and then taking a softmax over the dimension of text and image embeddings, resulting in \(\mathcal{L}_{\text{Text}}\) and \(\mathcal{L}_{\text{Images}}\), respectively. CLIP's loss consequently becomes:
\[\mathcal{L}_{\text{CLIP}}=0.5\cdot(\mathcal{L}_{\text{Text}}+\mathcal{L}_{ \text{Images}}) \tag{2}\]
However, if we use class labels to fill a pre-defined caption template during training, new intriguing properties of the loss terms emerge. By the mathematical intuition laid out in section 3, \(\mathcal{L}_{\text{Text}}\) now directly optimizes for reducing the cross-entropy loss of a linear generalized zero-shot learner which takes the vision encoder features as input. Likewise, the second loss term \(\mathcal{L}_{\text{Images}}\) employs a cross-entropy loss to now optimize image representations to better correspond to a specific class label, attempting to adjust the vision encoder's weights to render zero-shot learning easier. As a result, CLIP's loss function can be reformulated as:
\[\mathcal{L}_{\text{CLIP}}=0.5\cdot(\mathcal{L}_{\text{GZSL}}+\mathcal{L}_{ \text{Vision Representation}}) \tag{3}\]
where \(\mathcal{L}_{\text{GZSL}}\) is the cross-entropy loss of the batch's linear generalized zero-shot learner, and \(\mathcal{L}_{\text{Vision Representation}}\) is the cross-entropy loss of optimizing image embeddings to correspond to a specific class label.
The main contribution of our synthetic caption domain-specific pretraining method is that equation 3 corresponds to a loss that learns to generate better generalized zero-shot learners - a meta-learning procedure for generalized zero-shot learning. The fact that this procedure is domain-specific allows CLIP's zero-shot capabilities to be improved within a specific domain.
### A Planet-Scale Image Geolocalization Dataset
In evaluating the suitability of datasets for our work, we aimed for our training dataset to fulfill three specific criteria. First, our dataset should be planet-scale - a requirement for sufficiently high ODIG performance. Second, our dataset should come from a different geographical distribution than common image geolocalization benchmarks to demonstrate our model's transfer and zero-shot learning capabilities. Finally, our dataset should not overlap with either YFCC100M (Thomee et al., 2016), on which CLIP (Radford et al., 2021) was trained on, or with any image geolocalization benchmark datasets.
These three criteria led us to the decision to collect an original dataset of Street View images to demonstrate the effectiveness of synthetic caption domain-specific pretraining within the realm of open-domain image geolocalization. To that end, we obtain 275,000 coordinate pairs for which Google Street View images are available from Geoguessr, a Swedish company developing a popular geolocation guessing game. For each coordinate pair, we collect four images together covering a full 360-degree view, resulting in an original dataset of 1.1 million Google Street View images. To obtain the {city}, {region}, and {country} labels for each image, we employ the open-source reverse-geocoding service Nominatim.
Our final planet-scale image geolocalization dataset includes images from 101 countries, the United States making up the largest share of images with a share of 1.92%. More details can be found in Appendix B.1.
### Training
Our work employs the synthetic caption domain-specific pretraining method laid out in section 4.1 to train our model StreetCLIP2, a robust, publicly available foundation model for open-domain image geolocalization. Before we perform our domain-specific pretraining, we initialize StreetCLIP with the weights of OpenAI's own pretrained version of CLIP
(OpenAI, 2022) using 14x14 pixel patches to transform images with a 336-pixel side length into a sequence of 576 image patches as input to its vision encoder transformer.
During training, we use a batch size of 32 and an AdamW optimizer (Loshchilov and Hutter, 2017) with a learning rate of \(1e^{-6}\). We generate synthetic image captions from the template described in section 4.1 and the reverse-geocoded class names for the image's {city}, {region}, and {country}. All input images are preprocessed in the same way as for OpenAI's pretrained CLIP version. A more detailed description of the training parameters can be found in Appendix A.
### Hierarchical Linear Probing
To demonstrate StreetCLIP's zero-shot image geolocalization capabilities, we devise a hierarchical linear probing strategy: we probe for the correct country first before making predictions at a more granular level. Because of the vast number of cities in the world, a hierarchical strategy significantly speeds up the inference process while also resulting in better performance due to eliminating the risk of city name ambiguity, for example predicting London, Canada instead of London, UK.
Figure 2 describes the hierarchical linear probing procedure in detail. During inference, a given image is passed through two generalized zero-shot learners synthesized by StreetCLIP to sequentially determine the most likely country and city of the image's origin. This is achieved via a caption template for countries "A Street View photo in {country}." and one for cities "A Street View photo from {city}.".
We use a comprehensive list of the world's countries3 and a list of cities derived from SimpleMaps4 to generate a decision space for our linear probes. Because disproportionately many images in our training dataset stem from the United States, for our country linear probe, we replace the United States class label with corresponding state-level labels using the template "A Street View photo in {state}, United States.".
Footnote 3: See Appendix B.3.1.
Footnote 4: See Appendix B.3.2.
## 5 Experiments
The central hypothesis of our work is that synthetic caption domain-specific pretraining improves CLIP's generalized zero-shot capabilities applied to a specific task. We evaluate our method in the context of open-domain image geolocalization because it is an extremely challenging task, forcing a model to adapt to strong domain shifts and reason about the world by skillfully combining visual clues and abstract concepts.
Figure 2: **Hierarchical Linear Probing Strategy**. During inference, StreetCLIP synthesizes both a country-level and a city-level generalized zero-shot learner using two different caption templates. Given an input image, our method first identifies the country it deems to be the most likely image origin and then refines its guess within that country’s 30 most populous cities.
### Benchmark Datasets
Despite a variety of evaluation datasets existing for the problem of image geolocalization, most datasets relate to closed-domain image geolocalization, testing model performance in constrained environments with limited real-world applicability (Berton et al., 2022).
The only two planet-scale, widely used open-domain image geolocalization datasets are IM2GPS (Hays and Efros, 2008), containing 237 test set images from 78 countries and IM2GPS3K (Vo et al., 2017) with 2997 images from 112 countries. While IM2GPS's geo-tagged images were collected from all across the internet with famous landmarks being over-represented, IM2GPS3K is a collection of images from Flickr, similar but not overlapping with the YFCC100M (Thome et al., 2016) dataset CLIP was trained on. The distribution of both datasets is very different from StreetCLIP's training dataset described in section 4.2 both geographically and in content. This enables us to test StreetCLIP's performance out-of-distribution.
While other ODIG datasets exist derived from YFCC100M (Thomee et al., 2016), these datasets are overlapping with OpenAI's CLIP implementation's training data and are thus not applicable to our evaluation process.
### Experimental Settings
We perform the evaluation of both CLIP (OpenAI, 2022) and StreetCLIP on the benchmark datasets IM2GPS and IM2GPS3K using our hierarchical linear probing strategy described in section 4.4. The evaluation of our work is done entirely using zero-shot learning, making for an interesting comparison to work in prior literature which includes models trained on millions of images from a similar distribution to our benchmark datasets.
The objective for our benchmark datasets is to predict the images' coordinates of origin with as little deviation as possible. To ensure comparability, we evaluate our work following the set of metrics set forth in prior literature: given a distance in kilometers between the predicted coordinates to the ground truth coordinates, what percentage of test set coordinate distances are below a certain kilometer threshold? This metric is called Percentage at Kilometer (% @ KM). We follow the conventions of thresholds from the literature, using 25, 200, 750, and 2,500 kilometer thresholds while leaving out the 1 kilometer threshold as our method does not make predictions at a more granular level than cities.
To generate a prediction, we perform hierarchical linear probing to guess a city within a country and use the SimpleMaps dataset (Appendix B.3.2) to transform the city name into geographic coordinates. Finally, we use the Haversine formula to get an accurate estimate of the distance between our prediction and the ground truth coordinates in kilometers.
### Results
Table 1 shows the performance of CLIP and StreetCLIP on the two selected open-domain image geolocalization benchmarks using hierarchical linear probing.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{**Benchmark**} & \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**Distance (\% @ km)**} \\ & & _City_ & _Region_ & _Country_ & _Content_ \\ & & 25km & 200km & 750km & 2,500km \\ \hline
**IM2GPS** & Planet (Weyand et al., 2016) & 24.5 & 37.6 & 53.6 & 71.3 \\ \(n=237\) & ISNs (Moller-Budack et al., 2018) & 43.0 & 51.9 & 66.7 & 80.2 \\ & TransLocator (Pramanick et al., 2022) & **48.1** & **64.6** & **75.6** & 86.7 \\ \cline{2-6} & Zero-Shot CLIP (ours) & 27.0 & 42.2 & 71.7 & 86.9 \\ & Zero-Shot StreetCLIP (ours) & 28.3 & 45.1 & 74.7 & **88.2** \\ \cline{2-6} & \(\Delta_{\text{StreetCLIP}}\)**- CLIP** & & +1.3 & +2.9 & +3.0 & +1.3 \\ \hline
**IM2GPS3K** & Planet (Weyand et al., 2016) & 24.8 & 34.3 & 48.4 & 64.6 \\ \(n=2997\) & ISNs (Moller-Budack et al., 2018) & 28.0 & 36.6 & 49.7 & 66.0 \\ & TransLocator (Pramanick et al., 2022) & **31.1** & **46.7** & 58.9 & 80.1 \\ \cline{2-6} & Zero-Shot CLIP (ours) & 19.5 & 34.0 & 60.0 & 78.1 \\ & Zero-Shot StreetCLIP (ours) & 22.4 & 37.4 & **61.3** & **80.4** \\ \cline{2-6} & \(\Delta_{\text{StreetCLIP}}\)**- CLIP** & & +2.9 & +3.4 & +1.3 & +2.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation of StreetCLIP on Open-Domain Image Geolocalization Benchmarks
Our model StreetCLIP achieves state-of-the-art (SOTA) performance on both benchmark datasets on a total of three distance thresholds, notably using zero-shot learning. On IM2GPS, StreetCLIP outperforms the prior SOTA performance of TransLocator by Pramanick et al. (2022) for the 2,500 kilometer threshold by 1.5 percentage points, with worse performance than TransLocator on lower kilometer thresholds. On the larger benchmark dataset IM2GPS3K, StreetCLIP sets a new SOTA performance on two distance thresholds, beating the SOTA on the 750 kilometer threshold by 2.4 percentage points and on the 2,500 kilometer threshold by 0.3 percentage points. Again, StreetCLIP performs worse than TransLocator on lower kilometer thresholds.
Further, we observe that domain-specific pretraining with our synthetic caption method drives a substantial improvement in image geolocalization performance compared to our zero-shot CLIP model (an ablation of our pretraining method): for all distance thresholds, our synthetic caption pretraining method improves performance between 1.3 and 3.4 percentage points. This demonstrates that our method improved CLIP's zero-shot reasoning capabilities within the domain of image geolocalization, despite training on a dataset of Street View images which are considerably different from user-uploaded images on Flickr.
### Discussion and Limitations
The results of our experiments demonstrate that our synthetic caption pretraining method is capable of significantly improving CLIP's generalized zero-shot capabilities applied to a specific task while achieving SOTA performance on a selection of benchmark metrics. These results, however, must be placed within the context of our zero-shot learning setup. Notably, StreetCLIP's performance is achieved via planet-scale linear probing in zero-shot in contrast to TransLocator which was trained in a supervised fashion on more than 4 million images. Furthermore, while TransLocator was trained on geo-tagged images from Flickr just as our benchmark dataset IM2GPS3k, StreetCLIP realizes its performance being pretrained on a dataset of StreetView images that experience a strong domain shift to our benchmarks.
Observing the results in Table 1, it becomes clear that hierarchical probing with both CLIP and StreetCLIP works especially well on higher kilometer thresholds. A limitation of our approach, however, is the accurate prediction of image geolocations on more granular kilometer thresholds such as the city or region level. StreetCLIP and CLIP significantly underperform on more granular prediction levels, likely because landmarks are not included in our probing procedure although common in the benchmarks, because our models can only predict the center coordinates of one of 30 most populous cities per country (30 per state in the United States), and due to the fixed vocabulary size of CLIP's text encoder which does not have separate tokens for many of the world's cities as these rarely occur in common text corpora.
Nevertheless, the fact that StreetCLIP was evaluated in zero-shot and can still achieve SOTA performance on an out-of-distribution benchmark signals a strong potential for using StreetCLIP as a backbone for supervised image geolocalization models as well as for finetuning StreetCLIP to perform tasks in other domains of downstream applications.
## 6 Conclusion and Future Work
In conclusion, our experiments validate the central hypothesis of our work that synthetic caption domain-specific pretraining improves CLIP's generalized zero-shot capabilities applied to the task of image geolocalization. Our StreetCLIP model - pretrained using this method - not only improves CLIP's image geolocalization performance by 1.3 to 3.4 percentage points depending on the threshold of prediction granularity used but also achieves SOTA performance on two open-domain image geolocalization benchmarks. Notably, StreetCLIP even outcompetes models in zero-shot that were finetuned in a supervised setting on more than 4 million images originating from a distribution similar to that of our benchmarks.
Our results have broader implications: because our synthetic caption pretraining method does not restrict StreetCLIP to make predictions within a fixed set of classes, StreetCLIP can be adapted to many other tasks benefiting from geographic domain knowledge. A wide array of potential downstream applications remain unexplored, especially in the fields of climate change mitigation, rural and urban scene understanding and education. Since our work focuses on developing a method to improve generalized zero-shot learning capabilities, we expect that finetuning our publicly available version of StreetCLIP on new datasets and problem domains could yield significant performance gains over the status quo.
Our results further suggest that domain-specific pretraining via our synthetic caption method could potentially drive substantial prediction improvements in other domains. StreetCLIP is not restricted to image geolocalization because it only relies on class labels which can be formulated in natural language and on corresponding images. We hope that more domain-specific CLIP variants will be trained for other fields, with synthetic caption domain-specific pretraining leading to performance gains far beyond the realm of image geolocalization. |
2308.15145 | Limited memory gradient methods for unconstrained optimization | The limited memory steepest descent method (Fletcher, 2012) for unconstrained
optimization problems stores a few past gradients to compute multiple stepsizes
at once. We review this method and propose new variants. For strictly convex
quadratic objective functions, we study the numerical behavior of different
techniques to compute new stepsizes. In particular, we introduce a method to
improve the use of harmonic Ritz values. We also show the existence of a secant
condition associated with LMSD, where the approximating Hessian is projected
onto a low-dimensional space. In the general nonlinear case, we propose two new
alternatives to Fletcher's method: first, the addition of symmetry constraints
to the secant condition valid for the quadratic case; second, a perturbation of
the last differences between consecutive gradients, to satisfy multiple secant
equations simultaneously. We show that Fletcher's method can also be
interpreted from this viewpoint. | Giulia Ferrandi, Michiel E. Hochstenbach | 2023-08-29T09:23:25Z | http://arxiv.org/abs/2308.15145v2 | # Limited memory gradient methods for unconstrained optimization
###### Abstract
The limited memory steepest descent method (Fletcher, 2012) for unconstrained optimization problems stores a few past gradients to compute multiple stepsizes at once. We review this method and propose new variants. For strictly convex quadratic objective functions, we study the numerical behavior of different techniques to compute new stepsizes. In particular, we introduce a method to improve the use of harmonic Ritz values. We also show the existence of a secant condition associated with LMSD, where the approximating Hessian is projected onto a low-dimensional space. In the general nonlinear case, we propose two new alternatives to Fletcher's method: first, the addition of symmetry constraints to the secant condition valid for the quadratic case; second, a perturbation of the last differences between consecutive gradients, to satisfy multiple secant equations simultaneously. We show that Fletcher's method can also be interpreted from this viewpoint.
**Keywords:** limited memory steepest descent, unconstrained optimization, secant condition, low-dimensional Hessian approximation, Rayleigh-Ritz extraction, Lyapunov equation.
**AMS Classification:** 65K05, 90C20, 90C30, 65F15, 65F10
## 1 Introduction
We study the limited memory steepest descent method (LMSD), introduced by Fletcher [1], in the context of unconstrained optimization problems for a continuously
differentiable function \(f\):
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}f(\mathbf{x}).\]
The iteration for a steepest descent scheme reads
\[\mathbf{x}_{k+1}=\mathbf{x}_{k}-\beta_{k}\,\mathbf{g}_{k}\ =\ \mathbf{x}_{k}- \alpha_{k}^{-1}\,\mathbf{g}_{k},\]
where \(\mathbf{g}_{k}=\nabla f(\mathbf{x}_{k})\) is the gradient, \(\beta_{k}>0\) is the steplength, and its inverse \(\alpha_{k}=\beta_{k}^{-1}\) is usually chosen as an approximate eigenvalue of an (average) Hessian. We refer to [2, 3] for recent reviews on various steplength selection procedures.
The key idea of LMSD is to store the latest \(m>1\) gradients, and to compute (at most) \(m\) new stepsizes for the following iterations of the gradient method. We first consider the strictly convex quadratic problem
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}\ \tfrac{1}{2}\,\mathbf{x}^{T}\mathbf{A} \mathbf{x}-\mathbf{b}^{T}\mathbf{x} \tag{1}\]
where \(\mathbf{A}\) is a symmetric positive definite (SPD) matrix with eigenvalues \(0<\lambda_{1}\leq\cdots\leq\lambda_{n}\), and \(\mathbf{b}\in\mathbb{R}^{n}\). Fletcher points out that the \(m\) most recent gradients \(\mathbf{G}=[\,\mathbf{g}_{1}\ \cdots\ \mathbf{g}_{m}\,]\) form a basis for an \(m\)-dimensional Krylov subspace of \(\mathbf{A}\). (Although \(\mathbf{G}\) will change during the iterations, for convenience, and without loss of generality, we label the first column as \(\mathbf{g}_{1}\).) Then, \(m\) approximate eigenvalues of \(\mathbf{A}\) (Ritz values) are computed from the low-dimensional representation of \(\mathbf{A}\), a projected Hessian matrix, in the subspace spanned by the columns of \(\mathbf{G}\), and used as \(m\) inverse stepsizes. For \(m=1\), the proposed method reduces to the steepest descent method with Barzilai-Borwein stepsizes [4].
LMSD shares the property with L-BFGS (see, e.g., [5, Ch. 7]), the limited memory version of BFGS, that \(2m\) past vectors are stored, of the form \(\mathbf{s}_{k-1}=\mathbf{x}_{k}-\mathbf{x}_{k-1}\) and \(\mathbf{y}_{k-1}=\mathbf{g}_{k}-\mathbf{g}_{k-1}\). While LMSD is a first-order method which incorporates some second-order information in its simplest form (the stepsize), L-BFGS is a quasi-Newton method, which exploits the \(\mathbf{s}\)-vectors and \(\mathbf{y}\)-vectors to provide an additive rank-\(m\) update of a tentative approximate inverse Hessian (typically a multiple of the identity matrix). Compared to BFGS, at each iteration, the L-BFGS method computes the action of the approximate inverse Hessian, without storing the entire matrix and using \(\mathcal{O}(mn)\) operations (see, e.g., [5, Ch. 7]). As we will see in Section 5, the cost of \(m\) LMSD iterations is approximately \(\mathcal{O}(m^{2}n)\), meaning that the costs of the two algorithms are comparable.
There are several potential benefits of LMSD. First, as shown in [1], there are some problems for which LMSD performs better than L-BFGS. Secondly, to the best of our knowledge and as stated in [5, Sec. 6.4], there are no global convergence results for quasi-Newton methods applied to non-convex functions. Liu and Nocedal [6] have proved the global superlinear convergence of L-BFGS only for (twice continuously differentiable) uniformly convex functions. On the contrary, as a gradient method endowed with line search, LMSD converges globally for continuously differentiable functions (see [7, Thm. 2.1], for the convergence of gradient methods combined with nonmonotone line search). Finally, and quite importantly, we note that the idea of LMSD can be readily extended to other types of problems: to name a few, it has been
used in the scaled spectral projected gradient method [8] for constrained optimization problems, and in a stochastic gradient method [9].
**Summary of the state of the art and our contributions.** In the quadratic case (1), the projected Hessian matrix can be computed from the Cholesky decomposition of \(\mathbf{G}^{T}\mathbf{G}\) (cf. [1, Eq. (19)] and Section 2) without involving any extra matrix-vector product with \(\mathbf{A}\). Although this procedure is memory and time efficient, it is also known to be potentially numerically unstable (cf., e.g., the discussion in [10]) because of the computation of the Gramian matrix \(\mathbf{G}^{T}\mathbf{G}\), especially in our context of having an ill-conditioned \(\mathbf{G}\). Therefore, we consider alternative ways to obtain the projected Hessian in Section 2.1; in particular, we propose to use the pivoted QR decomposition of \(\mathbf{G}\) (see, e.g., [11, Algorithm 1]), or its SVD, and compare the three methods.
In addition, we show that, in the quadratic case, there is a least squares secant condition associated with LMSD. Indeed, in Section 2.3 we prove that the projected Hessian, obtained via one of these three decompositions, is similar to the solution to \(\min_{\mathbf{B}}\|\mathbf{Y}-\mathbf{S}\mathbf{B}\|\), where \(\|\cdot\|\) denotes the Frobenius norm of a matrix, and \(\mathbf{S}=[\,\mathbf{s}_{1}\;\ldots\;\mathbf{s}_{m}\,]\) and \(\mathbf{Y}=[\,\mathbf{y}_{1}\;\ldots\;\mathbf{y}_{m}\,]\) store the \(m\) most recent \(\mathbf{s}\)-vectors and \(\mathbf{y}\)-vectors, respectively.
Since \(\mathbf{Y}=\mathbf{A}\mathbf{S}\) for quadratic functions, the obtained stepsizes are inverse eigenvalues of a projection of the Hessian matrix \(\mathbf{A}\). In the general nonlinear case (i.e., for a non-quadratic function \(f\)), one can still reproduce the small matrix in [1, Eq. (19)], since the Hessian is not needed explicitly in its computation. However, there is generally not a clear interpretation of the stepsizes as approximate inverse eigenvalues of a certain Hessian matrix. Also, the obtained eigenvalues might even be complex.
To deal with this latter problem, Fletcher proposes a practical symmetrization of [1, Eq. (19)], but, so far, a clear _theoretical justification_ for this approach seems to be lacking. To address this issue, we rely on Schnabel's theorem [12, Thm. 3.1] to connect Fletcher's symmetrization to a perturbation of the \(\mathbf{Y}\) matrix, of the form \(\widetilde{\mathbf{Y}}=\mathbf{Y}+\Delta\mathbf{Y}\). This guarantees that the eigenvalues of the symmetrized matrix [1, Eq. (19)] correspond to a certain symmetric matrix \(\mathbf{A}_{+}\) that satisfies multiple secant equations \(\widetilde{\mathbf{Y}}=\mathbf{A}_{+}\mathbf{S}\) as in the quadratic case. The matrix \(\mathbf{A}_{+}\) can be interpreted as an approximate Hessian in the current iterate.
In the same line of thought, we also exploit one of the perturbations \(\widetilde{\mathbf{Y}}\) proposed by Schnabel [12] in the LMSD context. Although the idea of testing different perturbations of \(\mathbf{Y}\) is appealing, a good perturbation may be expensive to compute, compared to the task of getting \(m\) new stepsizes. Therefore, we explore a different approach based on the modification of the least squares secant condition of LMSD. The key idea is to add a _symmetry constraint_ to the secant condition:
\[\min_{\mathbf{B}=\mathbf{B}^{T}}\|\mathbf{Y}-\mathbf{S}\mathbf{B}\|.\]
Interestingly, the solution to this problem corresponds to the solution of a _Lyapunov equation_ (see, e.g., [13]). This secant condition provides a smooth transition from the strictly convex quadratic case to the general case, and its solution has real eigenvalues by construction.
Along with discussing both the quadratic and the general case, we study the computation of _harmonic Ritz values_, which are also considered by Fletcher [1] and Curtis and Guo [14, 15]. For the quadratic case, in Section 2.2, we show that there are some nice symmetries between the computation of the Ritz values of \(\mathbf{A}\) by exploiting a basis for the matrix of gradients \(\mathbf{G}\), and the computation of the _inverse_ harmonic Ritz values of \(\mathbf{A}\) by means of \(\mathbf{Y}\). Our implementation is different from Fletcher's, but the two approaches show similar performance in the quadratic experiments of Section 5.1. In general, LMSD with harmonic Ritz values appears to show a less favorable behavior than LMSD with Ritz values. Therefore, in Section 2.2, we present a way to improve the quality of the harmonic Ritz values, by taking an extra Rayleigh quotient of the harmonic Ritz vectors. This is based on the remarks in, e.g., [16, 17].
**Outline.** The rest of the paper is organized as follows. We first focus on the strictly convex quadratic problem (1) in Section 2. We review the LMSD method, as described by Fletcher [1], and present new ways to compute the approximate eigenvalues of the Hessian. We also give a secant condition for the low-dimensional Hessian of which we compute the eigenvalues. We move to the general unconstrained optimization problems in Section 3, where we give a theoretical foundation to Fletcher's symmetrized matrix [1, Eq. (19)], and show how to compute new stepsizes from the secant equation for quadratics, by adding symmetry constraints. A third new approach based on [12] is also proposed. In both Sections 2 and 3, particular emphasis is put on the issue of (likely) numerical rank-deficiency of \(\mathbf{G}\) (or \(\mathbf{Y}\), when computing the harmonic Ritz values). Section 4 reports the LMSD algorithms for strictly convex quadratic problems, as in [1], and for general continuously differentiable functions, as in [2]. Related convergence results are also recalled. Finally, numerical experiments on both strictly convex quadratics and general unconstrained problems are presented in Section 5; conclusions are drawn in Section 6.
## 2 Limited memory BB1 and BB2 for quadratic problems
We review Fletcher's limited memory approach [1] for strictly convex quadratic functions (1), and study some new theoretical and computational aspects. Common choices for the steplength in gradient methods for quadratic functions are the Barzilai-Borwein (BB) stepsizes [4]
\[\beta_{k}^{\text{BB1}}=\frac{\mathbf{g}_{k-1}^{T}\mathbf{g}_{k-1}}{\mathbf{g }_{k-1}^{T}\mathbf{A}\,\mathbf{g}_{k-1}},\qquad\beta_{k}^{\text{BB2}}=\frac{ \mathbf{g}_{k-1}^{T}\mathbf{A}\,\mathbf{g}_{k-1}}{\mathbf{g}_{k-1}^{T} \mathbf{A}^{2}\,\mathbf{g}_{k-1}}. \tag{2}\]
The inverse stepsizes \(\alpha_{k}^{\text{BB1}}=(\beta_{k}^{\text{BB1}})^{-1}\) and \(\alpha_{k}^{\text{BB2}}=(\beta_{k}^{\text{BB2}})^{-1}\) are the standard and the harmonic Rayleigh quotients of \(\mathbf{A}\), evaluated at \(\mathbf{g}_{k-1}\), respectively. Therefore, they provide estimates of the eigenvalues of \(\mathbf{A}\). The key idea of LMSD is to produce \(m>1\) approximate eigenvalues from an \(m\)-dimensional space simultaneously, hopefully capturing more information compared to that from a one-dimensional space. One hint about why considering \(m>1\) may be favorable is provided by the well-known Courant-Fischer Theorem and Cauchy's Interlace Theorem (see, e.g., [18, Thms. 10.2.1
and 10.1.1]). For two subspaces \(\mathcal{V}\), \(\mathcal{W}\) with \(\mathcal{V}\subseteq\mathcal{W}\), we have
\[\max_{\mathbf{z}\in\mathcal{V},\;\|\mathbf{z}\|=1}\mathbf{z}^{T}\mathbf{A} \mathbf{z}\leq\max_{\mathbf{z}\in\mathcal{W},\;\|\mathbf{z}\|=1}\mathbf{z}^{T} \mathbf{A}\mathbf{z}\leq\max_{\|\mathbf{z}\|=1}\mathbf{z}^{T}\mathbf{A} \mathbf{z}=\lambda_{n}.\]
Therefore, a larger search space may result in better approximations to the largest eigenvalue of \(\mathbf{A}\). Similarly, a larger subspace may better approximate the smallest eigenvalue, as well as the next-largest and the next-smallest values.
We now show why \(m\) consecutive gradients form a basis of a Krylov subspace of \(\mathbf{A}\). It is easy to check that, given the stepsizes \(\beta_{1},\ldots,\beta_{m}\) corresponding to the \(m\) most recent gradients, each gradient can be expressed as follows:
\[\mathbf{g}_{k}=\prod_{i=1}^{k-1}\left(I-\beta_{i}\mathbf{A}\right)\mathbf{g}_{ 1},\quad k=1,\ldots,m. \tag{3}\]
Therefore all \(m\) gradients belong to the Krylov subspace of degree \(m\) (and of dimension at most \(m\))
\[\mathbf{g}_{k}\in\mathcal{K}_{m}(\mathbf{A},\mathbf{g}_{1})=\mathrm{span}\{ \mathbf{g}_{1},\;\mathbf{A}\,\mathbf{g}_{1},\,\ldots,\;\mathbf{A}^{m-1} \mathbf{g}_{1}\}.\]
Moreover, under mild assumptions, the columns of \(\mathbf{G}\) form a basis for \(\mathcal{K}_{m}(\mathbf{A},\mathbf{g}_{1})\). This result is mentioned by Fletcher [1]; here we provide an explicit proof.
**Proposition 1**: _Suppose the gradient \(\mathbf{g}_{1}\) does not lie in an \(\ell\)-dimensional invariant subspace, with \(\ell<m\), of the SPD matrix \(\mathbf{A}\). If \(\beta_{k}\neq 0\) for all \(k=1,\ldots,m-1\), the vectors \(\mathbf{g}_{1},\ldots,\mathbf{g}_{m}\) are linearly independent._
_Proof_ In view of the assumption, the set \(\{\mathbf{g}_{1},A\,\mathbf{g}_{1}\ldots,A^{m-1}\mathbf{g}_{1}\}\) is a basis for \(\mathcal{K}_{m}(A,\mathbf{g}_{1})\). In fact, from (3),
\[[\,\mathbf{g}_{1}\;\;\mathbf{g}_{2}\;\;\ldots\;\;\;\mathbf{g}_{m}\,]=[\, \mathbf{g}_{1}\;\;A\mathbf{g}_{1}\;\;\ldots\;\;A^{m-1}\mathbf{g}_{1}]\left[ \begin{matrix}1&\times&\times&\times&\times\\ -\beta_{1}&\times&\times&\times\\ &\beta_{1}\beta_{2}&\times&\times\\ &&\ddots&\times\\ &&&(-1)^{m}\prod_{i=1}^{m-1}\beta_{i}\end{matrix}\right]. \tag{4}\]
Up to a sign, the determinant of the rightmost matrix in this equation is \(\beta_{1}^{m-1}\beta_{2}^{m-2}\cdots\beta_{m-1}\), which is nonzero if and only if the stepsizes are nonzero. Therefore, \(\mathbf{g}_{1},\ldots,\mathbf{g}_{m}\) are linearly independent. \(\Box\)
This result shows that \(m\) consecutive gradients of a quadratic function are linearly independent in general; in practice, this formula suggests that small \(\beta_{i}\) may quickly cause ill conditioning. Numerical rank-deficiency of \(\mathbf{G}\) is an important issue in the LMSD method and will be considered in the computation of a basis for \(\mathrm{span}(\mathbf{G})\) in Section 2.1.
For the following discussion, we also relate \(\mathbf{S}\) and \(\mathbf{Y}\) to the Krylov subspace \(\mathcal{K}_{m}(\mathbf{A},\mathbf{g}_{1})\).
**Proposition 2**: _If \(\mathbf{G}\) is a basis for \(\mathcal{K}_{m}(A,\mathbf{g}_{1})\), then_
1. _the columns of_ \(\mathbf{S}\) _also form a basis for_ \(\mathcal{K}_{m}(\mathbf{A},\mathbf{g}_{1})\)_;_
2. _the columns of_ \(\mathbf{Y}\) _form a basis for_ \(\mathbf{A}\,\mathcal{K}_{m}(\mathbf{A},\mathbf{g}_{1})\)_._
_Proof_ The thesis immediately follows from the relations
\[\mathbf{S}=-\mathbf{G}\mathbf{D}^{-1},\quad\mathbf{Y}=-\mathbf{A}\mathbf{G} \mathbf{D}^{-1},\qquad\mathbf{D}=\operatorname{diag}(\alpha_{1},\ldots, \alpha_{m}), \tag{5}\]
where the \(\alpha_{i}=\beta_{i}^{-1}\) are the latest \(m\) inverse stepsizes, ordered from the oldest to the most recent. Note that \(\mathbf{D}\) is nonsingular. \(\square\)
Given a basis for \(\mathcal{K}_{m}(\mathbf{A},\mathbf{g}_{1})\) (or \(\mathbf{A}\,\mathcal{K}_{m}(\mathbf{A},\mathbf{g}_{1})\)), one can approximate some eigenpairs of \(\mathbf{A}\) from this subspace. The procedure is known as the Rayleigh-Ritz extraction method (see, e.g., [18, Sec. 11.3]) and is recalled in the next section.
### The Rayleigh-Ritz extraction
We formulate the standard and harmonic Rayleigh-Ritz extractions in the context of LMSD methods for strictly convex quadratic functions. Let \(\mathcal{S}\) be the subspace spanned by the columns of \(\mathbf{S}\), and \(\mathcal{Y}\) be the subspace spanned by the columns of \(\mathbf{Y}\). Fletcher's main idea [1] is to exploit the Rayleigh-Ritz method on the subspace \(\mathcal{S}\). We will now review and extend this approach.
We attempt to extract \(m\) promising approximate eigenpairs from the subspace \(\mathcal{S}\). Therefore, such approximate eigenpairs can be represented as \((\theta_{i},\mathbf{Sc}_{i})\), with nonzero \(\mathbf{c}_{i}\in\mathbb{R}^{m}\), for \(i=1,\ldots,m\). The (standard) Rayleigh-Ritz extraction imposes a Galerkin condition:
\[\mathbf{A}\,\mathbf{S}\,\mathbf{c}-\theta\,\mathbf{S}\,\mathbf{c}\perp \mathcal{S}. \tag{6}\]
This means that the pairs \((\theta_{i},\mathbf{c}_{i})\) are the eigenpairs of the \(m\times m\) pencil \((\mathbf{S}^{T}\mathbf{Y},\,\mathbf{S}^{T}\mathbf{S})\). The \(\theta_{i}\) are called _Ritz values_. In the LMSD method, we have \(\mathcal{S}=\mathcal{K}_{m}(\mathbf{A},\mathbf{g}_{1})\) (see Proposition 2). Note that for \(m=1\), the only approximate eigenvalue reduces to the Rayleigh quotient \(\alpha^{\mathrm{BB1}}\) (2). Ritz values are bounded by the extreme eigenvalues of \(\mathbf{A}\), i.e., \(\theta_{i}\in[\lambda_{1},\lambda_{n}]\). This follows from Cauchy's Interlace Theorem [18, Thm. 10.1.1], by choosing an orthogonal basis for \(\mathcal{S}\). This inclusion is crucial to prove the global convergence of LMSD for quadratic functions [1].
Although the matrix of gradients \(\mathbf{G}\) (or \(\mathbf{S}\)) already provides a basis for \(\mathcal{S}\), from a numerical point of view it may not be ideal to exploit it to compute the Ritz values, since \(\mathbf{G}\) is usually numerically ill conditioned. Therefore, we recall Fletcher's approach [1] to compute a basis for \(\mathcal{S}\), and then propose two new variants: via a pivoted QR and via an SVD. Fletcher starts by a QR decomposition \(\mathbf{G}=\mathbf{QR}\), discarding the oldest gradients whenever \(\mathbf{R}\) is numerically singular. Then \(\mathbf{Q}\) is an orthogonal basis for a possibly smaller space \(\mathcal{S}=\operatorname{span}([\mathbf{g}_{m-s+1},\ldots,\mathbf{g}_{m}])\), with \(s\leq m\). The product \(\mathbf{AG}\) can be computed from the gradients without additional multiplications by \(\mathbf{A}\), in view of
\[\mathbf{AG}=-\mathbf{Y}\mathbf{D}=[\,\mathbf{G}\ \ \mathbf{g}_{m+1}\,]\ \mathbf{J},\qquad \text{where}\quad\mathbf{J}=\begin{bmatrix}\alpha_{1}&\\ -\alpha_{1}&\ddots&\\ &\ddots&\alpha_{m}\\ &&-\alpha_{m}\end{bmatrix}. \tag{7}\]
Here, the relation \(\mathbf{y}_{k-1}=\mathbf{g}_{k}-\mathbf{g}_{k-1}\) is used. Then the \(s\times s\) low-dimensional representation of \(\mathbf{A}\) can be written in terms of \(\mathbf{R}\):
\[\mathbf{T}:=\mathbf{Q}^{T}\mathbf{A}\mathbf{Q}=[\,\mathbf{R}\ \ \mathbf{r}\,]\ \mathbf{J}\,\mathbf{R}^{-1}, \tag{8}\]
where \(\mathbf{r}=\mathbf{Q}^{T}\mathbf{g}_{m+1}\). It is clear that \(\mathbf{T}\) is symmetric; it is also tridiagonal in view of the fact that it is associated with a Krylov relation for a symmetric matrix (see also Fletcher [1]). Since \(\mathbf{r}\) is also the solution to \(\mathbf{R}^{T}\mathbf{r}=\mathbf{G}^{T}\mathbf{g}_{m+1}\), the matrix \(\mathbf{Q}\) is in fact not needed to compute \(\mathbf{r}\). For this reason, Fletcher concludes that the Cholesky decomposition \(\mathbf{G}^{T}\mathbf{G}=\mathbf{R}^{T}\mathbf{R}\) is sufficient to determine \(\mathbf{T}\) and its eigenvalues. Standard routines raise an error when \(\mathbf{G}^{T}\mathbf{G}\) is numerically not SPD (numerically having a zero or tiny negative eigenvalue). If this happens, the oldest gradients are discarded (if necessary one by one in several steps), and the Cholesky decomposition is repeated.
Instead of discarding the oldest gradients \(\mathbf{g}_{1},\ldots,\mathbf{g}_{m-s}\), we will now consider a new variant by selecting the gradients in the following way. We carry out a pivoted QR decomposition of \(\mathbf{G}\), i.e., \(\mathbf{G}\widehat{\mathbf{\Pi}}=\widehat{\mathbf{Q}}\widehat{\mathbf{R}}\), where \(\widehat{\mathbf{\Pi}}\) is a permutation matrix that iteratively picks the column with the maximal norm after each Gram-Schmidt step [11]. As a consequence, the diagonal entries of \(\widehat{\mathbf{R}}\) are ordered nonincreasingly in magnitude. (In fact, we can always ensure that these entries are positive, but since standard routines may output negative values, we consider the magnitudes.)
The pivoted QR approach is also a rank-revealing factorization, although generally less accurate than the SVD (see, e.g., [11]). Let \(\widehat{\mathbf{R}}_{G}\) be the first \(s\times s\) block of \(\widehat{\mathbf{R}}\) for which \(|\widehat{r}_{i}|>\mathsf{thresh}\cdot|\widehat{r}_{1}|\), where \(\widehat{r}_{i}\) is the \(i\)th diagonal element of \(\widehat{R}\) and \(\mathsf{thresh}>0\). A crude approximation to its condition number is \(\kappa(\widehat{\mathbf{R}}_{G})\approx|\widehat{r}_{1}|\,/\,|\widehat{r}_{s}|\). Although this approximation may be quite imprecise, the alternative to repeatedly compute \(\kappa(\widehat{\mathbf{R}}_{G})\) by removing the last column and row of the matrix at each iteration might take up to \(\mathcal{O}(m^{4})\) work, which, even for modest values of \(m\), may be unwanted.
The approximation subspace for the eigenvectors of \(\mathbf{A}\) is now \(\mathcal{S}=\mathrm{span}(\widehat{\mathbf{Q}}_{G})\), with \(\mathbf{G}\widehat{\mathbf{\Pi}}_{G}=\widehat{\mathbf{Q}}_{G}\widehat{ \mathbf{R}}_{G}\), where \(\widehat{\mathbf{\Pi}}_{G}\) and \(\widehat{\mathbf{Q}}_{G}\) are the first \(s\) columns of \(\widehat{\mathbf{\Pi}}\) and \(\widehat{\mathbf{Q}}\), respectively. The upper triangular \(\widehat{\mathbf{R}}\) can be partitioned as follows:
\[\widehat{\mathbf{R}}=\begin{bmatrix}\widehat{\mathbf{R}}_{G}&\widehat{ \mathbf{R}}_{12}\\ \mathbf{0}&\widehat{\mathbf{R}}_{22}\end{bmatrix}. \tag{9}\]
As in (8), we exploit (7) to compute the projected Hessian
\[\mathbf{B}^{\mathrm{QR}} :=\widehat{\mathbf{Q}}_{G}^{T}\,\mathbf{A}\widehat{\mathbf{Q}}_{ G}=\widehat{\mathbf{Q}}_{G}^{T}\,\mathbf{A}\,\widehat{\mathbf{G}}\,\widehat{ \mathbf{\Pi}}_{G}\widehat{\mathbf{R}}_{G}^{-1}=\widehat{\mathbf{Q}}_{G}^{T}\,[ \,\widehat{\mathbf{Q}}\widehat{\mathbf{R}}\widehat{\mathbf{\Pi}}^{-1}\ \ \mathbf{g}_{m+1}\,]\,\mathbf{J}\widehat{ \mathbf{\Pi}}_{G}\widehat{\mathbf{R}}_{G}^{-1}\] \[=[\,[\widehat{\mathbf{R}}_{G}\ \widehat{\mathbf{R}}_{12}]\,\widehat{ \mathbf{\Pi}}^{-1}\ \ \widehat{\mathbf{Q}}_{G}^{T}\,\mathbf{g}_{m+1}\,]\,\mathbf{J}\,\widehat{ \mathbf{\Pi}}_{G}\,\widehat{\mathbf{R}}_{G}^{-1}. \tag{10}\]
Note that, compared to Fletcher's approach, this decomposition removes the unwanted gradients all at once, while in [1] the Cholesky decomposition is repeated every time the \(\mathbf{R}\) matrix is numerically singular. Fletcher's \(\mathbf{T}\) (8) is a specific case of (10), where \(\widehat{\mathbf{\Pi}}=\widehat{\mathbf{\Pi}}_{G}\) is the identity matrix, and \(\widehat{\mathbf{R}}_{G}\) is the whole \(\widehat{\mathbf{R}}\), but where \(\mathbf{G}\) only contains \([\mathbf{g}_{m-s+1},\ldots,\mathbf{g}_{m}]\).
As the second new variant, we exploit an SVD decomposition \(\mathbf{G}=\mathbf{U}\,\mathbf{\Sigma}\,\mathbf{V}^{T}\), where \(\mathbf{\Sigma}\) is \(m\times m\), to get a basis for \(\mathcal{S}\). An advantage of an SVD is that this provides a natural way to reduce the space by removing the singular vectors corresponding to singular values below a certain tolerance. We decide to retain the \(s\leq m\) singular values for which \(\sigma_{i}\geq\mathsf{thresh}\cdot\sigma_{1}\), where \(\sigma_{1}\) is the largest singular value of \(\mathbf{G}\). Therefore we consider the truncated SVD \(\mathbf{G}\approx\mathbf{G}_{1}=\mathbf{U}_{G}\,\mathbf{\Sigma}_{G}\,\mathbf{ V}_{G}^{T}\), where the matrices on the right-hand side are \(n\times s\), \(s\times s\), and \(s\times m\), respectively. Then the approximation subspace becomes \(\mathcal{S}=\operatorname{span}(\mathbf{U}_{G})\), and we compute the corresponding \(s\times s\) representation of \(\mathbf{A}\). Since \(\mathbf{G}_{1}\mathbf{V}_{G}=\mathbf{G}\mathbf{V}_{G}\), and \(\mathbf{U}_{G}=\mathbf{G}_{1}\mathbf{V}_{G}\mathbf{\Sigma}_{G}^{-1}\), we have, using the expression for \(\mathbf{AG}\) (7),
\[\mathbf{B}^{\mathrm{SVD}} =\mathbf{U}_{G}^{T}\,\mathbf{A}\mathbf{U}_{G}=\mathbf{U}_{G}^{T} \,\mathbf{AG}\,\mathbf{V}_{G}\,\mathbf{\Sigma}_{G}^{-1}=\mathbf{U}_{G}^{T}\,[ \,\mathbf{U}\,\mathbf{\Sigma}\,\mathbf{V}^{T}\ \ \mathbf{g}_{m+1}\,]\ \mathbf{J}\,\mathbf{V}_{G}\,\mathbf{\Sigma}_{G}^{-1}.\] \[=[\,\mathbf{\Sigma}_{G}\mathbf{V}_{G}^{T}\quad\mathbf{U}_{G}^{T} \,\mathbf{g}_{m+1}\,]\ \mathbf{J}\,\mathbf{V}_{G}\,\mathbf{\Sigma}_{G}^{-1}. \tag{11}\]
We remark that, by construction, both \(\mathbf{B}^{\mathrm{SVD}}\) and \(\mathbf{B}^{\mathrm{QR}}\) are SPD. Due to the truncation of the decompositions of \(\mathbf{G}\) in both the pivoted QR and SVD techniques, the subspace \(\mathcal{S}\) will generally not be a Krylov subspace, in contrast to Fletcher's method. Still, of course, one can also expect to extract useful information from a non-Krylov subspace.
Since LMSD with Ritz values can be seen as an extension of a gradient method with BB1 stepsizes, it is reasonable to look for a limited memory extension of the gradient method with BB2 stepsizes. The harmonic Rayleigh-Ritz extraction is a suitable tool to achieve this goal.
### The harmonic Rayleigh-Ritz extraction
The use of harmonic Ritz values in the context of LMSD has been mentioned by Fletcher [1, Sec. 7], and further studied by Curtis and Guo [14]. While the Rayleigh-Ritz extraction usually finds good approximations for exterior eigenvalues, the harmonic Rayleigh-Ritz extraction has originally been introduced to approximate eigenvalues close to a target value in the interior of the spectrum. A natural way to achieve this is to consider a Galerkin condition for \(\mathbf{A}^{-1}\):
\[\mathbf{A}^{-1}\mathbf{Y}\widetilde{\mathbf{c}}-\widetilde{\theta}^{-1} \mathbf{Y}\widetilde{\mathbf{c}}\perp\mathcal{Y}, \tag{12}\]
which leads to the eigenpairs \((\widetilde{\theta}_{i}^{-1},\widetilde{\mathbf{c}}_{i})\) of the pair \((\mathbf{Y}^{T}\mathbf{S},\,\mathbf{Y}^{T}\mathbf{Y})\). However, since \(\mathbf{A}^{-1}\) is usually not explicitly available or too expensive to compute, one may choose a subspace of the form \(\mathcal{Y}=\mathbf{A}\,\mathcal{S}\) (see, e.g., [16]). This simplifies the Galerkin condition:
\[\mathbf{AS}\,\widetilde{\mathbf{c}}-\widetilde{\theta}\,\mathbf{S}\, \widetilde{\mathbf{c}}\perp\mathbf{A}\,\mathcal{S}.\]
The eigenvalues \(\widetilde{\theta}_{i}\) from this condition are called _harmonic Ritz values_. In the limited memory extension of BB2 we set \(\mathcal{Y}=\mathbf{A}\,\mathcal{K}_{m}(\mathbf{A},\mathbf{g}_{1})\), and we know that \(\mathbf{Y}\) is a basis for \(\mathcal{Y}\) from Proposition 2. Harmonic Ritz values are also bounded by the extreme eigenvalues of \(\mathbf{A}\): \(\widetilde{\theta}_{i}\in[\lambda_{1},\lambda_{n}]\); see, e.g., [19, Thm. 2.1]. It is easy to check that the (memory-less) case \(m=1\) corresponds to the computation of the harmonic Rayleigh quotient \(\alpha^{\mathrm{BB2}}\).
We have just observed that the Galerkin condition for the harmonic Ritz values can be formulated either in terms of \(\mathbf{Y}\) or \(\mathbf{S}\). The latter way is presented in the references [1, 14], which again look for a basis of \(\mathcal{S}\) by means of a QR decomposition of \(\mathbf{G}\). Following the line of [1], the aim is to find the eigenvalues of
\[\left(\mathbf{Q}^{T}\mathbf{A}\mathbf{Q}\right)^{-1}\mathbf{Q}^{T}\!\mathbf{A }^{2}\mathbf{Q}=:\mathbf{T}^{-1}\,\mathbf{P}, \tag{13}\]
where \(\mathbf{G}=\mathbf{Q}\mathbf{R}\). Since \(\mathbf{Q}^{T}\!\mathbf{A}^{2}\mathbf{Q}\) involves the product \([\,\mathbf{G}\quad\mathbf{g}_{m+1}\,]^{T}[\,\mathbf{G}\quad\mathbf{g}_{m+1}\,]\), we determine the Cholesky decomposition of this matrix, to write [1, Eq. (30)]
\[\mathbf{P}=\mathbf{R}^{-T}\mathbf{J}^{T}\begin{bmatrix}\mathbf{R}&\mathbf{r} \\ \mathbf{0}&\rho\end{bmatrix}^{T}\begin{bmatrix}\mathbf{R}&\mathbf{r}\\ \mathbf{0}&\rho\end{bmatrix}\mathbf{J}\mathbf{R}^{-1}, \tag{14}\]
where \(\mathbf{R}\) is the Cholesky factor of \(\mathbf{G}^{T}\mathbf{G}\), and \(\mathbf{r}\) is as in (8). Both \(\mathbf{T}\) and \(\mathbf{P}\) are symmetric; moreover, while \(\mathbf{T}\) is tridiagonal, \(\mathbf{P}\) is pentadiagonal. If \(\mathbf{G}\) is rank deficient, the oldest gradients are discarded.
Given the similar roles of \(\mathbf{S}\) for \(\mathbf{A}\) in (6) and of \(\mathbf{Y}\) for \(\mathbf{A}^{-1}\) in (12), we now consider new alternative ways to find the harmonic Ritz values of \(\mathbf{A}\), based on the decomposition of either \(\mathbf{Y}\) or \(\mathbf{Y}^{T}\mathbf{Y}\). The aim is to get an \(s\times s\) representation of \(\mathbf{A}^{-1}\), as we did for \(\mathbf{A}\) in Section 2.1. In this context, we need the following (new) relation:
\[\mathbf{A}^{-1}\mathbf{Y}=-\mathbf{G}\mathbf{D}^{-1}=[\,\mathbf{Y}\;\;- \mathbf{g}_{m+1}\,]\,\widetilde{\mathbf{J}},\quad\text{where}\quad\widetilde{ \mathbf{J}}=\begin{bmatrix}1&&\\ \vdots&\ddots&\\ 1&\cdots&1\\ 1&\cdots&1\end{bmatrix}\mathbf{D}^{-1}. \tag{15}\]
As for (7), this follows from the definition \(\mathbf{y}_{k-1}=\mathbf{g}_{k}-\mathbf{g}_{k-1}\).
We start with the pivoted QR of \(\mathbf{Y}\), i.e., \(\mathbf{Y}\tilde{\mathbf{I}}=\mathbf{\tilde{Q}}\mathbf{\tilde{R}}\). As in Section 2.1, we truncate the decomposition based on the diagonal values of \(\tilde{\mathbf{R}}\), and obtain \(\mathbf{Y}\tilde{\mathbf{I}}_{Y}=\tilde{\mathbf{Q}}_{Y}\tilde{\mathbf{R}}_{Y}\), with
\[\tilde{\mathbf{R}}=\begin{bmatrix}\tilde{\mathbf{R}}_{Y}&\tilde{\mathbf{R}}_{ 12}\\ \mathbf{0}&\tilde{\mathbf{R}}_{22}\end{bmatrix}.\]
Then we project \(\mathbf{A}^{-1}\) onto \(\mathcal{Y}=\operatorname{span}(\mathbf{Q}_{Y})\) to obtain
\[\mathbf{H}^{\mathrm{QR}} =\mathbf{\tilde{Q}}_{Y}^{T}\mathbf{A}^{-1}\mathbf{\tilde{Q}}_{Y} =\mathbf{\tilde{Q}}_{Y}^{T}\mathbf{A}^{-1}\,\tilde{\mathbf{Y}}\,\tilde{ \mathbf{I}}_{Y}\tilde{\mathbf{R}}_{Y}^{-1}=\mathbf{\tilde{Q}}_{Y}^{T}\,[\, \mathbf{Y}\;\;-\mathbf{g}_{m+1}\,]\,\widetilde{\mathbf{J}}\,\tilde{\mathbf{ I}}_{Y}\tilde{\mathbf{R}}_{Y}^{-1}\] \[=[\,[\tilde{\mathbf{R}}_{Y}\;\tilde{\mathbf{R}}_{12}]\,\tilde{ \mathbf{I}}^{-1}\;\;-\mathbf{\tilde{Q}}_{Y}^{T}\mathbf{g}_{m+1}\,]\,\widetilde {\mathbf{J}}\,\tilde{\mathbf{I}}_{Y}\tilde{\mathbf{R}}_{Y}^{-1}. \tag{16}\]
The matrix \(\mathbf{H}^{\mathrm{QR}}\) is also symmetric and delivers the reciprocals of harmonic Ritz values; its expression is similar to (10). An approach based on the Cholesky decomposition of \(\mathbf{Y}^{T}\mathbf{Y}=\mathbf{\tilde{R}}^{T}\widetilde{\mathbf{R}}\) may also be derived:
\[\mathbf{H}^{\mathrm{CH}}=[\,\widetilde{\mathbf{R}}\;\;\widetilde{\mathbf{r}}\,] \,\widetilde{\mathbf{J}}\,\widetilde{\mathbf{R}}^{-1}, \tag{17}\]
with \(\widetilde{\mathbf{r}}\) solution to \(\widetilde{\mathbf{R}}^{T}\widetilde{\mathbf{r}}=-\mathbf{Y}^{T}\mathbf{g}_{m +1}\).
As for the Ritz values, SVD is another viable option. Consider the truncated SVD of \(\mathbf{Y}\colon\mathbf{Y}_{1}=\mathbf{U}_{Y}\;\mathbf{\Sigma}_{Y}\;\mathbf{V}_{Y}^ {T}\), where \(\mathbf{\Sigma}_{Y}\) is \(s\times s\). Since \(\mathbf{Y}_{1}\mathbf{V}_{Y}=\mathbf{Y}\mathbf{V}_{Y}\), by using similar arguments as in the derivation of (11), we get the following low-dimensional representation of \(\mathbf{A}^{-1}\):
\[\mathbf{H}^{\mathrm{SVD}} =\mathbf{U}_{Y}^{T}\mathbf{A}^{-1}\mathbf{U}_{Y}=\mathbf{U}_{Y}^ {T}\mathbf{A}^{-1}\mathbf{Y}\,\mathbf{V}_{Y}\,\mathbf{\Sigma}_{Y}^{-1}=\mathbf{ U}_{Y}^{T}\left[\,\mathbf{Y}\;\;\;-\mathbf{g}_{m+1}\,\right]\widetilde{\mathbf{J}}\, \mathbf{V}_{Y}\,\mathbf{\Sigma}_{Y}^{-1}\] \[=\left[\,\mathbf{\Sigma}_{Y}\mathbf{V}_{Y}^{T}\;\;-\mathbf{U}_{Y }^{T}\mathbf{g}_{m+1}\,\right]\widetilde{\mathbf{J}}\,\mathbf{V}_{Y}\,\mathbf{ \Sigma}_{Y}^{-1}. \tag{18}\]
Note that, in contrast to \(\mathbf{T}^{-1}\mathbf{P}\), the matrix \(\mathbf{H}^{\mathrm{SVD}}\) is symmetric and gives the reciprocals of harmonic Ritz values. In addition, the expression for \(\mathbf{H}^{\mathrm{SVD}}\) is similar to the one for \(\mathbf{B}^{\mathrm{SVD}}\) in (11).
To conclude the section, we mention the following technique, which is new in the context of LMSD. For the solution of eigenvalue problems, it has been observed (e.g., by Morgan [16]) that harmonic Ritz values sometimes do not approximate eigenvalues well, and it is recommended to use the Rayleigh quotients of harmonic Ritz vectors instead. This means that we use \(\mathbf{S}\widetilde{\mathbf{c}}_{i}\) as approximate eigenvectors, and their Rayleigh quotients \(\widetilde{\mathbf{c}}_{i}^{T}\mathbf{S}^{T}\mathbf{A}\mathbf{S}\widetilde{ \mathbf{c}}_{i}\) as approximate eigenvalues. This fits nicely with Fletcher's approach: in fact, once we have the eigenvectors \(\widetilde{\mathbf{c}}_{i}\) of \(\mathbf{T}^{-1}\mathbf{P}\) (13), we compute their corresponding Rayleigh quotients as \(\widetilde{\mathbf{c}}_{i}^{T}\mathbf{T}\widetilde{\mathbf{c}}_{i}\). We remark that, in the one-dimensional case, this procedure reduces to the gradient method with BB1 stepsizes, instead of the BB2 ones.
In Section 5.1 we compare and comment on the different strategies to get both the standard and the harmonic Ritz values. We will see how the computation of the harmonic Rayleigh quotients can result in a lower number of iterations of LMSD, although computing extra Rayleigh quotients involves some additional work in the \(m\)-dimensional space.
### Secant conditions for LMSD
We finally show that the low-dimensional representations of the Hessian matrix \(\mathbf{A}\) (or its inverse) satisfy a certain secant condition. This result is new in the context of LMSD, and will be the starting point of one of our extensions of LMSD to general unconstrained optimization problems in Section 3. Recall from [4] that the BB stepsizes (2) satisfy a secant condition each, in the least squares sense:
\[\alpha^{\mathrm{BB1}}=\underset{\alpha}{\mathrm{argmin}}\;\|\mathbf{y}-\alpha \,\mathbf{s}\|,\qquad\alpha^{\mathrm{BB2}}=\underset{\alpha}{\mathrm{argmin}}\; \|\alpha^{-1}\,\mathbf{y}-\mathbf{s}\|.\]
We now give a straightforward extension of these conditions to the limited memory variant of the steepest descent. We show that there exist \(m\times m\) matrices that satisfy a secant condition and share the same eigenvalues as the two pencils (\(\mathbf{S}^{T}\mathbf{Y}\), \(\mathbf{S}^{T}\mathbf{S}\)), (\(\mathbf{Y}^{T}\mathbf{S}\), \(\mathbf{Y}^{T}\mathbf{Y}\)). In the quadratic case, when \(\mathbf{Y}=\mathbf{A}\mathbf{S}\), the following results correspond to [18, Thm. 11.4.2] and [20, Thm. 4.2], respectively.
**Proposition 3**: _Let \(\mathbf{S}\), \(\mathbf{Y}\in\mathbb{R}^{n\times m}\) be full rank, with \(n\geq m\), and let \(\mathbf{B}\), \(\mathbf{H}\in\mathbb{R}^{m\times m}\)._
1. _The unique solution to_ \(\min_{\mathbf{B}}\|\mathbf{Y}-\mathbf{S}\mathbf{B}\|\) _is_ \(\mathbf{B}=(\mathbf{S}^{T}\mathbf{S})^{-1}\mathbf{S}^{T}\mathbf{Y}\)_._
2. _The unique solution to_ \(\min_{\mathbf{H}}\|\mathbf{Y}\mathbf{H}-\mathbf{S}\|\) _is_ \(\mathbf{H}=(\mathbf{Y}^{T}\mathbf{Y})^{-1}\mathbf{Y}^{T}\mathbf{S}\)_._
3. _In the quadratic case (_1_), the eigenvalues of_ \(\mathbf{B}\) _are the Ritz values of_ \(\mathbf{A}\) _and the eigenvalues of_ \(\mathbf{H}\) _are the_ inverse _harmonic Ritz values of_ \(\mathbf{A}\)_._
Proof.: The stationarity conditions for the overdetermined least squares problem \(\min_{\mathbf{B}}\ \|\mathbf{Y}-\mathbf{S}\mathbf{B}\|\) are the normal equations \(\mathbf{S}^{T}(\mathbf{Y}-\mathbf{S}\mathbf{B})=\mathbf{0}\). Since \(\mathbf{S}\) is of full rank, \(\mathbf{S}^{T}\mathbf{S}\) is nonsingular, and thus \(\mathbf{B}=(\mathbf{S}^{T}\mathbf{S})^{-1}\,\mathbf{S}^{T}\mathbf{Y}\). Part (ii) follows similarly, by exchanging the role of \(\mathbf{S}\) and \(\mathbf{Y}\). Since \(\mathbf{B}\) and \((\mathbf{S}^{T}\mathbf{Y},\,\mathbf{S}^{T}\mathbf{S})\) have the same eigenvalues, part (iii) easily follows. The same relation holds for the eigenvalues of \(\mathbf{H}\) and the eigenvalues of the pencil \((\mathbf{Y}^{T}\mathbf{S},\,\mathbf{Y}^{T}\mathbf{Y})\).
Proposition 3 is a good starting point to extend LMSD for solving general unconstrained optimization problems.
## 3 General nonlinear functions
When the objective function \(f\) is a general continuously differentiable function, the Hessian is no longer constant through the iterations, and not necessarily positive definite. In general, there is no SPD approximate Hessian such that multiple secant equations hold (that is, an expression analogous to \(\mathbf{Y}=\mathbf{A}\mathbf{S}\) in the quadratic case). This is clearly stated by Schnabel [12, Thm. 3.1].
**Theorem 4**.: _Let \(\mathbf{S},\,\mathbf{Y}\) be full rank. Then there exists a symmetric (positive definite) matrix \(\mathbf{A}_{+}\) such that \(\mathbf{Y}=\mathbf{A}_{+}\,\mathbf{S}\) if and only if \(\mathbf{Y}^{T}\mathbf{S}\) is symmetric (positive definite)._
By inspecting all the expressions derived in Sections 2.1 and 2.2, we observe that only \(\mathbf{G}\) and \(\mathbf{Y}\) are needed to compute the \(m\times m\) matrices of interest for LMSD. However, given that \(\mathbf{Y}^{T}\mathbf{S}\) is in general not symmetric, Theorem 4 suggests that we cannot interpret these matrices as low-dimensional representations of some Hessian matrices.
We propose two ways to restore the connection with Hessian matrices. In Section 3.1, we exploit a technique proposed by Schnabel [12] for quasi-Newton methods. It consists of perturbing \(\mathbf{Y}\) to make \(\mathbf{Y}^{T}\mathbf{S}\) symmetric. We show that Fletcher's method can also be interpreted in this way. In Section 3.2, we introduce a second method which does not aim at satisfying multiple secant equations at the same time, but finds the solution to the least squares secant conditions of Proposition 3 by imposing symmetry constraints.
### Perturbation of \(\mathbf{Y}\) to solve multiple secant equations
In the context of quasi-Newton methods, Schnabel [12] proposes to perturb the matrix \(\mathbf{Y}\) of a quantity \(\Delta\mathbf{Y}=\widetilde{\mathbf{Y}}-\mathbf{Y}\) to obtain an SPD \(\widetilde{\mathbf{Y}}^{T}\mathbf{S}\). With this strategy, we implicitly obtain a certain SPD approximate Hessian \(\mathbf{A}_{+}\) such that \(\widetilde{\mathbf{Y}}=\mathbf{A}_{+}\,\mathbf{S}\). We then refer to Sections 2.1 and 2.2 to compute either the Ritz values or the harmonic Ritz values
of the approximate Hessian \(\mathbf{A}_{+}\). Although we only have \(\widetilde{\mathbf{Y}}\) at our disposal, and not \(\mathbf{A}_{+}\), this is all that is needed; the procedures in Section 2 do not need to know \(\mathbf{A}_{+}\) explicitly. In addition, Proposition 3 is still valid, after replacing \(\mathbf{Y}\) with \(\widetilde{\mathbf{Y}}\). We remark that, for our purpose, just a symmetric \(\widetilde{\mathbf{Y}}^{T}\mathbf{S}\) may also be sufficient, since we usually discard negative Ritz values.
In Section 5 we test one possible way of computing \(\Delta\mathbf{Y}\), as proposed in [12], and the Ritz values of the associated low-dimensional representation of \(\mathbf{A}_{+}\). This application is new in the context of LMSD. The perturbation is constructed as follows: first, consider the strict negative lower triangle \(\mathbf{L}\) of \(\mathbf{Y}^{T}\mathbf{S}-\mathbf{S}^{T}\mathbf{Y}=-\mathbf{L}+\mathbf{L}^{T}\), and suppose \(\mathbf{S}\) is of full rank. (If not, remove the oldest \(\mathbf{s}\)-vectors until the condition is satisfied.) Then \(\mathbf{Y}^{T}\mathbf{S}+\mathbf{L}\) is symmetric. Schnabel [12] solves the underdetermined system \(\Delta\mathbf{Y}^{T}\mathbf{S}=\mathbf{L}\), which has \(\Delta\mathbf{Y}=\mathbf{S}(\mathbf{S}^{T}\mathbf{S})^{-1}\mathbf{L}^{T}\) as minimum norm solution. By Theorem 4, there exists a symmetric \(\mathbf{A}_{+}\) such that \(\widetilde{\mathbf{Y}}=\mathbf{A}_{+}\,\mathbf{S}\). Now let us consider the QR decomposition of \(\mathbf{G}\), which is of full rank since \(\mathbf{S}\) is also of full rank. Similar to (7) we know that \(\mathbf{A}_{+}\mathbf{G}=-\widetilde{\mathbf{Y}}\mathbf{D}\). Moreover, we recall that \(\mathbf{S}=-\mathbf{G}\mathbf{D}^{-1}\), and that \(\mathbf{Y}\mathbf{D}=-[\,\mathbf{G}\,\,\,\mathbf{g}_{m+1}\,]\,\mathbf{J}\) from (7). Therefore, we obtain the following low-dimensional representation of \(\mathbf{A}_{+}\):
\[\mathbf{Q}^{T}\mathbf{A}_{+}\mathbf{Q} =\mathbf{Q}^{T}\mathbf{A}_{+}\mathbf{G}\mathbf{R}^{-1}=-\mathbf{ Q}^{T}\widetilde{\mathbf{Y}}\mathbf{D}\mathbf{R}^{-1}=-\mathbf{Q}^{T}(\mathbf{Y}+ \Delta\mathbf{Y})\,\mathbf{D}\mathbf{R}^{-1}\] \[=[\,\mathbf{R}\,\,\,\,\mathbf{r}\,]\,\mathbf{J}\mathbf{R}^{-1}+ \mathbf{R}(\mathbf{R}^{T}\mathbf{R})^{-1}\mathbf{D}\mathbf{L}^{T}\mathbf{D} \mathbf{R}^{-1}, \tag{19}\]
where \(\mathbf{r}\) is the solution to \(\mathbf{R}^{T}\mathbf{r}=\mathbf{G}^{T}\mathbf{g}_{m+1}\) as in (8). This means that (19) can be computed by means of the Cholesky decomposition of \(\mathbf{G}^{T}\mathbf{G}\) only; the factor \(\mathbf{Q}\) is not needed.
We now give a new interpretation of Fletcher's extension of LMSD to general nonlinear problems [1, Sec. 4], in terms of a specific perturbation of \(\mathbf{Y}\). Fletcher notices that the matrix \(\mathbf{T}\) (8) is an upper Hessenberg matrix and can still be computed from the matrix of gradients, but, because of Theorem 4, there is no guarantee that \(\mathbf{T}\) corresponds to a low-dimensional representation of a symmetric approximate Hessian matrix. Since the eigenvalues of \(\mathbf{T}\) might be complex, Fletcher proposes to enforce \(\mathbf{T}\) to be tridiagonal by replacing its strict upper triangular part with the transpose of its strict lower triangular part. We now show that this operation in fact corresponds to a perturbation of the \(\mathbf{Y}\) matrix. To the best of our knowledge, this result is new.
**Proposition 5**: _Let \(\mathbf{T}\) be as in (8) and consider its decomposition \(\mathbf{T}=\mathbf{L}+\mathbf{\Lambda}+\mathbf{U}\), where \(\mathbf{L}\) (\(\mathbf{U}\)) is strictly lower (upper) triangular and \(\mathbf{\Lambda}\) is diagonal. Moreover, let \(\mathbf{G}\) be full rank and \(\mathbf{G}=\mathbf{Q}\mathbf{R}\) its QR decomposition. If_
\[\Delta\mathbf{Y}=\mathbf{Q}\left(\mathbf{U}-\mathbf{L}^{T}\right)\mathbf{D} \mathbf{R}^{-1},\]
_then \(\widetilde{\mathbf{Y}}^{T}\mathbf{S}=(\mathbf{Y}+\Delta\mathbf{Y})^{T}\mathbf{S}\) is symmetric and there exists a symmetric \(\mathbf{A}_{+}\) such that \(\widetilde{\mathbf{Y}}=\mathbf{A}_{+}\mathbf{S}\) and \(\mathbf{Q}^{T}\mathbf{A}_{+}\mathbf{Q}=\mathbf{L}+\mathbf{\Lambda}+\mathbf{L} ^{T}\)._
_Proof_ First, we prove that \(\mathbf{S}^{T}\widetilde{\mathbf{Y}}\) is symmetric. By replacing the expression for \(\Delta\mathbf{Y}\) and exploiting the QR decomposition of \(\mathbf{G}\), we get
\[\mathbf{S}^{T}\widetilde{\mathbf{Y}}=-\mathbf{D}^{-1}\mathbf{R}^{T}\big{(}-[ \,\mathbf{R}\,\,\,\mathbf{r}\,]\,\mathbf{J}\mathbf{R}^{-1}+\mathbf{U}- \mathbf{L}^{T}\big{)}\,\mathbf{R}\mathbf{D}^{-1}\]
\[=-\mathbf{D}^{-1}\mathbf{R}^{T}\big{(}-(\mathbf{L}+\mathbf{\Lambda}+ \mathbf{U})+\mathbf{U}-\mathbf{L}^{T}\big{)}\,\mathbf{R}\mathbf{D}^{-1}\] \[=\mathbf{D}^{-1}\mathbf{R}^{T}\big{(}\mathbf{L}+\mathbf{\Lambda}+ \mathbf{L}^{T}\big{)}\,\mathbf{R}\mathbf{D}^{-1}.\]
Therefore \(\mathbf{S}^{T}\widetilde{\mathbf{Y}}\) is symmetric; Theorem 4 implies that there exists a symmetric \(\mathbf{A}_{+}\) such that \(\widetilde{\mathbf{Y}}=\mathbf{A}_{+}\mathbf{S}\). From this secant equation, it follows that
\[\mathbf{Q}^{T}\mathbf{A}_{+}\mathbf{Q}=-\mathbf{Q}^{T}\widetilde{\mathbf{Y}} \mathbf{D}\mathbf{R}^{-1}=\left(\mathbf{L}+\mathbf{\Lambda}+\mathbf{L}^{T} \right)\mathbf{R}\mathbf{D}^{-1}(\mathbf{D}\mathbf{R}^{-1})=\mathbf{L}+ \mathbf{\Lambda}+\mathbf{L}^{T}.\]
\(\Box\)
From this proposition, we are able to provide an upper bound for the spectral norm of the perturbation \(\Delta\mathbf{Y}\):
\[\|\Delta\mathbf{Y}\|_{2}\leq(\min_{i}\beta_{i}\cdot\sigma_{m}(\mathbf{R}))^{- 1}\,\|\mathbf{T}-(\mathbf{L}+\mathbf{\Lambda}+\mathbf{L}^{T})\|_{2},\]
where \(\sigma_{\min}(\mathbf{R})\) is the smallest singular value of \(\mathbf{R}\) and \(\min_{i}\beta_{i}\) is the smallest stepsize among the latest \(m\) steps. This suggests that the size of the perturbation \(\Delta\mathbf{Y}\) is determined not only by the distance between \(\mathbf{T}\) and its symmetrization, as expected, but also by the conditioning of \(\mathbf{R}\): if \(\mathbf{R}\) is close to singular, the upper bound may be large.
We would like to point out the following important open and intriguing question. While Schnabel solves \(\Delta\mathbf{Y}^{T}\mathbf{S}=\mathbf{L}\) to symmetrize \(\mathbf{Y}^{T}\mathbf{S}\), and Fletcher's update is described in Proposition 5, there may be other choices for the perturbation matrix \(\Delta\mathbf{Y}\) that, e.g., have a smaller \(\Delta\mathbf{Y}\) in a certain norm. However, obtaining these perturbations might be computationally demanding, compared to the task of getting \(m\) new stepsizes. In the cases we have analyzed, the lower-dimensional \(\mathbf{A}_{+}\) can be obtained from the Cholesky decomposition of \(\mathbf{G}^{T}\mathbf{G}\) at negligible cost.
Given the generality of Schnabel's Theorem 4, another possibility that may be explored is a perturbation of \(\mathbf{S}\), rather than \(\mathbf{Y}\), to symmetrize \(\mathbf{S}^{T}\mathbf{Y}\). This would be a natural choice for computing the harmonic Ritz values given a basis for \(\mathbf{Y}\). In this situation, the matrix binding \(\mathbf{S}\) and \(\mathbf{Y}\) would play the role of an approximate inverse Hessian. A thorough investigation is out of the scope of this paper.
### Symmetric solutions to the secant equations
In this subsection, we explore a second and alternative extension of LMSD. We start from the secant condition of Proposition 3 for a low-dimensional matrix \(\mathbf{B}\). The key idea is to impose _symmetry constraints_ to obtain real eigenvalues from the solutions to the least squares problems of Proposition 3. Even if the hypothesis of Theorem 4 is not met, this method still fulfills the purpose of obtaining new stepsizes for the LMSD iterations.
The following proposition gives the stationarity conditions to solve the two modified least squares problems. Denote the symmetric part of a matrix by \(\mathrm{sym}(\mathbf{A}):=\frac{1}{2}(\mathbf{A}+\mathbf{A}^{T})\).
**Proposition 6**: _Let \(\mathbf{S}\), \(\mathbf{Y}\in\mathbb{R}^{n\times m}\) be full rank, with \(n\geq m\), and \(\mathbf{B}\), \(\mathbf{H}\in\mathbb{R}^{m\times m}\)._
1. _The solution to_ \(\min_{\mathbf{B}=\mathbf{B}^{T}}\|\mathbf{Y}-\mathbf{S}\mathbf{B}\|\) _satisfies_ \(\mathrm{sym}(\mathbf{S}^{T}\mathbf{S}\,\mathbf{B}-\mathbf{S}^{T}\mathbf{Y})= \mathbf{0}\)_._
_._
2. _The solution to_ \(\min_{\mathbf{H}=\mathbf{H}^{T}}\|\mathbf{Y}\mathbf{H}-\mathbf{S}\|\) _satisfies_ \(\operatorname{sym}(\mathbf{Y}^{T}\mathbf{Y}\mathbf{H}-\mathbf{Y}^{T}\mathbf{S})= \mathbf{0}\)_._
Proof: If \(\mathbf{B}\) is symmetric, it holds that
\[\|\mathbf{Y}-\mathbf{S}\mathbf{B}\|^{2}=\operatorname{tr}(\mathbf{B}\, \mathbf{S}^{T}\mathbf{S}\,\mathbf{B}-2\,\operatorname{sym}(\mathbf{S}^{T} \mathbf{Y})\,\mathbf{B}+\mathbf{Y}^{T}\mathbf{Y}),\]
where \(\operatorname{tr}(\cdot)\) denotes the trace of a matrix. Differentiation leads to the following stationarity condition for \(\mathbf{B}\):
\[\mathbf{S}^{T}\mathbf{S}\,\mathbf{B}+\mathbf{B}\,\mathbf{S}^{T}\mathbf{S}=2 \,\operatorname{sym}(\mathbf{S}^{T}\mathbf{Y}), \tag{20}\]
which is a Lyapunov equation. Since \(\mathbf{S}\) is of full rank, its Gramian matrix is positive definite. This implies that the spectra of \(\mathbf{S}^{T}\mathbf{S}\) and \(-\mathbf{S}^{T}\mathbf{S}\) are disjoint, and therefore the equation admits a unique solution (see, e.g., [13] for a review of the Lyapunov equation and properties of its solution). Part (ii) follows similarly. \(\Box\)
It is easy to check that, for \(m=1\), \(\mathbf{B}\) in part (i) reduces to \(\alpha^{\text{BB1}}\) and \(\mathbf{H}\) in part (ii) to \(\beta^{\text{BB2}}\). Compared to Fletcher's \(\mathbf{T}\) matrix (8), the symmetric solutions \(\mathbf{B}\) and \(\mathbf{H}\) will generally give a larger residual (since they are suboptimal for the unconstrained secant conditions), but they enjoy the benefit that their eigenvalues are guaranteed to be real.
We remark that symmetry constraints also appear in the secant conditions of the BFGS method, and in the symmetric rank-one update (see, e.g., [5, Chapter 6]). While in the BFGS method the approximate Hessians are SPD, provided that the initial approximation is SPD, in the rank-one update method it is possible to get negative eigenvalues. The fundamental difference between LMSD and these methods is that we do not attempt to find an approximate \(n\times n\) Hessian matrix.
Even while we do not approximate the eigenvalues of some Hessian, as in the quadratic case of Section 2, it is possible to establish bounds for the extreme eigenvalues of the solutions to the Lyapunov equations of Proposition 6, provided that \(\operatorname{sym}(\mathbf{S}^{T}\mathbf{Y})\) is positive definite. The following result is a direct consequence of [21, Cor. 1].
**Proposition 7**: _Given the solution \(\mathbf{B}\) to (20), let \(\lambda_{1}(\mathbf{B})\) (\(\lambda_{n}(\mathbf{B})\)) be the smallest (largest) eigenvalue of \(\mathbf{B}\). If \(\mathbf{S}\) is of full rank and \(\operatorname{sym}(\mathbf{S}^{T}\mathbf{Y})\) is positive definite, then_
\[[\lambda_{1}(\mathbf{B}),\,\lambda_{n}(\mathbf{B})]\subseteq[\lambda_{1}(( \mathbf{S}^{T}\mathbf{S})^{-1}\operatorname{sym}(\mathbf{S}^{T}\mathbf{Y})), \;\lambda_{n}((\mathbf{S}^{T}\mathbf{S})^{-1}\operatorname{sym}(\mathbf{S}^{T }\mathbf{Y}))].\]
_If there exists an SPD matrix \(\mathbf{A}_{+}\) such that \(\mathbf{Y}=\mathbf{A}_{+}\mathbf{S}\), then \([\lambda_{1}(\mathbf{B}),\,\lambda_{n}(\mathbf{B})]\subseteq[\lambda_{1}( \mathbf{A}_{+}),\,\lambda_{n}(\mathbf{A}_{+})]\)._
Proof: The first statement directly follows from [21, Cor. 1]. From this we have
\[\lambda_{1}(\mathbf{B})\geq-\lambda_{1}^{-1}(-\mathbf{S}^{T}\mathbf{S}\,( \operatorname{sym}(\mathbf{S}^{T}\mathbf{Y}))^{-1}),\quad\lambda_{n}(\mathbf{ B})\leq-\lambda_{n}^{-1}(-\mathbf{S}^{T}\mathbf{S}\,(\operatorname{sym}( \mathbf{S}^{T}\mathbf{Y}))^{-1}).\]
The thesis follows from the fact that, given a nonsingular matrix \(\mathbf{\Lambda}\) with positive eigenvalues, the following equality holds for the largest and the smallest eigenvalue: \(\lambda_{i}(-\mathbf{\Lambda}^{-1})=-1/\lambda_{i}(\mathbf{\Lambda})\), where \(i=1\), \(n\).
When \(\mathbf{Y}=\mathbf{A}_{+}\mathbf{S}\), from Cauchy's Interlace Theorem, the spectrum of \(\mathbf{B}\) lies in \([\lambda_{1}(\mathbf{A}_{+}),\lambda_{n}(\mathbf{A}_{+})]\). \(\Box\)
An analogous result can be provided for the matrix \(\mathbf{H}\) of Proposition 6(ii).
### Solving the Lyapunov equation while handling rank deficiency
The solution to the Lyapunov equation (20) is unique, provided that \(\mathbf{S}\) is of full rank. In this section, we propose three options if \(\mathbf{S}\) is (close to) rank deficient. As in Section 2, we discuss approaches using a Cholesky decomposition, a pivoted QR factorization, and a truncated SVD. By using the decompositions exploited in Section 2.1 we can either discard some \(\mathbf{s}\)-vectors (and their corresponding \(\mathbf{y}\)-vectors) or solve the Lyapunov equation onto the space spanned by the first right singular vectors of \(\mathbf{S}\).
In the Cholesky decomposition and the pivoted QR decomposition we remove some of the \(\mathbf{s}\)-vectors and the corresponding \(\mathbf{y}\)-vectors, if needed. As we have seen in Section 2.1, in the Cholesky decomposition we discard past gradients until the Cholesky factor \(\mathbf{R}\) of \(\mathbf{G}^{T}\mathbf{G}\) is sufficiently far from singular. In this new context of Lyapunov equations, we additionally need the relation (cf. (7))
\[\mathbf{Y}=\left[\begin{array}{cc}\mathbf{G}&\mathbf{g}_{m+1}\end{array} \right]\mathbf{K},\qquad\text{where}\quad\mathbf{K}=\begin{bmatrix}-1&\text{
We remark that it is appropriate to control the truncation of the SVD by the condition number of the coefficient matrix \(\mathbf{S}^{T}\mathbf{S}\), which is \(\kappa^{2}(\mathbf{S})\).
The previous discussion on the three decompositions can also be extended to the secant equation of Proposition 6(ii), to compute the matrix \(\mathbf{H}\) and use its eigenvalues directly as stepsizes. Several possibilities may be explored by decomposing either \(\mathbf{G}\) or \(\mathbf{Y}\) as in Section 2.2 for the harmonic Ritz values. We will not discuss any further details regarding all these methods, but in the experiments in Section 5 we will present results obtained with the Cholesky factorization of \([\,\mathbf{G}\;\;\mathbf{g}_{m+1}\,]^{T}[\,\mathbf{G}\;\;\mathbf{g}_{m+1}\,]\) as expressed in (14). Then for the quantities in Proposition 6(ii) we have:
\[\mathbf{Y}^{T}\mathbf{Y}=\mathbf{K}^{T}\begin{bmatrix}\mathbf{R}&\mathbf{r}\\ \mathbf{0}&\rho\end{bmatrix}^{T}\begin{bmatrix}\mathbf{R}&\mathbf{r}\\ \mathbf{0}&\rho\end{bmatrix}\mathbf{K},\]
and the matrix \(\mathbf{Y}^{T}\mathbf{S}\) can be obtained from (22).
We note that all Lyapunov equations in this section are of the form \(\mathbf{E}^{T}\mathbf{E}\mathbf{B}+\mathbf{B}\mathbf{E}^{T}\mathbf{E}=\mathbf{ F}\). We describe a practical solution approach. Consider the truncated SVD \(\mathbf{E}\approx\mathbf{U}_{E}\mathbf{\Sigma}_{E}\mathbf{V}_{E}^{T}\), where the singular values in \(\mathbf{\Sigma}_{E}\) satisfy \(\sigma_{i}^{2}(\mathbf{E})\geq\mathsf{thresh}\cdot\sigma_{1}^{2}(\mathbf{E})\). In case we exploit the Cholesky decomposition or the pivoted QR, an extra truncated SVD might still be appropriate, since these two decompositions do not provide an accurate estimate of \(\kappa^{2}(\mathbf{E})\). By left and right multiplication by \(\mathbf{V}_{E}\), we obtain an expression analogous to (22):
\[\mathbf{\Sigma}_{E}^{2}\,\mathbf{B}_{E}+\mathbf{B}_{E}\,\mathbf{\Sigma}_{E}^{ 2}=\mathbf{V}_{E}^{T}\mathbf{F}\mathbf{V}_{E},\]
where \(\mathbf{B}_{E}=\mathbf{V}_{E}^{T}\mathbf{B}\mathbf{V}_{E}\). Since \(\mathbf{\Sigma}_{E}\) is diagonal, the solution to this Lyapunov equation can be easily found by elementwise division (cf. [13, p. 388]):
\[[\mathbf{B}_{E}]_{ij}=[\mathbf{V}_{E}^{T}\mathbf{F}\mathbf{V}_{E}]_{ij}\ /\ ( \sigma_{i}^{2}(\mathbf{E})+\sigma_{j}^{2}(\mathbf{E})).\]
We notice that, in the SVD approach (22), the solution can be found directly from this last step. In addition, we remark that, for the scope of LMSD, it is not necessary to find the solution \(\mathbf{B}\) to the original Lyapunov equation.
## 4 Algorithms and convergence results
In this section we present the LMSD method for strictly convex quadratic functions and general continuously differentiable functions. As mentioned in Section 2, the key idea of both algorithms is to store either the \(m\) most recent gradients or \(\mathbf{y}\)-vectors, to compute up to \(s\leq m\) new stepsizes, according to the procedures described in Sections 2-3. These stepsizes are then used in (up to) \(s\) consecutive iterations of a gradient method; this group of iterations is referred to as a _sweep_[1].
In Algorithm 1, we report the LMSD method for strictly convex quadratic functions as proposed in [1]. Algorithm 2 is a slight variation of [2, Alg. 2]. Our new approaches differ from these mainly in the way we determine the new stepsizes. Other minor differences will be discussed in the rest of the section.
In both algorithms, we plug in the stepsizes in increasing order, but there is no theoretical guarantee that this choice is optimal in some sense. From a theoretical
viewpoint, the ordering of the stepsizes is irrelevant in a gradient method for strictly convex quadratic functions, as is apparent from (3). In practice, due to rounding errors and other additions to the implementation (such as, e.g., Lines 7-13 of Algorithm 1 and Lines 10 and 13 of Algorithm 2), the stepsize ordering is relevant for both the quadratic and the nonlinear case. For the quadratic case, Fletcher [1] suggests that choosing the increasing order improves the chances of a monotone decrease in both the function value and the gradient norm. Nevertheless, his argument is based on the knowledge of \(s\) exact eigenvalues of \(\mathbf{A}\)[22].
### Strictly convex quadratic functions
The LMSD method for quadratic functions (1) is described in Algorithm 1, which corresponds to [1, Algorithm "A Ritz sweep algorithm"]. This routine is a gradient method without line search. Particular attention is put into the choice of the stepsize: whenever the function value increases compared to the initial function value of the sweep \(f_{\mathrm{ref}}\), Fletcher resets the iterate and computes a new point by taking a Cauchy step (cf. Algorithm 1, Line 9). This ensures that the next function value will not be higher than the current \(f_{\mathrm{ref}}\), since the Cauchy step is the solution to the exact line search \(\min_{\beta}f(\mathbf{x}_{k}-\beta\,\mathbf{g}_{k})\). Additionally, every time we take a Cauchy step, or the norm of the current gradient has increased compared to the previous iteration, we clear the stack of stepsizes and compute new (harmonic) Ritz values. At each iteration, a new gradient or \(\mathbf{y}\)-vector is stored, depending on the method chosen to approximate the eigenvalues of \(\mathbf{A}\) (cf. Section 2).
```
0: Function \(f(\mathbf{x})=\frac{1}{2}\,\mathbf{x}^{T}\mathbf{A}\mathbf{x}-\mathbf{b}^{T} \mathbf{x}\) with \(\mathbf{A}\) SPD, initial guess \(\mathbf{x}_{0}\), initial stepsize \(\beta_{0}>0\), tolerance tol
0: Approximation to minimizer \(\operatorname*{argmin}_{\mathbf{x}}f(\mathbf{x})\)
1:\(\mathbf{g}_{0}=\nabla f(\mathbf{x}_{0}),\quad f_{\mathrm{ref}}=f(\mathbf{x} _{0})\)
2:\(j=0,\ \ s=1\) # \(s\) is the stack size
3:for\(k=0,1,\ldots\)
4:\(\nu_{k}=\beta_{j},\quad j=j+1\)
5:\(\mathbf{x}_{k+1}=\mathbf{x}_{k}-\nu_{k}\,\mathbf{g}_{k}\)
6:if\(\|\mathbf{g}_{k+1}\|\leq\operatorname*{tol}\cdot\|\mathbf{g}_{0}\|,\ \ \mathbf{ return},\ \ \mathbf{end}\)
7:if\(f(\mathbf{x}_{k+1})\geq f_{\mathrm{ref}}\)
8: Reset \(\mathbf{x}_{k+1}=\mathbf{x}_{k},\ \ \text{clear the stack}\)
9: Reset \(\beta_{1}=\mathbf{g}_{k}^{T}\mathbf{g}_{k}\,/\,\mathbf{g}_{k}^{T}\mathbf{A} \mathbf{g}_{k}\) # Cauchy stepsize
10:continue
11:else
12:if\(\|\mathbf{g}_{k+1}\|\geq\|\mathbf{g}_{k}\|\), clear the stack, end
13:end
14:if empty stack or\(j>s\)
15: Compute stack of \(s\leq m\) new stepsizes \(\beta_{j}\), ordered increasingly
16:\(j=1\), \(f_{\mathrm{ref}}=f(\mathbf{x}_{k+1}),\ \ \mathbf{end}\)
```
**Algorithm 1** LMSD for strictly convex quadratic functions [1]
It is possible to implement LMSD without controlling the function value of the iterates or the gradient norm, as in [15]. Here Curtis and Guo also show the R-linear convergence of the method. However, in our experiments, we have noticed that this latter implementation converges slower than Fletcher's (for quadratic problems).
To the best of our knowledge, an aspect that has not been discussed yet is the presence of rounding errors in the low-dimensional representation of the Hessian. Except for (13), all the obtained matrices are symmetric, but their expressions are not. Therefore, in a numerical setting, it might happen that a representation of the Hessian is not symmetric. This may result in negative or complex eigenvalues; for this reason, we enforce symmetry by taking the symmetric part of the projected Hessian, i.e., \(\mathbf{B}\leftarrow\frac{1}{2}(\mathbf{B}+\mathbf{B}^{T})\), which is the symmetric matrix nearest to \(\mathbf{B}\). In the Cholesky decomposition, we replace the upper triangle of \(\mathbf{T}\) with the transpose of its lower triangle, in agreement with Fletcher's choice for the unconstrained case (cf. [1] and Section 3.1). In both situations, we discard negative eigenvalues, which may still arise.
In practice, we observe that the non-symmetry of a projected Hessian appears especially in problems with large \(\kappa(\mathbf{A})\), for a relatively large choice of \(m\) (e.g., \(m=10\)) and a small value of \(\mathsf{thresh}\) (e.g., \(\mathsf{thresh}=10^{-10}\)). In this situation, the Cholesky decomposition seems to produce a non-symmetric projected Hessian more often than pivoted QR or SVD. This is likely related to the fact that the Cholesky decomposition of an ill-conditioned Gramian matrix leads to a more inaccurate \(\mathbf{R}\) factor (cf. Section 1). In addition, the symmetrized \(\mathbf{T}\) seems to generate negative eigenvalues more often than the Hessian representations obtained via pivoted QR and SVD. However, these aspects may not directly affect the performance of LMSD. As we will see in Section 5.1, the adoption of different decompositions does not seem to influence the speed of LMSD.
We finally note that for smaller values of \(m\), such as \(m=5\), the projected Hessian tends to be numerically symmetric even for a small \(\mathsf{thresh}\). In fact, fewer gradients form a better condition matrix, because of the following argument. First, we have \(\sigma_{i}^{2}(\mathbf{G})=\lambda_{m-i+1}(\mathbf{G}^{T}\mathbf{G})\), for \(i=1,\ldots,m\). Since the Gramian matrix of the \(s\leq m\) most recent gradients \([\mathbf{g}_{m-s+1},\ldots,\mathbf{g}_{m}]\) is a submatrix of \(\mathbf{G}^{T}\mathbf{G}\), from Cauchy's Interlace Theorem (see, e.g., [18, Thms. 10.2.1 and 10.1.1]), we get that \(\sigma_{\min}(\mathbf{G})\leq\sigma_{\min}([\mathbf{g}_{m-s+1},\ldots,\mathbf{ g}_{m}])\) and \(\sigma_{\max}(\mathbf{G})\geq\sigma_{\max}([\mathbf{g}_{m-s+1},\ldots,\mathbf{ g}_{m}])\). This proves that \(\kappa([\mathbf{g}_{m-s+1},\ldots,\mathbf{g}_{m}])\leq\kappa(\mathbf{G})\).
### General nonlinear functions
We now review the limited memory steepest descent for general unconstrained optimization problems, as implemented in [2] and reported in Algorithm 2. Compared to the gradient method for strictly convex quadratic functions, LMSD for general nonlinear functions has more complications. In Section 3 we have proposed two alternative ways to find a set of real eigenvalues to use as stepsizes. However, we may still get negative eigenvalues. This problem also occurs in classical gradient methods, when \(\mathbf{s}_{k}^{T}\mathbf{y}_{k}<0\): in this case, the standard approach is to replace any negative stepsize with a positive one. In LMSD, we keep \(s\leq m\) positive eigenvalues and discard the negative ones. If all eigenvalues are negative, we restart from \(\beta_{k}=\max(\min(\|\mathbf{g}_{k}\|_{2}^{-1},\,10^{5}),\,\,1)\) as in [7]. Moreover, as in [2], only the latest \(s\) gradients are kept. As an alternative to
this strategy, we also mention the more elaborated approach of Curtis and Guo [14], which involves the simultaneous computation of Ritz and harmonic Ritz values.
The line search of LMSD in [2] is inspired by Algorithm 1 for quadratic functions. Once new stepsizes have been computed, at each sweep we produce a new iterate starting from the smallest stepsize in the stack. The reference function value \(f_{\mathrm{ref}}\) for the Armijo sufficient decrease condition is the function value at the beginning of the sweep, as in Algorithm 1. We note that this Armijo type of line search appropriately replaces the exact line search of Algorithm 1, i.e., the choice of the Cauchy stepsize when a nonmonotone behavior (with respect to \(f_{\mathrm{ref}}\)) is observed. The stack of stepsizes is cleared whenever the current steplength needs to be reduced to meet the sufficient decrease condition, or when the new gradient norm is larger than the previous one. This requirement is also present in Algorithm 1. Notice that, since we terminate the sweep whenever a backtracking step is performed, starting from the smallest stepsizes decreases the likelihood of ending a sweep prematurely. In contrast with [2], we keep storing the past gradients even after clearing the stack. This choice turns out to be favorable for the experiments in Section 5.2.
```
1:Input: Continuously differentiable function \(f\), initial guess \(\mathbf{x}_{0}\), initial stepsize \(\nu_{0}>0\), tolerance tol; safeguarding parameters \(\beta_{\mathrm{max}}>\beta_{\mathrm{min}}>0\); line search parameters \(c_{\mathrm{ls}}\), \(\sigma_{\mathrm{ls}}\in(0,1)\)
2:Output: Approximation to minimizer \(\operatorname*{argmin}_{\mathbf{x}}f(\mathbf{x})\)
3:\(\mathbf{g}_{0}=\nabla f(\mathbf{x}_{0}),\;\;\;\beta_{1}=\nu_{0},\;\;\;\;f_{ \mathrm{ref}}=f(\mathbf{x}_{0})\)
4:\(j=0,\;\;s=1\) # \(s\) is the stack size
5:for\(k=0,1,\ldots\)
6:\(\nu_{k}=\max(\beta_{\mathrm{min}},\;\min(\beta_{j},\,\beta_{\mathrm{max}}))\)
7:if\(f(\mathbf{x}_{k}-\nu_{k}\mathbf{g}_{k})\leq f_{\mathrm{ref}}-c_{\mathrm{ls}} \,\nu_{k}\,\|\mathbf{g}_{k}\|^{2}\)
8:\(\mathbf{x}_{k+1}=\mathbf{x}_{k}-\nu_{k}\mathbf{g}_{k}\)
9:else
10:while\(f(\mathbf{x}_{k}-\nu_{k}\,\mathbf{g}_{k})>f_{\mathrm{ref}}-c_{\mathrm{ls}} \,\nu_{k}\,\|\mathbf{g}_{k}\|^{2}\;\;\;\mathbf{do}\;\;\nu_{k}=\sigma_{\mathrm{ ls}}\,\nu_{k}\;\;\;\mathbf{end}\)
11:\(\mathbf{x}_{k+1}=\mathbf{x}_{k}-\nu_{k}\mathbf{g}_{k}\)
12: clear the stack
13:end
14:if\(\|\mathbf{g}_{k+1}\|\geq\|\mathbf{g}_{k}\|\), clear the stack end
15:\(j=j+1\)
16:if empty stack or\(j>s\)
17: Compute stack of \(s\leq m\) new stepsizes \(\beta_{j}>0\), ordered increasingly
18: Store only last \(s\) vectors of \(\mathbf{G}\)
19:\(j=1,\;\;f_{\mathrm{ref}}=f(\mathbf{x}_{k+1})\)
20:end
21:end
```
**Algorithm 2** LMSD for general nonlinear functions [2]
We remark that, by construction, all new function values within a sweep are smaller than \(f_{\mathrm{ref}}\). Therefore, the line search strategy adopted in [2] can be seen as a nonmonotone line search strategy [23]. Given the uniform bounds imposed on the sequence of stepsizes, the result of global convergence for a gradient method with nonmonotone line search [7, Thm. 2.1] also holds for Algorithm 2.
## 5 Numerical experiments
We explore the several variants of LMSD, for the quadratic and for the general unconstrained case. We compare LMSD with a gradient method with \(\mathrm{ABB}_{\mathrm{min}}\) stepsizes [24]. As claimed by Fletcher [1], we have observed that LMSD may indeed perform better than L-BFGS on some problems. However, in the majority of our test cases, L-BFGS, as implemented in [25], converges faster than LMSD, in terms of number of function (and gradient) evaluations, and computational time. The comparison with another gradient method seems fairer to us than the comparison with a second-order method, and therefore we will not show L-BFGS in our study. Nevertheless, as discussed in Section 1, we recall the two main advantages of considering LMSD methods: the possibility to extend its idea to problems beyond unconstrained optimization (see, e.g., [8, 9]), and the less stringent requirements on the objective functions to guarantee the global convergence of the method.
### Quadratic functions
The performance of the LMSD method may depend on several choices: the memory parameter \(m\), whether we compute Ritz or harmonic Ritz values, and how we compute a basis for either \(\mathcal{S}\) or \(\mathcal{Y}\). This section studies how different choices affect the behavior of LMSD in the context of strictly convex quadratic problems (1).
We consider quadratic problems by taking the Hessian matrices from the Suite-Sparse Matrix Collection [26]. These are 103 SPD matrices with a number of rows \(n\) between \(10^{2}\) and \(10^{4}\). From this collection we exclude only \(\mathsf{mhd}1280\mathsf{b}\), \(\mathsf{nd}3\mathsf{k}\), \(\mathsf{nos}7\). The vector \(\mathbf{b}\) is chosen so that the solution of \(\mathbf{Ax}=\mathbf{b}\) is \(\mathbf{x}^{*}=\mathbf{e}\), the vector of all ones. For all problems, the starting vector is \(\mathbf{x}_{0}=10\,\mathbf{e}\), and the initial stepsize is \(\beta_{0}=1\). The algorithm stops when \(\|\mathbf{g}_{k}\|\leq\mathsf{tol}\cdot\|\mathbf{g}_{0}\|\) with \(\mathsf{tol}=10^{-6}\), or when \(5\cdot 10^{4}\) iterations are reached. We compare the performance of LMSD with memory parameters \(m=3\), \(5\), \(10\) with the \(\mathrm{ABB}_{\mathrm{min}}\) gradient method [24]. Its stepsize is defined as
\[\beta_{k}^{\mathrm{ABB}_{\mathrm{min}}}=\left\{\begin{array}{ll}\min\{\beta_ {j}^{\mathrm{BB2}}\mid j=\max\{1,k-m\},\ldots,k\},&\mbox{if $\beta_{k}^{\mathrm{BB2}}<\eta\,\beta_{k}^{\mathrm{BB1}}$},\\ \beta_{k}^{\mathrm{BB1}},&\mbox{otherwise},\end{array}\right.\]
where \(m=5\) and \(\eta=0.8\). Since the performance of \(\mathrm{ABB}_{\mathrm{min}}\) depends less on the choice of \(m\) than LMSD, we only show \(m=5\) for \(\mathrm{ABB}_{\mathrm{min}}\). Among many possible stepsize choices, we compare LMSD with \(\mathrm{ABB}_{\mathrm{min}}\) because the latter method behaves better than classical BB stepsizes on quadratic problems (see, e.g., [27]).
We recall that one \(\mathrm{ABB}_{\mathrm{min}}\) step requires the computation of three inner products of cost \(\mathcal{O}(n)\) each. An LMSD sweep is slightly more expensive, involving operations of order \(m^{2}n\) and (much less important) \(m^{3}\), but it is performed approximately once
every \(m\) iterations. These costs correspond to the decomposition of either \(\mathbf{G}\) or \(\mathbf{Y}\), the computation of the projected Hessian matrices and their eigenvalues. We also remark that, while pivoted QR and SVD require \(\mathcal{O}(m^{2}n)\) operations, the Cholesky decomposition is \(\mathcal{O}(m^{3})\), but is preceded by the computation of a Gramian matrix, with cost \(\mathcal{O}(m^{2}n)\).
We consider two performance metrics: the number of gradient evaluations (NGE) and the computational time. The number of gradient evaluations also includes the iterations that had to be restarted with a Cauchy step (cf. Algorithm 1, Line 9). Our experience indicates that computational time may depend significantly on the chosen programming language, and therefore should not be the primary choice in the comparison of the methods. Nevertheless, it is included as an indication, because it takes into account the different costs of an LMSD sweep and \(m\) iterations of a gradient method.
The comparison of different methods is made by means of the performance profile [28], as it is implemented in Python's library perfprof. Briefly speaking, the cost of each algorithm per problem is normalized, so that the winning algorithm has cost 1. This quantity is called _performance ratio_. Then we plot the proportion of problems that have been solved within a certain performance ratio. An infinite cost is assigned whenever a method is not able to solve a problem to the tolerance within the maximum number of iterations.
We compare the performance of LMSD where the stepsizes are computed as summarized in Table 1.
In the first comparison we only consider methods that involve a Cholesky decomposition for simplicity. The Cholesky routine raises an error any time the input matrix is not SPD; therefore no tolerance thresh for discarding old gradients needs to be chosen. The performance profiles for this first experiment are shown in Figure 1 for \(m\in\{3,5,10\}\), in the performance range [1, 3]. As \(m\) increases all methods improve, both in terms of gradient evaluations and computational time.
The method that performs best, both in terms of NGE and computational time, is LMSD-G for \(m=10\). When \(m=5\), LMSD-HG-RQ performs better than LMSD-G in terms of NGE, but it is more computationally demanding. This is reasonable, since LMSD-HG-RQ has to compute \(m\) extra Rayleigh quotients; this operation has an
\begin{table}
\begin{tabular}{l l l} \hline \hline Method & Description & Matrix \\ \hline LMSD-G [1] & Cholesky on \(\mathbf{G}^{T}\mathbf{G}\) to compute the inverse Ritz values of \(\mathbf{A}\) & \(\mathbf{T}\) (8) \\ LMSD-G-QR & Pivoted QR on \(\mathbf{G}\) to compute the inverse Ritz values of \(\mathbf{A}\) & \(\mathbf{B}^{\mathrm{QR}}\) (10) \\ LMSD-G-SVD & SVD on \(\mathbf{G}\) to compute the inverse Ritz values of \(\mathbf{A}\) & \(\mathbf{B}^{\mathrm{SVD}}\) (11) \\ LMSD-HG [1] & \(\mathbf{A}\cdot\mathrm{span}(\mathbf{G})\) to compute the inverse harmonic Ritz values of \(\mathbf{A}\) & \(\mathbf{T}^{-1}\mathbf{P}\) (13) \\ LMSD-HG-RQ & \(\mathbf{A}\cdot\mathrm{span}(\mathbf{G})\) to find the harmonic Ritz vectors \(\mathbf{A}\) and compute their & \(\mathbf{T}^{-1}\mathbf{P}\) (13) \\ & inverse Rayleigh quotients (cf. end of Sec. 2.2) & \\ LMSD-HY & Cholesky on \(\mathbf{Y}^{T}\mathbf{Y}\) to compute the Ritz values of \(\mathbf{A}^{-1}\) & \(\mathbf{H}^{\mathrm{CH}}\) (17) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Strategies to compute the new stack of stepsizes in LMSD methods for quadratic functions. RQ refers to the computation of Rayleigh quotients from the harmonic Ritz vectors. H stands for “harmonic”, the letters G (Y) indicate whether a decomposition has been used to implicitly compute a basis for \(\mathbf{G}\) (\(\mathbf{Y}\)).
additional cost of \(m^{3}\), which can be relatively large for some problems in our collection, where \(m^{3}\approx n\).
LMSD-HG and LMSD-HY perform similarly, since they are two different ways of computing the same harmonic Ritz values. They generally perform worse than the other two methods; in the case \(m=10\), their performances are comparable with those of LMSD-G for \(m=5\).
In Figure 2 we compare LMSD with ABB\({}_{\min}\). Given the comments to Figure 1, we decide to compute the Ritz values of the Hessian matrix, by decomposing \(\mathbf{G}\) in different ways. Specifically, we compare LMSD-G, LMSD-G-QR and LMSD-G-SVD
Figure 1: Performance profile for strictly convex quadratic problems, based on the number of gradient evaluations (left) and computational time (right). Different line types indicate different values for \(m\). Comparison between the computation of Ritz values or harmonic Ritz values.
Figure 2: Performance profile for strictly convex quadratic problems, based on the number of gradient evaluations (left) and computational time (right). Different line types indicate different values for \(m\). Comparison between different decompositions for the matrix \(\mathbf{G}\).
(cf. Table 1). The tolerance to decide the memory size \(s\leq m\) is set to \(\mathsf{thresh}=10^{-8}\), for both pivoted QR and SVD. Once more, we clearly see that LMSD improves as the memory parameter increases, both in terms of gradient evaluations and computational time. Once \(m\) is fixed, the three methods to compute the basis for \(\mathcal{S}\) are almost equivalent. LMSD-G-SVD seems to be slightly faster than LMSD-G in terms of computational time, as long as the performance ratio is smaller than \(1.5\). In our implementation, LMSD-G-QR seems to be more expensive. Compared to \(\mathrm{ABB}_{\min}\), all LMSD methods with \(m=5,10\) perform better in terms of gradient evaluations. LMSD-G-SVD, for \(m=10\), appears to be faster than \(\mathrm{ABB}_{\min}\) also in terms of computational time.
Figure 2 already suggests that different decompositions give approximately equivalent results. In addition, given a problem, it is difficult to recommend a certain decomposition strategy. We illustrate this idea with the following example: consider a family of \(15\) problems with \(\mathbf{A}=\mathrm{diag}(1,\omega,\omega^{2},\ldots,\omega^{99})\), where \(\omega\) assumes \(15\) values equally spaced in \([1.01,\,1.4]\). Geometric sequences as eigenvalues are quite frequent in the literature; see, e.g., [1, 2]. The starting vector is \(\mathbf{x}_{0}=\mathbf{e}\), the associated linear system is \(\mathbf{A}\mathbf{x}=\mathbf{0}\); the memory parameter is \(m=5\), and each problem is scaled by the norm of the first gradient, so that \(\mathsf{tol}=10^{-7}/\|\mathbf{g}_{0}\|\). The initial stepsize is \(\beta_{0}=0.5\). In Figure 3, we plot the condition number of \(\mathbf{A}\) against the number of gradient evaluations. The three methods start to differ already with \(\kappa(\mathbf{A})\approx 10^{5}\). For a large condition number, there is no clear winner in the performed experiments.
To summarize, when the objective function is strictly convex quadratic, Ritz values seem preferable over harmonic Ritz values. This is emphasized by the improvement of LMSD-HG when taking Rayleigh quotients instead of harmonic Rayleigh quotients. Different decompositions of \(\mathbf{G}\) result in mild differences in the performance of LMSD. Even if Cholesky decomposition is the least stable from a numerical point of view, its
Figure 3: Condition number of quadratic problems with Hessian matrix \(\mathbf{A}\) and corresponding number of gradient evaluations. Different colors indicate different ways of computing the Ritz values of \(\mathbf{A}\).
instability does not seem to have a clear effect on the performance of LMSD. Finally, we observe that, in all methods, LMSD seems to improve as the memory parameter \(m\) increases.
### General unconstrained optimization problems
In this section we want to assess the performance of LMSD for general unconstrained optimization problems, when we choose different methods to compute the stepsizes. These choices are summarized in Table 2.
All the methods presented in Section 3 are considered, along with the extension of the harmonic Ritz values computation to the general unconstrained case. This is explained in [14], and indicated as LMSD-H-CHOL. In the quadratic case, the authors point out that the matrix \(\mathbf{P}\) (14) can be expressed in terms of \(\mathbf{T}\) as \(\mathbf{P}=\mathbf{T}^{T}\mathbf{T}+\boldsymbol{\xi}\boldsymbol{\xi}^{T}\), where \(\boldsymbol{\xi}^{T}=\left[\mathbf{0}^{T}\ \rho\right]\mathbf{J}\mathbf{R}^{-1}\). Then, if \(\widetilde{\mathbf{T}}\) is the tridiagonal symmetrization of \(\mathbf{T}\) as in LMSD-CHOL, the new \(\mathbf{P}\) is defined as \(\widetilde{\mathbf{P}}=\widetilde{\mathbf{T}}^{T}\widetilde{\mathbf{T}}+ \boldsymbol{\xi}\boldsymbol{\xi}^{T}\). The new stepsizes are the eigenvalues of \(\widetilde{\mathbf{P}}^{-1}\widetilde{\mathbf{T}}\), and are real since \(\widetilde{\mathbf{P}}\) is generically SPD, and \(\widetilde{\mathbf{T}}\) is symmetric.
All the LMSD methods are tested against the gradient method with nonmonotone line search [7]. The stepsize choice is again \(\text{ABB}_{\min}\) with \(m=5\). The nonmonotone line search features a memory parameter \(M=10\); negative stepsizes are replaced by \(\beta_{k}=\max(\min(\|\mathbf{g}_{k}\|^{-1},\,10^{5}),\ 1)\), as in [7]. In both algorithms, we set \(\beta_{\min}=10^{-30}\), \(\beta_{\max}=10^{30}\), \(c_{\text{ls}}=10^{-4}\), \(\sigma_{\text{ls}}=\frac{1}{2}\), and \(\beta_{0}=\|\mathbf{g}_{0}\|^{-1}\). The routine stops when \(\|\mathbf{g}_{k}\|\leq\text{tol}\cdot\|\mathbf{g}_{0}\|\), with \(\text{tol}=10^{-6}\), or when \(10^{5}\) iterations are reached. In LMSD, the memory parameter has been set to \(m\in\{3,5,7\}\).
We take 31 general differentiable functions from the CUTEst collection [29, 30] and the suggested starting points \(\mathbf{x}_{0}\) therein. The problems are reported in Table 3. Since some test problems are non-convex, we checked whether all gradient methods converged to the same stationary point for different methods. As the performance profile, we may consider three different costs: the number of function evaluations (NFE), the number of iterations, and the computational time. The number of iterations coincides with the number of gradient evaluations for both LMSD and \(\text{ABB}_{\min}\).
Before comparing LMSD methods with the \(\text{ABB}_{\min}\) gradient method, we discuss the following two aspects of LMSD: the use of different decompositions of either \(\mathbf{G}\) or \(\mathbf{S}\) in LMSD-LYA, which has been presented in Section 3.3; the number of steps
\begin{table}
\begin{tabular}{l l} \hline Method & Description \\ \hline LMSD-CHOL [1] & Tridiagonalize \(\mathbf{T}\) as in [1] and compute its inverse eigenvalues \\ LMSD-H-CHOL [14] & Symmetric \(\mathbf{P}^{-1}\mathbf{T}\) as in [14] and compute its eigenvalues \\ LMSD-LYA & Inverse eigenvalues of the solution to Prop. 6 (i) with Cholesky of \(\mathbf{G}^{T}\mathbf{G}\) to handle rank deficiency \\ LMSD-LYA-QR & Idem with pivoted QR of \(\mathbf{G}\) to handle rank deficiency \\ LMSD-LYA-SVD & Idem with SVD of \(\mathbf{S}\) to handle rank deficiency \\ LMSD-H-LYA & Eigenvalues of the solution to Prop. 6 (ii) with Cholesky of \(\left[\mathbf{G}\ \mathbf{g}_{m+1}\right]^{T}\left[\mathbf{G}\ \mathbf{g}_{m+1}\right]\) to handle rank deficiency \\ LMSD-PERT & Perturb \(\mathbf{Y}\) according to [12] to get (19) and compute its inverse eigenvalues \\ \hline \end{tabular}
\end{table}
Table 2: Strategies to compute the new stack of stepsizes in LMSD methods for general nonlinear functions. H stands for “harmonic”.
per sweep that are actually used by each LMSD method, in relation with the chosen memory parameter \(m\).
**Different decompositions in LMSD-LYA.** In the quadratic case, we notice that there is not much difference between the listed decompositions to compute a basis for \(\mathcal{S}\). We repeat this experiment with LMSD-LYA, for general unconstrained problems, because the Hessian matrix is not constant during the iterations and therefore the way we discard the past gradients might be relevant. We recall that Cholesky decomposition (LMSD-LYA) discards the oldest gradients first, pivoted QR (LMSD-LYA-QR) selects the gradients in a different order; SVD (LMSD-LYA-SVD) takes a linear combination of the available gradients. For the last two methods, the tolerance to detect linear dependency is set to \(\mathsf{thresh}=10^{-8}\).
Figure 4 shows the three decompositions for \(m=5\). Memory parameters \(m=3\), \(7\) are not reported as they are similar to the case \(m=5\). The conclusion is the same as for the quadratic case: the decomposition method does not seem to have a large impact on the performance of LMSD. However, for the general case, we remark that while LMSD-LYA solved all problems, both LMSD-LYA-QR and LMSD-LYA-SVD fail to solve one problem each, for all the tested memory parameters. In addition, LMSD-LYA seems more computationally efficient than the other methods. For these two reasons, we continue our analysis by focusing on Cholesky decomposition only.
\begin{table}
\begin{tabular}{l r l r r r} \hline \hline Problem & \(n\) & Problem & \(n\) & Problem & \(n\) \\ \hline ARGTRIGLS & 200 & EIGENBLS & 110 & MOREBV & 5000 \\ CHNROSNB & 50 & EIGENCLS & 462 & MSQRTALS & 529 \\ COATING & 134 & ERRINROS & 50 & MSQRTBLS & 529 \\ COSINE & 10000 & EXTROSNB & 1000 & NONCVXU2 & 10000 \\ DIXMAANE1 & 3000 & FLETCHCR & 1000 & NONCVXUN & 10000 \\ DIXMAANF & 9000 & FMINSURF & 1024 & NONDQUAR & 10000 \\ DIXMAANG & 9000 & GENHUMPS & 5000 & SPMSRTLS & 10000 \\ DIXMAANH & 9000 & GENROSE & 500 & SSBRYBND & 5000 \\ DIXMAANJ & 9000 & LUKSAN11LS & 100 & TQUARTIC & 5000 \\ DIXMAANK & 9000 & LUKSAN21LS & 100 & & \\ EIGENALS & 110 & MODBEALE & 2000 & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Problems from the CUTEst collection and their sizes.
Figure 4: Performance profile for general unconstrained problems, based on the number of function evaluations, gradient evaluations, and computational time. Comparison between different decompositions for the matrix **G** (or **S**) and \(m=5\).
**Average number of stepsizes per sweep.** We quantify the efficiency of the various LMSD methods as follows. Ideally, each sweep should provide \(m\) new stepsizes, which are supposed to be used in the next \(m\) iterations. However, because of the algorithm we adopted, less than \(m\) stepsizes are actually employed before the stack is cleared. For each problem and method, we compute the ratio between the number of iterations and the number of sweeps. This gives the average number of stepsizes that are used in each sweep. This value is in \([1,m]\), where the memory parameter \(m\) indicates the ideal situation where all the steps are used in a sweep. A method that often uses less than \(m\) stepsizes might be inefficient, since the effort of computing new stepsizes (of approximately \(\mathcal{O}(m^{2}n)\) operations) is not entirely compensated.
The number of iterations per sweep is shown in Figure 5 as a distribution function over the tested problems. An ideal curve should be a step function with a jump in \(m\). For example, when \(m=3\), LMSD-CHOL, i.e., Fletcher's method, tends to use 3 stepsizes on average for approximately \(80\%\) of the problems; this is close to the desired situation. When \(m=5\), we notice that LMSD-H-LYA and LMSD-LYA have a similar behavior but for an average smaller than 5. In all cases, LMSD-H-LYA is the curve that shows the lowest average number of steps per sweep. Another interesting behavior is the one of LMSD-PERT, which, for some problems, approaches the largest value \(m\), but, for many others, shows a lower average. In \(m=5,7\), more than \(50\%\) of problems are solved by using only half of the available stepsizes per sweep. This behavior was reflected by the performance profiles of LMSD-PERT: while going from \(m=5\) to \(m=7\), we observed an improvement in terms of the number of function evaluations, but a deterioration in the computational time.
As \(m\) increases, the deterioration of the average number of stepsizes per sweep is also visible for the other methods. As already remarked by [1, 2], this suggests that choosing a large value for \(m\) does not improve the LMSD methods for general unconstrained problems. This is in contrast with what we have observed in the quadratic case.
**Comparison with a gradient method.** In what follows, we consider only \(m=5\) for the comparison with \(\text{ABB}_{\min}\). For LMSD, we do not include \(m=3\) because it
Figure 5: Cumulative distribution function of the number of iterations per sweep, i.e., the average number of stepsizes per sweep. Curves are based on the tested problems. Straight dashed lines indicate the uniform distribution over \([1,m]\).
showed poorer results compared to the simpler nonmonotone gradient method. LMSD for \(m=7\) is not considered since it gives performances similar to \(m=5\), but with a higher computational cost. Results are shown in Figure 6. From the performance profiles related to the computational time, we see that \(\text{ABB}_{\min}\) solves a high proportion of problems with the minimum computational time. LMSD-LYA, LMSD-PERT start competing with the \(\text{ABB}_{\min}\) gradient method when the performance ratio is larger than \(1.5\).
Regarding the performance profiles for both NFE and NGE, we note a similar pattern: LMSD-PERT has the highest curve; LMSD-LYA and LMSD-CHOL almost overlap for a performance ratio smaller than \(1.5\); after that, the two curves split, and LMSD-CHOL reaches LMSD-PERT.
The LMSD-PERT method solves \(52\%\) of the problems with the minimum number of gradient evaluations. By looking at the performance profile for the NFE, this fact does not seem to be complemented by a low number of function evaluations. Intuitively, this means that LMSD-PERT enters the backtracking procedure more often than the other methods, and it reflects what we have also observed in the central plot of Figure 5. Any time we enter the backtracking procedure, the stack of stepsizes is cleared and the sweep is terminated. Then, the more backtracking we need, the smaller the number of stepsizes per sweep we use.
LMSD-H-CHOL and LMSD-H-LYA, the "harmonic" approaches, perform a little bit worse than the other methods: while LMSD-H-CHOL can still compete with \(\text{ABB}_{\min}\) in terms of NFE and NGE, it performs worse in terms of computational time. LMSD-H-LYA performs generally worse than the other techniques; Figure 5 was already suggesting the poorer quality of the stepsizes of LMSD-H-LYA, which often need backtracking or lead to an increasing gradient norm.
To complete the picture, Table 4 reports two important quantities related to the performance profile: the proportion of problems solved by each method, and the proportion of problems solved with minimum cost, which is not always clearly visible from Figure 6. We notice that \(\text{ABB}_{\min}\) and LMSD-H-LYA fail to solve one of the 31 tested problems. When \(\text{ABB}_{\min}\) succeeds, it solves \(32\%\) of problems with minimum NFE and \(39\%\) of problems with minimum computational time. LMSD-PERT wins in
Figure 6: Performance profile for general unconstrained problems, based on the number of function evaluations, gradient evaluations, and computational time. Comparison between different ways to compute the new stepsizes of a sweep in LMSD, and the gradient method with nonmonotone line search and- the \(\text{ABB}_{\min}\) step.
terms of NGE. The proportion of problems solved with minimum NFE is the same for LMSD-PERT, LMSD-LYA, and LMSD-H-CHOL.
## 6 Conclusions
We have reviewed the limited memory steepest descent method proposed by Fletcher [1], for both quadratic and general nonlinear unconstrained problems. In the context of strictly convex quadratic functions, we have explored pivoted QR and SVD as alternative ways to compute a basis for either the matrix \(\mathbf{G}\) (Ritz values) or \(\mathbf{Y}\) (harmonic Ritz values). We have also proposed to improve the harmonic Ritz values by computing the Rayleigh quotients of their corresponding harmonic Ritz vectors.
Experiments in Section 5.1 have shown that the type of decomposition has little influence on the number of iterations of LMSD. The choice between Cholesky decomposition, pivoted QR and SVD is problem dependent. These three methods may compete with the ABB\({}_{\min}\) gradient method.
The experiments also suggest that a larger memory parameter improves the performance of LMSD, and Ritz values seem to perform better than harmonic Ritz values. The modification of the harmonic Ritz values (Section 2.2) effectively improves the number of iterations, at the extra expense of (relatively cheap) \(\mathcal{O}(m^{3})\) work.
In the context of general nonlinear functions, we have given a theoretical foundation to Fletcher's idea [1] (LMSD-CHOL), by connecting the symmetrization of \(\mathbf{T}\) (8) to a perturbation of \(\mathbf{Y}\). We have proposed another LMSD method (LMSD-PERT) based on a different perturbation given by Schnabel [12] in the area of quasi-Newton methods. An additional modification of LMSD for general functions (LMSD-LYA) has been obtained by adding symmetry constraints to the secant condition of LMSD for quadratic functions. The solution to this problem coincides with the solution to a Lyapunov equation.
In Section 5.2, experiments on general unconstrained optimization problems have shown that, in contrast with the quadratic case, increasing the memory parameter does not necessarily improve the performance of LMSD. This may also be related to the choices made in Algorithm 2, such as the sufficient decrease condition or the criteria to keep or discard old gradients.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Solved (\%) & \multicolumn{3}{c}{PR = 1 (\%)} \\ & NFE & NGE & Time \\ \hline LMSD-PERT & 1.00 & 0.16 & 0.52 & 0.16 \\ LMSD-LYA & 1.00 & 0.16 & 0.23 & 0.19 \\ LMSD-H-CHOL & 1.00 & 0.16 & 0.03 & 0.10 \\ LMSD-CHOL & 1.00 & 0.13 & 0.13 & 0.10 \\ ABB\({}_{\min}\) & 0.97 & 0.32 & 0.10 & 0.39 \\ LMSD-H-LYA & 0.97 & 0.06 & 0.06 & 0.06 \\ \hline \hline \end{tabular}
\end{table}
Table 4: For each method, we report the proportion of solved problems and the proportion of problems solved at minimum cost (performance ratio equal to 1) for different performance measures. The memory parameter is \(m=5\) for all the LMSD methods.
Given a certain memory parameter, the aforementioned LMSD methods seem to perform equally well in terms of the number of function evaluations and computational time. They all seem valid alternatives to the nonmonotone gradient method based on ABB\({}_{\min}\) stepsizes, with the caveat that LMSD-PERT and LMSD-LYA tend to not exploit all the stepsizes computed in a sweep, more often than LMSD-CHOL.
A Python code for the LMSD methods and the nonmonotone ABB\({}_{\min}\) is available at github.com/gferroni/lmsdpy.
**Acknowledgments:** This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 812912. We would also like to thank Natasa Krejic for the inspiring discussions on gradient methods.
|
2306.12386 | $\mathbf{\mathbb{E}^{FWI}}$: Multi-parameter Benchmark Datasets for
Elastic Full Waveform Inversion of Geophysical Properties | Elastic geophysical properties (such as P- and S-wave velocities) are of
great importance to various subsurface applications like CO$_2$ sequestration
and energy exploration (e.g., hydrogen and geothermal). Elastic full waveform
inversion (FWI) is widely applied for characterizing reservoir properties. In
this paper, we introduce $\mathbf{\mathbb{E}^{FWI}}$, a comprehensive benchmark
dataset that is specifically designed for elastic FWI.
$\mathbf{\mathbb{E}^{FWI}}$ encompasses 8 distinct datasets that cover diverse
subsurface geologic structures (flat, curve, faults, etc). The benchmark
results produced by three different deep learning methods are provided. In
contrast to our previously presented dataset (pressure recordings) for acoustic
FWI (referred to as OpenFWI), the seismic dataset in
$\mathbf{\mathbb{E}^{FWI}}$ has both vertical and horizontal components.
Moreover, the velocity maps in $\mathbf{\mathbb{E}^{FWI}}$ incorporate both P-
and S-wave velocities. While the multicomponent data and the added S-wave
velocity make the data more realistic, more challenges are introduced regarding
the convergence and computational cost of the inversion. We conduct
comprehensive numerical experiments to explore the relationship between P-wave
and S-wave velocities in seismic data. The relation between P- and S-wave
velocities provides crucial insights into the subsurface properties such as
lithology, porosity, fluid content, etc. We anticipate that
$\mathbf{\mathbb{E}^{FWI}}$ will facilitate future research on multiparameter
inversions and stimulate endeavors in several critical research topics of
carbon-zero and new energy exploration. All datasets, codes and relevant
information can be accessed through our website at https://efwi-lanl.github.io/ | Shihang Feng, Hanchen Wang, Chengyuan Deng, Yinan Feng, Yanhua Liu, Min Zhu, Peng Jin, Yinpeng Chen, Youzuo Lin | 2023-06-21T17:11:35Z | http://arxiv.org/abs/2306.12386v2 | # \(\mathbb{E}^{\text{FWI}}\): Multiparameter Benchmark Datasets for
###### Abstract
Elastic geophysical properties (such as P- and S-wave velocities) are of great importance to various subsurface applications like CO\({}_{2}\) sequestration and energy exploration (e.g., hydrogen and geothermal). Elastic full waveform inversion (FWI) is widely applied for characterizing reservoir properties. In this paper, we introduce \(\mathbb{E}^{\text{FWI}}\), a comprehensive benchmark dataset that is specifically designed for elastic FWI. \(\mathbb{E}^{\text{FWI}}\) encompasses 8 distinct datasets that cover diverse subsurface geologic structures (flat, curve, faults, etc). The benchmark results produced by three different deep learning methods are provided. In contrast to our previously presented dataset (pressure recordings) for acoustic FWI (referred to as OpenFWI), the seismic dataset in \(\mathbb{E}^{\text{FWI}}\) has both vertical and horizontal components. Moreover, the velocity maps in \(\mathbb{E}^{\text{FWI}}\) incorporate both P- and S-wave velocities. While the multicomponent data and the added S-wave velocity make the data more realistic, more challenges are introduced regarding the convergence and computational cost of the inversion. We conduct comprehensive numerical experiments to explore the relationship between P-wave and S-wave velocities in seismic data. The relation between P- and S-wave velocities provides crucial insights into the subsurface properties such as lithology, porosity, fluid content, etc. We anticipate that \(\mathbb{E}^{\text{FWI}}\) will facilitate future research on multiparameter inversions and stimulate endeavors in several critical research topics of carbon-zero and new energy exploration. All datasets, codes1 and relevant information can be accessed through our website at [https://efwi-lanl.github.io/](https://efwi-lanl.github.io/).
Footnote 1: Codes will be released upon approval by Los Alamos National Laboratory and U.S. Department of Energy.
## 1 Introduction
Seismic waves, propagating through the subsurface medium, can unveil the physical properties of the rock formations. Full waveform inversion (FWI) has emerged as an effective technique for obtaining high-resolution models of the subsurface physical properties [1; 2; 3]. The determination of such properties from seismic data is posed as an inverse problem. FWI is designed to find a solution
by minimizing the difference between observed and synthetic seismic data [4]. This technique has made substantial contributions across a range of domains, including geothermal energy exploration, earthquake monitoring, subsurface imaging for engineering applications, and many others [5].
Acoustic approximations have been widely employed in wavefield simulation for FWI, resulting in a substantial reduction in computational cost [6; 7]. It assumes that the subsurface medium behaves as a fluid and focuses on simulating the kinematic aspects of compressional (P) wave propagation within the medium. However, acoustic wave propagation is an oversimplified representation of real-world scenarios, as it solely considers P-wave propagation and does not adequately model the dynamics of the wavefield [8; 9]. Consequently, this oversimplification leads to suboptimal accuracy of the reconstructed medium parameters [10; 11; 12; 13; 14].
**Why elastic FWI:** Elastic inversion, which considers both P- and shear (S-) waves, provides a more comprehensive and precise representation of the subsurface. The correlation between the P-(\(\mathrm{V_{P}}\)) and S-wave velocities (\(\mathrm{V_{S}}\)) holds significant implications in the determination of Poisson's ratio (i.e., \(\mathrm{V_{P}}\)-\(\mathrm{V_{S}}\) ratio) and Young's modulus. These parameters play a vital role in the reservoir characterization and serve as essential indicators in the identification and assessment of hydrogen and geothermal reservoirs [15; 16; 17; 18]. The following aspects highlight their significance:
* _Lithology discrimination_: _Combination of \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\) velocities is useful for the lithology estimation, while \(\mathrm{V_{P}}\) alone introduces significant ambiguity because of the overlap of \(\mathrm{V_{P}}\) for different types of rocks [19].
* _Fracture characterization_: _Using the Poisson's ratio (\(\mathrm{V_{P}}\)-\(\mathrm{V_{S}}\) ratio) and S-wave splitting can estimate fracture orientation and facilitate hydraulic fracturing stimulation [20].
Figure 2: **Comparison of elastic data in \(\mathbb{E}^{\mathbf{FWI}}\) and acoustic data in OpenFWI. Acoustic data only contain P-waves propagation while elastic data contain both P- and S-waves.**
* _Estimation of fluid content and saturation_: _Poisson's ratio (\(\mathrm{V_{P}}\)-\(\mathrm{V_{S}}\) ratio) allows us to estimate the compressibility and estimate the fluid property qualitatively with other relevant reservoir parameters such as the pressure and temperature [21]._
Elastic FWI, as a prominent multiparameter-inversion technique, allows us to simultaneously estimate P- and S-wave velocities [22]. However, the simultaneous consideration of multiple parameters and the expanded dimensions of seismic data significantly increases the complexity of the objective function. This escalation results from the enhanced nonlinearity and the induced trade-offs between the velocities. The coupled impact of P- and S-wave velocities on seismic response further complicates the iterative update process for each parameter. Additionally, the nonlinearity becomes even more pronounced when multiple parameter classes are incorporated into the inversion, as this substantially expands the model space by introducing an increased degree of freedom [23]. Thus, the multidimensionality of elastic FWI renders the problem considerably more complex and challenging compared to the acoustic single-parameter counterpart. With the recent development of machine learning, researchers have been actively exploring _data-driven_ solutions for multiparameter FWI, including multilayer perceptron (MLP) [24], encoder-decoder-based convolutional neural networks (CNNs) [25; 26], recurrent network [27; 28], generative adversarial networks (GANs) [29], etc. Nonetheless, the absence of a publicly available elastic dataset poses challenges in facilitating a fair comparison of these methods.
Here, we present \(\mathbb{E}^{\mathbf{FWI}}\), which stands as the pioneering large-scale compilation of an open-access elastic seismic full-waveform dataset. Examples of Poisson's ratio maps, P- and S-wave velocity maps are shown in Figure 1. \(\mathbb{E}^{\mathbf{FWI}}\) is constructed upon our previously published open-access acoustic seismic dataset, known as OpenFWI [30]. Our approach incorporates the advantageous characteristics of _multi-scale_, _multi-domain_, and _multi-subsurface-complexity_, inherited from the OpenFWI framework. Furthermore, \(\mathbb{E}^{\mathbf{FWI}}\) entails the creation of S-wave velocity maps and employs the elastic wave equation in the forward modeling phase (Figure 2). The computational demands associated with conducting elastic forward modeling are substantial. Consequently, the availability of this dataset would significantly alleviate the burden on researchers.
\(\mathbb{E}^{\mathbf{FWI}}\) facilitates equitable comparisons across various methodologies using multiple datasets. In this study, we evaluate the effectiveness of three prominent methodologies derived from pre-existing networks, namely InversionNet [31], VelocityGAN [32], and SimFWI [33]. The objective of this evaluation is to establish a benchmark for future investigations. For comprehensive replication attempts, including the GitHub repository, pre-trained models, and associated licenses, we direct readers to the resources referenced in Section 1 of supplementary materials.
The rest of this paper is organized as follows: Section 2 offers a comprehensive overview of the fundamental principles governing elastic FWI. Section 3 presents a detailed description of the methodology employed in the construction of the dataset. Section 4 offers a succinct introduction to three deep learning methods employed for benchmarking purposes, alongside the presentation of inversion performance on each dataset. The investigation of the interdependence between P- and S-waves is conducted through ablation experiments, as outlined in Section 5. Section 6 outlines the challenges faced and discusses the future implications of the dataset. Lastly, Section 7 offers conclusive remarks summarizing the key findings and contributions.
## 2 Elastic Forward Modeling and Data-driven FWI
Figure 3 provides a concise illustration of 2D data-driven elastic FWI and the relationship between P-, S-wave velocity maps and the input horizontal and vertical components of particle displacement therein. In general, the objective of data-driven elastic FWI is to use neural networks to obtain the subsurface velocity maps of the P- (\(\mathrm{V_{P}}\)) and S-waves (\(\mathrm{V_{S}}\)). The velocities represent the propagation speed of P- and S-waves through the subsurface medium, which is contingent on the spatial coordinates \((x,z)\). Additionally, we consider the density of the subsurface as \(\rho\). The source term, denoted as \(\mathbf{s}\), is influenced by the spatial coordinates and time \((x,z,t)\). It serves to excite both the P- and S-wave components. The particle displacement in the horizontal and vertical directions is represented by the vector \(\mathbf{u}=(u_{x},u_{z})\). The governing equation of the elastic wave forward modeling in an isotropic medium can be described as [34]
\[\rho\frac{\partial^{2}\mathbf{u}}{\partial t^{2}}-\nabla[\rho(V_{P}^{2}-2V_{S}^{2})( \nabla\cdot\mathbf{u})]-\nabla\cdot[\rho V_{S}^{2}(\nabla\mathbf{u}+(\nabla \mathbf{u})^{T})]=\mathbf{s}. \tag{1}\]
For simplicity, we assume a constant density \(\rho\) with the value of 1 \(g/cm^{3}\). The forward modeling problem is expressed as the expression \((u_{x},u_{z})=f_{e}(V_{P},V_{S})\), where \(f_{e}(\cdot)\) represents the highly nonlinear elastic forward modeling, and it describes how the P- and S-waves, generated by the source \(s\), travel through the subsurface as \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\), respectively, over time \(t\). These waves are then recorded by receivers and measured as the components \(u_{x}\) and \(u_{z}\). The goal of data-driven elastic FWI is to utilize neural networks to learn the inverse mapping \((V_{P},V_{S})=f_{e}^{-1}(u_{x},u_{z})\). This inverse mapping enables us to infer the subsurface velocity maps (\(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\)) based on the recorded particle displacements (\(u_{x}\) and \(u_{z}\)) obtained from the receivers. By training the neural networks with a dataset of recorded waveforms and corresponding velocity maps, we can optimize the network parameters to accurately estimate the subsurface velocities.
## 3 \(\mathbb{E}^{\mathbf{FWI}}\) Dataset
This section describes the methodology used to extend the velocity maps from the OpenFWI dataset to elastic FWI and generate our new dataset \(\mathbb{E}^{\mathbf{FWI}}\). Our intention is to provide an accessible, open-source benchmark dataset that can comprehensively facilitate the development and evaluation of the machine learning algorithms in elastic FWI.
The basic information and physical meaning of all the datasets in \(\mathbb{E}^{\mathbf{FWI}}\) is summarized in Table 1 and Table 2. The velocity maps encompass P- \(\mathrm{V_{P}}\) and S-wave velocities \(\mathrm{V_{S}}\), whereas the seismic data comprise the horizontal and vertical components of particle displacement, \(u_{x}\) and \(u_{z}\). The geophysical attributes in "\(\mathbb{E}^{\mathbf{Vel}}\) Family" and the "\(\mathbb{E}^{\mathbf{Fault}}\) Family" have been constructed utilizing the \(\mathrm{V_{P}}\) maps derived from two distinct groups, namely the "_Vel_ Family" and the "_Fault_ Family" within the OpenFWI dataset. Similar to OpenFWI, the dataset has been categorized into two distinct versions, namely easy (A) and hard (B), based on the relative complexity of the subsurface structures. A thorough examination of the methodologies employed in the construction of the \(\mathrm{V_{P}}\) maps and the detailed analysis of the complexity inherent in the velocity maps can be found in [30].
\begin{table}
\begin{tabular}{c|c|c c c} \hline Group & Dataset & Size & \#Train/\#Test & Seismic Data Size & Velocity Map Size \\ \hline \(\mathbb{E}^{\mathbf{Vel}}\) & \(\mathbb{E}^{\mathbf{FWI}/\mathbf{B}}\) & \(123\)GB & \(24\)K / \(6\)K & \(5\times 1000\times 1\times 70\) & \(70\times 1\times 70\) \\ Family & \(\mathbb{E}^{\mathbf{CVA}/\mathbf{B}}\) & \(123\)GB & \(24\)K / \(6\)K & \(5\times 1000\times 1\times 70\) & \(70\times 1\times 70\) \\ \hline \(\mathbb{E}^{\mathbf{Fault}}\) & \(\mathbb{E}^{\mathbf{FWI}/\mathbf{B}}\) & \(222\)GB & \(48\)K / \(6\)K & \(5\times 1000\times 1\times 70\) & \(70\times 1\times 70\) \\ Family & \(\mathbb{E}^{\mathbf{CFA}/\mathbf{B}}\) & \(222\)GB & \(48\)K / \(6\)K & \(5\times 1000\times 1\times 70\) & \(70\times 1\times 70\) \\ \hline \end{tabular}
\end{table}
Table 1: **Dataset summary of \(\mathbb{E}^{\mathbf{FWI}}\). Velocity maps are represented in dimensions of depth \(\times\) width \(\times\) length, while seismic data is presented as #sources \(\times\) time \(\times\) #receivers in width \(\times\) #receivers in length.**
Figure 3: **Schematic depiction of the data-driven approach for elastic forward modeling and FWI. The forward modeling process involves utilizing elastic forward modeling to compute seismic data by employing the governing elastic wave equations, while elastic FWI employs neural networks to infer the P- and S-wave velocity maps from seismic data containing vertical and horizontal components.**
The P-wave velocity (\(\mathrm{V_{P}}\)) maps in \(\mathbb{E}^{\mathbf{FWI}}\) are identical to those in the previously published OpenFWI dataset. For example, \(\mathrm{V_{P}}\) in \(\mathbb{E}^{\mathbf{FVA}}\) is corresponding to "FlatVel-A" in OpenFWI, \(\mathrm{V_{P}}\) in \(\mathbb{E}^{\mathbf{CFB}}\) is corresponding to "CurveFault-B" in OpenFWI, and the same naming rule applies to the rest datasets. These velocity maps incorporate a wide range of geological scenarios reflecting diverse subsurface complexities, thereby providing an extensive testbed for machine learning methodologies.
In order to construct the S-wave velocity (\(\mathrm{V_{S}}\)) maps, we incorporate the Poisson's ratio (\(\mathrm{Pr}\)) maps [35], which provide a representation of the relationship between the P- (\(\mathrm{V_{P}}\)) and the S-wave velocities (\(\mathrm{V_{S}}\))
\[P_{r} = \frac{V_{P}^{2}-2V_{S}^{2}}{2V_{P}^{2}-2V_{S}^{2}}. \tag{2}\]
The initial step involves the generation of Poisson's ratio (\(\mathrm{Pr}\)) maps by selecting two values within the reasonable range of 0.1 to 0.4 in a random manner [36]. One of these values is allocated to represent the background, whereas the other value is assigned to represent a thin-layer reservoir. Thin-layer reservoirs are selected due to their significance in representing areas where pores are saturated with fluids, making them crucial targets for subsurface exploration and reservoir detection. In the \(\mathbb{E}^{\mathbf{FWI}}\) framework, the S-wave velocity (\(\mathrm{V_{S}}\)) maps are synthesized by multiplying the models of P-wave velocity (\(\mathrm{V_{P}}\)) with the respective Poisson's ratio (\(\mathrm{Pr}\)) maps, adhering to the following relationship:
\[V_{S}=\sqrt{\frac{0.5-Pr}{1-Pr}}*V_{P}. \tag{3}\]
This approach ensures a wide range of velocity contrasts, resulting in diverse wavefield behaviors, thus expanding the scope of scenarios for machine learning tests in elastic FWI. The details of the elastic forward modeling are given in Section 2 of the supplementary materials.
## 4 \(\mathbb{E}^{\mathbf{FWI}}\) Benchmarks
### Deep Learning Methods for Elastic FWI
Our benchmark presents inversion results by three deep learning-based approaches, namely ElasticNet, \(\mathbb{E}\)lasticGAN, and \(\mathbb{E}\)lasticTransformer. These methods are derived from pre-existing networks, namely InversionNet [31], VelocityGAN [32], and SimFWI [33], with modifications tailored to address the challenges posed by elastic FWI. We provide a summary of each method separately as follows:
\(\mathbb{E}\)lasticNet is extended from the vanilla InversionNet [31] to the elastic setting with two pairs of input and output. It is a fully-convolutional neural network taking seismic data \(u_{x}\) and \(u_{z}\) as the input of two encoders to learn the latent embeddings independently. The mutual representations of two inputs are concatenated and then forwarded to two independent decoders to obtain the estimated velocity maps \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\) as output.
\(\mathbb{E}\)lasticGAN follows the design of VelocityGAN [32] but substitutes the original generator with an encoder-decoder network such as \(\mathbb{E}\)lasticNet. The estimated velocity maps \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\) produced by the generator are fed to two independent discriminators to identify the real and fake predictions. A CNN architecture is employed for both discriminators.
\(\mathbb{E}\)lasticTransformer follows a similar seismic-encoder and velocity-decoder architecture design as the SimFWI described in [33]. It consists of two two-layer transformer encoders that take \(u_{x}\) and \(u_{z}\) as inputs and two two-layer transformer decoders to output \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\) separately. Two latent embeddings of \(u_{x}\) and \(u_{z}\) are concatenated and passed through two linear converters, then transformed embeddings fed into the decoders. Unlike the linear upsampler utilized at the end of the velocity decoder in [33], we stack upsampling and convolution blocks to construct the upsampler.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Dataset} & Grid & Velocity Map & Source & Source Line & Receiver Line & Receiver Line & Time & Recorded \\ & Spacing & Spatial Size & Spacing & Length & Spacing & Length & Spacing & Time \\ \hline \(\mathbb{E}^{\mathbf{Val}}\), \(\mathbb{E}^{\mathbf{Fault}}\) Family & 5 \(m\) & 0.35 \(\times\) 0.35 \(km^{2}\) & 87.5 \(m\) & 0.35 \(km\) & 5 \(m\) & 0.35 \(km\) & 0.001 \(s\) & 1 \(s\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Physical meaning of \(\mathbb{E}^{\mathbf{FWI}}\) dataset**
### Inversion Benchmarks
The experiments were conducted using Nvidia Tesla V100 GPUs, and the training parameters were kept consistent across all datasets. The training process is conducted utilizing the \(\ell_{1}\)-norm and \(\ell_{2}\)-norm loss functions, respectively. In our study, we assess not only the accuracy of the predicted velocities \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\) but also the degree of decoupling between them by evaluating the accuracy of the predicted Poisson ratio \((\mathrm{Pr})\). To quantify the performance of our predictions, we utilize three evaluation metrics: mean absolute error (MAE), root mean square error (RMSE), and structural similarity index (SSIM). These metrics provide a comprehensive assessment of the quality of our predictions and their similarity to the ground truth values. The performance of ElasticNet on various datasets is presented in Table 3, while Table 4 provides the estimated training time per epoch for each method on the \(\mathbb{E}^{FWI}\) datasets. In Figure 4, examples of inverted velocity maps obtained using the ElasticNet are presented alongside the corresponding ground truth velocity maps. These visual representations highlight instances where the inversion process successfully predicts accurate velocities, as well as instances where further improvement is required. The benchmarks with ElasticGAN and ElasticTransformer are given in Section 6 of the supplementary materials.
The performance of all three models exhibits a decline as the complexity of the dataset increases. Notably, in the case of the dataset with version B, it consistently exhibits lower performance compared to the dataset with version A. The network provides direct predictions for \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\), whereas \(\mathrm{Pr}\) is obtained indirectly through calculations based on \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\). As a result, \(\mathrm{Pr}\) consistently exhibits lower SSIM compared to \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\). However, it should be noted that \(\mathrm{Pr}\) represents a sparser map compared to \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\), leading to lower MAE and RMSE values for \(\mathrm{Pr}\) compared to \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\).
## 5 Ablation Study
### Independent vs. Joint Inversion: Impact on \(\mathrm{Pr}\) Maps
The first experiment examined the impact of separate versus joint inversion of \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\) on the accuracy of predicted Poisson ratio (\(\mathrm{Pr}\)) maps. This process involved individually training two InversionNets on the \(\mathbb{E}^{\mathbf{FWI}}\) dataset to predict \(\mathrm{V_{P}}\) and \(\mathrm{V_{S}}\) maps, which were then used to calculate the \(\mathrm{Pr}\) maps. The results revealed a substantial deterioration in map quality, with the independent inversion maps exhibiting significantly higher MAE and MSE, and lower SSIM values, as outlined in Table 5, compared to those reconstructed from joint inversion, shown in Table 3, especially for the complex B datasets, such as "\(\mathbb{E}^{\mathbf{CFB}}\)". These findings reinforce the significance of considering the \(\mathrm{V_{P}}\)-\(\mathrm{V_{S}}\) relationship and P-S wave coupling, with the single-parameter inversion approach being deemed unviable. Detailed information on this experiment can be found in Section 6 of the supplementary materials.
### Investigating P- and S-waves Coupling via Machine Learning
The second experiment focused on examining the interaction between P- and S-waves in the context of seismic data inversion. Two InversionNets were trained, one focusing on P-wave velocity (\(\mathrm{V_{P}}\)) and the other on S-wave velocity (\(\mathrm{V_{S}}\)), while adjusting the structural characteristics of the ignored
Figure 4: **Examples of both successful and inadequate predictions in \(\mathbb{E}^{FWI}\) benchmarks performed by the ElasticNet.**
wave. This experiment, trained with the OpenFWI's InversionNet using data from \(\mathbb{E}^{\mathbf{FWI}}\), revealed that any minor change in the disregarded wave velocity structure significantly degraded the network's performance, as evidenced in Table 6. This outcome was clearly demonstrated in the more complex datasets, such as "\(\mathbb{E}^{\mathbf{CFB}^{*}}\) test set, where changes in structure led to a substantial increase in MAE and RMSE, along with a decrease in the SSIM. For a more detailed analysis, refer to the supplementary materials.
## 6 Discussion
### Future Challenge
**Decouple P- and S-waves:** The interaction between P- and S-waves during seismic wave propagation poses a significant challenge when attempting to simultaneously determine P and S velocities. The networks described in this paper exhibit limited success in separating P- and S-waves within the seismic data. Consequently, we anticipate the development of robust methodologies that can precisely estimate both P and S velocities while effectively mitigating the interdependence between these wave components.
**Generalization of data-driven methods:** The elastic approximation provides a more accurate representation of field data in comparison to acoustic data. As a result, we expect the neural networks trained on the \(\mathbb{E}^{\mathbf{FWI}}\) dataset to exhibit improved resilience in handling real-world field data. However, it should be noted that there are additional physical phenomena, such as anisotropy and viscosity, which are not accounted for in the \(\mathbb{E}^{\mathbf{FWI}}\) dataset. The question of how to incorporate these phenomena into the analysis of field data remains an open and unanswered challenge.
**Forward modeling:** The computational expense associated with elastic forward modeling surpasses that of acoustic cases due to various factors. These include the increased memory requirements
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & ElasticNet & ElasticGAN & ElasticTransformer \\ \hline \(\mathbb{E}\)Vel Family & 4m15s & 2m20s & 1m15s \\ \(\mathbb{E}\)Fault Family & 8m35s & 3m50s & 2m30s \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Training time** in each epoch by each benchmarking method on \(\mathbb{E}^{\mathbf{FWI}}\) datasets. All the models are trained on a single GPU.
\begin{table}
\begin{tabular}{c|c|c c c|c c|c c} \hline \hline \multirow{3}{*}{Dataset} & \multirow{3}{*}{Loss} & \multicolumn{6}{c|}{InversionNet} \\ \cline{3-10} & & \multicolumn{2}{c|}{\(\mathrm{V_{P}}\): different \(\mathrm{V_{S}}\) structure} & \multicolumn{2}{c}{\(\mathrm{V_{S}}\): different \(\mathrm{V_{P}}\) structure} \\ \cline{3-10} & & MAE\(\downarrow\) & RMSE\(\downarrow\) & SSIM\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & SSIM\(\uparrow\) \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{FVA}}\)} & \(\ell_{1}\) & 0.0731 & 0.1207 & **0.9254** & 0.2879 & 0.4027 & 0.5843 \\ & \(\ell_{2}\) & 0.0704 & 0.1189 & 0.9245 & 0.2474 & 0.3503 & **0.7592** \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{FVVB}}\)} & \(\ell_{1}\) & 0.1158 & 0.2142 & 0.8225 & 0.0649 & 0.1316 & 0.8587 \\ & \(\ell_{2}\) & 0.1120 & 0.2058 & **0.8294** & 0.0622 & 0.1278 & **0.8648** \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{CVA}}\)} & \(\ell_{1}\) & 0.1191 & 0.1953 & **0.7327** & 0.3807 & 0.4650 & **0.3655** \\ & \(\ell_{2}\) & 0.1183 & 0.1946 & 0.7313 & 0.4223 & 0.5181 & 0.3026 \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{CVB}}\)} & \(\ell_{1}\) & 0.1910 & 0.3328 & 0.6229 & 0.1316 & 0.2324 & 0.6628 \\ & \(\ell_{2}\) & 0.1884 & 0.3295 & **0.6266** & 0.1334 & 0.2294 & **0.6624** \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{FFA}}\)} & \(\ell_{1}\) & 0.0691 & 0.1308 & **0.8828** & 0.6860 & 0.8119 & 0.1582 \\ & \(\ell_{2}\) & 0.0715 & 0.1331 & 0.8814 & 0.6492 & 0.7552 & **0.3801** \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{FFB}}\)} & \(\ell_{1}\) & 0.1194 & 0.1877 & 0.6797 & 0.4806 & 0.6000 & **0.1980** \\ & \(\ell_{2}\) & 0.1205 & 0.1877 & **0.6827** & 0.7509 & 0.8531 & 0.1721 \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{CFA}}\)} & \(\ell_{1}\) & 0.0777 & 0.1452 & **0.8657** & 0.1275 & 0.2040 & 0.7395 \\ & \(\ell_{2}\) & 0.0899 & 0.1583 & 0.8427 & 0.1160 & 0.1795 & **0.7752** \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{CFB}}\)} & \(\ell_{1}\) & 0.1718 & 0.2556 & **0.5843** & 0.5630 & 0.6823 & 0.1793 \\ & \(\ell_{2}\) & 0.1754 & 0.2577 & 0.5740 & 0.6614 & 0.7865 & **0.3590** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Quantitative results** of InversionNet trained with \(\mathbb{E}^{\mathbf{FWI}}\) data. The performance compared between testing on the same and different disregarded velocity structural datasets.
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c} \hline \hline \multirow{3}{*}{Dataset} & \multirow{3}{*}{Loss} & \multicolumn{6}{c|}{InversionNet} \\ \cline{3-10} & & \multicolumn{3}{c|}{\(\mathrm{V_{P}}\)} & \multicolumn{3}{c|}{\(\mathrm{V_{S}}\)} & \multicolumn{3}{c}{Pr} \\ \cline{3-10} & & MAE\(\downarrow\) & RMSE\(\downarrow\) & SSIM\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & SSIM\(\uparrow\) & MAE\(\downarrow\) & RMSE\(\downarrow\) & SSIM\(\uparrow\) \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{FVA}}\)} & \(\ell_{1}\) & 0.0548 & 0.0915 & 0.9356 & 0.0265 & 0.0487 & **0.9636** & 0.0544 & 0.1011 & **0.8117** \\ & & \(\ell_{2}\) & 0.0450 & 0.0840 & **0.9425** & 0.0295 & 0.0524 & 0.9544 & 0.0631 & 0.1175 & 0.7944 \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{FVB}}\)} & \(\ell_{1}\) & 0.0914 & 0.1865 & 0.8273 & 0.0637 & 0.1292 & 0.8640 & 0.0874 & 0.1836 & 0.6587 \\ & & \(\ell_{2}\) & 0.0932 & 0.1885 & **0.8274** & 0.0601 & 0.1231 & **0.8662** & 0.1049 & 0.2729 & **0.6835** \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{CVA}}\)} & \(\ell_{1}\) & 0.0996 & 0.1677 & 0.7527 & 0.0791 & 0.1328 & **0.7775** & 0.0966 & 0.1833 & **0.6042** \\ & & \(\ell_{2}\) & 0.0905 & 0.1573 & **0.7662** & 0.0756 & 0.1293 & 0.7669 & 0.1438 & 0.3184 & 0.5833 \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{CVB}}\)} & \(\ell_{1}\) & 0.1783 & 0.3216 & 0.6246 & 0.1325 & 0.2358 & 0.6583 & 0.1613 & 0.3373 & 0.4480 \\ & & \(\ell_{2}\) & 0.1784 & 0.3178 & **0.6287** & 0.1286 & 0.2263 & **0.6681** & 0.2605 & 0.5561 & **0.4798** \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{FFA}}\)} & \(\ell_{1}\) & 0.0624 & 0.1167 & **0.9024** & 0.0431 & 0.0814 & **0.8909** & 0.0960 & 0.1924 & **0.6641** \\ & & \(\ell_{2}\) & 0.0565 & 0.1134 & 0.8967 & 0.0466 & 0.0846 & 0.8908 & 0.1070 & 0.2182 & 0.6555 \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{FFB}}\)} & \(\ell_{1}\) & 0.1173 & 0.1823 & 0.6863 & 0.0855 & 0.1320 & **0.7413** & 0.1088 & 0.1947 & 0.5485 \\ & & \(\ell_{2}\) & 0.1120 & 0.1778 & **0.6871** & 0.0855 & 0.1328 & 0.7408 & 0.3564 & 0.5966 & **0.5862** \\ \hline \multirow{3}{*}{\(\mathbb{E}^{\mathbf{CFA}}\)} & \(\ell_{1}\) & 0.0675 & 0.1325 & 0.8592 & 0.0499 & 0.0962 & **0.8610** & 0.0769 & 0.1682 & 0.6604 \\ & & \(\ell_{2}\) & 0.0598 & 0.1213 & **0.8736** &
and the implementation of smaller grid sizes to counteract dispersion phenomena, among others. A detailed comparison highlighting these aspects can be found in the last section of supplementary materials. Despite the possibility of bypassing extensive forward modeling by providing the \(\mathbb{E}^{\mathbf{FWI}}\) dataset, there remains a need to explore an efficient forward modeling algorithm to accommodate the growing volume of data in the field.
### Broader Impact
**Multiparameter inversion:** Multiparameter inversion techniques have found wide-ranging applications across diverse scientific and engineering domains, including but not limited to geophysics, medical imaging, and material science. The introduction of \(\mathbb{E}^{\mathbf{FWI}}\) serves as a catalyst for further investigation and the pursuit of innovative methodologies in these fields. By addressing the inherent limitations and complexities associated with multiparameter inversion, this advancement encourages ongoing research and the exploration of novel solutions.
**Carbon-zero emission:** The attainment of carbon-zero emissions holds paramount significance in addressing climate change, safeguarding human well-being, and fostering sustainable development. While researchers continue to explore effective strategies towards achieving this goal, elastic FWI emerges as a promising approach that can contribute significantly. Particularly, elastic FWI plays a crucial role in assessing and developing geothermal energy resources, as well as in facilitating carbon capture and storage projects, among other applications. The introduction of \(\mathbb{E}^{\mathbf{FWI}}\) as a fundamental dataset for elastic FWI is expected to stimulate further research and innovation in this direction, thereby enhancing our understanding and capabilities in the pursuit of carbon-zero emissions.
**New energy exploration:** Elastic FWI can be utilized to evaluate the geological viability of potential sites for hydrogen storage, including underground formations or depleted oil and gas reservoirs. The suitability, capacity, and feasibility of a storage site heavily rely on the effectiveness of a geophysical survey and characterization approaches. With the availability of the \(\mathbb{E}^{\mathbf{FWI}}\) dataset, it would yield great potential to enhance the accuracy in characterizing the subsurface reservoir, therefore providing better identification of hydrogen storage locations.
## 7 Conclusion
This paper presents \(\mathbb{E}^{\mathbf{FWI}}\), an open-source elastic FWI dataset. \(\mathbb{E}^{\mathbf{FWI}}\) comprises eight datasets and includes benchmarks for three deep learning methods. The datasets released with \(\mathbb{E}^{\mathbf{FWI}}\), provide diverse P-wave and S-wave velocities, specifically addressing the coupling problem encountered in multiparameter inversion. The initial benchmarks demonstrate promising results on certain datasets, while others may require further investigation. Additionally, coupling tests are conducted to provide insights into network design for multiparameter inversion problems. Furthermore, this paper discusses the future challenges that can be explored using these datasets and outlines the envisioned future advancements as \(\mathbb{E}^{\mathbf{FWI}}\) continues to evolve.
## Acknowledgement
This work was funded by the Los Alamos National Laboratory (LANL) - Technology Evaluation and Demonstration (TED) program and by the U.S. Department of Energy (DOE) Office of Fossil Energy's Carbon Storage Research Program via the Science-Informed Machine Learning to Accelerate Real-Time Decision Making for Carbon Storage (SMART-CS) Initiative. |
2306.13098 | Three generations of colored fermions with $S_3$ family symmetry from
Cayley-Dickson sedenions | An algebraic representation of three generations of fermions with $SU(3)_C$
color symmetry based on the Cayley-Dickson algebra of sedenions $\mathbb{S}$ is
constructed. Recent constructions based on division algebras convincingly
describe a single generation of leptons and quarks with Standard Model gauge
symmetries. Nonetheless, an algebraic origin for the existence of exactly three
generations has proven difficult to substantiate. We motivate $\mathbb{S}$ as a
natural algebraic candidate to describe three generations with $SU(3)_C$ gauge
symmetry. We initially represent one generation of leptons and quarks in terms
of two minimal left ideals of $\mathbb{C}\ell(6)$, generated from a subset of
all left actions of the complex sedenions on themselves. Subsequently we employ
the finite group $S_3$, which are automorphisms of $\mathbb{S}$ but not of
$\mathbb{O}$ to generate two additional generations. Given the relative
obscurity of sedenions, efforts have been made to present the material in a
self-contained manner. | Niels G. Gresnigt, Liam Gourlay, Abhinav Varma | 2023-06-11T09:30:13Z | http://arxiv.org/abs/2306.13098v2 | Three generations of colored fermions with \(S_{3}\) family symmetry
## Abstract
An algebraic representation of three generations of fermions with \(SU(3)_{C}\) color symmetry based on the Cayley-Dickson algebra of sedenions \(\mathbb{S}\) is constructed. Recent constructions based on division algebras convincingly describe a single generation of leptons and quarks with Standard Model gauge symmetries. Nonetheless, an algebraic origin for the existence of exactly three generations has proven difficult to substantiate. We motivate \(\mathbb{S}\) as a natural algebraic candidate to describe three generations with \(SU(3)_{C}\) gauge symmetry. We initially represent one generation of leptons and quarks in terms of two minimal left ideals of \(\mathbb{C}\ell(6)\), generated from a subset of all left actions of the complex sedenions on themselves. Subsequently we employ the finite group \(S_{3}\), which are automorphisms of \(\mathbb{S}\) but not of \(\mathbb{O}\) to generate two additional generations. Given the relative obscurity of sedenions, efforts have been made to present the material in a self-contained manner.
## 1 Introduction
Despite its great practical success in colliders and other experiments, there are several unexplained features of the Standard Model of particle physics (SM) which lack a deeper theoretical motivation. These include, among others, a derivation of the SM gauge group from first principles, an explanation for why some representations of the SM gauge group correspond to particle multiplets whereas others do not, and an account for why fermions come in three generations. These theoretical shortcomings may be suggestive that the SM ultimately emerges from a more fundamental physical principle or mathematical structure.
In an attempt to establish the geometric and algebraic roots of the SM, several proposals have been put forth over the years which take as its essential mathematical ingredients (tensor products of) the only four normed division algebras over the reals: \(\mathbb{R}\), \(\mathbb{C}\), \(\mathbb{H}\), and \(\mathbb{O}\). Instead of unifying the internal symmetries into a single larger group, as is done in grand unified theories (GUTs) such as \(SU(5)\) and \(Spin(10)\), these division algebraic approaches attempt to unify the gauge groups together with the leptons and quarks that they act on into a single unified algebraic framework, in terms of an algebra acting on itself.
The octonions \(\mathbb{O}\), the largest of the division algebras, were first considered in the 70s for their intriguing efficacy in describing quark color symmetry [1]. Dixon [2, 3, 4] considers the algebra \(\mathbb{R}\otimes\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) and its invariant subspaces in connection to the particles and charges of the SM. The algebra \(\mathbb{R}\otimes\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) has exactly the right dimensions (32 complex) to describe one generation of fermions. In a closely related approach, Furey studies the minimal ideals of the Clifford algebras \(\mathbb{C}\ell(4)\), and \(\mathbb{C}\ell(6)\), generated from \(\mathbb{C}\otimes\mathbb{H}\), and \(\mathbb{C}\otimes\mathbb{O}\) respectively [5, 6]. In her approach, the leptons and quarks correspond to elements of these minimal ideals, and the gauge symmetries are those unitary symmetries that preserve the ideals. In particular, the part \(\mathbb{C}\otimes\mathbb{O}\) part of Dixon's algebra can be associated to the color and electric charge internal degrees of freedom,
with the color gauge group \(SU(3)\) corresponding to the maximal compact subgroup of the exceptional group \(G_{2}\) of automorphisms of the \(\mathbb{O}\) which fixes one of the octonion units.
Many others have contributed to these, and related, algebraic approaches including those based on topology [7, 8, 9, 10, 11], exceptional Lie groups [12, 13, 14, 15, 16, 17], Clifford algebras [18, 19, 20, 21, 22, 23, 24, 25, 26], and Jordan algebras [27, 28, 29, 30, 31, 32].
Existing division algebraic models offer an elegant algebraic construction for the internal space of a single generation of leptons and quarks. Despite several attempts [4, 33, 34], a clear algebraic origin for the existence of three generation is yet to be found. The Pati-Salam model, as well as both the \(SU(5)\) and \(Spin(10)\) grand unified theories likewise correspond to single generation models, lacking any theoretical basis for three generations, which ultimately has to be imposed by hand.
Furey identifies three generations of color states directly from the algebra \(\mathbb{C}\ell(6)\) generated from the adjoint actions of \(\mathbb{C}\otimes\mathbb{O}\)[33]. The algebra \(\mathbb{C}\ell(6)\) is 64 complex dimensional. Constructing two representations of the Lie algebra \(su(3)\) within this algebra, the remaining 48 degrees of freedom transform under the action of the \(SU(3)\) as three generations of leptons and quarks. The most obvious extension to include \(U(1)_{em}\) via the number operator, which works in the context of a one-generation model, fails to assign the correct electric charges to states. A generalized action that leads to a generator that produces the correct electric charges for all states is introduced in [35].
Dixon on the other hand considers the algebra \(\mathbb{T}^{6}=\mathbb{C}\otimes\mathbb{H}^{2}\otimes\mathbb{O}^{3}\), where \(\mathbb{T}=\mathbb{R}\otimes\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) in order to represent three generations, with a single generation being described by \(\mathbb{T}^{2}\), a complexified (hyper) spinor in 1+9D spacetime [3]. However, the choice \(\mathbb{T}^{6}\), as opposed to any other \(\mathbb{T}^{2n}\) appears rather arbitrary, although can be motivated from the Leech lattice.
These division algebraic models share many similarities with those based on the exceptional Jordan algebra \(J_{3}(\mathbb{O})\) consisting of three by three matrices over \(\mathbb{O}\), which has likewise been proposed to describe three generations [27, 28, 29, 30, 32, 31]. In these models, each of the three octonions in \(J_{3}(\mathbb{O})\) is likewise associated with one generation via the three canonical \(J_{2}(\mathbb{O})\) subalgebras of \(J_{3}(\mathbb{O})\).
In [25] it is argued that \(\mathbb{R}\otimes\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\)-valued gravity can naturally describe a grand unified field theory of Einstein's gravity with a Yang-Mills theory containing the SM, leading to a \(SU(4)^{4}\) symmetry group that potentially extends the SM with an extra fourth family of fermions. The existence of a fourth generation of fermions lacks experimental support however.
In [36] it is shown how, by choosing a privileged \(\mathbb{C}\) subalgebra of \(\mathbb{O}\), it is possible to reduce ten dimensional spacetime represented by \(SL(2,\mathbb{O})\) to four dimensional spacetime \(SL(2,\mathbb{C})\). This process of dimensional reduction naturally isolates three \(\mathbb{H}\) subalgebras of \(\mathbb{O}\): those that contain the privileged \(\mathbb{C}\) subalgebra. These three intersecting \(\mathbb{H}\) subalgebras are subsequently interpreted as describing three generations of leptons.
Starting with \(\mathbb{R}\), each of the remaining three division algebras can be generated via what is called the Cayley-Dickson (CD) process. This process does not terminate with \(\mathbb{O}\) however, but continues indefinitely to produce a series of \(2^{n}\)-dimensional algebras. We therefore ask the question: Can we go beyond the division algebras, to the CD algebra of sedenions \(\mathbb{S}\), generated from \(\mathbb{O}\), in order to describe three generations?
The present paper advocates that the CD algebra of sedenions \(\mathbb{S}\) constitutes a natural mathematical object which exhibits the algebraic structure necessary to describe the internal space of three generations. We restrict ourselves for the time being to considering only the \(SU(3)_{C}\) color symmetry of leptons and quarks.
The algebra \(\mathbb{S}\) was first proposed to play a role in describing three generations in [34]. The key idea behind that proposal was to generalize the constructing of three generations of leptons in terms of three \(\mathbb{H}\) subalgebras of \(\mathbb{O}\) in [36] to three generations in terms of three \(\mathbb{O}\) subalgebras of \(\mathbb{S}\), where each generation is associated with one copy of \(\mathbb{O}\). One finds, as in [36], that the resulting three generations are not linearly independent. It was suggested in [34] (and later in [14]) that this overlap could provide an algebraic basis for neutrino oscillations and quark mixing, although the viability of this idea remains to be investigated.
The model in [34] suffers from two significant drawbacks. Each generation comes with its own copy of \(SU(3)_{C}\) thereby also requiring three generations of gluons, for which there is currently no experimental evidence. Additionally, \(Aut(\mathbb{S})=Aut(\mathbb{O})\times S_{3}\), where \(Aut(\mathbb{O})=G_{2}\), and \(S_{3}\) is the permutation group of three objects [37, 38]. The \(S_{3}\) automorphisms of \(\mathbb{S}\) were however not given any clear physical interpretation, in part because these automorphisms stabilize the octonion subalgebras in \(\mathbb{S}\).
The model we presented here builds on [34] and seeks to resolve the shortcoming just mentioned. Instead of associating each \(\mathbb{O}\) subalgebra of \(\mathbb{S}\) with one generation, we use all three \(\mathbb{O}\) subalgebras to construct a single
generation. This corresponds to a direct generalization of the construction in [5] where three \(\mathbb{H}\) subalgebras of \(\mathbb{O}\) are used to construct a single generation. Subsequently, we utilize sedenion \(S_{3}\) automorphism of order three to generate the additional two generations. This construction provides a clear interpretation of the new \(S_{3}\) automorphism. Furthermore, all three generations transform as required under a single copy of the gauge group \(SU(3)_{C}\), thereby avoiding introducing three generations of gluons.
In the next section we provide a brief overview of the normed division algebras, in particular the quaternions \(\mathbb{H}\) and octonions \(\mathbb{O}\). In Section 3 we review the construction of one generation of fermions with unbroken \(SU(3)_{c}\times U(1)_{em}\) gauge symmetry from \(\mathbb{C}\otimes\mathbb{O}\), following closely [5]. The Cayley-Dickson construction and the algebra of sedenions are discussed in Section 4. Finally we present our three generation model based on the algebra of sedenions in Section 5. We conclude with an outlook of how to develop the model further, and a discussion.
## 2 Normed division algebras
A division algebra is an algebra over a field where division is always well-defined, except by zero. A normed division algebra has the additional property that it is also normed vector spaces, with the norm defined in terms of a conjugate. A well-known result by Hurwitz [39] is that there exist only four normed division algebras (over the field of real numbers): \(\mathbb{R}\), \(\mathbb{C}\), \(\mathbb{H}\), \(\mathbb{O}\), of dimensions one, two, four and eight respectively. In going to higher-dimensional algebras, successive algebraic properties are lost: \(\mathbb{R}\) is self-conjugate, commutative and associative, \(\mathbb{C}\) is commutative and associative (but no longer self-conjugate), \(\mathbb{H}\) is associative but no longer commutative, and finally \(\mathbb{O}\) is neither commutative nor associative (but alternative).
The quaternions \(\mathbb{H}\) are a generalization of the complex numbers \(\mathbb{C}\) with three mutually anticommuting imaginary units \(I,J,K\), satisfying \(I^{2}=J^{2}=K^{2}=IJK=-1\), which implies \(IJ=K=-JI\), \(JK=I=-KJ\), and \(KI=J=-IK\). A general quaternion \(q\) may then be written as
\[q=q_{0}1+q_{1}I+q_{2}J+q_{3}K,\qquad q_{0},q_{1},q_{2},q_{3}\in\mathbb{R}, \tag{1}\]
With the quaternion conjugate \(\overline{q}\) defined as \(\overline{q}=q_{0}-q_{1}I-q_{2}J-q_{3}K\). The norm of a quaternion \(|q|\) is subsequently defines by \(|q|^{2}=q\overline{q}=\overline{q}q\), and the inverse \(q^{-1}=\overline{q}/|q|^{2}\).
The automorphism group of \(\mathbb{H}\) is \(SU(2)\). Indeed, there is an isomorphism between the quaternions \(\mathbb{H}\) and the real Clifford algebra \(C\ell(0,2)\), while the complexified quaternions \(\mathbb{C}\otimes\mathbb{H}\) (isomorphic to the Pauli algebra) are isomorphic to the complex Clifford algebra \(\mathbb{C}\ell(2)\). Note, however, that \(\mathbb{C}\otimes\mathbb{H}\) is not a division algebra (but remains associative), and manifestly contains projectors, for example: \((1+iK)(1-iK)=0\).
The octonions \(\mathbb{O}\) are the largest division algebra, of dimension eight. Its orthonormal basis comprises seven imaginary units: \(i_{1},...i_{7}\), along with the unit \(1=i_{0}\). A general octonion \(x\) may then be written as
\[x=x_{0}i_{0}+x_{1}i_{i}+...+x_{7}i_{7},\qquad x_{0},...,x_{7}\in\mathbb{R}, \tag{2}\]
with the octonion conjugate \(\overline{x}\) defined as \(\overline{x}=x_{0}i_{0}-x_{1}i_{1}-...-x_{7}i_{7}\). The norm of an octonion \(|x|\) is subsequently defines by \(|x|^{2}=x\overline{x}=\overline{x}x\), and the inverse \(x^{-1}=\overline{x}/|x|^{2}\).
The multiplication of octonions1 is captured in terms of the Fano plane Fig. 1. Each projective line in the Fano plane corresponds (together with the identity \(i_{0}\)) to an \(\mathbb{H}\) subalgebra; there are seven such subalgebras. Like with \(\mathbb{H}\), all the imaginary units anticommute under multiplication. Unlike with \(\mathbb{H}\), the multiplication of elements not belonging to the same \(\mathbb{H}\) subalgebra is non-associative. For example \(i_{4}(i_{7}i_{6})=-i_{5}\neq i_{5}=(i_{4}i_{7})i_{6}\). Octonion multiplication however is alternative \(x(xy)=(xx)y\) and \(y(xx)=(yx)x\), \(\forall x,y\in\mathbb{O}\). The complexified octonions \(\mathbb{C}\otimes\mathbb{O}\) are again not a division algebra (but remains alternative).
Footnote 1: There are different multiplication rules for \(\mathbb{O}\) used by different authors in the literature. Here we follow the multiplication table used in [40]
As vector spaces \(\mathbb{O}=\mathbb{C}^{4}\). The splitting of \(\mathbb{O}\) as \(\mathbb{C}\oplus\mathbb{C}^{3}\) relies on choosing a preferred octonion unit \(i_{a}\) (and hence a preferred \(\mathbb{C}\) subalgebra in \(\mathbb{O}\)). For our purpose we choose \(i_{4}\). The map [36]
\[\pi(x)=\frac{1}{2}(x+i_{4}x\bar{i_{4}}),\quad x\in\mathbb{O}, \tag{3}\]
here \(\bar{i_{4}}\) indicates the octonion conjugation, then projects \(\mathbb{O}\) down to this preferred \(\mathbb{C}\subset\mathbb{O}\), and we can write the octonion \(x=x_{0}i_{0}+...+x_{7}i_{7}\) as:
\[x=(x_{0}+x_{4}i_{4})i_{0}+(x_{1}-x_{5}i_{4})i_{1}+(x_{2}-x_{6}i_{4})i_{2}+(x_{3 }-x_{7}i_{4})i_{3}. \tag{4}\]
Note that the product \(i_{4}x\bar{i_{4}}\) is defined unambiguously since \(\mathbb{O}\) is alternative.
The automorphism group of \(\mathbb{O}\) is the 14-dimensional exceptional Lie group \(G_{2}\). This exceptional group contains \(SU(3)\) as one of its maximal subgroups, corresponding to the stabilizer subgroup of one of the octonion imaginary units, or equivalently, the subgroup of \(Aut(\mathbb{O})\) that preserves the representation of \(\mathbb{O}\) as the complex space \(\mathbb{C}\oplus\mathbb{C}\)3. This splitting is associated with the quark-lepton symmetry [36]. The space of internal states of a quark is then the three complex dimensional space \(\mathbb{C}\)3 whereas the internal space of a lepton is \(\mathbb{C}\).
Footnote 3: We will henceforth refer to this algebra simply as the left action algebra or left multiplication algebra
Since \(\mathbb{O}\) and \(\mathbb{C}\otimes\mathbb{O}\) are nonassociative, they are not representable as matrix algebras (with the standard matrix product). The algebra generated from the composition of left and right actions of \(\mathbb{O}\) (and \(\mathbb{C}\otimes\mathbb{O}\)) however is associative, since each such left (right) action corresponds to a linear operator (endomorphism).
Let \(L_{a}\) (\(R_{a}\)) denote the linear operator of left (right) multiplication by \(a\in\mathbb{C}\otimes\mathbb{O}\):
\[L_{a}[x]=ax,\quad R_{a}[x]=xa,\quad\forall a,x\in\mathbb{C}\otimes\mathbb{O}. \tag{5}\]
Then
\[L_{a}L_{b}[x] = a(bx)\quad\neq\quad L_{ab}[x]=(ab)x, \tag{6}\] \[R_{a}R_{b}[x] = (xb)a\quad\neq\quad R_{ab}[x]=x(ab). \tag{7}\]
The mappings \(a\to L_{a}\) and \(a\to R_{a}\) do not correspond to algebra homomorphisms as they each generate an associative algebra called the _associative multiplication algebra_2. They do however preserve the quadratic relations \(\langle x,y\rangle=\frac{1}{2}(x\overline{y}+y\overline{x})\) where \(x,y\in\mathbb{C}\otimes\mathbb{O}\)
Footnote 2: We will henceforth refer to this algebra simply as the left action algebra or left multiplication algebra
\[L_{x}L_{\overline{y}}1+L_{y}L_{\overline{x}}1=2\langle x,y\rangle 1=R_{x}R_{\overline{y}}1+R_{y}R_{\overline{x}}1. \tag{8}\]
Since \(L_{a}\) (\(R_{a}\)) correspond to linear operators, they can be represented as \(8\times 8\) complex matrices (acting on the vector space \(\mathbb{C}\otimes\mathbb{O}\) written as a column vector).
For \(\mathbb{C}\otimes\mathbb{O}\), one finds the following identities [4, 5]:
\[L_{i_{1}}L_{i_{2}}L_{i_{3}}L_{i_{4}}L_{i_{5}}L_{i_{6}}x = L_{i_{7}}x, \tag{9}\] \[...L_{i_{k}}L_{i_{a}}L_{i_{a}}L_{i_{c}}....x = -...L_{i_{k}}L_{i_{c}}...x,\] (10) \[...L_{i_{a}}L_{i_{b}}...x = -...L_{i_{k}}L_{i_{a}}..., \tag{11}\]
where \(a,b,c=1,...,7\).
Figure 1: The Fano plane, encoding the multiplicative structure of our octonions, where \(a\equiv i_{a},\ a=1,...,7\). Note that each line is cyclic, representing a quaternionic triple.
Due to the nonassociativity of \(\mathbb{O}\), the left (right) associative multiplication algebra of \(\mathbb{C}\otimes\mathbb{O}\) contains genuinely new maps which are not captured by \(\mathbb{C}\otimes\mathbb{O}\). For example, \(i_{3}(i_{4}(i_{6}+i_{2}))\neq y(i_{6}+i_{2})\) for any \(y\in\mathbb{C}\otimes\mathbb{O}\). There are a total of 64 distinct left-acting complex-linear maps from \(\mathbb{C}\otimes\mathbb{O}\) to itself, and these (due to the given identities above) provide a faithful representation of \(\mathbb{C}\ell(6)\).
Denoting the 64-dimensional left (right) associative multiplication algebra generated from left (right) actions of \(\mathbb{C}\otimes\mathbb{O}\) on itself by \((\mathbb{C}\otimes\mathbb{O})_{L}\) (\((\mathbb{C}\otimes\mathbb{O})_{R}\)), one finds that any left (right) action can always be rewritten as a right (left) action [4]. That is:
\[(\mathbb{C}\otimes\mathbb{O})_{L}\cong(\mathbb{C}\otimes\mathbb{O})_{R}\cong \mathbb{C}\ell(6). \tag{12}\]
This is in contrast to a similar construction for \(\mathbb{C}\otimes\mathbb{H}\), where one finds that the left and right actions are genuinely distinct, each generating a copy of \(\mathbb{C}\ell(2)\). The left and right adjoint actions in this case commute, and only by considering both does one obtain a basis for \(Mat(4,\mathbb{C})\cong\mathbb{C}\ell(4)\).
## 3 One generation of electrocolor states from \(\mathbb{C}\otimes\mathbb{O}\)
Let \(e_{1}:=L_{i_{1}},...,e_{6}:=L_{i_{6}}\) be a generating basis (over \(\mathbb{C}^{6}\)) for \(\mathbb{C}\ell(6)\) associated with the left multiplication algebra of the complex octonions, satisfying \(e_{i}^{2}=-1\), \(e_{i}e_{j}=-e_{j}e_{i}\). Define the Witt basis
\[\alpha_{1}^{\dagger} = \frac{1}{2}(e_{1}+ie_{5}),\quad\alpha_{2}^{\dagger}=\frac{1}{2}(e _{2}+ie_{6}),\quad\alpha_{3}^{\dagger}=\frac{1}{2}(e_{3}+ie_{7})\] \[\alpha_{1} = \frac{1}{2}(-e_{1}+ie_{5}),\quad\alpha_{2}=\frac{1}{2}(-e_{2}+ie_ {6}),\quad\alpha_{3}=\frac{1}{2}(-e_{3}+ie_{7}).\]
Here \({}^{\dagger}\) corresponds to the composition of complex and octonion conjugation. This new basis satisfies the anticommutation relations 3
Footnote 3: Note that as left actions we have \(\{L_{\alpha_{i}},L_{\alpha_{j}}\}x=0\) etc for \(x\in\mathbb{C}\otimes\mathbb{O}\).
\[\{\alpha_{i},\alpha_{j}\}=\{\alpha_{i}^{\dagger},\alpha_{j}^{\dagger}\}=0, \quad\{\alpha_{i},\alpha_{j}^{\dagger}\}=\delta_{ij} \tag{13}\]
Each pair of ladder operators in isolation generates \(\mathbb{C}\ell(2)\), and is associated with one of the three \(\mathbb{H}\) subalgebras of \(\mathbb{O}\) that contain the privileged complex subalgebra generated by \(i_{4}\). Subsequently, we obtain three anticommuting copies of \(\mathbb{C}\ell(2)\), which when considered together generate the full \(\mathbb{C}\ell(6)\) left multiplication algebra of \(\mathbb{C}\otimes\mathbb{O}\).
From the Witt basis, it is possible to construct two minimal left ideals of the algebra \(\mathbb{C}\ell(6)\), following the procedure in [41] (see [5] for a detailed construction):
\[\mathbb{C}\ell(6)\omega\omega^{\dagger},\qquad\mathbb{C}\ell(6)\omega^{ \dagger}\omega, \tag{14}\]
where \(\omega=\alpha_{1}\alpha_{2}\alpha_{3}\) and \(\omega^{\dagger}=\alpha_{3}^{\dagger}\alpha_{2}^{\dagger}\alpha_{1}^{\dagger}\) are nilpotents, but \(\omega\omega^{\dagger}\) and \(\omega^{\dagger}\omega\) are primitive idempotents. Each ideal is eight complex dimensional. Explicitly,
\[S^{u}=\nu\omega\omega^{\dagger}+ S^{d}=\overline{\nu}\omega^{\dagger}\omega+\] \[\overline{d^{\tau}}\alpha_{1}^{\dagger}\omega\omega^{\dagger}+ \overline{d}\alpha_{2}^{\dagger}\omega\omega^{\dagger}+\overline{d}\alpha_{ 3}^{\dagger}\omega\omega^{\dagger}+ d^{\prime}\alpha_{1}\omega^{\dagger}\omega+ d^{g}\alpha_{2}\omega^{\dagger}\omega+d^{b}\alpha_{3}\omega^{\dagger}\omega+\] \[u^{\tau}\alpha_{3}^{\dagger}\alpha_{2}^{\dagger}\omega\omega^{ \dagger}+u^{g}\alpha_{1}^{\dagger}\alpha_{3}^{\dagger}\omega\omega^{\dagger}+ u^{b}\alpha_{2}^{\dagger}\alpha_{1}^{\dagger}\omega\omega^{\dagger}+ \overline{u^{\tau}}\alpha_{3}\alpha_{2}\omega^{\dagger}\omega+\overline{u^{ \tau}}\alpha_{1}\alpha_{3}\omega^{\dagger}\omega+\overline{u^{\tau}}\alpha_{ 2}\alpha_{1}\omega^{\dagger}\omega+\] \[e^{+}\alpha_{3}^{\dagger}\alpha_{2}^{\dagger}\alpha_{1}^{\dagger} \omega\omega^{\dagger} e^{-}\alpha_{3}\alpha_{2}\alpha_{1}\omega^{\dagger}\omega\]
where the suggestively labelled coefficients are elements of \(\mathbb{C}\).
The unitary symmetries that preserve the Witt basis, and hence the minimal left ideals is \(U(3)=SU(3)\times U(1)\). The generators of this symmetry, written in terms of the Witt basis, are:
\[\Lambda_{1}=-\alpha_{2}^{\dagger}\alpha_{1}-\alpha_{1}^{\dagger} \alpha_{2}\qquad\Lambda_{2}=i\alpha_{2}^{\dagger}\alpha_{1}-i\alpha_{1}^{ \dagger}\alpha_{2}\qquad\Lambda_{3}=\alpha_{2}^{\dagger}\alpha_{2}-\alpha_{1}^{ \dagger}\alpha_{1}\] \[\Lambda_{4}=-\alpha_{1}^{\dagger}\alpha_{3}-\alpha_{3}^{\dagger} \alpha_{1}\qquad\Lambda_{5}=-i\alpha_{1}^{\dagger}\alpha_{3}+i\alpha_{3}^{ \dagger}\alpha_{1}\qquad\Lambda_{6}=-\alpha_{3}^{\dagger}\alpha_{2}-\alpha_{2}^ {\dagger}\alpha_{3}\] \[\Lambda_{7}=i\alpha_{3}^{\dagger}\alpha_{2}-i\alpha_{2}^{\dagger} \alpha_{3}\quad\Lambda_{8}=-\frac{1}{\sqrt{3}}(\alpha_{1}^{\dagger}\alpha_{1}+ \alpha_{2}^{\dagger}\alpha_{2}-2\alpha_{3}^{\dagger}\alpha_{3}),\]
\[Q=\frac{1}{3}(\alpha_{1}^{\dagger}\alpha_{1}+\alpha_{2}^{\dagger}\alpha_{2}+\alpha _{3}^{\dagger}\alpha_{3}).\]
The basis states of minimal ideals transform as \(1\oplus 3\oplus\overline{3}\oplus 1\) under \(SU(3)\), and this symmetry can therefore be associated with the color symmetry \(SU(3)_{C}\), justifying the choice of coefficients. The \(U(1)\) generator \(Q\), related to the number operator \(Q=N/3\), on the other hand gives correct electric charge for each state. The ideal \(S^{u}\) contains the isospin up states, whereas the \(S^{d}\) contains the isospin down states.
One generation of leptons and quarks with correct unbroken \(SU(3)_{C}\times U(1)_{em}\) symmetry can therefore be elegantly represented in terms of two minimal left ideals of \(\mathbb{C}\ell(6)\) generated from \(\mathbb{C}\otimes\mathbb{O}\). The dimension of the minimal ideals dictates the number of distinct physical states, whereas the gauge symmetries are those unitary symmetries that preserve the ideals (or equivalently, the Witt basis).
## 4 The Cayley-Dickson construction and the algebra of sedenions
The CD process is an iterative construction that generates at each stage an algebra (with involution) of dimension twice that of the previous. Each algebra is constructed as a direct sum of the previous algebra, so that \(\mathbb{C}=\mathbb{R}\oplus\mathbb{R}i\) where \(i\) is the complex structure introduced in the process. Similarly, \(\mathbb{H}=\mathbb{C}\oplus\mathbb{C}J\), where \(i\), \(J\) and \(iJ\) are identified with the quaternion imaginary bases \(I,J,K\), and similarly \(\mathbb{O}=\mathbb{H}\oplus\mathbb{H}i_{4}\).
This process does not terminate with \(\mathbb{O}\) but continues, generating a series of \(2^{n}\)-dimensional (non division) algebras. A generic element of the CD algebra \(\mathbb{A}_{n}\), can then be written as \(a+bu\), where \(a,b\in\mathbb{A}_{n-1}\), and \(u\) is the new imaginary unit introduced by the CD process applied to \(\mathbb{A}_{n-1}\). The fifth CD algebra \(\mathbb{A}_{4}\) (\(\mathbb{A}_{0}=\mathbb{R}\)), generated from \(\mathbb{O}\), is the 16-dimensional algebra of sedenions \(\mathbb{S}\). This algebra is non-commutative, non-associative, and not even alternative (\(x(xy)\neq(xx)y\) and \(y(xx)\neq(yx)x\) in general). Other properties, like flexibility (\((xy)x=x(yx)\)) and power-associativity (\(x^{n}\) associative), still hold (and hold for all CD algebras).
An orthonormal basis for \(\mathbb{S}\) comprises 15 mutually anticommuting imaginary units \(s_{1},...,s_{15}\) together with the unit \(1=s_{0}\). The imaginary units \(s_{1},..,s_{7}\) correspond to the original octonion units \(i_{1},...,i_{7}\). A general sedenion may then be written as
\[w=w_{0}s_{0}+w_{1}s_{1}+...+w_{15}s_{15},\quad w_{0},...,w_{15}\in\mathbb{R}. \tag{15}\]
with the sedenion conjugate \(\overline{w}\) defined as \(\overline{w}=w_{0}s_{0}-w_{1}s_{1}-...-w_{15}s_{15}\), and the sedenion norm \(|w|\) defined by \(|w|^{2}=w\overline{w}=\overline{w}w\). Whenever \(|w|^{2}\neq 0\), the inverse of \(w\) is given by \(w^{-1}=\overline{w}/|w|^{2}\).
The product of two sedenions \(w,v\) can be determined using the multiplication table of the orthonormal basis units of \(\mathbb{S}\) provided in Appendix A, source [40, 42], together with linearity.
Because \(\mathbb{S}\) is not a division algebra, it contains zero divisors. These are elements of the form
\[(s_{a}+s_{b})(s_{c}+s_{b})=0,\qquad s_{a},s_{b},s_{c},s_{d}\in\mathbb{S}. \tag{16}\]
There are 84 such zero divisors, and the subspace of zero divisors of unit norm is homeomorphic to \(G_{2}\)[43].
### Octonion subalgebras inside the sedenions
Let us now use \(\{e_{0},e_{1},...,e_{n^{2}-1}\}\) to denote an orthonormal basis for \(\mathbb{A}_{n}\), so that
\[\mathbb{A}_{1}=\mathbb{C} : \{e_{0},e_{1}\},\quad\text{where}\quad e_{1}=i, \tag{17}\] \[\mathbb{A}_{2}=\mathbb{H} : \{e_{0},e_{1},e_{2},e_{3}\},\quad\text{where}\quad e_{1}=i=I,\;e_{ 2}=J,\;e_{3}=K\] (18) \[\mathbb{A}_{3}=\mathbb{O} : \{e_{0},e_{1},...,e_{7}\},\quad\text{where}\quad e_{1}=i=I=i_{1},...,e_{7}=i_{7}\] (19) \[\mathbb{A}_{4}=\mathbb{S} : \{e_{0},e_{1},...,e_{15}\},\quad\text{where}\quad e_{1}=i=I=i_{1} =s_{1},...,e_{15}=s_{15}. \tag{20}\]
Consider \(\mathbb{A}_{2}=\mathbb{H}\) with basis \(\{e_{0},e_{1},e_{2},e_{3}\}\). There are three subalgebras isomorphic to \(\mathbb{C}\) within \(\mathbb{H}\), each containing the identity and one of the imaginary units of \(\mathbb{H}\). These subalgebras correspond to three different complex structures in \(\mathbb{H}\), and the common intersection of these three \(\mathbb{C}\) subalgebras is isomorphic to \(\mathbb{R}\). The automorphism group of \(\mathbb{H}\) is \(Aut(H)=SU(2)\). The subset of automorphisms of \(\mathbb{H}\) that preserve a given complex structure is \(U(1)\), corresponding to the element wise stabilizer subgroup of \(SU(2)\).
Applying the CD process to \(\mathbb{H}\) generates \(\mathbb{O}\) with basis \(\{e_{0},e_{1},e_{2},e_{3},e_{4},e_{5},e_{6},e_{7}\}\) where \(e_{4}\) is the newly introduced anticommuting imaginary unit and \(e_{i}e_{4}=e_{i+4}\). Via this same construction, each of the three \(\mathbb{C}\subset\mathbb{H}\) subalgebras generate a quaternion:
\[\mathbb{C}_{1}:\{e_{0},e_{1}\}\xrightarrow{CD}\mathbb{H}_{1}:\{e _{0},e_{1},e_{4},e_{5}\}, \tag{21}\] \[\mathbb{C}_{2}:\{e_{0},e_{2}\}\xrightarrow{CD}\mathbb{H}_{2}:\{e _{0},e_{2},e_{4},e_{6}\},\] (22) \[\mathbb{C}_{3}:\{e_{0},e_{3}\}\xrightarrow{CD}\mathbb{H}_{3}:\{e _{0},e_{3},e_{4},e_{7}\}. \tag{23}\]
The common intersection of these three \(\mathbb{H}\in\mathbb{O}\) is isomorphic to \(\mathbb{C}\), spanned by \(e_{0}\) and \(e_{4}\). These \(\mathbb{H}_{i}\), \(i=1,2,3\) however are not the only \(\mathbb{H}\) subalgebras of \(\mathbb{O}\), and in total there are seven such subalgebras. However \(\mathbb{H}_{1},\mathbb{H}_{2},\mathbb{H}_{3}\) are the only (quaternions) subalgebras that contain the new \(e_{4}\). Together with the identity, \(e_{4}\) corresponds to a complex structure.
Applying the CD process to \(\mathbb{O}\) generates \(\mathbb{S}\). Apart from the original (_principal_) \(\mathbb{O}\) with basis \(\{e_{0},e_{1},...,e_{7}\}\), \(\mathbb{S}\) contains seven \(\mathbb{O}\) subalgebras (which we call _non-principal_). These are explicitly listed in [42]. Interestingly, all of these contain the new imaginary unit \(e_{8}\), introduced in the CD construction of \(\mathbb{S}\) from \(\mathbb{O}^{4}\). Via this process, each of \(\mathbb{H}_{i}\subset\mathbb{O}\) above generates one \(\mathbb{O}_{i}\in\mathbb{S}\).
\[\mathbb{H}_{1}:\{e_{0},e_{1},e_{4},e_{5}\}\xrightarrow{CD} \mathbb{O}_{1}:\{e_{0},e_{1},e_{4},e_{5},e_{8},e_{9},e_{12},e_{13}\}, \tag{24}\] \[\mathbb{H}_{2}:\{e_{0},e_{2},e_{4},e_{6}\}\xrightarrow{CD} \mathbb{O}_{2}:\{e_{0},e_{2},e_{4},e_{6},e_{8},e_{10},e_{12},e_{14}\},\] (25) \[\mathbb{H}_{3}:\{e_{0},e_{3},e_{4},e_{7}\}\xrightarrow{CD} \mathbb{O}_{3}:\{e_{0},e_{3},e_{4},e_{7},e_{8},e_{11},e_{12},e_{15}\}. \tag{26}\]
Since each \(\mathbb{H}_{i}\) contains \(e_{4}\), and each \(\mathbb{O}_{i}\) contains \(e_{8}\), it follows that each \(\mathbb{O}_{i}\) also contains \(e_{4}e_{8}=e_{12}\). That is, common intersection of the three \(\mathbb{O}_{i}\) now corresponds to a \(\mathbb{H}\subset\mathbb{S}\):
\[\mathbb{O}_{1}\cap\mathbb{O}_{2}\cap\mathbb{O}_{3}=\mathbb{H}, \tag{27}\]
where this \(\mathbb{H}\) is generated by \(\{e_{0},e_{4},e_{8},e_{12}\}\). \(\mathbb{O}_{1},\mathbb{O}_{2},\mathbb{O}_{3}\) are the only (octonion) subalgebras of \(\mathbb{S}\) that contain \(e_{4},e_{8}\), and \(e_{12}\). Together with the identity, this corresponds to a quaternionic structure.
In addition to the eight octonion subalgebras of \(\mathbb{S}\), there are also a further seven quasi-octonion subloops \(\tilde{\mathbb{O}}\), satisfying all the same properties of the octonion subalgebras, except for the Moufang identities5. As such, they are not isomorphic to the octonion subalgebras. None of the \(\tilde{\mathbb{O}}\) contain the element \(e_{8}\).
Footnote 4: Note that this is a new feature that appears with \(\mathbb{S}\). There are three new, (that is excluding the original \(\mathbb{H}\) with basis \(\{e_{0},e_{1},e_{2},e_{3}\}\) used to generate \(\mathbb{O}\)) \(\mathbb{H}\) subalgebras of \(\mathbb{O}\) that do no contain \(e_{4}\). For \(\mathbb{S}\) there are no new \(\mathbb{O}\) that do not contain \(e_{8}\).
Footnote 5: The Moufang identities, describing a weaker notion of associativity, are essentially equivalent to left and right alternativity, and flexibility
### The left multiplication algebra of \(\mathbb{C}\otimes\mathbb{S}\)
Despite the algebra \(\mathbb{S}\) being non-associative and non-alternative, just as for \(\mathbb{O}\), we can consider the left actions of \(\mathbb{S}\) on itself as linear operators generating an associative algebra.
The generalisation from \((\mathbb{C}\otimes\mathbb{O})_{L}\) to \((\mathbb{C}\otimes\mathbb{S})_{L}\) is not immediately obvious because the identities which held for \((\mathbb{C}\otimes\mathbb{O})_{L}\), namely
\[L_{i_{1}}L_{i_{2}}L_{i_{3}}L_{i_{4}}L_{i_{5}}L_{i_{6}}x = L_{i_{7}}x, \tag{28}\] \[...L_{i_{k}}L_{i_{4}}L_{i_{4}}...x = -...L_{i_{k}}L_{i_{c}}....x,\] (29) \[...L_{i_{a}}L_{i_{b}}...x = -...L_{i_{k}}L_{i_{a}}..., \tag{30}\]
where \(a,b,c=1,...,7\) are no longer satisfied by general composition of sedenion left multiplications. Although one finds a new identify
\[L_{s_{1}}L_{s_{2}}...L_{s_{14}}w=L_{s_{15}}w, \tag{31}\]
the two identities (29) and (30), crucial for generating a Clifford algebra, no longer hold in general. For example, \(s_{3}(s_{14}(s_{1}))=-s_{14}(s_{3}(s_{1}))\), but \(s_{3}(s_{14}(s_{2}))=+s_{14}(s_{3}(s_{2})\). Consequently, the left multiplications of the (complex) sedenions do not generate \(\mathbb{C}\ell(14)\), as one might initially expect.
However, since the linear operators corresponding to each complex sedenion left multiplication can be written as a \(16\times 16\) matrix with complex entries acting on \(\mathbb{C}\otimes\mathbb{S}\) written as a column vector, one would expect to be able to generate \(Mat(16,\mathbb{C})\cong\mathbb{C}\ell(8)\). It then remains to find a suitable set of sedenion elements that generate \(\mathbb{C}\ell(8)\) via their left multiplication. Closer inspection reveals that all the left multiplications of the original octonion elements \(e_{0}=i_{0}=s_{0},...,e_{7}=i_{7}=s_{7}\) do satisfy the identities (29) and (30) (where now the action is on \(\mathbb{C}\otimes\mathbb{S}\) instead of \(\mathbb{C}\otimes\mathbb{O}\)), as do all new sedenion basis elements \(e_{9}=s_{7},...,e_{15}=s_{15}\):
\[e_{i}(e_{j}w) = -e_{j}(e_{i}w),\qquad e_{i+8}(e_{j+8}w)=-e_{j+8}(e_{i+8}w), \tag{32}\] \[e_{i}(e_{i}w) = -w,\qquad e_{i+8}(e_{i+8}w)=-w\quad i,j=1,...,7,\quad\forall w \in\mathbb{C}\otimes\mathbb{S} \tag{33}\]
However, \(e_{i}\) and \(e_{i+8}\) fail to anti-commute (or commute) with each other as left actions
\[e_{i}(e_{j+8}w)\neq-e_{j+8}(e_{i}w). \tag{34}\]
The left action of \(e_{8}\) however anti-commutes with the left action of every other basis element. One possible generating basis for \(\mathbb{C}\ell(8)\) is therefore given by the left multiplications of \(\{e_{1},...,e_{8}\}\). Another possible generating basis is \(\{e_{8},...,e_{15}\}\).
Accepting some abuse of notation, we now simple write \(L_{e_{i}}=e_{i}\) and take \(e_{0},e_{1},...,e_{8}\) as our generating basis for \(\mathbb{C}\ell(8)\). The left action of the remaining sedenion basis elements \(e_{9},...,e_{15}\) then need to be expressed as the left action of some element of \(\mathbb{C}\ell(8)\). After some trial and error, one finds that
\[e_{9}w = \frac{1}{2}(-e_{123458}+e_{123678}+e_{145678}-e_{18})w, \tag{35}\] \[e_{10}w = \frac{1}{2}(-e_{123468}-e_{123578}+e_{245678}-e_{28})w,\] (36) \[e_{11}w = \frac{1}{2}(-e_{123478}+e_{123568}+e_{345678}-e_{38})w\] (37) \[e_{12}w = \frac{1}{2}(e_{124568}+e_{134578}+e_{234678}-e_{48})w\] (38) \[e_{13}w = \frac{1}{2}(e_{124578}-e_{134568}+e_{235678}-e_{58})w\] (39) \[e_{14}w = \frac{1}{2}(e_{124678}-e_{135678}-e_{234568}-e_{68})w\] (40) \[e_{15}w = \frac{1}{2}(e_{125678}+e_{134678}-e_{234578}-e_{78})w, \tag{41}\]
where \(e_{123458}=e_{1}e_{2}e_{3}e_{4}e_{5}e_{8}=L_{e_{1}}L_{e_{2}}L_{e_{3}}L_{e_{4} }L_{e_{5}}L_{e_{8}}\), \(e_{18}=e_{1}e_{8}=L_{e_{1}}L_{e_{8}}\) etc are elements of \(\mathbb{C}\ell(8)\).
### Automorphisms of sedenions
Schafer [37] showed that for CD algebras \(\mathbb{A}_{n}\) with \(n\geq 4\) (\(\mathbb{A}_{0}=\mathbb{R}\)), the derivation algebra \(\mathfrak{der}(\mathbb{A}_{n})\) consists of derivations of the form \(a+bu\to aD+(bD)u\), where \(a,b\in\mathbb{A}_{n-1}\), \(u\) is the new anticommuting imaginary unit \(u\) introduces in the CD construction of \(\mathbb{A}_{n}\) from \(\mathbb{A}_{n-1}\), and \(D\) is a derivation of \(\mathbb{A}_{n-1}\). Brown [38] demonstrated that if \(\theta\in Aut(\mathbb{A}_{n-1})\), then
\[\theta^{\prime} : a+bu\to a\theta+(b\theta)u, \tag{42}\] \[\epsilon : a+bu\to a-bu,\] (43) \[\psi : a+bu\rightarrow\frac{1}{4}[a+3a^{*}+\sqrt{3}(b-b^{*})]+\frac{1}{ 4}[b+3b^{*}-\sqrt{3}(a-a^{*})]u, \tag{44}\]
are automorphisms of \(\mathbb{A}_{n}\). Here \({}^{*}\) denotes conjugation in \(\mathbb{A}_{n-1}\), and \(\epsilon\) and \(\psi\), satisfying \(\epsilon^{2}=\psi^{3}=1,\ \epsilon\psi=\psi^{2}\epsilon\), generate \(S_{3}\). That is, in general
\[Aut(\mathbb{A}_{n-1})\times S_{3}\subseteq Aut(\mathbb{A}_{n}). \tag{45}\]
It follows that \(SU(2)\times S_{3}\) are automorphisms of \(\mathbb{O}\), but crucially, these are not all of the octonion automorphisms: \(SU(2)\times S_{3}\subset G_{2}\). However, for the cases where \(n=4,5,6\), the equality holds [38]
\[Aut(\mathbb{A}_{n-1})\times S_{3}=Aut(\mathbb{A}_{n}),\quad n=4,5,6. \tag{46}\]
In particular, this means that:
\[Aut(\mathbb{S})=Aut(\mathbb{O})\times S_{3}=G_{2}\times S_{3}. \tag{47}\]
Explicitly, the automorphisms of \(\mathbb{S}\) are therefore give by
\[\theta^{\prime} : a+be_{8}\to a\theta+(b\theta)e_{8}, \tag{48}\] \[\epsilon : a+be_{8}\to a-be_{8},\] (49) \[\psi : a+be_{8}\rightarrow\frac{1}{4}[a+3\overline{a}+\sqrt{3}(b- \overline{b})]+\frac{1}{4}[b+3\overline{b}-\sqrt{3}(a-\overline{a})]e_{8}, \tag{50}\]
where \(a,b\in\mathbb{O}\). The explicit action of \(\psi\) on the sedenion basis elements can be written as:
\[\psi(e_{i}) = -\frac{1}{2}e_{i}-\frac{\sqrt{3}}{2}e_{i+8}, \tag{51}\] \[\psi(e_{i+8}) = -\frac{1}{2}e_{i+8}+\frac{\sqrt{3}}{2}e_{i},\] (52) \[\psi(e_{8}) = e_{8} \tag{53}\]
where \(i=1,...,7\). The automorphism \(\psi\) corresponds to a simultaneous rotations in the seven \(e_{i}-e_{i+8}\) planes by \(2\pi/3\), and therefore does not correspond to an automorphism of \(\mathbb{C}\otimes\mathbb{O}\). It is also possible to write the \(S_{3}\) automorphisms in matrix form:
\[\psi : \begin{pmatrix}e_{i}\\ e_{i+8}\end{pmatrix}\rightarrow\begin{pmatrix}-1/2&-\sqrt{3}/2\\ \sqrt{3}/2&-1/2\end{pmatrix}\begin{pmatrix}e_{i}\\ e_{i+8}\end{pmatrix} \tag{54}\] \[\epsilon : \begin{pmatrix}e_{i}\\ e_{i+8}\end{pmatrix}\rightarrow\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}e_{i}\\ e_{i+8}\end{pmatrix},\quad i=1,...,7 \tag{55}\]
The fundamental symmetries of \(\mathbb{S}\) are the same as those of \(\mathbb{O}\), although one find an additional \(S_{3}\) symmetry, suggesting a threefold multiplicity of the automorphisms of \(\mathbb{O}\).
From the action of \(\theta^{\prime}\) on the sedenion units above it is immediately clear that \(e_{8}\) is stabilized by the \(G_{2}\) automorphisms. Furthermore, these \(G_{2}\) automorphisms map \(e_{i},i<8\) to \(e_{j},j<8\), and \(e_{i+8}\) to \(e_{j+8}\). They therefore do not mix the new sedenion elements \(e_{i+8}\) with the original octonion elements \(e_{i}\), \(i=1,...,7\). Only the \(S_{3}\) automorphism \(psi\) of order three mixes the original octonion units with the new sedenion units.
Given that the stabilizer of \(e_{4}\) in \(G_{2}\) is \(SU(3)\), and \(e_{8}\) is likewise an \(SU(3)\) singlet (as it is fixed by \(G_{2}\)), it follows that the \(SU(3)\) subgroup of \(G_{2}\) fixes the entire quaternion generated by \(\{e_{0},e_{4},e_{8},e_{4}e_{8}=e_{12}\}\). The \(S_{3}\) automorphisms on the other hand do not fix this quaternion, although they do stabilize it. Note that this quaternion corresponds precisely to the common intersection of our previously isolated octonion subalgebras \(\mathbb{O}_{i}\), see eqn. (27).
## 5 Three generations of color states from \(\mathbb{C}\otimes\mathbb{S}\)
### Why not sedenions?
Since one generation of electrocolor states are efficiently represented starting from \(\mathbb{C}\otimes\mathbb{O}\), one might ask if \(\mathbb{S}\) (or rather \(\mathbb{C}\otimes\mathbb{S}\)) is an appropriate larger algebraic structure capable of describing three generations. Not being a division algebra is not grounds to disqualify \(\mathbb{S}\), for we point out that neither \(\mathbb{C}\otimes\mathbb{O}\) nor \(\mathbb{C}\ell(6)\) generated as the left multiplication algebra are themselves division algebras. Furthermore, the algebra \(\mathbb{R}\otimes\mathbb{C}\otimes\mathbb{H}\otimes\mathbb{O}\) is, like \(\mathbb{S}\), not even alternative. in fact, the construction of invariant subspaces (minimal ideals) relies explicitly on the use of projectors (and nilpotents), which altogether do not exist in division algebras.
There are several natural reasons to suspect that \(\mathbb{S}\) exhibits the algebraic structure necessary to describe three full generations:
1. \(Aut(\mathbb{S})=Aut(\mathbb{O})\times S_{3}\), and one finds a threefold multiplicity of the symmetries associated with \(\mathbb{O}\),
2. The process in Section 4 shows how to naturally isolate these \(\mathbb{O}\) subalgebras within \(\mathbb{S}\), which could perhaps be used to construct three generations,
3. The group \(Spin(8)\) generated from \(\mathbb{C}\ell(8)\) admits a triality, which has on occasion been suggested as a potential source of three generations [32, 44]. The group of outer-automorphisms of \(Spin(8)\) is precisely \(S_{3}\).
One approach to construct three generations with \(SU(3)_{C}\) symmetry is to use each of the three \(\mathbb{C}\otimes\mathbb{O}_{i}\subset\mathbb{S}\) to generate (via its left action on itself, but not as a left action of \(\mathbb{C}\otimes\mathbb{S}\)!) a \(\mathbb{C}\ell(6)\) algebra, and subsequently representing three generations in terms of the minimal left ideals of these three \(\mathbb{C}\ell(6)\). This approach was considered in [34], as a generalization of the construction of three generations of leptons from three \(\mathbb{H}\subset\mathbb{O}\) developed in [36].
There are several drawbacks to this approach however. Each generation requires its own copy of \(SU(3)\) resulting in three generations of gluons. Finally, the physical interpretation of the \(S_{3}\) automorphisms remains obscured, because these automorphisms stabilize the (non-principle) octonion subalgebras in \(\mathbb{S}\).
The approach pursued here is different and seeks to resolve these shortcomings. We instead use each \(\mathbb{C}\otimes\mathbb{O}_{i}\subset\mathbb{S}\) to construct a pair or fermionic ladder operators that each generate \(\mathbb{C}\ell(2)\), via their left multiplication action on all of \(\mathbb{C}\otimes\mathbb{S}\) (instead of just \(\mathbb{C}\otimes\mathbb{O}_{i}\)). The three pairs of ladder operators are independent of one another and hence we identify a single copy of \(\mathbb{C}\ell(6)\cong\mathbb{C}\ell(2)\hat{\otimes}\mathbb{C}\ell(2)\hat{ \otimes}\mathbb{C}\ell(2)\), corresponding to a subalgebra of \(\mathbb{C}\ell(8)\). Thus, we will employ all three \(\mathbb{O}_{i}\subset\mathbb{S}\) in order to construct a single generation of states. This corresponds to a direct generalization of the construction of reviewed in Section 3 where three \(\mathbb{H}\) subalgebras of \(\mathbb{O}\) are used to construct a single generation of color states. Subsequently, the order three \(S_{3}\) automorphism of \(\mathbb{S}\) will be used to generate two additional generations. This construction will therefore provide a clear interpretation of the new \(S_{3}\) automorphism \(\psi\). All three generations constructed in this manner transform as required under a single copy of the gauge group \(SU(3)_{C}\), thereby avoiding introducing three generations of gluons.
### One generation of electrocolor states from \(\mathbb{C}\otimes\mathbb{S}\)
We proceed by first constructing a single generation with unbroken \(SU(3)_{C}\times U(1)\) symmetry from \(\mathbb{C}\otimes\mathbb{S}\). Considering the three octonion subalgebras \(\mathbb{O}_{1},\mathbb{O}_{2},\mathbb{O}_{3}\) of \(\mathbb{S}\) defined above:
\[\mathbb{O}_{1} : \{e_{0},e_{1},e_{4},e_{5},e_{8},e_{9},e_{12},e_{13}\} \tag{56}\] \[\mathbb{O}_{2} : \{e_{0},e_{2},e_{4},e_{6},e_{8},e_{10},e_{12},e_{14}\}\] (57) \[\mathbb{O}_{3} : \{e_{0},e_{3},e_{4},e_{7},e_{8},e_{11},e_{12},e_{15}\}. \tag{58}\]
For each \(\mathbb{O}_{i}\) subalgebra, we define a single pair of raising and lowering operators as follows:
\[A_{1}^{\dagger} \equiv \frac{1}{2\sqrt{2}}(e_{1}+ie_{5}+e_{9}+ie_{13}),\quad A_{1}\equiv \frac{1}{2\sqrt{2}}(-e_{1}+ie_{5}-e_{9}+ie_{13}) \tag{59}\] \[A_{2}^{\dagger} \equiv \frac{1}{2\sqrt{2}}(e_{2}+ie_{6}+e_{10}+ie_{14}),\quad A_{2} \equiv\frac{1}{2\sqrt{2}}(-e_{2}+ie_{6}-e_{10}+ie_{14})\] (60) \[A_{3}^{\dagger} \equiv \frac{1}{2\sqrt{2}}(e_{3}+ie_{7}+e_{11}+ie_{15}),\quad A_{3} \equiv\frac{1}{2\sqrt{2}}(-e_{3}+ie_{7}-e_{11}+ie_{15}). \tag{61}\]
It is readily checked that \(A_{i}(A_{i}w)=A_{i}^{\dagger}(A_{i}^{\dagger}w)=0\) and \(A_{i}(A_{j}w)=-A_{j}(A_{i}w)\), \(\forall w\in\mathbb{C}\otimes\mathbb{S}\), and therefore these ladder operators satisfy (as left actions on a general \(w\in\mathbb{S}\)) the usual anticommutation relations:
\[\{A_{i},A_{j}\}w=\{A_{i}^{\dagger},A_{j}^{\dagger}\}w=0,\quad\{A_{i},A_{j}^{ \dagger}\}w=\delta_{ij}w,\ \forall w\in\mathbb{C}\otimes\mathbb{S}. \tag{62}\]
Each \(A_{i}\in\mathbb{O}_{i}\subset\mathbb{S}\) in a generalization of \(\alpha_{i}\in\mathbb{H}_{i}\subset\mathbb{O}\) where now each ladder operators consists of four terms instead of just two.
Subsequently we can proceed to construct two minimal left ideals, in a manner identical as in Section 3. These ideals are identical to \(S^{u}\) and \(S^{d}\) above, and (as will be demonstrated shortly) preserved by the same unitary symmetries, but with both the states and symmetry generators written in terms of the generalised ladder operators \(A_{i}\) and \(A_{i}^{\dagger}\).
\[S_{1}^{u}= \nu_{e}\omega_{1}\omega_{1}^{\dagger}+ S_{1}^{d}=\overline{\nu}_{e}\omega_{1}^{\dagger}\omega_{1}+\] \[\overline{d^{r}}A_{1}^{\dagger}\omega_{1}\omega_{1}^{\dagger}+ \overline{d^{g}}A_{2}^{\dagger}\omega_{1}\omega_{1}^{\dagger}+\overline{d^{b}} A_{3}^{\dagger}\omega_{1}\omega_{1}^{\dagger}+ d^{r}A_{1}\omega_{1}^{\dagger}\omega_{1}+d^{g}A_{2}\omega_{1}^{\dagger}\omega_{1}+ d^{b}A_{3}\omega_{1}^{\dagger}\omega_{1}+\] \[u^{r}A_{3}^{\dagger}A_{2}^{\dagger}\omega_{1}\omega_{1}^{\dagger }+u^{g}A_{1}^{\dagger}A_{3}^{\dagger}\omega_{1}\omega_{1}^{\dagger}+u^{b}A_{2 }^{\dagger}A_{1}^{\dagger}\omega_{1}^{\dagger}+ \overline{u^{r}}A_{3}A_{2}\omega_{1}^{\dagger}\omega_{1}+\overline{u^{g}}A_ {1}A_{3}\omega_{1}^{\dagger}\omega_{1}+\overline{u^{b}}A_{2}A_{1}\omega_{1}^{ \dagger}\omega_{1}+\] \[e^{+}A_{3}^{\dagger}A_{2}^{\dagger}A_{1}^{\dagger}\omega_{1} \omega_{1}^{\dagger}, e^{-}A_{3}A_{2}A_{1}\omega_{1}^{\dagger}\omega_{1},\]
where \(\omega_{1}:=A_{1}A_{2}A_{3}\).
The \(SU(3)_{C}\) generators are now a direct generalization of those in Section 3, with \(\alpha_{i}\) replaced by \(A_{i}\):
\[\Lambda_{1}=-A_{2}^{\dagger}A_{1}-A_{1}^{\dagger}A_{2} \qquad\Lambda_{2}=iA_{2}^{\dagger}A_{1}-iA_{1}^{\dagger}A_{2} \qquad\Lambda_{3}=A_{2}^{\dagger}A_{2}-A_{1}^{\dagger}A\] \[\Lambda_{4}=-A_{1}^{\dagger}A_{3}-A_{3}^{\dagger}A_{1} \qquad\Lambda_{5}=-iA_{1}^{\dagger}A_{3}+iA_{3}^{\dagger}A_{1} \qquad\Lambda_{6}=-A_{3}^{\dagger}A_{2}-A_{2}^{\dagger}A_{3}\] \[\Lambda_{7}=iA_{3}^{\dagger}A_{2}-iA_{2}^{\dagger}A_{3} \quad\Lambda_{8}=-\frac{1}{\sqrt{3}}(A_{1}^{\dagger}A_{1}+A_{2}^{ \dagger}A_{2}-2A_{3}^{\dagger}A_{3}),\]
Furthermore, the number generator can again be used to define a \(U(1)\) generator that assigns the correct electric charges to all the states:
\[Q=\frac{N}{3}=\frac{1}{3}(A_{1}^{\dagger}A_{1}+A_{2}^{\dagger}A_{2}+A_{3}^{ \dagger}A_{3}). \tag{63}\]
### Generating two additional generations from \(S_{3}\) automorphisms
Having constructed one generation of electrocolor states, we now proceed to apply \(\psi\) to the ladder operators eqns. (59)-(61), and subsequently the basis states of the minimal left ideals, in order to generate two additional generations.
The first thing to check is that the \(S_{3}\) automorphisms of \(\mathbb{S}\) carry over the automorphisms of the left multiplication algebra \(\mathbb{C}\ell(8)\). Given the generating basis \(e_{0},...,e_{8}\) for \(\mathbb{C}\ell(8)\), it is readily verified that
\[\psi(e_{i})\psi(e_{i})=\psi(e_{i}^{2})=-1, \tag{64}\] \[\psi(e_{i})\psi(e_{j})+\psi(e_{i})\psi(e_{j})=0. \tag{65}\]
Subsequently, \(\psi\) extends to an automorphism of the entire \(\mathbb{C}\ell(8)\) algebra (see Lemma 9.7 of [45]). A similar argument holds for the order two \(S_{3}\) automorphism \(\epsilon\) of \(\mathbb{S}\), which likewise extends to an automorphism of \(\mathbb{C}\ell(8)\). It can furthermore be checked that:
\[\psi(e_{i+8})\psi(e_{i+8})=\psi(e_{i+8}^{2})=-1, \tag{66}\] \[\psi(e_{i+8})\psi(e_{j+8})=-\psi(e_{i+8})\psi(e_{j+8}). \tag{67}\]
Finally, one also find that
\[\psi^{2}(e_{i})+\psi(e_{i})+e_{i}=0, \tag{68}\] \[\psi^{2}(e_{i+8})+\psi(e_{i+8})+e_{i+8}=0, \tag{69}\]
indicating that \(\psi^{2}(e_{i}),\ \psi(e_{i})\) and \(e_{i}\) (as well as \(\psi^{2}(e_{i+8}),\ \psi(e_{i+8})\) and \(e_{i+8}\)) are not linearly independent. These conditions however are not satisfied by more general \(\mathbb{C}\ell(8)\) multivectors (nor for \(e_{8}\), which is fixed by \(\psi\)).
Since the action of \(\psi\) on \(e_{1},...,e_{15}\) as elements of \(\mathbb{C}\ell(8)\) is known, we can via linearity establish the action
of \(\psi\) on the ladder operators. This gives the three sets of ladder operators:
\[A_{i}^{\dagger} = \frac{1}{2\sqrt{2}}(e_{i}+ie_{i+4}+e_{i+8}+ie_{i+12}), \tag{70}\] \[A_{i} = \frac{1}{2\sqrt{2}}(-e_{i}+ie_{i+4}-e_{i+8}+ie_{i+12}),\] (71) \[\psi(A_{i}^{\dagger})=B_{i}^{\dagger} = \frac{1}{2\sqrt{2}}(ae_{i}+iae_{i+4}+be_{i+8}+ibe_{i+12}),\] (72) \[\psi(A_{i})=B_{i} = \frac{1}{2\sqrt{2}}(-ae_{i}+iae_{i+4}-be_{i+8}+ibe_{i+12}),\] (73) \[\psi^{2}(A_{i}^{\dagger})=C_{i}^{\dagger} = \frac{1}{2\sqrt{2}}(be_{i}+ibe_{i+4}+ae_{i+8}+iae_{i+12}),\] (74) \[\psi^{2}(A_{i})=C_{i} = \frac{1}{2\sqrt{2}}(-be_{i}+ibe_{i+4}-ae_{i+8}+iae_{i+12}), \tag{75}\]
where \(i=1,2,3\), and
\[a=\frac{\sqrt{3}-1}{2},\qquad b=\frac{-\sqrt{3}-1}{2}, \tag{76}\]
satisfying \(a^{2}+b^{2}=2,\;a+b=-1,\;ab=-1/2\).
The three sets of ladder operators are not linearly independent since
\[\psi^{2}(A_{i}^{\dagger})+\psi(A_{i}^{\dagger})+A_{i}^{\dagger}=0. \tag{77}\]
The two additional sets of ladder operators \(\{B_{i}^{\dagger},B_{i}\}\) and \(\{C^{\dagger},C_{i}\}\) generated in this manner likewise constitute Witt bases for \(\mathbb{C}\ell(6)\), satisfying the same anticommutation relations as \(\{A_{i}^{\dagger},A_{i}\}\), and we proceed to construct an additional pair of minimal left ideals of \(\mathbb{C}\ell(6)\) for each of the two additional sets of ladder operators \(\{B_{i}B_{i}^{\dagger}\}\) and \(\{C_{i},C_{i}^{\dagger}\}\). We interpret these additional pairs of minimal ideals as representing the second and third generation of electrocorolators:
\[S_{2}^{u}= \nu_{\mu}\omega_{2}\omega_{2}^{\dagger}+ S_{2}^{d}= \overline{\nu}_{\mu}\omega_{2}^{\dagger}\omega_{2}+\] \[\overline{c^{r}}B_{1}^{\dagger}\omega_{2}\omega_{2}^{\dagger}+ \overline{c^{g}}B_{2}^{\dagger}\omega_{2}\omega_{2}^{\dagger}+ \overline{c^{b}}B_{3}^{\dagger}\omega_{2}\omega_{2}^{\dagger}+ c^{r}B_{1}\omega_{2}^{\dagger}\omega_{2}+c^{g}B_{2}\omega_{2}^{\dagger}\omega_{2}+ c^{b}B_{3}\omega_{2}^{\dagger}\omega_{2}+\] \[s^{r}B_{3}^{\dagger}B_{2}^{\dagger}\omega_{2}\omega_{2}^{\dagger}+ s^{g}B_{1}^{\dagger}B_{3}^{\dagger}\omega_{2}\omega_{2}^{\dagger}+s^{b}B_{2}^{ \dagger}B_{1}^{\dagger}\omega_{2}\omega_{2}^{\dagger}+ \overline{s^{g}}B_{3}B_{2}\omega_{2}^{\dagger}\omega_{2}+\overline{s^{g}}B_{ 1}B_{3}\omega_{2}^{\dagger}\omega_{2}+\overline{s^{b}}B_{2}B_{1}\omega_{2}^{ \dagger}\omega_{2}+\] \[\mu^{+}B_{3}^{\dagger}B_{2}^{\dagger}B_{1}^{\dagger}\omega_{2} \omega_{2}^{\dagger} \mu^{-}B_{3}B_{2}B_{1}\omega_{2}^{\dagger}\omega_{2}\]
\[S_{3}^{u}= \nu_{\tau}\omega_{3}\omega_{3}^{\dagger}+ S_{3}^{d}= \overline{\nu}_{\tau}\omega_{3}^{\dagger}\omega_{3}+\] \[\overline{b^{r}}C_{1}^{\dagger}\omega_{3}\omega_{3}^{\dagger}+ \overline{b^{s}}C_{2}^{\dagger}\omega_{3}\omega_{3}^{\dagger}+ \overline{b^{c}}C_{3}^{\dagger}\omega_{3}\omega_{3}^{\dagger}+ b^{r}C_{1}\omega_{3}^{\dagger}\omega_{3}+b^{g}C_{2}\omega_{3}^{\dagger}\omega_{3}+ b^{b}C_{3}\omega_{3}^{\dagger}\omega_{3}+\] \[t^{r}C_{3}^{\dagger}C_{2}^{\dagger}\omega_{3}\omega_{3}^{\dagger}+ t^{g}C_{1}^{\dagger}C_{3}^{\dagger}\omega_{3}\omega_{3}^{\dagger}+t^{b}C_{2}^{ \dagger}C_{1}^{\dagger}\omega_{3}\omega_{3}^{\dagger}+ \overline{t^{r}}C_{3}C_{2}\omega_{3}^{\dagger}\omega_{3}+\overline{t^{g}}C_{ 1}C_{3}\omega_{3}^{\dagger}\omega_{3}+\overline{t^{b}}C_{2}C_{1}\omega_{3}^{ \dagger}\omega_{3}+\] \[\tau^{+}C_{3}^{\dagger}C_{2}^{\dagger}C_{1}^{\dagger}\omega_{3} \omega_{3}^{\dagger} \tau^{-}C_{3}C_{2}C_{1}\omega_{3}^{\dagger}\omega_{3}\]
Here, \(\omega_{2}:=B_{1}B_{2}B_{3},\;\omega_{3}:=C_{1}C_{2}C_{3}\).
That is the order three automorphism \(\psi\) of the finite group \(S_{3}\) can be used to construct exactly two additional generations of color states. This automorphism then permutes between the three generations.
On the other hand, the order two automorphism \(\epsilon\) does not generate transformations between the three generations. Applying \(\epsilon\) to the ladder operators \(A_{i}\) and \(A_{i}^{\dagger}\) generates an complementary set of ladder operators satisfying the same anticommutation relations as \(A_{i}\) and \(A_{i}^{\dagger}\) (since \(\epsilon\) is an automorphism of \(\mathbb{C}\ell(8)\)), and hence can likewise be used to construct minimal ideals. The automorphism \(\epsilon\) can then be used to incorporate an additional degree of freedom (perhaps chirality, handedness, or spin). It is not immediately clear at present however what the most natural physical interpretation for \(\epsilon\) is.
### \(Su(3)_{c}\) color symmetries of three generations
The anticommutators between ladder operators belonging to different generations are not the standard fermionic anticommutation relations. That is
\[\{A_{i},B_{j}\}w\neq 0,\qquad\{B_{i},C_{j}\}w\neq 0,\qquad\{C_{i},A_{j}\}w\neq 0, \quad\mbox{whenever $i\neq j$} \tag{78}\]
and likewise for the other anticommutation relations. Surprisingly however, it turns out that all three generations of states transform correctly under a single copy of \(SU(3)\), which may be generated from any of the three sets of \(\mathbb{C}\ell(6)\) ladder operators. We will here use the \(SU(3)_{C}\) generators from Subsection 5.2. One then finds for example that
\[\left[\Lambda_{1},A_{1}\omega_{1}^{\dagger}\omega_{1}\right]=A_{2}\omega_{1}^ {\dagger}\omega_{1},\quad\left[\Lambda_{1},B_{1}\omega_{1}^{\dagger}\omega_{1 }\right]=B_{2}\omega_{1}^{\dagger}\omega_{1},\quad\left[\Lambda_{1},C_{1} \omega_{1}^{\dagger}\omega_{1}\right]=C_{2}\omega_{1}^{\dagger}\omega_{1}. \tag{79}\]
A single \(SU(3)\) generator thus correctly transforms the equivalent states of each generation
\[\Lambda_{1}:d^{r}\to d^{g},\quad c^{r}\to c^{g},\quad t^{r}\to t^{g}. \tag{80}\]
The same holds true for the other \(SU(3)\) generators and states. Despite having three generations of fermions, only one copy of \(SU(3)\), and hence one generation of gauge bosons is needed in the present construction.
On the other hand, one finds that the number operator only assigns the correct electric charge for one of the generations. This same issue was encountered in [33]. Thus although the number operator can be used to obtain the correct electric charge within the context of a single generation model, this approach fails to work in a three generation model. A generalized action to obtain the correct electric charges is presented in [35]. This approach, which makes extensive use of both left and right projectors would work for the present model as well. However, it remains unclear why this action takes the particular form that it does, and so this issue will be revisited at a later time once the weak (or electroweak) symmetry has been included within the present sedenion model.
### Linear dependence of minimal ideal states
Although we have been able to construct three pairs of minimal ideals to represent three generations of fermions, these minimal ideals are not linearly independent. Indeed, one might expect a certain degree of overlap of the minimal ideals, given the condition eqn. (77) above.
We propose (as in [34] and [14]) that the inevitable overlap of the three generations could form a basis for including quark mixing and neutrino oscillations into the present algebraic model based on sedenions. However, a detailed investigation into the feasibility of this proposal is beyond the scope of this article, and requires first an understanding of the weak interaction within the present algebraic context.
Nonetheless, some initial calculations (carried out using Mathematica) indicate that the (anti) down-type quarks across generations are linearly dependent. That is:
\[A_{i}\omega_{1}^{\dagger}\omega_{1}+B_{i}\omega_{2}^{\dagger} \omega_{2}+C_{i}\omega_{3}^{\dagger}\omega_{3} = 0, \tag{81}\] \[A_{i}^{\dagger}\omega_{1}\omega_{1}^{\dagger}+B_{i}^{\dagger} \omega_{2}\omega_{2}^{\dagger}+C_{i}^{\dagger}\omega_{3}\omega_{3}^{\dagger} = 0,\quad i=1,2,3 \tag{82}\]
On the other hand, all 18 of the (anti) up-type quarks turn out to be linearly independent. This might be related to the fact that the Yukawa couplings, and subsequent mass matrix for either the up-type or down-type quarks can be taken to be diagonal, but not both. Subsequently, one can say without loss of generality that only the down-type quarks mix via the CKM matrix
## 6 Outlook
The model presented here is clearly far from complete as our focus has been restricted to the \(SU(3)_{C}\) color gauge symmetry, and we have not considered the remaining internal symmetries nor the spacetime symmetries.
One way to include chiral \(SU(2)_{L}\) states it is via a copy of \(\mathbb{C}\ell(4)\)[46]. In [5] it is shown that the right actions of \(\mathbb{C}\otimes\mathbb{H}\) together with the right actions of the \(\mathbb{C}\ell(6)\) nilpotents \(\omega\) and \(\omega^{\dagger}\) generate the required \(\mathbb{C}\ell(4)\)
to represent chiral \(SU(2)_{L}\) states in terms of two four-dimensional minimal ideals. The resulting model is then based on \(\mathbb{C}\ell(10)\)[6, 47, 48]. In our present construction, we have restricted ourselves to a \(\mathbb{C}\ell(6)\) subalgebra of the full \(\mathbb{C}\ell(8)\) left multiplication algebra. The thus far unused \(\mathbb{C}\ell(2)\) could be combined with the right actions of the nilpotents \(\omega_{i}\) and \(\omega_{i}^{\dagger}\) to generate \(\mathbb{C}\ell(4)\) without the need to invoke \(\mathbb{H}\). This approach is currently being investigated by the authors.
It is well known that \(\mathbb{C}\otimes\mathbb{H}\cong\mathbb{C}\ell(2)\cong SL(2,\mathbb{C})\). By complementing the present model sedenion model with a factor of \(\mathbb{H}\), it should be possible to include the Lorentz (spacetime) symmetries into the model. Specifically, it was shown in [5] that Weyl-, Dirac- and Majorana-spinors can all be represented in terms of the ideals of \(\mathbb{C}\otimes\mathbb{H}\cong\mathbb{C}\ell(2)\). The spacetime symmetries would then be identical for all three generations, a desirable feature.
By choosing a complex structure within \(\mathbb{O}\), the ten dimensional spacetime described by \(SL(2,\mathbb{O})\) is reduced to four dimensional spacetime \(SL(2,\mathbb{C})\)[36, 49]. It would be interesting to see whether \(SL(2,\mathbb{O})\) spacetime can be extended to an 18 dimensional spacetime \(SL(2,\mathbb{S})\) which is then broken to \(SL(2,\mathbb{H})\) via our quaternionic structure inside \(\mathbb{S}\), and subsequently to \(SL(2,\mathbb{C})\) by choosing a complex structure inside \(\mathbb{H}\).
The construction of three generations presented here bears some intriguing resemblance to the three generation construction in [33, 35]. In our case, all three generations reside in a \(\mathbb{C}\ell(6)\) subalgebra of \(\mathbb{C}\ell(6)\). Likewise in [33], the 48 degrees of freedom that remain in \(\mathbb{C}\ell(6)\) once two representations of the Lie algebra \(su(3)\) have been accounted for are shown under the action of \(SU(3)\) to transform as three generations of leptons and quarks. The number operator in this construction no longer assigns the correct electric charges to the states however, an issue that was overcome in a later paper [35]. How to include \(U(1)_{em}\) into the sedenion model, as well as a detailed understanding of how these two complementary constructions are related remains to be worked out.
Likewise, it remains to be investigated in detail how the sedenion model proposed here relates to the constructions of three generations based on the exceptional Jordan \(J_{3}(\mathbb{O})\), in which each of the three octonions in \(J_{3}(\mathbb{O})\) is associated with one generation via the three canonical \(J_{2}(\mathbb{O})\) subalgebras of \(J_{3}(\mathbb{O})\)[27, 28, 29, 30, 31, 32].
Given the richer algebraic structure provided by \(\mathbb{S}\), it may be possible to describe additional features of the SM. The \(S_{3}\) automorphism of order three not only generates two additional generations, but also facilitate transformations between physical states of different generations. This suggests that \(S_{3}\) could perhaps be used as a basis for including quark mixing and neutrino oscillations. This is something that has not yet been considered within the context of division algebras. It is interesting that several authors have proposed \(S_{3}\) extensions of the SM to explain the hierarchy of quark masses and mixing in the SM [50, 51, 52, 53, 54].
## 7 Discussion
We have argued that the CD algebra \(\mathbb{S}\) provides a suitable algebraic structure to describe three generations. Our focus has been restricted to the \(SU(3)_{C}\) color gauge symmetry. Three intersecting \(\mathbb{O}\) subalgebras of \(\mathbb{S}\) are used to construct a Witt basis for \(\mathbb{C}\ell(6)\). Two minimal left ideals are then use to represent one generation of electrocolor states. Subsequently, the \(S_{3}\) automorphism of order three of \(\mathbb{S}\) is then applied to the \(\mathbb{C}\ell(6)\) Witt basis in order to obtain exactly two additional generations of color states, but not electrocolor states.
In [4, 5], \(\mathbb{C}\ell(6)\) arises as the left multiplication algebra of \(\mathbb{C}\otimes\mathbb{O}\). Instead, in our approach \(\mathbb{C}\ell(6)\) does not arise as the multiplication algebra of a single octonion algebras, but rather from three intersection octonion subalgebras of the sedenions, with each octonion subalgebra contributing a \(\mathbb{C}\ell(2)\) factor. The model presented here overcomes two limitations of a previous three generation model based on sedenions. First, only a single copy of \(SU(3)_{C}\) correctly transforms the states of all three generations. We thereby avoid introducing three generations of gauge bosons. Additionally in the present model, the order three \(S_{3}\) automorphisms of \(\mathbb{S}\) if given a clear physical interpretation as a family symmetry, responsible for generating two additional generations from the first.
One compelling reason to consider division algebras as the foundational mathematical input from which to generate SM particle multiplets and gauge symmetries, is that there are only four of them. Since the CD process generates an infinite series of algebras, one might question whether going beyond the division algebras and including \(\mathbb{S}\) is a wise idea, or if it opens the door to considering ever larger algebras. The derivation algebra for all CD algebras \(\mathbb{A}_{n}\), \(n\geq 3\) is equal to \(\mathfrak{g}_{2}\) however [37]. Furthermore, at least for the
cases \(n=4,5,6\), the automorphism group of each successive CD algebra only picks up additional factors of \(S_{3}\)[38]. It therefore seems unlikely that CD algebras beyond \(\mathbb{A}_{4}=\mathbb{S}\) will provide additional physical insight. As an interesting aside, the sphere \(S^{15}\), associated with the imaginary (pure) sedenions, is the largest sphere to appear in any of the four Hopf fibrations.
### Acknowledgements
The authors wish to thank Alessio Marrani insightful discussions and detailed feedback on earlier drafts of this work.
## Appendix A: Sedenion multiplication table
Here we provide the multiplication table for the Cayley-Dickson sedenions that we use, written in terms of the indices \((i\equiv e_{i})\):
|
2307.15918 | Entropic sampling in frustrated magnets: role of self-intersecting
spaces | Frustrated magnets typically possess a large space of classical ground
states. If this degeneracy is not protected by symmetry, thermal fluctuations
may `select' certain states via order-by-disorder. In this article, we examine
a precursor effect where all ground states are sampled, but with different
weights. Geometry plays a key role in determining the weight distribution and
its behaviour. We demonstrate this with two examples -- both clusters with four
spins coupled by XY interactions. In the first, the classical ground states
form a smooth space. In the second, they form a self-intersecting non-manifold
space. Ground state sampling is very different in these two cases. We first
consider the microcanonical ensemble picture, where fluctuations conserve
energy. Phase space arguments suggest that the first model exhibits
energy-independent probabilities. The second shows a dramatic energy-dependence
with relative probability increasing as $\epsilon^{-1/2}$, where $\epsilon$ is
the energy of the system. We simulate low-energy dynamics in both models,
confirming the expected behaviour. We next consider the canonical ensemble,
where the first model produces temperature-independent probabilities. In the
second, relative probability rises sharply as $T^{-1/2}$, where $T$ is the
temperature. Our results bring out a classical analogue of
order-by-singularity, a mechanism that has been recently proposed in the
context of quantum spin clusters. The sampling of classical orders is
qualitatively different in systems with self-intersecting ground state spaces.
It grows at low energies and becomes singular as $\epsilon \rightarrow 0$
(microcanonical ensemble) or $T\rightarrow 0$ (canonical ensemble). We discuss
relevance for disordered phases in macroscopic magnets, particularly for spiral
liquids. | Alwyn Jose Raja, R. Ganesh | 2023-07-29T07:21:24Z | http://arxiv.org/abs/2307.15918v2 | # Entropic selection in frustrated magnets: ordering on self-intersecting spaces
###### Abstract
Frustrated magnets typically possess a large space of classical ground states. If this degeneracy is not protected by symmetry, thermal fluctuations may'select' certain states via order-by-disorder. We examine the role of geometry in this mechanism in the context of classical magnets. We consider two model magnetic clusters, each with four spins coupled by XY interactions. In the first, the classical ground states form a smooth space. In the second, they form a self-intersecting non-manifold space. State selection is very different in these two cases. We first consider the microcanonical ensemble picture, where fluctuations conserve energy. Phase space arguments suggest that the first model samples the set of classical ground states with energy-independent probabilities. The second shows a dramatic energy-dependence with relative probability increasing as \(\epsilon^{-1/2}\), where \(\epsilon\) is the energy of the system. We simulate low-energy dynamics in both models, confirming the expected behaviour. We next consider the canonical ensemble, where the first model produces temperature-independent probabilities. In the second, relative probability rises sharply as \(T^{-1/2}\), where \(T\) is the temperature. Our results bring out a classical analogue of order-by-singularity, a mechanism that has been recently proposed in the context of quantum spin clusters. State selection is qualitatively different in systems with self-intersecting ground state spaces. It grows at low energies and becomes perfect as \(\epsilon\to 0\) (microcanonical ensemble) or \(T\to 0\) (canonical ensemble). We discuss relevance for ordering in various macroscopic magnets.
## I Introduction
A hallmark of frustrated magnetism is large degeneracy of classical ground states. Despite being degenerate, ground states may provide varying scope for fluctuations. Originating in quantum or thermal effects, fluctuations lead to differences in zero-point energy and/or free energy. The system settles into the classical ground state with the lowest (free) energy. This state is said to have been'selected' by fluctuations. This phenomenon is well known as 'order by disorder [1; 2]. Recent studies have explored the underlying mechanism by drawing an analogy to particle localization [3; 4; 5]. At low energies, a frustrated magnet can be viewed as a single particle moving on an abstract space - consisting of all classical ground states. Fluctuations, be they of quantum or thermal origin, give rise to a potential on this space. If the potential is deep enough, the particle localizes at its minimum. This manifests as the magnet settling into one particular classically ordered state.
In this article, we examine the role of geometry in this mechanism - in particular, that of the space of classical ground states (CGSS). Various CGSS geometries are known to be realized. Examples include lines [6; 7], circle-like closed curves [8; 9; 10], sheets [7], surfaces [11], tori [12], intersecting circles [13; 14] and even dense three-dimensional spaces [15]. In macroscopic magnets, the CGSS is usually described in momentum space using the Luttinger-Tisza approach. In magnetic clusters, the CGSS is often an abstract space arising from geometric constraints. [16; 12; 13] In the context of quantum fluctuations in magnetic clusters, studies (by one of the present authors) have drawn a distinction between two mechanisms that contribute to state selection [4]. One is driven by a fluctuation-generated potential that induces localization at its minimum. This is the only possible mechanism in systems where the CGSS is a smooth manifold. The second mechanism comes into play if the CGSS self-intersects, e.g., forming a figure-of-eight. It arises from bound state formation at a singularity - a quantum effect driven by the local topology around an intersection point [17]. This mechanism has been termed 'order by singularity' [12]. It may have observable consequences, e.g., in the scaling behaviour of the selection-induced energy gap [4].
Here, we consider state selection in a purely classical setting. We build upon early work by Moessner and Chalker [18; 16], pointing out contrasting selection behaviour in two magnetic clusters with all-to-all couplings. We expand on their findings by contrasting two similar clusters - one with a smooth CGSS and the other with self-intersections. We demonstrate that this difference gives rise to qualitatively different selection behaviour. Unlike the smooth case, self-intersection-driven selection grows dramatically at low energies.
## II Models
We discuss two model magnetic clusters below. The first is the symmetric quadrumer, a cluster of four spins with all-to-all XY couplings. It is described by the Hamiltonian
\[H_{sym.}=J\sum_{i<j}\vec{S}_{i}\cdot\vec{S}_{j}, \tag{1}\]
where the indices \(i\) and \(j\) run over all pairs chosen from among four spins. The couplings are of XY nature, i.e.,
\(\vec{S}_{i}\cdot\vec{S}_{j}\equiv S_{i}^{x}S_{j}^{x}+S_{i}^{y}S_{j}^{y}\). The coupling constant \(J\) is assumed to be positive and is henceforth set to unity. The resulting classical ground states have been discussed in Ref. [12]. To minimize energy, the four spin vectors must lie in the XY plane _and_ add to zero. This can be viewed as orienting the spins as two anti-aligned pairs. This can be done in three distinct ways, each with two free angle variables. This results in a CGSS containing three tori. However, the tori are not entirely distinct - they intersect pairwise along lines that represent collinear states. We discuss a simplified view of this space below.
The second model is the asymmetric quadrumer, also described in Ref. [12]. It is very similar to the symmetric quadrumer, but with two bonds having a stronger coupling strength. The Hamiltonian is given by
\[H_{asym.}=H_{sym.}+\lambda\Big{\{}\vec{S}_{1}\cdot\vec{S}_{2}+\vec{S}_{3} \cdot\vec{S}_{4}\Big{\}}. \tag{2}\]
Here, \(\lambda\) is the anisotropy parameter. A positive value of \(\lambda\) forces ground states to have \(\vec{S}_{1}=-\vec{S}_{2}\) and \(\vec{S}_{3}=-\vec{S}_{4}\), with all four spins lying in the XY plane. The resulting CGSS is a single torus, a space parameterized by two angle variables.
In both models, the CGSS is larger than the space of Hamiltonian symmetries. The only continuous Hamiltonian symmetry is global rotation about the spin-\(z\) axis. With this symmetry in mind, we work in a frame that co-rotates with \(\vec{S}_{1}\). The first spin, \(\vec{S}_{1}\) can now be taken to have a fixed orientation, say along the Y axis. The CGSS of the asymmetric quadrumer in this frame is shown in Fig. 1(a). It is a circle parameterized by a single angle \(A\), the angular displacement between \(\vec{S}_{1}\) and \(\vec{S}_{3}\). The other two spins are immediately determined as \(\vec{S}_{2}=-\vec{S}_{1}\) and \(\vec{S}_{4}=-\vec{S}_{3}\). The CGSS of the symmetric quadrumer in this frame is shown in Fig. 1(b). It consists of three circles. The circle parameterized by the angle \(A\) is the same as that in the asymmetric quadrumer case. The circle parameterized by \(B\) has \(\vec{S}_{3}=-\vec{S}_{1}\) and \(\vec{S}_{4}=-\vec{S}_{2}\), with \(B\) representing the relative angle between \(\vec{S}_{1}\) and \(\vec{S}_{2}\). Similarly, the third circle has \(\vec{S}_{4}=-\vec{S}_{1}\) and \(\vec{S}_{3}=-\vec{S}_{2}\), with \(C\) representing the relative angle between \(\vec{S}_{1}\) and \(\vec{S}_{2}\).
In the symmetric quadrumer, the three circles of the CGSS intersect pairwise. These points of intersection are collinear states. For example, the circles parameterized by \(A\) and \(B\) share a common point, where \(\vec{S}_{1}=\vec{S}_{4}=-\vec{S}_{2}=-\vec{S}_{3}\). As we show below, these collinear states are selected by fluctuations.
## III Phase space in the microcanonical ensemble
To describe state selection, we consider the low-energy behaviour of the two models. We first adopt the microcanonical approach where the total energy is held fixed. We restrict our attention to low energies, i.e., where the total energy is only slightly higher than the classical ground state energy. We describe the resulting phase space and sampling probabilities.
### Asymmetric quadrumer
Any classical ground state of the asymmetric quadrumer can be written as
\[\vec{S}_{1}=-\vec{S}_{2}=\hat{n}(\phi_{1});\ \ \vec{S}_{3}=-\vec{S}_{4}=\hat{n}( \phi_{2}), \tag{3}\]
where the length of each spin is taken to be unity. We have defined \(\hat{n}(\phi)=\cos\phi\ \hat{x}+\sin\phi\ \hat{y}\). This state can be viewed as two rods within the plane, one corresponding to \(\vec{S}_{1,2}\) and other to \(\vec{S}_{3/4}\). In the frame that co-rotates with \(\vec{S}_{1}\) (see Fig. 1), \(\phi_{1}\) is fixed. Low energy states involve small deviations away from the classical ground states. A generic low-energy state can be expressed in terms of six fluctuation variables \(\{\ell_{1},\ell_{2},\mu_{1},\mu_{2},m_{1},m_{2}\}\) as follows:
\[\vec{S}_{1} = \hat{n}(\phi_{1})+\ell_{1}\ \hat{\zeta}(\phi_{1})+\mu_{1}\ \hat{z}+m_{1}\ \hat{z};\] \[\vec{S}_{2} = -\hat{n}(\phi_{1})+\ell_{1}\ \hat{\zeta}(\phi_{1})+\mu_{1}\ \hat{z}-m_{1}\ \hat{z};\] \[\vec{S}_{3} = \hat{n}(\phi_{2})+\ell_{2}\ \hat{\zeta}(\phi_{2})+\mu_{2}\ \hat{z}+m_{2}\ \hat{z};\] \[\vec{S}_{4} = -\hat{n}(\phi_{2})+\ell_{2}\ \hat{\zeta}(\phi_{2})+\mu_{2}\ \hat{z}-m_{2}\ \hat{z}. \tag{4}\]
Here, \(\mu_{1}\) and \(\mu_{2}\) represent out-of-plane buckling of the rods. In contrast, \(m_{1}\) and \(m_{2}\) represent out-of-plane tilting fluctuations. The components \(l_{1}\) and \(l_{2}\) represent buckling deformations within the plane. We have defined unit vectors denoted by \(\hat{\zeta}\) that are perpendicular to \(\hat{n}\)'s, with \(\hat{\zeta}(\phi)=\sin\phi\ \hat{x}-\cos\phi\ \hat{y}\). The expressions in Eqs. 4
Figure 1: Classical ground state spaces (CGSS’) of the two models. a) CGSS of the asymmetric quadrumer is a circle, parameterized by one angle \(A\). Each point on the circle corresponds to a state as shown in (c) with \(\vec{S}_{1}=-\vec{S}_{2}\) and \(\vec{S}_{3}=-\vec{S}_{4}\). The angle between \(\vec{S}_{1}\) and \(\vec{S}_{3}\) is denoted as \(A\). b) CGSS of the symmetric quadrumer consists of three intersecting circles. The angles \(A\), \(B\) and \(C\) parameterize points on each circle – they correspond to states shown in (c), (d) and (e). The intersection points are marked with red squares. All spins lie within the XY plane. All states allow for a global rotation about the Z axis.
include all possible length-preserving deformations. It can be easily checked that \(\vec{S}_{j}\cdot\vec{S}_{j}\) (\(j=1,2,3,4\)) is unity, up to corrections that are quadratic in the fluctuation variables.
Having parameterized low-energy states, we consider the low-energy phase space. The Hamiltonian of Eq. 2 reduces to
\[H_{asym.} \approx E_{CGS}+(1+\lambda)(\mu_{1}^{2}+m_{1}^{2}+\mu_{2}^{2}+m_{2}^{2}) \tag{5}\] \[+ (1+\lambda+\cos A)\ \ell_{+}^{2}+(1+\lambda-\cos A)\ \ell_{-}^{2}.\]
The '\(\approx\)' symbol indicates that this form is valid for small fluctuations, where terms beyond quadratic order can be neglected. Here, \(E_{CGS}=-2(1+\lambda)\) is the classical ground state energy. The variable \(A=\phi_{2}-\phi_{1}\) (see Fig. 1) identifies a point on the CGSS. It encodes the classical ground state that is closest. We have defined \(\ell_{\pm}=2(\ell_{1}\pm\ell_{2})\).
In Eq. 5, we have arrived at an expression for the energy of low-lying states. As it contains six quadratic terms, a constant-energy-surface is an ellipsoid in six dimensions. We now take the microcanonical view by demanding that the energy must lie within \((E_{CGS}+\epsilon,E_{CGS}+\epsilon+d\epsilon)\), where \(d\epsilon\ll\epsilon\ll 1\). The low energy condition dictates that the energy, \(\epsilon\), must be much smaller than the coupling constant (set to unity) which sets the energy scale in the problem. The microcanonical approach allows for a small energy spread \(d\epsilon\) that is much smaller than the energy content, \(\epsilon\). We arrive at a region of the six-dimensional phase space - an ellipsoidal shell. We seek to find the volume of this 'accessible' region. As \(d\epsilon\) is small, this volume can be expressed as \(V_{\epsilon}^{\epsilon+d\epsilon}d\epsilon\). Going further, we wish to resolve this volume into neighbourhoods around each point in the CGSS. To do so, we express the accessible volume as
\[V_{\epsilon}^{\epsilon+d\epsilon}d\epsilon\sim\left\{\int dA\ \ v(\epsilon,A) \right\}d\epsilon, \tag{6}\]
where \(A\) is the angle that parameterizes the CGSS. Relating details to the appendix, we find
\[v(\epsilon,A)=\frac{\pi^{3}}{6}\frac{32}{(\lambda+1)^{2}\sqrt{(1+\lambda)^{2 }-\cos^{2}A}}\ \ 3\epsilon^{2}. \tag{7}\]
We now assert that the principles of statistical mechanics, viz. the ergodic and equiprobability hypotheses, hold true. They dictate that the probability of a certain classical ground state being sampled is proportional to the volume of the accessible phase space in its neighbourhood, i.e.,
\[P(A)\sim v(\epsilon,A)\sim\frac{1}{(\lambda+1)^{2}\sqrt{(1+\lambda)^{2}-\cos ^{2}A}}. \tag{8}\]
From the expression for \(v(\epsilon,A)\) in Eq. 7 above, we see that the probability for any \(A\) depends on the energy via an overall factor of \(\epsilon^{2}\). If we compare two values \(A_{1}\) and \(A_{2}\), we find that \(P(A_{1})/P(A_{2})\) is independent of \(\epsilon\). This results in an _energy-independent_ distribution of probabilities over the CGSS, as reflected in Eq. 8.
### Symmetric quadrumer
Starting from the asymmetric quadrumer, we obtain the symmetric quadrumer by tuning the anisotropy parameter, \(\lambda\), to zero. For any positive value of \(\lambda\), the CGSS is a circle. Precisely at \(\lambda=0\), the CGSS expands to form three intersecting circles as shown in Fig. 1. For simplicity, we restrict our attention to points on one of the circles when characterizing the phase space. Similar arguments hold on the other two circles.
We consider states characterized by Eq. 3 above, shown in Fig. 1(a). Although they were introduced as classical ground states of the asymmetric quadrumer, they are ground states of the symmetric quadrumer as well. They correspond to one of the circles in the CGSS.
Following the same arguments as in the asymmetric quadrumer, low-energy states can be parameterized as in Eq. 4. Their energy takes the same form as given in Eq. 5, with six quadratic terms. However, a crucial difference emerges in the \(\lambda\to 0\) limit. Consider the quadratic term proportional to \(\ell_{+}^{2}\). Its prefactor, given by \(1+\cos A\), vanishes when \(A\to\pi\). Likewise, the prefactor of the \(\ell_{-}^{2}\) term vanishes when \(A\to 0\). This reflects the change in CGSS topology as \(\lambda\to 0\). For a generic value of \(A\), the low-energy Hamiltonian has six quadratic terms. However, at two special values of \(A\), the Hamiltonian reduces to five quadratic terms. These points represent collinear states where the CGSS self-intersects. The vanishing of one quadratic coefficient can be viewed as mode softening. At these collinear states, the system may leave one circle of the CGSS and move to another (see Fig. 1).
This has dramatic consequences for the phase space volume. In the vicinity of a non-collinear state, arguments from Sec. III.1 continue to hold. The phase space \(v(\epsilon,A)\) takes the same form as in Eq. 7 above. The probability of the state being sampled is \(P(\epsilon,\mathrm{non}-\mathrm{collinear})\sim\epsilon^{2}\). However, when \(A=0\) or \(\pi\), the phase space is qualitatively different. It constitutes an ellipsoid in five dimensions, with a free sixth coordinate. This coordinate can be freely integrated over its range, as it is not constrained by energy - this will contribute an energy-independent factor to the phase space volume. Based on these arguments, we may write
\[v(\epsilon,\mathrm{collinear})\sim\epsilon^{3/2}. \tag{9}\]
We have used the fact the volume of an ellipsoidal shell in \(D\) dimensions scales as \(R^{D-1}dR\), where \(R\) is the length scale (say, the semi-major axis). Here, we have \(D=5\) and \(R\sim\sqrt{\epsilon}\) so that the volume scales as \(\epsilon^{3/2}d\epsilon\). Crucially, the volume scales as a different power of energy when compared with non-collinear states. This leads to a stark difference when comparing relative probabilities,
\[\frac{P(\epsilon,\mathrm{collinear})}{P(\epsilon,\mathrm{non}-\mathrm{collinear})}\sim\frac{\epsilon^{3/2}}{\epsilon^{2}}\sim \epsilon^{-1/2}. \tag{10}\]
Unlike the asymmetric quadrumer, relative probability depends on energy. In fact, it varies dramatically when
\(\epsilon\to 0\), as collinear states dominate over all others. This indicates that state selection sharpens as energy is lowered.
## IV Probabilities from Energy-Preserving Dynamics
In the previous section, we have examined relative probabilities over the CGSS from phase space considerations. The asymmetric quadrumer yields energy-independent relative probabilities. In contrast, in the symmetric quadrumer, state selection strengthens sharply at low energies. We now verify these results by numerical simulations of low-energy spin-dynamics. As initial conditions for our simulations, we generate random configurations that lie within a suitable energy window. As the system evolves in time, we track the amount of time spent in the vicinity of each classical ground state. We interpret this as a probability distribution over the CGSS.
The time-evolution of classical magnets is described by the Landau-Lifshitz equation Landau and Lifshitz (1935), written succinctly as
\[\frac{d\vec{S}_{j}}{dt}=\vec{S}_{j}\times\vec{B}_{\mathrm{eff},j}. \tag{11}\]
The vector \(\vec{B}_{\mathrm{eff},j}\) represents the effective magnetic field seen by the \(j^{\mathrm{th}}\) spin. Its form is obtained by re-expressing the Hamiltonian as \(H=\vec{B}_{\mathrm{eff},j}\cdot\vec{S}_{j}\), where \(j\) runs over the four spins of the cluster. For any given spin, the effective field is given in terms of all other spins. Eq. 11 encodes a set of twelve coupled ordinary differential equations as we have four spins (\(j=1,2,3,4\)) with three components each. Given an initial condition, the time evolution of the system can be found by numerical integration by Runge-Kutta methods. Up to numerical errors, Eq. 11 preserves spin lengths as well as the energy.
We consider the symmetric and asymmetric quadrumers within the microcanonical ensemble. We fix a small energy window (\(E_{CGS}+\epsilon,E_{CGS}+\epsilon+d\epsilon\)) where \(E_{CGS}=-2(1+\lambda)\) is the classical ground state energy.
### Dynamics of the asymmetric quadrumer
For each value of \(\epsilon\), we choose an energy interval \(d\epsilon=2\times 10^{-3}\epsilon\). We generate 3 \(\times\)\(10^{4}\) initial configurations with energy within this window. Each configuration is time evolved for \(10^{4}\) time units (time is measured in units of \(J^{-1}\), where \(J\) is the coupling constant - set to unity). As time evolution proceeds, the low value of energy guarantees that the system will always be in the vicinity of the CGSS. At each time, we determine the closest point on the CGSS, i.e., the classical ground state that is closest to the current configuration, see Appendix B. To characterize sampling probability, we divide the CGSS into bins of width one degree (note that the CGSS is parameterized by angle variables). We interpret the fraction of time spent in each interval as the probability of sampling that neighbourhood of the CGSS.
Fig. 2 shows the obtained probability distributions. The asymmetry parameter is set at \(\lambda=2\). Note that collinear states (\(A=0\) or \(\pi\)) have the highest probability - they are selected by fluctuations. The figure shows data for three values of \(\epsilon\). In all three, the data follow the same curve - that given by Eq. 8. The probabilities follow the same form even as the energy is varied over three orders of magnitude. This is in line with the arguments of Sec. III.1 where state-selection was argued to be energy-independent.
Fig. 2 depicts state selection for \(\lambda=2\), a strong value of the anisotropy parameter. For weaker values of \(\lambda\), the numerical simulations deviate from the expected probability distribution. We believe this is tied to the presence of a large number of periodic orbits. With most initial conditions, the system evolves in a periodic fashion. As a result, it may not sample the accessible phase space in a uniform manner.
### Dynamics of the symmetric quadrumer
We simulate time evolution in the same manner for the symmetric quadrumer. We vary energy, \(\epsilon\), over a large range and fix \(d\epsilon=2\times 10^{-3}\epsilon\). For each energy, we generate 2000 samples as initial conditions for time evolution. We simulate time evolution for 5 \(\times\)\(10^{4}\) time units. As the CGSS consists of three intersecting circles, we track the nearest point on this space. Fig. 3 shows the obtained probability distribution over these three circles. This is shown for three different energies. In all three, the probability is highest at the intersection points on the CGSS (collinear states). Unlike the asymmetric quadrumer, the resulting curves vary dramatically with energy. As energy decreases, the probability curves become more sharply peaked. The selection of collinear states strengthens with decreasing energy (\(\epsilon\)).
Fig. 4 compares the relative probability between collinear states and perpendicular states. The latter are states where the four spins point toward the corner of a square - as seen from Fig. 3, these perpendicular states have the lowest probability on the CGSS. Fig. 4 shows that the relative probability varies strongly with energy. It fits well to \(\sim\epsilon^{-1/2}\) as predicted by phase space arguments in Sec. III.2.
## V Probabilities in the canonical ensemble
When coupled to a reservoir, the energy of a system will vary with time. Phase space is sampled according to the temperature, a property of the reservoir. We may
write the partition function as
\[\mathcal{Z}_{canonical}=\int e^{-\epsilon/T}V_{\epsilon}^{\epsilon+d\epsilon}d\epsilon, \tag{12}\]
where \(T\) is the temperature, measured in units of energy (so that the Boltzmann constant is unity). The quantity \(V_{\epsilon}^{\epsilon+d\epsilon}d\epsilon\) represents the volume of phase space that lies within an energy window \((\epsilon,\epsilon+d\epsilon)\). This is precisely the volume that was evaluated in the context of the microcanonical ensemble.
At low energies, the volume \(V_{\epsilon}^{\epsilon+d\epsilon}\) involves an integral over all phase space coordinates. As with Eq. 6 above, we separate out the coordinate that parameterizes the CGSS. For the asymmetric quadrumer, we write
\[\mathcal{Z}_{canonical}^{asym.}=\int dA\ z_{canonical}^{asym.}(A), \tag{13}\]
where \(A\) is the CGSS coordinate. We have
\[z_{canonical}^{asym.}(A)=\int e^{-\epsilon/T}v(\epsilon,A)d\epsilon. \tag{14}\]
As seen from Eq. 7 above, \(v(\epsilon,A)\sim\epsilon^{2}\). As a result, the integral over \(\epsilon\) can be carried out in a straightforward fashion. We have
\[z_{canonical}^{asym.}(A)\sim f(A)\int_{0}^{\infty}d\epsilon\ e^{-\epsilon/T} \ \epsilon^{2}\sim f(A)\ T^{3}. \tag{15}\]
Figure 4: Relative probability of collinear and perpendicular states in the symmetric quadrumer as a function of energy, \(\epsilon\). Error bars are estimated from the spread in values with various choices of collinear and perpendicular states. The data are fit to a curve \(P_{rel.}\sim\epsilon^{-1/2}\). The configurations shown are examples of collinear and perpendicular states on the CGSS.
Figure 3: Sampling probability in the symmetric quadrumer, obtained from energy-conserving Landau-Lifshitz dynamics. Plots correspond to energies \(\epsilon=10^{-3},10^{-4}\) and \(10^{-5}\) (left to right). Seen from the top, there are three circles as shown in the CGSS depicted in Fig. 1(b). The z-axis represents probability; note that the panels have different z-axis ranges.
Figure 2: Sampling probability over the CGSS coordinate \(A\) (in degrees) in the asymmetric quadrumer with \(\lambda=2\). Data are obtained from energy-conserving Landau-Lifshitz dynamics. In all three panels, they follow the same curve – obtained from Eq. 8 with \(\lambda=2\) with no fitting parameters. Plots correspond to energies \(\epsilon\sim 10^{-6},10^{-5},10^{-4}\) (left to right).
Here, \(f(A)\) is some function that depends on \(A\) but not on \(\epsilon\). The limits in the \(\epsilon\) integral are taken to be \([0,\infty)\). As \(\epsilon\) represents energy cost over the classical ground state (\(\epsilon=E_{system}-E_{CGS}\)), the lowest value it can take is zero. As contributions from high energy states are exponentially suppressed, we may safely extend the integration to \(\epsilon\to\infty\). At the final step, we have extracted the temperature dependence. While the integral can be evaluated explicitly, the \(T\)-dependence can be extracted immediately on dimensional grounds.
We now assert that given a point on the CGSS labelled by \(A\), the probability of the system exploring its neighbourhood is proportional to \(z_{canonical}^{asym.}(A)\). From the expression above, we see that the temperature-dependence in \(z_{canonical}^{asym.}(A)\) comes from an overall factor of \(T^{3}\). As a result, the relative probability between any two points on the CGSS is _temperature-independent_.
We now consider Eq. 12 for the symmetric quadrumer. We resolve this integral into neighbourhoods around each point of the CGSS. On each of the three circles of the CGSS, we obtain an equation of the same form as Eq. 13 above. However, as argued in Sec. III.2 above, the \(\epsilon\)-dependence of \(v(\epsilon,A)\) depends on \(A\). At collinear points (i.e., for \(A=0\) or \(\pi\)), we have \(v(\epsilon,A)\sim\epsilon^{3/2}\). At all other points, \(v(\epsilon,A)\sim\epsilon^{2}\). Therefore, we carry out the \(\epsilon\)-integral separately for the two cases. We have
\[z_{canonical}^{sym.}(A=0,\pi)\sim\int e^{-\epsilon/T}\epsilon^{3/2}d\epsilon \sim T^{5/2}. \tag{16}\]
In contrast,
\[z_{canonical}^{sym.}(A\neq 0,\pi)\sim\int e^{-\epsilon/T}\epsilon^{2}d\epsilon \sim T^{3}. \tag{17}\]
We now identify \(z_{canonical}^{sym.}(A=0,\pi)\) as the probability of a collinear state. The probability of accessing a non-collinear classical ground state is \(z_{canonical}^{sym.}(A\neq 0,\pi)\). These two probabilities scale differently with temperature. In particular, we have
\[\frac{P(T,\text{collinear})}{P(T,\text{non-collinear})}\sim\frac{T^{5/2}}{T^{3} }\sim T^{-1/2}. \tag{18}\]
That is, the relative probability between collinear and non-collinear states grows as temperature is lowered. In the \(T\to 0\) limit, collinear states dominate - reflecting strong state selection.
We conclude that state-selection-by-thermal-fluctuations is qualitatively different between the two models. In the asymmetric quadrumer, the selection is temperature-independent. In the symmetric quadrumer, selection grows as temperature decreases. In fact, selection is perfect in the \(T\to 0\) limit.
## VI Monte Carlo simulations
We now verify the arguments of the previous section by explicitly simulating thermal fluctuations. For each cluster, we carry out Monte Carlo simulations [20] at various temperatures. Single-spin Metropolis moves lead to very low acceptance as they invariably take the system away from the ground state space. Therefore, we employ all-spin moves where each spin is simultaneously deflected from its orientation by an angle \(\Delta(T)\). The direction of the deflection is chosen at random. The angle \(\Delta(T)\) is varied between \(0.2^{\circ}\) and \(1^{\circ}\)to ensure an acceptance rate of \(\sim 20\%\). For each temperature, we carry out \(500-3000\) runs, each consisting of \(10^{6}-10^{7}\) moves.
From the Monte Carlo simulations, we extract the sampling probability over the CGSS. We divide the CGSS circle(s) into 360 bins, each of width \(1^{\circ}\). At each Monte Carlo time, we identify the classical ground state that is closest to the current configuration, see Appendix B. We assign this to one of the 360 intervals. As the simulation proceeds, we keep track of the amount of Monte Carlo time spent within each bin. The fraction of time spent in each bin is interpreted as the probability of sampling that neighbourhood of the CGSS.
Fig. 6 shows the result for the asymmetric quadrumer with \(\lambda=0.2\). Probability is plotted against \(A\) for three different values of \(\beta=1/T\), the inverse temperature. Although \(\beta\) varies over several orders of magnitude, the probabilities remain roughly the same. In all three plots, the data follow the same curve, given by
\[P(A)\sim\frac{1}{\sqrt{(1+\lambda)^{2}-\cos^{2}A}}, \tag{19}\]
obtained from Eq. 8, where this is the only term that depends on \(A\). Note that this curve depends on the value of the anisotropy parameter, \(\lambda\), but not on \(T\). Fig. 5 shows similar plots for three values of \(\lambda\), but with \(\beta\) held fixed. In all three, the probability data follow Eq. 19 with the value of \(\lambda\).
These plots serve as numerical verification of the arguments in Sec. V above. The asymmetric quadrumer was argued to show temperature-independent state selection. The degree of selection varies with \(\lambda\), but not with temperature. These arguments were based on Eq. 5, with six quadratic terms in the energy. The equipartition theorem asserts that each quadratic term contributes a factor of \(1/2\) to the specific heat. Indeed, Monte Carlo simulations yield specific heats close to 3 at low temperatures.
Monte Carlo results for the symmetric quadrumer are plotted in Fig. 7. Probability is plotted over the three circles of the CGSS, for three different values of \(\beta\). For all three temperatures, we see that the collinear states (intersection points) have the highest probability. The lowest probability occurs for perpendicular states where the spin vectors point toward the corners of a square. As temperature is lowered, the probability profiles become sharper. The selection of intersection points (collinear states) grows as temperature decreases.
This observation is quantified in Fig. 8. The X axis here represents temperature, while the Y axis represents the relative probability of collinear and perpendicular states, \(P(\text{collinear})/P(\text{perpendicular})\). Here, \(P(\text{collinear})\) is the sum of probabilities of all collinear
states (representing three points of intersection on the CGSS space). As there are six perpendicular states in the CGSS, \(P\)(perpendicular) is summed over them. This ratio grows dramatically as \(T\) decreases. As shown in the figure, the temperature-dependence fits well to \(\sim T^{-1/2}\). This verifies the arguments of Sec. V regarding state-selection in the symmetric quadrumer.
## VII Discussion
We have demonstrated that thermal-order-by-disorder is qualitatively different between two model systems. The difference is manifested in the energy-dependence (microcanonical ensemble) or temperature-dependence (canonical ensemble) of state selection. This difference originates from the topological character of the CGSS, distinguishing systems with a smooth manifold from those that self-intersect. Several materials and models are known in either class. Among materials, MnSc\({}_{2}\)S\({}_{4}\)[21] is known to have a smooth CGSS while ErSn\({}_{2}\)O\({}_{7}\)[22] is proximate to a parameter regime with a self-intersecting CGSS. Among model systems, smooth CGSS' are found on the honeycomb [8], diamond [21] and BCC [11] lattices as well as in the square Heisenberg-compass model [5]. Self-intersections are found on the hyperhoneycomb [23] and HCP [14] lattices, as well as in the 1D Kitaev antiferromagnet [4].
In quantum spin clusters with self-intersecting CGSS', the geometry of intersections leads to bound-state-formation [24; 17]. This constitutes a distinct mechanism for state selection, named order-by-singularity [12]. Our results for the symmetric quadrumer can be viewed as a classical analogue of this phenomenon. At very low energies, the magnet is confined to the vicinity of an intersection point. In quantum magnets, this is due to bound-state-formation. In a classical setting, this is driven by the singular geometry of phase space. The quantum bound-state-problem is highly susceptible to the co-dimension of intersections [17]. An interesting future direction is to explore the role of dimensionality in the classical problem.
The analysis presented here is for small clusters with four spins. A naive comparison with results from literature suggests that our results do not hold for macroscopic magnets. For example, in systems with smooth CGSS', we argued for temperature-independent state selection. This goes against neutron scattering results on MnSc\({}_{2}\)S\({}_{4}\)[25] and Monte Carlo simulations on the square Heisenberg-compass model [5]. In both, the structure factor sharpens with decreasing temperature. However, these systems show'strong' state selection - some regions of the CGSS are sampled heavily while the others have essentially zero weight. In contrast, our clusters show 'weak' selection with the entire CGSS sampled in dynamics or in Monte Carlo simulations. Our arguments
Figure 5: Probability vs \(A\) (in degrees) obtained from Monte Carlo simulations of the asymmetric quadrumer. The plots are for the same anisotropy, \(\lambda=0.2\). They correspond to three values of inverse-temperature, \(\beta=5000\), 10000 and 100000 (left to right). In all three, the data follow the same curve obtained from Eq. 19. Deviations from the curve are comparable to error bars estimated from Monte Carlo runs.
Figure 6: Probability vs \(A\) for the asymmetric quadrumer obtained from Monte Carlo simulations. The plots are for the same temperature, \(\beta=10^{4}\). They correspond to four values of the anisotropy parameter: \(\lambda=0.1\), 0.2, 0.5 and 1 (in order of decreasing spread in the vertical direction). For each \(\lambda\), the data follow a curve obtained from Eq. 19 with the corresponding value of \(\lambda\).
only hold in the regime of weak selection, as they rely on relative probabilities. Future studies could examine the role of system size in state selection.
Our results could be directly measurable in systems with weak state-selection. Recently, there has been a surge of interest in'spiral liquids' - disordered materials that fluctuate over a large space of classical states [14; 10; 26; 11]. For example, neutron scattering may show a peak-like feature that is spread over a contour in the Brillouin zone. In such materials, the relative probability could be extracted by comparing neutron intensities. In materials with intersecting spiral surfaces, we may find qualitatively different temperature-variation in relative weights as discussed here.
###### Acknowledgements.
We thank Subhankar Khatua and Jeffrey Rau for insightful discussions. RG is supported by a Discovery Grant 2022-05240 from the Natural Sciences and Engineering Research Council of Canada.
## Appendix A Low energy phase space
With four spins, phase space volumes are defined as
\[\int\prod_{j=1}^{4}\Big{\{}dS_{j}^{x}dS_{j}^{y}dS_{j}^{z}\ \delta(\vec{S}_{j} \cdot\vec{S}_{j}-1)\Big{\}}g(\vec{S}_{1},\ldots,\vec{S}_{4}), \tag{10}\]
where \(g(\vec{S}_{1},\ldots,\vec{S}_{4})\) is the sampling probability of a given neighbourhood. The \(\delta\)-functions enforce unit spin length. With twelve integration variables and four \(\delta\)-function constraints, the phase space volume is effectively eight dimensional. In the asymmetric quadrumer, low-energy configurations are described by Eq. 4 above. We have two degrees of freedom (\(\phi_{1}\) and \(\phi_{2}\)) that select a particular ground state and six (\(\ell_{1,2}\), \(m_{1,2}\) and \(\mu_{1,2}\)) that encode fluctuations. We change the integration variables to these new coordinates so that the phase space volume becomes
\[\int\mathcal{J}(\phi_{1},\phi_{2},\ell_{1},\ell_{2},m_{1},m_{2}, \mu_{1},\mu_{2})\times\] \[d\phi_{1}\ d\phi_{2}\ d\ell_{1}\ d\ell_{2}\ dm_{1}\ dm_{2}\ d\mu_ {1}\ d\mu_{2}\times\] \[g(\phi_{1},\phi_{2},\ell_{1},\ell_{2},m_{1},m_{2},\mu_{1},\mu_{2}), \tag{11}\]
where the Jacobian \(\mathcal{J}(\phi_{1},\phi_{2},\ell_{1},\ell_{2},m_{1},m_{2},\mu_{1},\mu_{2})\) evaluates to 32, up to corrections that are quadratic in fluctuation variables.
We now consider the phase space of the microcanonical ensemble. The energy of the system is given by Eq. 5, with six quadratic terms. We rescale variables to set the coefficient of each of the six quadratic terms to unity. The accessible phase space then becomes a spherical shell in six dimensions with radius \(\epsilon\) and thickness \(d\epsilon\). This yields the phase-space-volume of Eq. 7.
Figure 8: Relative probability of collinear and perpendicular states in the symmetric quadrumer, obtained from Monte Carlo simulations. Error bars are estimated from the spread in values resulting from various choices of collinear and perpendicular states. The relative probability is plotted against \(\log(T)\). The data is fit to \(P_{rel.}\sim T^{-1/2}\).
Figure 7: Probability over the CGSS for the symmetric quadrumer, obtained from Monte Carlo simulations. The plots correspond to varying temperatures, \(\beta=10^{2},10^{3},10^{4}\) (from left to right). Seen from the top, we have three circles as shown in Fig. 1(b). The z-axis represents probability; note that the panels have different z-axis ranges.
## Appendix B Finding the nearest point on the CGSS
To find sampling probabilities from dynamics or Monte Carlo simulations, we assign a given configuration to a certain neighbourhood around the CGSS. Consider the asymmetric quadrumer where the CGSS is a circle. Given a low-energy configuration, we express it in the form of Eq. 4. For example, we identify the in-plane component of \((\vec{S}_{1}-\vec{S}_{2})/2\) as \(\hat{n}(\phi_{1})\) and that of \((\vec{S}_{3}-\vec{S}_{4})/2\) as \(\hat{n}(\phi_{2})\). Having thus extracted \(\phi_{1}\) and \(\phi_{2}\), we identify the CGSS coordinate as \(A=\phi_{2}-\phi_{1}\) in accordance with Fig. 1.
In the symmetric quadrumer, we have an additional layer of complexity. As the CGSS has three circles, we must first assign a given configuration to one of the circles. To do so, we consider three vector quantities: \(\vec{S}_{1}-\vec{S}_{2}\), \(\vec{S}_{1}-\vec{S}_{3}\) and \(\vec{S}_{1}-\vec{S}_{4}\). Based on which of these three has the largest magnitude, we identify the nearest circle. We then proceed in the same way as with the asymmetric quadrumer to find the angle coordinate (\(A\), \(B\) or \(C\)). If two of these vectors are comparably large (with magnitude close to 2), the configuration is close to an intersection point.
|
2306.05822 | Trimming of Finite Subsets of the Manhattan Plane | V. Turaev defined recently an operation of "Trimming" for pseudo-metric
spaces and analysed the tight span of (pseudo-)metric spaces via this process.
In this work we investigate the trimming of finite subspaces of the Manhattan
plane. We show that this operation amounts for them to taking the metric center
set and we give an algorithm to construct the tight spans via trimming. | Gökçe Ãakmak, Ali Deniz, Åahin Koçak | 2023-06-09T11:44:55Z | http://arxiv.org/abs/2306.05822v1 | # Trimming of Finite Subsets of the Manhattan Plane
###### Abstract
V. Turaev defined recently an operation of "Trimming" for pseudo-metric spaces and analysed the tight span of (pseudo-)metric spaces via this process. In this work we investigate the trimming of finite subspaces of the Manhattan plane. We show that this operation amounts for them to taking the metric center set and we give an algorithm to construct the tight spans via trimming.
**Keywords:** Trimming, Manhattan plane, Metric centers, Tight span
## 1 Introduction
Our aim in this work is to study the finite subsets of the Manhattan plane from the viewpoint of trimming theory introduced recently by V. Turaev [5, 6]. We want first to explain briefly the notions of trimming, trimming sequence and trimming cylinder defined by Turaev.
Let \((X,d)\) be a non-empty finite metric space. (We use the notion of metric space in the genuine sense that \(d(x,y)=0\) if and only if \(x=y\). We sometimes simply write \(X\) for \((X,d)\)). The process of trimming introduced by V. Turaev ([6]) can be explained as follows. First assume \(|X|\geq 3\). We define an equivalence relation \(\Re\) on \(X\):
\[x\Re y\ \text{for}\ x,y\in X:\Leftrightarrow\ x=y\quad\text{or}\quad d(x,y)= \underline{d}(x)+\underline{d}(y)\]
whereby
\[\underline{d}(x)=\min_{\begin{subarray}{c}y,z\in X\setminus\{x\}\\ y\neq z\end{subarray}}\frac{d(x,y)+d(x,z)-d(y,z)}{2}.\]
We then define the following metric \(d^{1}\) on the set \(X/\Re\) of equivalence classes by \(d^{1}(\overline{x},\overline{y})=d(x,y)-\underline{d}(x)-\underline{d}(y)\) for \(\overline{x},\overline{y}\in X/\Re\). The metric space \((X/\Re,d^{1})\) is said to be obtained by trimming of \((X,d)\) and is denoted by \((t(X),d^{1})\) (or simply by \(t(X)\)). The map \(p:(X,d)\to(t(X),d^{1}),p(x)=\overline{x}\), is called the trimming projection. For \(|X|=1\) or \(|X|=2\) we define \(t(X)\) to be a one-point space, the trimming projection being the trivial one.
**Some Remarks:**
1. For convenience, one can define \(\underline{d}(x)=\underline{d}(y)=\frac{1}{2}d(x,y)\) for \(X=\{x,y\}\) and \(\underline{d}(x)=0\) for \(X=\{x\}\).
2. The quantity \(\triangle_{x,yz}=\frac{1}{2}(d(x,y)+d(x,z)-d(y,z))\) for \(x,y,z\in X\) is called the Gromov product at \(x\) with respect to \(y\) and \(z\). For a discussion on Gromov products see [1].
3. The process of trimming can be defined for any pseudo-metric space ([6]).
4. \(\underline{d}(x)\) is called the pendant length of \(x\in X\). If \(\underline{d}(x)=0\) for all \(x\in X\), then \((X,d)\) is called pendant-free or trim. For distinct \(x,y,z\in X\), \(x\) is called "between" \(y\) and \(z\) in the Menger sense, if \(d(y,z)=d(y,x)+d(x,z)\). The pendant length \(\underline{d}(x)\) vanishes if and only if \(x\) is between two other points \(y\) and \(z\) (for \(|X|\geq 3\)).
5. For a 3-point space \(X=\{x,y,z\}\), \(t(X)\) is a singleton: Since \(\underline{d}(x)=\triangle_{x,yz},\underline{d}(y)=\triangle_{y,xz}\) and \(\underline{d}(z)=\triangle_{z,xy}\), \(x\Re y\) because \(d(x,y)=\underline{d}(x)+\underline{d}(y)\); respectively, \(x\Re z\) and \(y\Re z\).
One should not get the impression that the process of trimming produces a trim (pendant-free) space, as the following example shows.
**Example 1**: _Let \(X=\{1,2,3,4,5\}\subset\mathbb{R}\) with the metric induced from the standard metric of \(\mathbb{R}\). Then \(t(X)=\{\overline{1},\overline{3},\overline{4}\}\) and \(\underline{d}^{1}(\overline{1})=1\), so that \((t(X),d^{1})\) is not trim (see Figure 1)._
The goal of the process of trimming is to obtain a trim (pendant-free) space by applying this operation succesively. The consecutive trimmings of \((X,d)\) create metric spaces \((t^{i}(X),d^{i})\) (\(i>0\)) and non-expansive surjections
\(t^{i+1}(X),p_{i}(x_{i})=x_{i+1}\), so that we get a sequence
\[X=t^{0}(X)\xrightarrow{p_{0}=p}t(X)\xrightarrow{p_{1}}t^{2}(X)\to\cdots\to t^{i} (X)\xrightarrow{p_{i}}t^{i+1}(X)\to\cdots\]
which is called the trimming sequence of \(X\). Starting from a point \(x=x_{0}\in X\), we get a sequence \((x_{i})_{i\geq 0}\), \(x_{i}\in t^{i}(X)\) with \(x_{i+1}=p_{i}(x_{i})\), which is called the trimming sequence of \(x\). We write sometimes \(X_{i}\) for \(t^{i}(X)\). For \(x_{i},y_{i}\in X_{i},\ (i>0)\) we have \(d^{i}(x_{i},y_{i})=d^{i-1}(x_{i-1},y_{i-1})-\underline{d}^{i-1}(x_{i-1})- \underline{d}^{i-1}(y_{i-1})\).
If \(X\) is trim, then \(t(X)=X\) (or naturally isometric if one so wishes); thus, starting from an arbitrary \(X\), if one arrives at a trim \(t^{i}(X)\), then the trimming sequence stabilizes from there on.
If the trimming projection \(p:X\to t(X)\) is bijective then \(t(X)\) is trim (though \(X\) itself might not be). As an example see Figure 2, where the distances are to be read as the lengths of shortest paths on the given auxiliary graphs.
By this reason, successive trimming of a finite metric space will eventually produce a trim space since the sequence \(|t^{i}(X)|_{i\geq 0}\) is non-increasing and thus a bijective \(p_{i}\) will emerge.
Figure 1: A representation of \((X,d)\) and \((t(X),d^{1})\).
Figure 2: A bijective trimming projection \(p:X\to t(X)\) with non-trim \(X\) and trim \(t(X)\).
Now we want to recall another notion introduced by Turaev ([6]). The trimming cylinder \(C=C(X)\) of a finite metric space \(X\) is a pseudo-metric graph that can be defined as follows: The vertex set is \(\bigsqcup_{i\geq 0}t^{i}(X)\) and the edges are put between the vertices \(x_{i}\in t^{i}(X)\) and \(p_{i}(x_{i})\in t^{i+1}(X)\) with \(\underline{d}^{i}(x_{i})\) assigned as weight (see Figure 3). The pseudo-metric \(\rho\) on \(C(X)\) can be defined as follows (for more detail cf. [6]):
\[\rho:C(X)\times C(X)\to\mathbb{R}\] \[\rho(u,v)=\begin{cases}\sum_{n=i}^{j}\underline{d}^{n}(x_{n})- \lambda\underline{d}^{i}(x_{i})-(1-\mu)\underline{d}^{j}(x_{j}),\quad\text{if v is below u}\\ \\ d_{1}(x,y)-\sum_{n=0}^{i-1}\underline{d}^{n}(x_{n})-\sum_{n=0}^{j-1} \underline{d}^{n}(y_{n})-\lambda\underline{d}^{i}(x_{i})-\mu\underline{d}^{ j}(y_{j}),\text{ otherwise}\end{cases}\]
whereby \(u\) is on a path determined by a trajectory \((x_{i}\in t^{i}(X))_{i\geq 0}\) (with \(x_{0}=x\in X\)) and given by \(u=(1-\lambda)x_{i}+\lambda x_{i+1}\) (thus \(u\) lying on the edge connecting \(x_{i}\) with \(x_{i+1}\) with weight \(\underline{d}^{i}(x_{i})\), \(0\leq\lambda\leq 1\)) and similarly \(v=(1-\mu)y_{j}+\mu y_{j+1}\). "\(v\) is below \(u\)" means that \(v\) is on a path starting from an \(x\in X\) and occuring after \(u\) (see Figure 3).
We denote the subgraph restricted to \(\bigsqcup_{i=0}^{k}t^{i}(X)\) (with \(k>0\)) by \(C_{k}(X)\). The metric quotient of the trimming cylinder is denoted as \(\overline{C(X)}\).
Let us now define the notion of tight span for a finite metric space. Consider the set of functions \(f:X\to\mathbb{R}^{\geq 0}\) satisfying the following two conditions:
1. \(f(x)+f(y)\geq d(x,y)\) for all \(x,y\in X\),
2. For each \(x\in X\) there exists \(y\in X\) such that \(f(x)+f(y)=d(x,y)\).
Figure 3: The trimming cylinder of \((X,d)\) (In the figure, \(v_{1}\) is below \(u\) and \(v_{2}\) is not below \(u\)).
The tight span \(T(X)\) of \(X\) is then this set of functions with the maximum metric
\[d_{\infty}(f,g)=\max_{x\in X}|f(x)-g(x)|.\]
If \(X\) is a subset of the Manhattan plane, then a closed, geodesically convex subspace \(Y\supset X\) of the Manhattan plane, which is minimal with respect to these properties is isometric to the tight span \(T(X)\) of \(X\) ([2]).
In [6], Turaev defined a trimming filtration
\[T(X)\supset T(X_{1})\supset T(X_{2})\supset\cdots\supset T(X_{i})\supset T(X _{i+1})\supset\cdots\]
(where \(X_{i}=t^{i}(X)\)) and proved the main theorem that the tight span \(T(X)\) of any metric space \(X=(X,d)\) can be expressed as
\[T(X)=\uptau\cup\overline{C(X)}\quad\mbox{with}\quad\uptau\cap\overline{C(X)}= \overline{C(X)}_{*},\]
where \(\uptau=\cap_{i\geq 1}T(X_{i})\) and \(\overline{C(X)}_{*}\) is a certain subset of \(\overline{C(X)}\) called the roots. (For details see [6]; we use a special case of this theorem where \(X\) is a finite metric space.)
In Section 2, we characterize trim subspaces of the Manhattan plane \((\mathbb{R}^{2}_{1})\), define the metric centers of a triple of points and of a finite subspace \(X\) of \(\mathbb{R}^{2}_{1}\) and identify the abstract space \(t(X)\) with the metric center set \(m(X)\) which lives again in \(\mathbb{R}^{2}_{1}\).
In Section 3, we give a simple method to obtain the metric center set \(m(X)\) and in Section 4 we explain how the (metric quotient of the) trimming cylinder can be embedded into \(\mathbb{R}^{2}_{1}\).
In Section 5, we describe a conceptually lucid way of constructing the tight span of a finite subset of \(\mathbb{R}^{2}_{1}\) as an application of the related theorem of Turaev and we give an algorithm to implement it. (For another algorithm to construct the tight span of a finite subset of \(\mathbb{R}^{2}_{1}\) see [3].)
## 2 Trimming of Finite Subspaces of the Manhattan Plane
We remind that the Manhattan plane is the metric space \((\mathbb{R}^{2},d_{1})\) with \(d_{1}((x_{1},y_{1}),(x_{2},y_{2}))=|x_{1}-x_{2}|+|y_{1}-y_{2}|\) for \((x_{1},y_{1}),(x_{2},y_{2})\in\mathbb{R}^{2}\). We denote this space by \(\mathbb{R}^{2}_{1}\).
We first note the following simple and useful property for the relation of "betweenness" in the Manhattan plane.
**Lemma 1**: _A point \((x_{0},y_{0})\in\mathbb{R}^{2}_{1}\) lies between \((x_{1},y_{1})\) and \((x_{2},y_{2})\in\mathbb{R}^{2}_{1}\) in the sense of Menger if and only if \(x_{0}\) lies between \(x_{1}\) and \(x_{2}\) (in \(\mathbb{R}\)) and \(y_{0}\) lies between \(y_{1}\) and \(y_{2}\) (\(x_{1}\leq x_{0}\leq x_{2}\) or \(x_{2}\leq x_{0}\leq x_{1}\), respectively for \(y_{0}\))._
Recall that \(\triangle_{(x_{0},y_{0}),(x_{1},y_{1})(x_{2},y_{2})}=0\) if and only if \((x_{0},y_{0})\) lies between \((x_{1},y_{1})\) and \((x_{2},y_{2})\).
Now, we give a criterion for a finite subset of \(\mathbb{R}_{1}^{2}\) to be trim:
**Proposition 1**: _Let \((X,d_{1})\subset\mathbb{R}_{1}^{2}\) be a finite subspace with \(|X|\geq 4\). Let \(R_{X}\) denote the minimal rectangle in \(\mathbb{R}_{1}^{2}\) containing \(X\) with edges parallel to the axes. Then, \(X\) is trim if and only if each edge of \(R_{X}\) contains at least two points of \(X\)._
**Proof.** First assume that \(X\) is trim. Without loss of generality consider the right edge of \(R_{X}\). There must be at least one point of \(X\) on this edge, since otherwise the rectangle would not be minimal. Assume, there is only one point of \(X\) on this edge, say \((x_{0},y_{0})\). If \((x_{1},y_{1})\) and \((x_{2},y_{2})\) are two other points of \(X\), then necessarily \(x_{1}<x_{0}\) and \(x_{2}<x_{0}\) so that by Lemma 1, \(\triangle_{(x_{0},y_{0}),(x_{1},y_{1})(x_{2},y_{2})}>0\) and then \(\underline{d}_{1}((x_{0},y_{0}))>0\) since \(X\) is finite. This contradicts the assumption that \(X\) is trim. So, there must be at least two points of \(X\) on the considered edge of \(R_{X}\).
Conversely, let us assume that each edge of \(R_{X}\) contains at least two points of \(X\). Now, a point of \(X\) can be on the boundary or inside of \(R_{X}\). First, consider a point \((x_{0},y_{0})\in X\) lying on an edge of \(R_{X}\), say without loss of generality, on the right edge. Then choose another point \((x_{0},y_{1})\in X\) on the same edge and a third point \((x_{2},y_{2})\in X\) on the top or bottom edge of \(R_{X}\) such that \(y_{0}\) lies between \(y_{1}\) and \(y_{2}\) (see Figure 4). Then by the Lemma 1, \(\triangle_{(x_{0},y_{0}),(x_{0},y_{1})(x_{2},y_{2})}=0\) and then \(\underline{d}_{1}((x_{0},y_{0}))=0\).
Now, let us assume that \((x_{0},y_{0})\in X\) lies in the interior of \(R_{X}\). Divide the rectangle \(R_{X}\) with four subrectangles by the vertical and horizontal lines passing through \((x_{0},y_{0})\) (see Figure 5). There must be "boundary" points of \(X\) (i.e. points of \(X\) lying on the boundary of \(R_{X}\)) either on both of the subrectangles I and III, or on both of the subrectangles II and IV. Because otherwise there would be neighbouring subrectangles (such as I and II, or II and III, or III and IV, or IV and I) containing no boundary points, which would contradict the minimality of \(R_{X}\). Thus by Lemma 1, we have \(\underline{d}_{1}((x_{0},y_{0}))=0\) and \(X\) is trim.
Figure 4: An example of the positions of \((x_{0},y_{0})\), \((x_{0},y_{1})\) and \((x_{2},y_{2})\).
**Remark 1**: _The last part of the above proof shows that points \((x_{0},y_{0})\in X\) lying in the interior of \(R_{X}\) are always in-between points (i.e. \(\underline{d}_{1}((x_{0},y_{0}))=0\)), since one point on each edge of \(R_{X}\) is sufficient for this argument and this is obviously fulfilled by minimality of \(R_{X}\)._
Our next goal is to give an interpretation of the process of trimming of finite subsets of \(\mathbb{R}_{1}^{2}\) in terms of metric centers of this subset. We first give some auxiliary definitions.
For a two point subset \(\{a,b\}\subset\mathbb{R}_{1}^{2}\) let us denote \(R_{\{a,b\}}\) by \(R_{ab}(=R_{ba})\), which is the minimal rectangle containing \(a\) and \(b\), with edges parallel to the axes. In view of Lemma 1, a point \(c\in\mathbb{R}_{1}^{2}\) is between \(a\) and \(b\) if and only if \(c\in R_{ab}\).
For three points \(a,b,c\in\mathbb{R}_{1}^{2}\), the intersection of three rectangles \(R_{ab}\), \(R_{ac}\) and \(R_{bc}\) is a singleton (see Figure 6) and it is called the metric center of \(\{a,b,c\}\), or in other words:
**Definition 1**: _Let \(a,b,c\in\mathbb{R}_{1}^{2}\). The unique point \(m\in\mathbb{R}_{1}^{2}\) satisfying_
\[d_{1}(a,b) =d_{1}(a,m)+d_{1}(m,b),\] \[d_{1}(a,c) =d_{1}(a,m)+d_{1}(m,c),\] \[d_{1}(b,c) =d_{1}(b,m)+d_{1}(m,c)\]
_is called the metric center of \(\{a,b,c\}\). (We denote it sometimes by \(m=m(a,b,c)\).)_
As a consequence of these equations one gets \(d_{1}(a,m)=\triangle_{a,bc}\), \(d_{1}(b,m)=\triangle_{b,ac}\) and \(d_{1}(c,m)=\triangle_{c,ab}\).
**Definition 2**: _Let \(X\subset\mathbb{R}_{1}^{2}\) be a finite subspace and \(a\in X\). Assume that \(\triangle_{a,bc}\) (with \(b,c\in X\)) is the minimal Gromov product at \(a\). Then, the metric center of \(\{a,b,c\}\) is called the metric center associated with \(a\) (or simply, of \(a\)) and is denoted by \(m_{a}\) (or sometimes by \(m(a)\)). It holds then \(\underline{d}_{1}(a)=\min_{\begin{subarray}{c}u,v\in X\backslash\{a\}\\ u\neq v\end{subarray}}\triangle_{a,uv}=\triangle_{a,bc}=d_{1}(a,m_{a})\)._
_The set of all metric centers of the elements of \(X\) is called the metric center of \(X\) and it is denoted by \(m(X)\)._
Figure 5: The four subrectangles of \(R_{X}\).
_For reasons of convenience, we define the metric center \(m(X)\) of a two-point space \(X=\{a,b\}\subset(\mathbb{R}^{2},d_{1})\) as \(m(X)=\{\frac{a+b}{2}\}\) and the metric center \(m(X)\) of a singleton \(X=\{a\}\subset(\mathbb{R}^{2},d_{1})\) as \(m(X)=X\)._
**Remark 2**: _If the minimal Gromov product at \(a\) is realized simultaneously by another triple \(\{a,d,e\}\); then it can be easily seen that \(m(a,b,c)=m(a,d,e)\) so that \(m_{a}\) is well-defined. The following useful properties hold:_
\[d_{1}(a,x)=d_{1}(a,m_{a})+d_{1}(m_{a},x)\mbox{ for any }x\in X\setminus\{a\}\]
_and_
\[d_{1}(a,m_{x})=d_{1}(a,m_{a})+d_{1}(m_{a},m_{x})\mbox{ for any }x\in X.\]
_(These properties will be clear by the simple geometric construction of the metric center \(m(X)\) we give in the next section.)_
Now consider \(x,y\in X\subset\mathbb{R}_{1}^{2}\) and assume that \(\overline{x}=\overline{y}\in t(X)\). By definition,
\[d_{1}(x,y)=\underline{d}_{1}(x)+\underline{d}_{1}(y)\]
and thus
\[d_{1}(x,y)=d_{1}(x,m_{x})+d_{1}(y,m_{y}).\]
On the other hand,
\[d_{1}(x,y) =d_{1}(x,m_{x})+d_{1}(m_{x},y)\] \[=d_{1}(x,m_{x})+d_{1}(y,m_{y})+d_{1}(m_{y},m_{x})\]
by the above properties, so that we get \(d_{1}(m_{y},m_{x})=0\) and thus \(m_{x}=m_{y}\). In other words, if two elements are identified during the trimming, then they have the same metric centers. This interesting property enables us to define a map \(t(X)\to m(X)\), which turns out to be an isometry.
Figure 6: Intersection of the rectangles \(R_{ab}\), \(R_{ac}\) and \(R_{bc}\).
**Theorem 1**: _Let \(X\subset(\mathbb{R}^{2},d_{1})\) be a finite subspace and let \(m_{x}\) be the metric center of \(x\in X\). Then the map_
\[f:(t(X),d^{1}) \to(m(X),d_{1})\] \[\overline{x} \mapsto m_{x}\]
_is an isometry._
**Proof.**\(f\) is obviously surjective. By the definition of \(d^{1}\) and the above properties of centers we can write
\[d^{1}(\overline{x},\overline{y}) =d_{1}(x,y)-\underline{d}_{1}(x)-\underline{d}_{1}(y)\] \[=d_{1}(x,y)-d_{1}(x,m_{x})-d_{1}(y,m_{y})\] \[=d_{1}(x,m_{x})+d_{1}(m_{x},y)-d_{1}(x,m_{x})-d_{1}(y,m_{y})\] \[=d_{1}(m_{x},y)-d_{1}(y,m_{y})\] \[=d_{1}(m_{x},m_{y})+d_{1}(m_{y},y)-d_{1}(y,m_{y})\] \[=d_{1}(m_{x},m_{y}),\]
so that \(f\) is an isometry.
This relationship can also be expressed as a commutative diagram
where \(p\) is the trimming projection and \(g\) is the operation of taking the centers (or, the "center projection"). This means that the abstract trimming story of a finite subset of the Manhattan plane can be staged in this plane itself.
Instead of the trimming sequence
\[(X,d)\xrightarrow{p}(t(X),d^{1})\xrightarrow{p_{1}}(t^{2}(X),d^{2}) \xrightarrow{p_{2}}\cdots\]
we can work with the embedded sequence
\[(X,d)\xrightarrow{g}(m(X),d_{1})\xrightarrow{g_{1}}(m(m(X)),d_{1}) \xrightarrow{g_{2}}\cdots,\]
where all terms live in \(\mathbb{R}^{2}_{1}\). We will denote the \(i^{th}\) iterate \(m(m\cdots(m(X)))\) by \(m^{i}(X)\), the center projection \((m^{i}(X),d_{1})\to(m^{i+1}(X),d_{1})\) by \(g_{i}\) and call the sequence
\[(X,d)\xrightarrow{g}(m(X),d_{1})\xrightarrow{g_{1}}(m^{2}(X),d_{1})\to \cdots\to(m^{i}(X),d_{1})\xrightarrow{g_{i}}(m^{i+1}(X),d_{1})\to\cdots\]
the metric center sequence of \((X,d)\subset\mathbb{R}^{2}_{1}\).
How to Find the Metric Center of a Finite Subspace of the Manhattan Plane?
We now want to give a simple geometric construction to obtain the metric center \(m(X)\subset\mathbb{R}_{1}^{2}\) of a finite subspace \((X,d_{1})\subset\mathbb{R}_{1}^{2}\). Let \(X\) have \(n\) points and let us order the abcissas of these points as
\[x_{1}\leq x_{2}\leq\cdots\leq x_{n-1}\leq x_{n},\]
and the ordinates as
\[y_{1}\leq y_{2}\leq\cdots\leq y_{n-1}\leq y_{n}.\]
We then define a secondary rectangle \(S_{X}=[x_{2},x_{n-1}]\times[y_{2},y_{n-1}]\) (see Figure 7). Recall that \(R_{X}\) was the minimal, axes-parallel rectangle containing \(X\), which can be expressed with these notations as \(R_{X}=[x_{1},x_{n}]\times[y_{1},y_{n}]\).
The following proposition gives a nice device to determine the metric center \(m(X):\)
**Proposition 2**: _The metric center \(m_{a}\) of a point \(a\in X\) is the point of \(S_{X}\) nearest to \(a\) (with respect to the \(d_{1}-\)metric, or what the same is, with respect to the Euclidean metric; we call this point the projection of \(a\) on \(S_{X}\)). Especially, for a point \(a\in X\) belonging to \(S_{X}\), we get \(m_{a}=a\). The metric center \(m(X)\) of \(X\) thus consists of the points of \(X\) contained in \(S_{X}\) and the projections of the points of \(X\) outside \(S_{X}\) onto \(S_{X}\)._
Figure 7: An example of the minimal rectangle \(R_{X}\) and the secondary rectangle \(S_{X}\) for a 9-point subspace \(X\subset\mathbb{R}_{1}^{2}\) with \(X=\{(-5,-3),(-5,1),(-3,4),\)\((-2,-1),(1,2),(2,-2),(4,-3),(5,-5),(7,-4)\}\).
**Proof.** In Figure 8 all possible configurations of \(R_{X}\) (the minimal axes-parallel rectangle containing \(X\)), the secondary rectangle \(S_{X}\) and the points of \(X\) lying outside \(S_{X}\) are depicted. In the first row the possible relative positions of \(R_{X}\) and \(S_{X}\) (up to "symmetry") are shown and in the corresponding columns below the possible placements of points of \(X\) lying outside \(S_{X}\) (again up to "symmetry") are shown.
We first remark that by the minimality of \(R_{X}\) there exists at least one point on every edge of \(R_{X}\) belonging to \(X\) and all points of \(X\) lying in the interior of \(R_{X}\) are in-between points (i.e. have zero pendant length), so that their metric centers coincide with them.
We next remark that, in cases one edge of \(S_{X}\) is contained in an edge of \(R_{X}\), there must be at least two points of \(X\) on this edge of \(R_{X}\) by the definition of \(S_{X}\) and in such a case, points of \(X\) lying in the corresponding edge of \(S_{X}\) must be in-between points since one can use auxiliary points on approximate neighbouring edges of \(R_{X}\) to see this. Thus, these points also coincide with their metric centers.
Figure 8: Possible configurations of \(R_{X},S_{X}\) and the points of \(X\) lying outside \(S_{X}\).
Now, there remain the points of \(X\), which lie on the boundary of \(R_{X}\) but do not belong to \(S_{X}\). For these points there are two typical cases as shown in Figure 9, whereby the rays in Figure 9(a) and the line in Figure 9(b) are determined by the relevant edges of \(S_{X}\).
In Figure 9(a), the metric center \(m_{a}\) of the corner \(a\) is the corner where the rays emanate: Either there are points \(b\), \(c\) on the rays belonging to \(X\), in which case \(m_{a}\) is the metric center of the triple \((a,b,c)\); or the origin of the rays belongs to \(X\), in which case one can take any third auxiliary point of \(X\) and again the origin of the rays becomes the metric center \(m_{a}\). (If one chooses any other two points of \(X\), then their center is either farther away from \(a\), or the same corner again.)
In Figure 9(b), there is at least one point of \(X\) lying on the line and choosing an auxiliary point of \(X\) lying on the other part of the vertical line passing through \(a\) one obtains \(m_{a}\) which is the projection of \(a\) onto the line. (If one chooses any other two points of \(X\), then their center is either farther away from \(a\), or the same projection point again.)
## 4 How to Embed the Trimming Cylinder into the Manhattan Plane?
We have seen in Theorem 1 that the space \((t(X),d^{1})\), obtained by trimming a finite subspace \((X,d_{1})\subset\mathbb{R}^{2}_{1}\), can be isometrically embedded into the \(\mathbb{R}^{2}_{1}\), by the canonical
Figure 9: Two typical cases of points \(a\in X\) lying on the boundary of \(R_{X}\) which do not belong to \(S_{X}\).
map \(\overline{x}\mapsto m_{x}\), sending a class to the metric center of a representant of it. This enables us to work with the embedded version of the abstract trimming sequence, which we called the metric center sequence. This procedure can be extended to the trimming cylinder also. We can define the corresponding metric center cylinder \(MC(X)\) as a pseudo-metric graph with the vertex set \(\bigsqcup_{i\geq 0}m^{i}(X)\) and edges between \(a_{i}\in m^{i}(X)\) and \(g_{i}(a_{i})\in m^{i+1}(X)\) with weight \(\underline{d}_{1}(a_{i})\). It is instructive to look at the first portion \(MC_{1}(X)\) of this cylinder (which repeats itself in the following stages since in every stage the starting space is a subspace of \(\mathbb{R}^{2}_{1}\)):
The edge between \(x\in X\) and \(p(x)\in t(X)\) in \(C_{1}(X)\) has weight \(\underline{d}_{1}(x)\); the edge between \(x\in X\) and \(g(x)\in m(X)\) has also weight \(\underline{d}_{1}(x)\). On the other hand, the distance \(d_{1}(x,g(x))=d_{1}(x,m_{x})\) also equals \(\underline{d}_{1}(x)\) so that one can embed this "abstract" edge as the straight line segment connecting \(x\) with \(m_{x}\) in \(\mathbb{R}^{2}_{1}\). In case \(\underline{d}_{1}(x)=0\) we have \(m_{x}=x\) so that the metric quotient \(\overline{MC_{1}(X)}\) consists exactly of the set \(m(X)\) together with the segments \([x,m_{x}]\) in \(\mathbb{R}^{2}_{1}\). We note that the segments \([x,m_{x}]\) and \([y,m_{y}]\) in \(\mathbb{R}^{2}_{1}\) cannot cross themselves (unless maybe in the special case \(m_{x}=m_{y}\), where they are concatenated) since by Remark 2
\[d_{1}(x,y)=d_{1}(x,m_{x})+d_{1}(m_{x},y)=d_{1}(x,m_{x})+d_{1}(m_{x},m_{y})+d_{1 }(m_{y},y)\]
holds, which means that the points \(x,m_{x},m_{y}\) and \(y\) lie on a geodesic (shortest path) between \(x\) and \(y\). (Likewise in the following step \(x,m_{x},m_{x}^{2},m_{y}^{2},m_{y}\) and \(y\) will lie on a geodesic etc.) Iterating this procedure, which stabilizes after finitely many steps, one obtains an embedding of \(\overline{MC(X)}\) into \(\mathbb{R}^{2}_{1}\) for the finite subspace \(X\subset\mathbb{R}^{2}_{1}\). We illustrate this process by the following:
**Example 2**: _Let \(X\) be as in Figure 7, i.e. \(X=\{a=(-5,-3),b=(-5,1),c=(-3,4),d=(-2,-1),e=(1,2),\,f=(2,-2),g=(4,-3),h=(5,-5), i=(7,-4)\}\)._
_We show in Figure 10 the successive metric centers \(m(X),m^{2}(X)\) and \(m^{3}(X)\) in the left column of the figure. \(m^{3}(X)\) is a trim space so that the metric center sequence stabilizes at this stage (the minimal rectangle of \(m^{3}(X)\) is shown by the dashed lines). In the right column of the figure, we show the metric quotients \(\overline{MC_{1}(X)},\overline{MC_{2}(X)}\) and \(\overline{MC_{3}(X)}\) of the metric cylinder \(MC(X)\). Though \(\underline{MC(X)}\) extends beyond \(MC_{3}(X)\) with edges of zero weight they don't contribute to \(\overline{MC(X)}\) and we have \(\overline{MC(X)}=\overline{MC_{3}(X)}\). Note that \(\overline{MC_{3}(X)}\) is a collection of trees (some of which are trivial)._
Figure 10: Successive metric centers and metric cylinders for the space \(X\) of Example 2.
The Tight Span of a Finite Subspace of the Manhattan Plane
In this section, we apply the main theorem of Turaev [[6],Theorem 9.1] to obtain the tight span of a finite subspace \(X\subset\mathbb{R}^{2}_{1}\). Recall that, after successive trimming of \(X\), a trim space is obtained (say after \(N\) steps), so that the metric center sequence is stabilized at \(m^{N}(X)\subset\mathbb{R}^{2}_{1}\). We then get by the main theorem of Turaev and our identification of successive trimmings by the successive metric center sets
\[T(X) =\cap_{i\geq 0}T(X_{i})\cup\overline{C(X)}\] \[=T(X_{N})\cup\overline{C_{N}(X)}\] \[=T(m^{N}(X))\cup\overline{MC_{N}(X)}\]
with \(T(m^{N}(X))\cap\overline{MC_{N}(X)}\) being the roots.
Since we have described the \(\overline{MC_{N}(X)}\) as an embedded graph in \(\mathbb{R}^{2}_{1}\), which can be expressed as \(\bigcup_{x\in X}\bigcup_{i=0}^{N-1}[m^{i}(x),m^{i+1}(x)]\), it remains to understand the term \(T(m^{N}(X))\).
The stabilized (trim) \(m^{N}(X)\) has either at least four points or a single point since a triple of points is not trim and likewise, a pair of points is not trim. If \(m^{N}(X)\) is a singleton, then we get \(T(X)=\overline{MC_{N}(X)}\), in which case \(T(X)\) becomes a tree. Thus we can note the following property:
**Remark 3**: _The tight span of a finite subspace \(X\subset\mathbb{R}^{2}_{1}\) is a tree if and only if after successive trimming of \(X\) we get a singleton._
_(This property can easily be seen to be true for any finite metric space \(X\).)_
In case the stabilized (trim) \(m^{N}(X)\) has at least four points then, by Proposition 1 the minimal rectangle \(R_{X}\) has at least two points on every edge of it and we use the following theorem [2] to obtain the tight span of \(m^{N}(X)\):
**Theorem 2**: _Let \(A\subseteq\mathbb{R}^{2}_{1}\) be a nonempty subspace. Let \(B\subseteq\mathbb{R}^{2}_{1}\) be a closed, geodesically convex subspace containing \(A\) and minimal with these properties. Then \(B\) is isometric to the tight span \(T(A)\) of \(A\)._
(This theorem is proven for the plane \(\mathbb{R}^{2}_{\infty}\) with the maximum metric, but as \(\mathbb{R}^{2}_{\infty}\) and \(\mathbb{R}^{2}_{1}\) are isometric, it holds for \(\mathbb{R}^{2}_{1}\) also.)
Now, we will develop an algorithm to create a subset containing \(m^{N}(X)\) that is closed, geodesically convex and minimal with these properties to apply this theorem. Let \(m^{N}(X)\subset\mathbb{R}^{2}_{1}\) be the initial set (with at least four points). Our algorithm to obtain \(T(m^{N}(X))\) can be formulated as follows: (In the following, coordinates of a point \(a\in\mathbb{R}^{2}_{1}\) is denoted by \(a_{x},a_{y}\), i.e. \(a=(a_{x},a_{y})\).)
1. Define \(A:=m^{N}(X)\).
2. Determine the minimal rectangle with edges parallel to the \(x-\) and \(y-\)axis that contains all of the elements of \(m^{N}(X)\) and call it \(R\).
3. Create a loop variable \(P\) and define its initial value as \(P:=R\).
4. **Bottom Left Corner Procedure:** Choose the bottom left corner of \(R\) and assign it to a variable \(a=(a_{x},a_{y})\). 1. If \(a\in A\), then go to Step 5. 2. If \(a\notin A\), choose the closest point to \(a\) with respect to the Manhattan metric from the set \(A\) on the line \(y=a_{y}\) and assign it to a variable \(b=(b_{x},b_{y})\). 3. Determine a point \(d=(d_{x},d_{y})\in A\) (as a variable) such that \(a_{x}\leq d_{x}\leq b_{x}\) and \(a_{y}<d_{y}\) with smallest possible \(d_{y}-a_{y}\). 4. Remove \([a_{x},b_{x})\times[a_{y},d_{y})\) from \(P\) and assign the remaining polygon to \(P\). Add the point \((b_{x},d_{y})\) to the set \(A\). 5. Choose \((a_{x},d_{y})\) as the new starting point and assign it to \(a\). Return to Step 4(i).
5. **Top Left Corner Procedure:** Choose the top left corner of \(R\) and assign it to a variable \(a=(a_{x},a_{y})\). 1. If \(a\in A\), then go to Step 6. 2. If \(a\notin A\), then choose the closest point to \(a\) with respect to the Manhattan metric from the set \(A\) on the line \(y=a_{y}\) and assign it to a variable \(b=(b_{x},b_{y})\). 3. Determine a point \(d=(d_{x},d_{y})\in A\) (as a variable) such that \(a_{x}\leq d_{x}\leq b_{x}\) and \(d_{y}<a_{y}\) with smallest possible \(a_{y}-d_{y}\). 4. Remove \([a_{x},b_{x})\times(d_{y},a_{y}]\) from \(P\) and assign the remaining polygon to \(P\). Add the point \((b_{x},d_{y})\) to the set \(A\). 5. Choose \((a_{x},d_{y})\) as the new starting point and assign it to \(a\). Return to Step 5(i).
6. **Top Right Corner Procedure:** Choose the top right corner of \(R\) and assign it to a variable \(a=(a_{x},a_{y})\). 1. If \(a\in A\), then go to Step 7. 2. If \(a\notin A\), then choose the closest point to \(a\) with respect to the Manhattan metric from the set \(A\) on the line \(y=a_{y}\) and assign it to a variable \(b=(b_{x},b_{y})\). 3. If \(a\notin A\), then go to Step 8. 3. If \(a\notin A\), then go to Step 9. 4. If \(a\notin A\), then go to Step 10. 5. If \(a\notin A\), then go to Step 11. 6. If \(a\notin A\), then go to Step 12. 7. If \(a\notin A\), then go to Step 13. 8. If \(a\notin A\), then go to Step 14. 9. If \(a\notin A\), then go to Step 15. 10. If \(a\notin A\), then go to Step 16. 11. If \(a\notin A\), then go to Step 17. 11. If \(a\notin A\), then go to Step 18. 12. If \(a\notin A\), then go to Step 19. 13. If \(a\notin A\), then go to Step 19. 14. If \(a\notin A\), then go to Step 20. 15. If \(a\notin A\), then go to Step 21. 16. If \(a\notin A\), then go to Step 22. 17. If \(a\notin A\), then go to Step 23. 18. If \(a\notin A\), then go to Step 24. 19. If \(a\notin A\), then go to Step 25. 11. If \(a\notin A\), then go to Step 26. 12. If \(a\notin A\), then go to Step 27. 12. If \(a\notin A\), then go to Step 28. 13. If \(a\notin A\), then go to Step 29. 14. If \(a\notin A\), then go to Step 20. 15. If \(a\notin A\), then go to Step 21. 16. If \(a\notin A\), then go to Step 21. 17. If \(a\notin A\), then go to Step 22. 18. If \(a\notin A\), then go to Step 23. 19. If \(a\notin A\), then go to Step 24. 19. If \(a\notin A\), then go to Step 25. 12. If \(a\notin A\), then go to Step 26. 13. If \(a\notin A\), then go to Step 27. 14. If \(a\notin A\), then go to Step 28. 15. If \(a\notin A\), then go to Step 29. 16. If \(a\notin A\), then go to Step 29. 17. If \(a\notin A\), then go to Step 29. 18. If \(a\notin A\), then go to Step 29. 19. If \(a\notin A\), then go to Step 29. 19. If \(a\notin A\), then go to Step 29. 20. If \(a\notin A\), then go to Step 29. 21. If \(a\notin A\), then go to Step 29. 22. If \(a\notin A\), then go to Step 29. 22. If \(a\notin A\), then go to Step 29. 22. If \(a\notin A\), then go to Step 29. 23. If \(a\notin A\), then go to Step 29. 24. If \(a\notin A\), then go to Step 39. 24. If \(a\notin A\), then go to Step 39. 25. If \(a\notin A\), then go to Step 39. 26. If \(a\notin A\), then go to Step 49. 27. If \(a\notin A\), then go to Step 50. 28. If \(a\notin A\), then go to Step 60. 29. If \(a\notin A\), then go to Step 61. 29. If \(a\notin A\), then go to Step 62. 20. If \(a\notin A\), then go to Step 63. 29. If \(a\notin A\), then go to Step 64. 29. If \(a\notin A\), then go to Step 65. 29. If \(a\notin A\), then go to Step 66. 29. If \(a\notin A\), then go to Step 66. 29. If \(a\notin A\), then go to Step 67. 29. If \(a\notin A\), then go to Step 68. 30. If \(a\notin A\), then go to Step 69. 31. If \(a\notin A\), then go to Step 69. 32. If \(a\notin A\), then go to Step 69. 33. If \(a\notin A\), then go to Step 69. 34. If \(a\notin A\), then go to Step 69. 35. If \(a\notin A\), then go to Step 69. 36. If \(a\notin A\), then go to Step 69. 37. If \(a\notin A\), then go to Step 69. 38. If \(a\notin A\), then go to Step 69. 39. If \(a\notin A\), then go to Step 69. 39.
* Determine a point \(d=(d_{x},d_{y})\in A\) (as a variable) such that \(b_{x}\leq d_{x}\leq a_{x}\) and \(d_{y}<a_{y}\) with smallest possible \(a_{y}-d_{y}\).
* Remove \((b_{x},a_{x}]\times(d_{y},a_{y}]\) from \(P\) and assign the remaining polygon to \(P\). Add the point \((b_{x},d_{y})\) to the set \(A\).
* Choose \((a_{x},d_{y})\) as the new starting point and assign it to \(a\). Return to Step 6(i).
7. **Bottom Right Corner Procedure:** Choose the bottom right corner of \(R\) and assign it to a variable \(a=(a_{x},a_{y})\). 1. If \(a\in A\), then the algorithm terminates. 2. If \(a\notin A\), then choose the closest point to \(a\) with respect to the Manhattan metric from the set \(A\) on the line \(y=a_{y}\) and assign it to a variable \(b=(b_{x},b_{y})\). 3. Determine a point \(d=(d_{x},d_{y})\in A\) (as a variable) such that \(b_{x}\leq d_{x}\leq a_{x}\) and \(a_{y}<d_{y}\) with smallest possible \(d_{y}-a_{y}\). 4. Remove \((b_{x},a_{x}]\times[a_{y},d_{y})\) from \(P\) and assign the remaining polygon to \(P\). Add the point \((b_{x},d_{y})\) to the set \(A\). 5. Choose \((a_{x},d_{y})\) as the new starting point and assign it to \(a\). Return to Step 7(i).
The source code of this algorithm along with the trimming procedure, which is written in the interactive geometry software Cinderella [4] is given in the Appendix section.
**Example 3**: _Let \(X\) be as in Figure 7, i.e. \(X=\{(-5,-3),(-5,1),(-3,4),(-2,-1),\)\((1,2),(2,-2),(4,-3),(5,-5),(7,-4)\}\). We have obtained in Example 2 that \(m^{3}(X)=\{(-5,-3),(-5,-1),(-3,2),(-2,-1),(1,2),(2,-3),(2,-2)\}\) is a trim space (see Figure 11(a)). Applying the above algorithm to this space, one will first obtain by the top left corner procedure Figure 11(b) and then by the top right corner procedure the tight span of \(m^{3}(X)\) shown in Figure 11(c)._
Figure 11: Application of the algorithm on Example 2.
After applying the algorithm to the metric space \(m^{N}(X)\), denote the finally obtained set by \(B\). The set \(B\) contains \(m^{N}(X)\) and it can be seen to be closed and geodesically convex. It is in fact also minimal since between two points with the same abcissa (or the same ordinate) there exists a unique geodesic (namely, the segment connecting them) and by this reason no point of \(B\) can be discarded from the set. Having thus constructed the tight span of \(m^{N}(X)\), we need only to take the sum of it with \(\overline{MC_{N}(X)}\). Note that the intersection of these two sets consists of the points of \(m^{N}(X)\) which are the "roots" in the terminology of Turaev. The tips (leaves) of the attached trees belong to \(X\) (some points of \(X\) however might be intermediate points of the trees due to vanishing pendant lengths), and some trees might be constant (trivial) trees consisting of a single point, which is then a tip and a root simultaneously.
We exemplify this last step of constructing the tight span of a finite subset of \(\mathbb{R}_{1}^{2}\) on our ongoing example subspace \(X\) (of Example 2). We had constructed the \(\overline{MC(X)}\) in Example 2 (see Figure 10) and we have above constructed the tight span of \(m^{3}(X)\) (see Figure 11(c)) so that we can now take their union to obtain the tight span of \(X\) (see Figure 12).
Finally, we want to conclude with two more examples:
**Example 4**: _Let \(X=\{(-5,2),(-1,0),(-1,3),(3,2),(4,-3),(5,-1),(6,-4),\)\((8,-5)\}\). The tight span of \(X\) is shown in Figure 13. The metric center sequence stabilizes at \(m^{2}(X)=\{(-1,0),(-1,2),(3,2),(4,-3),(5,-3),(5,-1)\}\)._
**Example 5**: _Let \(X=\{(-15,7),(-13,10),(-12,5),(-9,-3),(-9,0),(-7,4),\\ (-5,-5),(-5,4),(-3,-7),(-2,-2),(-1,-9),(-1,2),(1,4),(2,-5),(4,-7),\\ (4,-4),(7,-6)\}.\) The tight span of \(X\) is shown in Figure 14. The metric center sequence stabilizes at \(m^{3}(X)=\{(-9,-3),(-9,0),(-9,4),(-7,4),(-5,-5),\\ (-5,4),(-3,-7),(-2,-2),(-1,-7),(-1,2),(1,4),(2,-5),(4,-7),(4,-6),\\ (4,-4)\}.\)_
Appendix
To use this code, copy and paste it page by page into "Draw" under Cinderella's Script Editor.
ptsall=allpoints(); trimseq=[ptsall]; //
if ((trimpts_i)_1 <= xmin1, (trimpts_i)_1=xmin2;); if ((trimpts_i)_1 >= xmax1, (trimpts_i)_1=xmax2;); if ((trimpts_i)_2 <= ymin1, (trimpts_i)_2=ymin2;); if ((trimpts_i)_2 >= ymax1, (trimpts_i)_2=ymax2;); draw((pts)_i,(trimpts)_i); ); ); trimseq=append(trimseq,trimpts); k=k+1; pts=set(trimseq_k); repeat(length(pts),i,draw(pts_i)); ); if (length(pts)==2, trimpts=[[((pts_1)_1+(pts_2)_1)/2,(trimpts_1)_2=((pts_1)_2+(pts_2)_2)/2]]; draw(pts_1,trimpts_1); draw(pts_2,trimpts_1); draw(trimps_1); trimseq=append(trimseq,trimpts); ); trimstep=length(trimseq); if (length(trimseq_trimstep)>=4, tightspan=[[xmin1,ymin1],[xmin1,ymax1],[xmax1,ymax1],[xmax1,ymin1]]]; pts=trimseq_trimstep; // BOTTOM-LEFT CORNER PROCEDURE refpoint=[xmin1,ymin1]; while(!contains(pts,refpoint), hat=select(pts,#.y==refpoint.y); xfirst=[infty,refpoint.y]; repeat(length(hat),i,if ((xfirst.x > (hat_i).x), xfirst=hat_i); ); strip=select(pts,((#.y > xfirst.y) & (#.x < xfirst.x))); yfirst=[xmin1,ymax1]; repeat(length(strip),i, if ((yfirst.y > (strip_i).y), yfirst=strip_i); ); init=[]; final=[]; mid=[xfirst,[xfirst.x,yfirst.y],[xmin1.x,yfirst.y]]; draw(mid_1,mid_2); draw(mid_2,[yfirst.x,yfirst.y]); control=0; repeat (length(tightspan),k, if (tightspan_k==refpoint,control=1; ); if (control==0, init=init ++[tigtspan_k]); if ((control==1)&(tightspan_k!=refpoint),final=final++[tigtspan_k]); ); tightspan=concat(concat(init,mid),final);
pts=append(pts,[xfirst.x,yfirst.y]); refpoint=mid_3; ); // TOP-LEFT CORNER PROCEDURE refpoint=[xmin1,ymax1]; while(!contains(pts,refpoint), hat=select(pts,#.y==refpoint.y); xfirst=[infty,refpoint.y]; repeat(length(hat),i,if ((xfirst.x > (hat_i).x), xfirst=hat_i); ); strip=select(pts,((#.y < xfirst.y) & (#.x<=xfirst.x) )); yfirst=[xmin1,ymin1]; repeat(length(strip),i,if ((yfirst.y < (strip_i).y), yfirst=strip_i); ); init=[]; final=[]; mid=[[xmin1.x,yfirst.y],[xfirst.x,yfirst.y],xfirst]; draw(mid_3,mid_2); draw(mid_2,[yfirst.x,yfirst.y]); control=0; repeat (length(tightspan),k, if (tightspan_k==refpoint,control=1; ); if (control==0, init=init++[tightspan_k]); if ((control==1)&(tightspan_k!=refpoint),final=final++[tightspan_k]); );
tightspan=concat(concat(init,mid),final); pts=append(pts,[xfirst.x,yfirst.y]); refpoint=mid_1; ); // TOP-RIGHT CORNER PROCEDURE refpoint=[xmax1,ymax1]; while(!contains(pts,refpoint), hat=select(pts,#.y==refpoint.y); xfirst=[-infty,refpoint.y]; repeat(length(hat),i,if ((xfirst.x < (hat_i).x), xfirst=hat_i); ); strip=select(pts,((#.y < xfirst.y) & (#.x >= xfirst.x) )); yfirst=[xmax1,ymin1]; repeat(length(strip),i,if ((yfirst.y < (strip_i).y), yfirst=strip_i); ); init=[]; final=[]; mid=[xfirst,[xfirst.x,yfirst.y],[xmax1.x,yfirst.y]]; draw(mid_1,mid_2); draw(mid_2,[yfirst.x,yfirst.y]); control=0;
repeat (length(tigtspan),k, if (tigtspan_k==refpoint,control=1; ); if (control==0, init=init++[tigtspan_k]); if ((control==1)&(tigtspan_k!=refpoint),final=final++[tigtspan_k]); );
tigtspan=concat(concat(init,mid),final); pts=append(pts,[xfirst.x,yfirst.y]); refpoint=mid_3; ); // BOTTOM-RIGHT CORNER PROCEDURE refpoint=[xmax1,ymin1]; while(!contains(pts,refpoint), hat=select(pts,#.y==refpoint.y); xfirst=[-infty,refpoint.y]; repeat(length(hat),i,if ((xfirst.x < (hat_i).x), xfirst=hat_i); ); strip=select(pts,((#.y > xfirst.y) & (#.x >= xfirst.x) )); yfirst=[xmax1,ymax1]; repeat(length(strip),i, if ((yfirst.y > (strip_i).y), yfirst=strip_i); ); init=[]; final=[]; mid=[[xmax1.x,yfirst.y],[xfirst.x,yfirst.y],xfirst]; draw(mid_3,mid_2); draw(mid_2,[yfirst.x,yfirst.y]); control=0; repeat (length(tigtspan),k, if (tigtspan_k==refpoint,control=1; ); if (control==0, init=init++[tigtspan_k]); if ((control==1)&(tigtspan_k!=refpoint),final=final++[tigtspan_k]); ); tightspan=concat(concat(init,mid),final); pts=append(pts,[xfirst.x,yfirst.y]); refpoint=mid_1; );
tigtspan=append(tigtspan,tigtspan_1); repeat(length(tigtspan)-1,i, draw(tigtspan_i,tigtspan_(i+1)); draw(tigtspan_i); ); alpha(0.3);
fillpoly(tightspan,color->hue(0.52)); );
|
2302.09537 | The Globalization-Inequality Nexus: A Comparative Study of Developed and
Developing Countries | This study examines the relationship between globalization and income
inequality, utilizing panel data spanning from 1992 to 2020. Globalization is
measured by the World Bank global-link indicators such as FDI, Remittance,
Trade Openness, and Migration while income inequality is measured by Gini
Coefficient and the median income of 50% of the population. The fixed effect
panel data analysis provides empirical evidence indicating that globalization
tends to reduce income inequality, though its impact varies between developed
and developing countries. The analysis reveals a strong negative correlation
between net foreign direct investment (FDI) inflows and inequality in
developing countries, while no such relationship was found for developed
countries.The relationship holds even if we consider an alternative measure of
inequality. However, when dividing countries by developed and developing
groups, no statistically significant relationship was observed. Policymakers
can use these findings to support efforts to increase FDI, trade, tourism, and
migration to promote growth and reduce income inequality. | Md Shah Naoaj | 2023-02-19T11:08:38Z | http://arxiv.org/abs/2302.09537v1 | # The Globalization-Inequality Nexus: A Comparative Study of Developed and Developing Countries
###### Abstract
This study examines the relationship between globalization and income inequality, utilizing panel data spanning from 1992 to 2020.Globalization is measured by the World Bank's global-link indicators such as FDI, Remittance, Trade Openness, and Migration while income inequality is measured by Gini Coefficientand the median income of 50% of the population. The fixed effect panel data analysis provides empirical evidence indicating that globalization tends to reduce income inequality, though its impact varies between developed and developing countries. The analysis reveals a strong negative correlation between net foreign direct investment (FDI) inflows and inequality in developing countries, while no such relationship was found for developed countries.The relationship holds even if we consider an alternative measure of inequality. However, when dividing countries by developed and developing groups, no statistically significant relationship was observed.Policymakers can use these findings to support efforts to increase FDI, trade, tourism, and migration to promote growth and reduce income inequality.
Globalization, income Inequality, FDI, GiniCoefficient, Developing Countries, Developed Countries
Date of Submission: 06-02-2023 Date of Acceptance: 17-02-2023
## 1 Introduction
Globalization is quantified by the "Global Connectedness Index" through its examination of trade, capital, information, and people flows(Altman & Bastian, 2021). Due to globalization, global interconnectedness has been increasing in the last two decades in the form of more financial flows, foreign direct investments, trade in goods and services, and movements of people among the countries. The global links enable the economies of individual countries to grow and expand. Moreover, as national economies develop, their links expand and grow more complex. But does this global link influences income inequality? If it does, then does it influence developed and developing countries differently? This paper aims to examine the relationship between World Bank's global-link indicators, including Foreign Direct Investment, Remittances, Trade Openness, and Migration, and the income inequality indicator represented by the Gini Coefficient.
## 2 Literature Review
The relationship between globalization and economic growth and other macroeconomic indicators has received much attention since the world saw a major trade liberalization process beginning in 1990. There is substantial literature that shows that there is an impact of foreign direct investment and economic growth (Borensztein, De Gregorio and Lee, 1998; Herzer, Klasen and Nowak-Lehmann, 2008; De Vita and Kyaw, 2009). But globalization doesn't constitute only FDI, many other economic factors are also contributing to the change of the shape of world poverty,development and inequality. Specifically, the causal relationship between the global connectedness indicators and income inequality was not received much attention until 2000 due to lack of data and sound economic theory.
Two channels that impact inequality due to FDI was suggested by Jensen and Rosas (2007). They argued that FDI brings capital into a country and reduces capital return but increases labor return as FDI competes with local capital to get the local workers, raising wages and reducing firms' profitability. It means that FDI reduces income inequality by reducing the wage gap. On the contrary, the authors argued that the FDI ideally does not attract all labor force but skilled workforces. Consequently, inequality increases due to the wage gap between skilled and unskilled workers. According to Velde (2003), there are three possible channels through which wage inequality in developing countries has been affected by FDI. First, a "composition effect" results from the fact that foreign firms tend to set up more intensive sectors of skilled labor, thus improving the position of these workers relative to the unskilled (Feenstra and Hanson, 1997). Second, FDI can affect the supply of skilled workers via training and specific contributions to general education (knowledge transfer).
Lastly, as advanced by Berman, Bound and Machin (1998), FDI can probably induce faster labor productivity growth both in foreign firms (technology transfer) and in local ones (secondary effects), and if productivity growth is skewed towards skilled sectors, then the gap between the sectors will grow. According to the World Economic Outlook (IMF) Chapter 4, 2007, the impact of globalization on inequality was limited, as trade globalization was found to decrease inequality, but financial globalization and foreign direct investment specifically were found to increase inequality. Technological progress was found to have a greater impact on inequality within countries.
Given the relevance of this topic, many empirical studies have been conducted on the FDI and remittance inflow on income inequality. However, no such studies have been done on based on the World Bank's global-link indicators1. This paper aims to this causal relationship between globalization and inequality by analyzing global link indicators as a proxy of globalization. The motivation for using global link indicators as a proxy for globalization stems from the innovative DHL Global Connectedness Index published jointly by DHL and New York University. The DHL Global Connectedness Index (GCI) is a research study that measures the extent of cross-border flows of trade, capital, information, and people. The study provides an analysis of the interconnectedness of countries around the world and highlights the most connected countries and regions. The GCI is widely cited in academic and policy circles as a valuable source of information on globalization and its impact on the global economy.
Footnote 1: According to World Bank, the Global Links indicators give an overview of the economic growth and expansion enabled by the flows and connections between the world’s economy and the economies of individual countries. These indicators assess the magnitude and direction of these flows, and record government actions such as tariffs, trade facilitation, and aid.
## 3 Data And Methods
The purpose of this study is to investigate the correlation between globalization and income inequality by analyzing several key variables, including the Gini coefficient (gini), net foreign direct investment inflow (l_fdi_net_in), net remittance inflow (inward_remittance), trade openness (trade_gdp), which is represented as the total trade as a percentage of Gross Domestic Product, and tourism receipts (tourist_receipt). To control for any unobserved heterogeneity, additional relevant variables have been included. The data used in this study was sourced from the World Development Indicators, World Bank, covering the period from 1992 to 2020. Due to many missing variables, the data set is unbalanced. The summary statistics are given in Table 1.
To understand the effect of globalization on income inequality, panel data regression is used for both developed and developing countries with the following specifications. In specification one, Gini coefficient is used as the dependent variable and the other global-link indicators as independent variables.
\[\begin{split} gini_{lt}=&\propto+\beta 1l\_fdi\_net\_in_{lt}+\beta 2 toursim\_receipt_{lt}+\beta 1trade\_gdp_{lt}+\beta 1inward\_remittance_{lt}+Ui\\ &\qquad\qquad+e_{lt}\end{split}\]
In the second specification, a proxy variable for income inequality is used. The variable represents the percentage of the population living below 50% of the median income or consumption per capita.
\[\begin{split} pop\_below\_50\_percent_{lt}=\\ &\propto+\beta 1l\_fdi\_net\_in_{lt}+\beta 2toursim\_receipt_{lt}+\beta 1trade\_gdp_{lt}\\ &\qquad\qquad+\beta 1inward\_remittance_{lt}+Ui+e_{lt}\end{split}\]
The term Ui represents fixed effects by country in the equation, and \(e_{lt}\) is the error term. The definitions of each of the variable are given in annex 1.
To understand the impact ofglobalization on income inequality, at the first step, countries are divided into developed and developing/emerging countries according to the definition of IMF. Thepanel regression analysis is conducted using the Gini coefficient as a dependent variable and global link indicators as
\begin{table}
\begin{tabular}{l c c c c c} \hline Variable & Obs & Mean & Std. Dev. & Min & Max \\ \hline gini & 1643 & 38.442 & 9.069 & 20.7 & 65.8 \\ pop\_below\_50 & 1598 & 14.087 & 5.944 &.9 & 34.6 \\ l\_fdi\_net\_in & 4676 & 19.948 & 2.815 & 2.303 & 27.322 \\ tourist\_receipt & 3494 & 5.237e+09 & 1.603e+10 & 100000 & 2.420e+11 \\ trade\_gdp & 4617 & 80.505 & 44.579 &.021 & 380.104 \\ inward\_remittance & 4573 & 4.249 & 8.221 & 0 & 167.432 \\ \hline \end{tabular}
\end{table}
Table 1: Summary Statistics
independent variables. In step 2, the impact is analyzed for developed and developing countries using percentage of population living below the median 50 percent of income of each country. In step 3, the results betweenthe developed and developing effectare compared.
Fixed-effect panel regression is used to analyze the causality as panel data analysis eliminates the effects of time-invariant unobservable heterogeneity and provides better causal inference, improved efficiency, and reduced omitted variable bias. The choice between fixed-effect and random-effect models depends on the nature of the sample. A random-effect model is suitable when the entities in the sample are randomly selected from the population, while a fixed-effect model is appropriate when the sample entities effectively make up the entire population. The Hausman test was conducted to determine the preferred model, and the results rejected the null hypothesis, indicating that the fixed-effect model was preferred over the random-effect model. As a result, the fixed-effect model was used in this analysis.
## 4 Result
Table 2 shows the regression output of the causal relationship between globalization on income inequality. This table presents the results of a regression analysis of the relationship between income inequality, as measured by the Gini coefficient, and various indicators of globalization and control variables. The Gini coefficient measures the distribution of income within a country, with higher values indicating greater income inequality.The first two columns of the table show the results of regression models that include all 146 countries in the sample. The next two columns show the results of regression models that include only developing countries, while the last two columns show the results of regression models that include only developed countries.
The independent variables in columns 1, 3, and 5 are globalization indicators, namely net foreign direct investment (FDI), tourist receipts, trade-to-GDP ratio, inward remittances, and the population growth rate. Meanwhile, columns 2, 4, and 6 also include control variables such as the primary education rate and GDP growth rate for robustness checks.
\begin{table}
\begin{tabular}{l c c c c c c} \hline & (1) & (2) & (3) & (4) & (5) & (6) \\ & & & gini & gini & gini & \\ & gini & gini & (developing & (developing & gini & gini \\ & (all & (all & countries) & countries) & counties) & (developed & (developed \\ & countries) & countries) & & countries) & countries) \\ \hline \hline \end{tabular}
\begin{tabular}{l c c c c c} L\_FDI\_net\_in & -0.616*** & -0.462*** & -0.672*** & -0.587*** & -0.097 & -0.069 \\ & (0.170) & (0.171) & (0.202) & (0.202) & (0.158) & (0.215) \\ tourist\_receipt & -0.000 & -0.000 & -0.000*** & -0.000*** & 0.000** & 0.000 \\ & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) \\ trade\_gdp & 0.017 & 0.014 & 0.019 & 0.014 & 0.002 & -0.003 \\ & (0.014) & (0.012) & (0.019) & (0.016) & (0.010) & (0.012) \\ inward\_remittance & -0.120*** & -0.070 & -0.114*** & -0.059 & 0.212 & 0.167 \\ & (0.040) & (0.045) & (0.040) & (0.048) & (0.323) & (0.283) \\ gdp\_growth & & 0.010 & & 0.029 & & -0.052 \\ & & (0.032) & & (0.036) & & (0.052) \\ toatal\_pop & & -0.000 & & 0.000 & & -0.000 \\ & & (0.000) & & (0.000) & & (0.000) \\ primary\_edu\_rate & & -0.049* & & -0.047* & & 0.018 \\ & & (0.025) & & (0.027) & & (0.064) \\ Constant & 51.161*** & 53.033*** & 55.324*** & 57.127*** & 33.300*** & 32.829*** \\ & (3.413) & (5.040) & (4.040) & (5.663) & (3.854) & (7.917) \\ Observations & 1,209 & 863 & 870 & 646 & 339 & 217 \\ R-squared & 0.074 & 0.094 & 0.126 & 0.125 & 0.014 & 0.028 \\ No of country & 146 & 129 & 114 & 103 & 32 & 26 \\ \hline \multicolumn{5}{l}{Standard errors in parentheses (*** p\(<\)0.01, ** p\(<\)0.05, * p\(<\)0.1)} \\ \end{tabular}
\end{table}
Table 2: Effect of globalization on income inequality (developing vs developed)
For all countries, net foreign direct investment has a statistically significant negative relationship with income inequality, with a coefficient of -0.616. This suggests that an increase in net foreign direct investment is associated with a decrease in income inequality. The relationship holds when we consider related control variables. Meanwhile, tourist recepts and trade to GDP ratio have no statistically significant relationship with income inequality. Inward remittances have a statistically significant negative relationship with income inequality, with a coefficient of -0.120. GDP growth rate is also positively related to income inequality, with a coefficient of 0.010. The primary education rate has a statistically significant negative relationship with income inequality, with a coefficient of -0.049.
When looking at the results for developing countries, we find similar results. Net foreign direct investment has a statistically significant negative relationship with income inequality, with a coefficient of -0.672. Tourist recepts have a statistically significant negative relationship with income inequality. Inward remittances have a statistically significant negative relationship with income inequality, with a coefficient of -0.114. GDP growth rate is negatively related to income inequality, with a coefficient of -0.052. The primary education rate has a statistically significant positive relationship with income inequality, with a coefficient of 0.018.
For developed countries, the results are different. Net foreign direct investment has a positive but not statistically significant relationship with income inequality, with a coefficient of -0.097. Tourist recepts have a positive but not statistically significant relationship with income inequality, with a coefficient of 0.000. Inward remittances have a statistically significant positive relationship with income inequality, with a coefficient of 0.212. GDP growth rate has no statistically significant relationship with income inequality. The primary education rate has a positive but not statistically significant relationship with income inequality, with a coefficient of -0.047.
In general, the results suggest that globalization, as measured by net foreign direct investment and inward remittances, is associated with lower levels of income inequality in both all countries and developing countries. However, trade as a percentage of GDP has a positive effect on income inequality in developing countries, but this effect is not statistically significant in developed countries. In developed countries, the relationship between globalization and income inequality is less clear. The primary education rate appears to be negatively related to income inequality in all countries, suggesting that an increase in education may help reduce income inequality. These findings offer insights into the relationship between globalization and income inequality and provide a foundation for further research on the topic.
The number of observations in the models ranges from 646 in the developing countries model to 1,209 in all countries model. The R-squared values indicate that the models explain a low to moderate amount of variation in the dependent variable. The number of countries in the sample ranges from 103 in the developing countries model to 146 in the all-countries model.The results of this analysis should be interpreted with caution, as they are based on cross-country comparisons and do not account for country-specific factors that may influence income inequality.
## 5 Alternative measures of Inequality
Table3 shows an alternative measure of inequality which is the percentage of the population living below 50% of median income. The results from the regression analysis show that there is a negative relationship between the population below the poverty line of 50 and the amount of foreign direct investment (FDI) received (l_fdi_net_in), with the coefficient being statistically significant at the 1% level (p-value \(<\) 0.01). This result holds for all countries and developing countries.
The trade openness measured by trade to GDP ratio (trade_gdp) has a positive relationship with poverty, with a statistically significant coefficient of 0.0195 (p-value \(<\) 0.05) for all countries. However, this relationship is not significant for either developed or developing countries separately.Inward remittances (inward_remittance1) have a negative relationship with poverty, but the coefficient is only statistically significant at the 10% level (p-value \(<\) 0.1) for developing countries (-0.055). This relationship is not significant for developed countries.
Primary education rate (primary_edu_rate) has a negative relationship with poverty, with the coefficient being statistically significant for both developing and developed countries. The coefficient is larger for developing countries (-0.0233) compared to developed countries (-0.0208).The R-squared value, a measure
of goodness of fit, is higher for developing countries (0.129) compared to developed countries (0.027). This suggests that the model fits better for developing countries than for developed countries.
The results suggest that FDI has a negative relationship with poverty in both developing and developed countries, with the relationship being stronger in developing countries. Other variables such as trade openness, inward remittances, and primary education rate also have a negative relationship with poverty, but their coefficients are not statistically significant for all countries or for either developing or developed countries. The results of this analysis can be used to inform policies aimed at reducing poverty in both developing and developed countries.
## 6 Conclusion
The study concludes that globalization has a positive effect on reducing income inequality, but its impact varies between developed and developing countries. The results show that net foreign direct investment inflow has a statistically significant negative impact on inequality for all countries and specifically for developing countries. However, no such relationship was found for developed countries even when analyzing the data over a shorter time frame. These findings suggest that globalization can not only enhance economic growth but also reduce income inequality in developing countries. The alternative measure of inequality supports these conclusions, but without a statistically significant difference between developed and developing countries. Given these outcomes, policymakers should be proactive in promoting globalization through the promotion of foreign direct investment, tourism, trade, and mobility to drive economic growth and decrease inequality.The findings offer insights into the relationship between globalization and income inequality and poverty, but it is important to keep in mind that the results should be interpreted with caution as they are based on cross-country comparisons and do not account for country-specific factors that may influence these outcomes.
|
2301.10536 | Understanding and Improving Deep Graph Neural Networks: A Probabilistic
Graphical Model Perspective | Recently, graph-based models designed for downstream tasks have significantly
advanced research on graph neural networks (GNNs). GNN baselines based on
neural message-passing mechanisms such as GCN and GAT perform worse as the
network deepens. Therefore, numerous GNN variants have been proposed to tackle
this performance degradation problem, including many deep GNNs. However, a
unified framework is still lacking to connect these existing models and
interpret their effectiveness at a high level. In this work, we focus on deep
GNNs and propose a novel view for understanding them. We establish a
theoretical framework via inference on a probabilistic graphical model. Given
the fixed point equation (FPE) derived from the variational inference on the
Markov random fields, the deep GNNs, including JKNet, GCNII, DGCN, and the
classical GNNs, such as GCN, GAT, and APPNP, can be regarded as different
approximations of the FPE. Moreover, given this framework, more accurate
approximations of FPE are brought, guiding us to design a more powerful GNN:
coupling graph neural network (CoGNet). Extensive experiments are carried out
on citation networks and natural language processing downstream tasks. The
results demonstrate that the CoGNet outperforms the SOTA models. | Jiayuan Chen, Xiang Zhang, Yinfei Xu, Tianli Zhao, Renjie Xie, Wei Xu | 2023-01-25T12:02:12Z | http://arxiv.org/abs/2301.10536v1 | # Understanding and Improving Deep Graph Neural Networks:
###### Abstract
Recently, graph-based models designed for downstream tasks have significantly advanced research on graph neural networks (GNNs). GNN baselines based on neural message-passing mechanisms such as GCN and GAT perform worse as the network deepens. Therefore, numerous GNN variants have been proposed to tackle this performance degradation problem, including many deep GNNs. However, a unified framework is still lacking to connect these existing models and interpret their effectiveness at a high level. In this work, we focus on deep GNNs and propose a novel view for understanding them. We establish a theoretical framework via inference on a probabilistic graphical model. Given the fixed point equation (FPE) derived from the variational inference on the Markov random fields, the deep GNNs, including JKNet, GCNII, DGCN, and the classical GNNs, such as GCN, GAT, and APPNP, can be regarded as different approximations of the FPE. Moreover, given this framework, more accurate approximations of FPE are brought, guiding us to design a more powerful GNN: coupling graph neural network (CoGNet). Extensive experiments are carried out on citation networks and natural language processing downstream tasks. The results demonstrate that the CoGNet outperforms the SOTA models.
## 1 Introduction
Graphs have received increasing attention in recent years, driving the unprecedented development of graph neural networks (GNNs). In particular, neural graph-based models are applied to various downstream tasks such as biology [1, 16], social analysis [13], and computer vision [14, 15, 16]. However, it has been observed that classical GNNs like GCN and GAT achieve the best performance in most downstream tasks at only a two-layer network. With the network depth increasing, the performance drops rapidly.
Hence, a profusion of work tries to tackle the performance degradation problem. These deep GNN models design various feature propagation processes based on the message-passing mechanism of GNN. For instance, JKNet [11] flexibly adjusts the neighbor range of each node by using a layer aggregation network architecture. Another excellent model Deep Adaptive Graph Neural Network decouples transformation and propagation to leverage larger receptive fields. GCNII [2] adds Initial residual and Identity mapping based on GCN, extending GCN to a \(k\)-th order of polynomial filter. Despite fruitful achievements, these works study GNNs from various perspectives, including the spatial and spectral domains that have garnered the most attention in recent years. Driven by the different interpretations of GNNs, a question arises: Is there a framework that can interpret these deep GNNs uniformly? If such a framework exists, we can leverage it to understand the performance degradation problem of classical GNNs more in-depth and identify the weakness of the existing deep GNNs to facilitate their development.
In this paper, we understand graph convolution from a probabilistic perspective and develop a general interpretation framework for deep GNNs. GNNs and probabilistic graphical models (PGMs) are prominent tools for studying structured data. [15] note the iterative propagation scheme of variational inference in PGMs, which provides an intuitively appealing direction for connecting GNNs. Thus, we start from the variational inference of Markov random fields and obtain its iterative solution embedded in the Hilbert space. Inspired by the close connection between GNNs and the first-order Taylor expansion of the iterative solution, we establish a unified framework for representing different GNNs and their deep variants. We then incorporate some popular deep graph networks, such as JKNet, DGCN, and GCNII [11, 12, 2], into the developed theoretical framework with the help of high-order Taylor approximation. Furthermore, with the proposed framework, we analyze the performance degradation problem and design a more effective model, the Coupling Graph Neural Network (CoGNet). Through extensive experiments on three semi-supervised node classification benchmarks and natural language processing downstream tasks, including text classifica
tion and multi-hop question answering, we demonstrate that the CoGNet achieves strong performances in both shallow and deep network layers, including multiple SOTA results.
## 2 Related Work
### Graph Neural Networks
Graph Neural Network (GNN) is a framework for graph representation learning. The basic GNNs follow a neural message-passing mechanism. They exchange messages between connected nodes and update node features using neural networks. In the past few years, some baseline GNN models, such as GCN, GIN, and GAT [2, 14, 15, 16], have been proposed and achieved great popularity. These models have been successfully applied to many downstream tasks, including recommender systems [17, 11], drug analysis [15], community detection [2, 14, 16], etc.
However, these GNN models are known to suffer from performance degradation when deepening the network structures. This phenomenon may be caused by problems such as over-parameterization, over-smoothing, and weak generalization of the model. Many works on improving the performance of deep GNN models have been proposed recently. Jumping Knowledge Network [14] is one of the earliest deep GNNs, combining the node representations of each layer at the last layer. Another excellent model DAGNN [14] adaptively incorporates information from large receptive fields. Recently, DropEdge [13] randomly mutes a certain number of edges during each training epoch to alleviate the over-smoothing issue. Other deep models, such as GCN-NII [2] and DGCN [12], have also achieved good performance. However, these models design their propagations from different perspectives, and a unified language is lacking to describe them.
### Interpretability of GNNs
Theoretical interpretability has become a focus of research on graph representation learning. As a generalization of convolution on graph-structured data, graph convolution has inspired many works to explore GNNs from the spectral perspective. [11] shows that the propagation process of the GCN is a low-pass filter and attributes performance degradation to the over-smoothing problem. [15] understands GNNs as graph signal denoising. Graph wavelet neural network [14] uses graph wavelets to replace the Fourier basis in GCN. [1, 16, 17] design the network aggregation from the perspective of spectral filters. Another line of work analyzes the connections between GNNs and graph isomorphism testing. [14] proves that message-passing GNNs are no more powerful than the Weisfeiler-Lehman (WL) algorithm [15] when given discrete information as node features. [16] shows that the expressive power of \(K\)-hop message passing is bounded by 3-WL test. Most recently, several works have been devoted to generalizing deep learning theory to graph machine learning. [16] uses Neural Tangent Kernel to analyze infinitely wide multi-layer GNNs trained by gradient descent. [2] explores generalization in GNNs through VC Dimension based generalization error bounds. In this work, we understand GNNs from a novel probabilistic perspective and analyze their intrinsic connections.
## 3 GNNs as Embedded Graphical Models
In this section, we establish the connections between the GNN models and the probabilistic inference in graphical models. We first introduce the notations used throughout the paper.
**Notations.** Given a weighted undirected Graph \(G=(V,E)\) with \(n\) nodes \(v_{i}\in V\) and edges \((v_{i},v_{j})\in E\), let \(\mathbf{A}\in\mathbb{R}^{n\times n}\) denote the adjacency matrix and \(\hat{\mathbf{D}}\) denote the diagonal degree matrix, i.e., \(\mathbf{D}_{ii}=\sum_{(v_{i},v_{j})\in E}\mathbf{A}_{ij}\). Let \(\hat{\mathbf{A}}=(\mathbf{D}+\mathbf{I}_{n})^{-\frac{1}{2}}\left(\mathbf{I}_{n }+\mathbf{A}\right)(\mathbf{D}+\mathbf{I}_{n})^{-\frac{1}{2}}\) denote a normalized variant of the adjacency matrix (with self-loops). We denote the node feature matrix as \(\mathbf{X}\in\mathbb{R}^{n\times d}\) for a \(d\)-dimensional feature vector \(\mathbf{x}_{i}\) associated with each node \(v_{i}\) and use \(\mathcal{N}(i)\) to represent the set of neighboring nodes of node \(v_{i}\) (including itself).
We next introduce message passing in graphical models. Our starting point is structured data modeled with the PGM. Markov random field is a PGM based on an undirected graph. It is especially suitable for handling high-dimensional data with graph structure. With the conditional independence assumptions of graph structure, we can simplify the joint distribution \(p\) over some set of discrete random variables \(\mathcal{X}=\{x_{1},...,x_{n}\}\), and then solve it by optimization. Considering an undirected Graph \(G\), we introduce latent variable \(z_{u}\) associated with each node \(u\). The input node features \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is the observed variable. The aim is to find conditional distributions \(p\left(\mathbf{z}\mid\mathbf{X}\right)\) over latent variables \(\mathbf{z}=\{z_{1},\cdots,z_{n}\}\) so that we can make inference.
For a Markov random field with graph \(G\), computing the posterior \(p\) is a computationally intractable task. Mean-field variational inference, one of the most popular approximate inference techniques, is employed here for approximation:
\[p\left(z_{1},\cdots,z_{n}\mid\mathbf{X}\right)\approx\prod_{i=1}^{n}q_{i} \left(z_{i}\right). \tag{1}\]
By minimizing the free energy between the approximate posterior \(q\) and the true posterior \(p\), we obtain the optimal distribution \(q^{*}\). It's hard to find a direct solution to this optimization problem. [20] shows that the above optimization problem satisfies the following fixed point equations:
\[\log q_{i}^{*}\left(z_{i}\right)=c+\log\left(\phi\left(z_{i},x_{i}\right) \right)+\sum_{j\in\mathcal{N}(i)}\sum_{z_{j}}q_{j}^{*}\left(z_{j}\right)\log \left(\psi\left(z_{i},z_{j}\right)\right),\]
where \(c\) is a constant, \(\psi\) and \(\phi\) are non-negative potential functions. The above equation exhibits the relationship between marginals of adjacent nodes, and we can obtain its iterative
form solution:
\[\log q_{i}^{(l+1)}\left(z_{i}\right)= c_{i}^{(l)}+\log\left(\phi\left(z_{i},x_{i}\right)\right)\] \[+\sum_{j\in\mathcal{N}(i)}\sum_{z_{j}}q_{j}^{(l)}\left(z_{j}\right) \log\left(\psi\left(z_{i},z_{j}\right)\right), \tag{2}\]
where \(q_{i}^{*}(z_{i})=\underset{l\rightarrow\infty}{\lim}q^{l}(z_{i})\). We can abbreviate Eq. (2) as:
\[q_{i}^{(l+1)}\left(z_{i}\right)=\mathcal{F}_{i}\left(z_{i},q_{j}^{(l)}\left(z _{j}\right)\right),\quad j\in\mathcal{N}(i). \tag{3}\]
Here \(\mathcal{F}_{i}(\cdot)\) is a function determined by the potential functions. Note that these potential functions are defined on the clique where node \(i\) is located. Eq. (3) establishes an iterative update formula to aggregate the neighbor information. And [1] further mapped it to a high-dimensional space with the help of Hilbert space embedding, thus linking the high-dimensional data and the low-dimensional structure. They map distributions \(q(z)\) into Hilbert space using some injective functions \(\phi_{\alpha}\). As a result, we can obtain the embedding \(\mu_{z_{i}}\) in Hilbert space by \(\mu_{z_{i}}:=\int_{z_{i}}\phi_{\alpha}(z_{i})q_{i}(z_{i})dz_{i}\).
**Theorem 1**.: _[_1_]_ _Given a finite-dimensional feature map \(\phi\) that maps \(p(x)\) to \(\mu\), if \(\phi\) is injective, then any function that applies on p(x) is equivalent to computing a corresponding function on \(\mu\)._
According to Theorem 1, we can denote the function corresponding to \(\mathcal{F}_{i}\) in Eq. (3) after the kernel embedding as \(\tilde{\mathcal{F}}_{\alpha}\). For the recursive expression of \(q_{i}\) (Eq. (3)), we have the following iterative formula after Hilbert space embedding:
\[u_{i}^{(l)}=\tilde{\mathcal{F}}_{\alpha,i}\left(u_{j}^{(l-1)},j\in\mathcal{N }(i)\right). \tag{4}\]
In order to have a more straightforward distinction, we use \(\ell\) to denote the number of network layers in graph networks and \(l\) to represent the number of iterations.
Following the breakthrough work of [1], several papers put forward further discussions. [1] find the theoretical connection between message-passing GNN and mean-field variational inference. This inspires us to apply the idea to analyzing GNNs and explore it further. For simplicity of derivation, we start from the assumption that \(u_{i}\) is unidimensional. We denote \(\mathbf{U}_{i}\in\mathbb{R}^{m\times 1},m=\left|\mathcal{N}(i)\right|\), as the vector consisting of the neighbourhoods of node \(i\):
\[\mathbf{U}_{i}^{(l)}=\left[u_{1}^{(l)},u_{2}^{(l)},...,u_{m}^{(l)}\right]^{ \top},\quad m\in\mathcal{N}(i). \tag{5}\]
where \(u_{m}^{(l)}\) is the \(m\)-th entrie of vector \(\mathbf{U}_{i}^{(l)}\). Now we can give the Taylor expansion of Eq. (4) at \(\mathbf{U}_{i}^{(l)}=\mathbf{0}\) as:
\[u_{i}^{(l)}= \tilde{\mathcal{F}}_{\alpha,i}(\mathbf{0})+\tilde{\mathcal{F}}_{ \alpha,i}^{\prime}(\mathbf{0})\mathbf{U}_{i}^{(l-1)}+\frac{1}{2}\mathbf{U}_{i }^{(l-1)^{\top}}\tilde{\mathcal{F}}_{\alpha,i}^{\prime\prime}(\mathbf{0}) \mathbf{U}_{i}^{(l-1)}\] \[+\cdots+\frac{\tilde{\mathcal{F}}_{\alpha,i}^{n}(\mathbf{0})}{n! }\left[\mathbf{U}_{i}^{(l-1)}\right]^{n}+o\left[\mathbf{U}_{i}^{(l-1)}\right]^ {n}. \tag{6}\]
We denote the \(n^{th}\) order derivative by \(\tilde{\mathcal{F}}_{\alpha,i}^{n}\). The value of \(\tilde{\mathcal{F}}_{\alpha,i}^{n}(\mathbf{0})\) is node-specific. Actually, we can already find the connection between the probabilistic graph model and the GNN through Eq. (6). The embedded high-dimensional variables and the iterative message-passing mechanism match perfectly with GNN. To make it more understandable, in the following, we briefly analyze the first-order approximation of Eq. (6):
\[u_{i}^{(l)}\approx\tilde{\mathcal{F}}_{\alpha,i}(\mathbf{0})+\tilde{\mathcal{ F}}_{\alpha,i}^{\prime}(\mathbf{0})\mathbf{U}_{i}^{(l-1)}, \tag{7}\]
where \(\mathbf{U}_{i}^{(l-1)}\) consists of its neighbors and \(u_{i}^{(l)}\) corresponds to the embedding of conditional probability \(q_{i}\) in the Hilbert space. We can find that Eq. (7) satisfies the recursive message-passing mechanism. \(\tilde{\mathcal{F}}_{\alpha,i}^{\prime}(\mathbf{0})\mathbf{U}_{i}^{(l-1)}\) is an inner product term, which aggregates neighbors \(u_{j}^{(l-1)}\) (including \(u_{i}^{(l-1)}\)) after multiplying by the coefficients:
\[u_{i}^{(l)}\approx\tilde{\mathcal{F}}_{\alpha,i}(\mathbf{0})+\sum_{j\in \mathcal{N}(i)}f_{ij}\cdot u_{j}^{(l-1)}, \tag{8}\]
where \([f_{ij},j\in\mathcal{N}(i)]^{\top}\triangleq\tilde{\mathcal{F}}_{\alpha,i}^{ \prime}(\mathbf{0})\). This form is exactly consistent with the neural message-passing mechanism of GNN.
## 4 A Unified Framework
In the previous section, we elaborate message-passing mechanism of GNN from the perspective of inference on PGMs. According to Eq. (8), we can find that the vanilla GNN is equivalent to the first-order approximation of Eq. (6). In this section, we establish a unified framework to interpret existing GNN baselines and some well-designed deep GNN variants.
The formula of variational inference after embedding into Hilbert space exhibits a close connection with the mechanism of GNNs. Eq. (4) aggregates neighborhood variables and updates its own variable, which is the same as the key core of neural message-passing. This implies that different GNN propagation mechanisms are different approximations of the function \(\tilde{\mathcal{F}}_{\alpha,i}\). With the help of Taylor expansion Eq. (6), we can understand this framework more clearly. In the following, we theoretically prove that the propagation mechanisms of some popular GNN variants can be seen as fitting different order approximations of Eq. (6).
### Interpreting GCN and SGC
Graph Convolutional Network (GCN) is a popular baseline GNN inspired by spectral graph convolutions. We show the definition of vanilla graph convolution [12] with bias as follows:
\[h_{i}^{(\ell)}=\sigma\left(b_{i}^{(\ell)}+\sum_{j\in\mathcal{N}(i)}\frac{w_{j}^ {(\ell)}}{\sqrt{d_{i}d_{j}}}h_{j}^{(\ell-1)}\right). \tag{9}\]
A \(k\)-layer GCN can aggregate the messages of \(k\)-hop neighbors. Simplifying Graph Convolutional Network (SGC) removes nonlinearity activation in GCN while doing propagations. We will discuss the effect of nonlinearity activation in Section 4.5. Here we assume that nonlinearity activations do not significantly affect the network and analyze the two similar models, SGC and GCN, together.
As a basic message-passing neural network, GCN naturally satisfies the form of Eq. (8). To be specific, \(\tilde{\mathcal{F}}_{\alpha,i}\) is determined by the potential functions defined on the clique, but since the kernel mapping \(\phi_{\alpha}\) is undetermined, the learnable weights \(w\) coming with clique-related constants \(d_{i},d_{j}\) are used to approximate it. On the other hand, the bias in Eq. (9) corresponds to the constant term in Eq. (8). The specific equivalence is as follows:
\[h_{i}^{(\ell)}\Rightarrow u_{i}^{(l)},\quad b_{i}\Rightarrow\tilde{\mathcal{ F}}_{\alpha,i}(\mathbf{0}),\quad\frac{w_{j}^{(\ell)}}{\sqrt{d_{i}d_{j}}} \Rightarrow f_{ij}.\]
The aggregation layer of the GCN is inherently connected to the first-order approximation of Eq. (6). Through the analysis in Section 5.1, we will find that such a propagation process will deteriorate the performance after the network structure is deepened.
### Interpreting GAT
Graph attention network (GAT) [20] applies a popular attention mechanism to the aggregation layer. It uses attention weights to weigh aggregate neighbor messages. The ways to parameterize the attention weights \(\alpha\) can be dot product or feedforward neural network. The layers of the GAT-styled model can be represented as follows:
\[\begin{split} h_{i}^{(\ell)}=\sigma\left(\sum_{j\in\mathcal{N}( i)}\alpha_{ij}w_{j}h_{j}^{(\ell-1)}\right),\\ \alpha_{ij}=\text{Softmax}\left(f(h_{i}^{(\ell-1)},h_{j}^{(\ell -1)})\right),\end{split} \tag{10}\]
where \(\alpha_{ij}\) denotes the attention weight on neighbor \(j\in\mathcal{N}(i)\) when we are aggregating information at node \(v_{i}\), and \(f(\cdot)\) denotes a learnable attention weight calculation function.
We notice that the node aggregation process of GCN and GAT is quite similar. Compared with GCN, which uses the degree of nodes \(d\) for weighted aggregation, GAT uses attention weights \(\alpha\) instead. So its equivalence with Eq. (8) is:
\[\alpha_{ij}w_{j}^{\ell}\approx\left.\frac{\partial\tilde{\mathcal{F}}_{\alpha, i}(\mathbf{U}_{i})}{\partial u_{j}}\right|_{\mathbf{U}_{i}=0}. \tag{11}\]
The attention mechanism has a greater computational cost, but its attention coefficients can better fit \(\tilde{\mathcal{F}}_{\alpha,i}\) than GCN. This is because the value of the function \(\tilde{\mathcal{F}}_{\alpha,i}\) is determined by the node \(v_{i}\) and its neighbors. GAT can learn the features of the clique according to the node vector and its neighbors, while GCN can only use the node degree. The experimental results in Section 6 also verified this conclusion.
Both GCN and GAT are derived based on the first-order Taylor expansion, so they have performance degradation problems. There are several existing works on this issue, and their network architectures can alleviate the performance degradation problem of deep networks to some extent. In the following, we will interpret them from the perspective of PGM representation and analyze why they can achieve better results.
### Interpreting APPNP and GCNII
In the previous section, we elaborated on the classical GNNs under our framework. Intuitively, we have two ways to design a more powerful GNN. One is to find a more precise technique to fit \(\tilde{\mathcal{F}}_{\alpha,i}\), and the other is to perform message propagation based on the higher-order expansion of Eq. (6). Indeed, the propagation process of APPNP and GCNII can be regarded as the generalized form of second-order expansion. To prove this, we first briefly present two models.
APPNP [1] is a simple GNN originated from Personalized PageRank. It can be considered as the linear combination of the initial features and the current layer:
\[\mathbf{H}^{(\ell+1)}=(1-\alpha)\hat{\mathbf{A}}\mathbf{H}^{(\ell)}+\alpha \mathbf{H}^{(0)}. \tag{12}\]
GCNII [10] and APPNP explore the improvement of GCN from different perspectives. Still, the feature propagation of GCNII can be regarded as introducing identity mapping and a trainable weight matrix based on APPNP. The network propagation of GCNII is:
\[\begin{split}\mathbf{H}^{(\ell+1)}=\sigma\left(\left((1-\alpha) \,\hat{\mathbf{A}}\mathbf{H}^{(\ell)}+\alpha\mathbf{H}^{(0)}\right)\right.\\ \left.\left((1-\beta)\,\mathbf{I}_{n}+\beta\mathbf{W}^{(\ell)} \right)\right).\end{split} \tag{13}\]
The identity mapping in GCNII model helps assure that it performs at least as well as a shallow network. While in the theoretical analysis, we do not need to consider network optimization. In this case, we can reformulate the aggregation formula of GCNII as:
\[\mathbf{H}^{(\ell+1)}=\sigma\left(\hat{\mathbf{A}}\mathbf{H}^{(\ell)}\mathbf{ W}^{(\ell_{1})}+\mathbf{H}^{(0)}\mathbf{W}^{(\ell_{2})}\right). \tag{14}\]
When we adopt \(\beta=0\) in GCNII, we can find out that GCNII reduces to APPNP [1]. In other words, APPNP can be seen as a degenerate form of GCNII. Next, we will only analyze the propagation process of GCNII, and it is straightforward to modify the proof to obtain a nearly equivalent result for the APPNP.
**Proposition 1**.: _The propagation process of GCNII is equivalent to the second-order expansion of Eq. (6), where the initial residual corresponds to the approximation of the quadratic term._
Proof.: We start with the second-order Taylor expansion:
\[\begin{split} u_{i}^{(l)}&\approx\tilde{\mathcal{F}}_ {\alpha,i}(\mathbf{0})+\tilde{\mathcal{F}}_{\alpha,i}^{\prime}(\mathbf{0}) \mathbf{U}_{i}^{(l-1)}\\ &+\frac{1}{2}\mathbf{U}_{i}^{(l-1)^{\top}}\tilde{\mathcal{F}}_{ \alpha,i}^{\prime\prime}(\mathbf{0})\mathbf{U}_{i}^{(l-1)}+o\left[\mathbf{U}_ {i}^{(l-1)}\right]^{2}.\end{split} \tag{15}\]
We cannot directly express the quadratic term in the form of a graph convolution. To circumvent this problem, we approximate it with a linear term of the node features \(\mathbf{U}_{i}\). We know from Eq. (5) that \(\mathbf{U}_{i}\) is a tensor consisting of the features of node \(v_{i}\) and its neighbors. So we can rewrite the quadratic term as:
\[\frac{1}{2}\mathbf{U}_{i}^{(l-1)^{\top}}\tilde{\mathcal{F}}_{\alpha,i}^{\prime \prime}(\mathbf{0})\mathbf{U}_{i}^{(l-1)}=f_{1}\left(u_{i}^{(0)}\right)+\sum_{j \in\mathcal{N}(i)}f_{2}\left(u_{j}^{(l-1)}\right), \tag{16}\]
where \(f_{1}\) and \(f_{2}\) denote different transformations. Hence Eq. (15) is deduced as the following form:
\[u_{i}^{(l)}\approx\tilde{\mathcal{F}}_{\alpha,i}(\mathbf{0})+\sum_{j\in\mathcal{ N}(i)}g_{ij}\cdot u_{j}^{(l-1)}+f_{1}\left(u_{i}^{(0)}\right), \tag{17}\]
where we only keep the linear terms about \(u_{i}\) and \(u_{j}\), and \(g_{ij}\) represent the parameters. We represent Eq. (15) as the first-order expansion plus the initial residual, which is consistent with the ideas of GCNII and APPNP. The detailed proof is deferred to the appendix.
With this connection, one can show that the propagation process of GCNII is equivalent to the approximate form of the second-order expansion of Eq. (6). This means that our framework based on PGM representation can well explain this deep GNN model.
### Interpreting JKNet and DGCN
Jumping Knowledge Network (JKNet) and Decoupled Graph Convolutional Network (DGCN) are well-designed deep GNNs, and we connect their propagation operations in our framework using a similar pattern.
The JKNet proposed by [20] combines all previous representations at the last layer. The specific combination mechanisms can be max-pooling, concatenation, and LSTM-attention. For convenience, as is similar to [15], we omit the non-linear activation \(\sigma(\cdot)\). We denote \(\mathbf{Z}\) as the final output of the model and take the attention mechanism as an example of combination process for analyzing. The attention-JKNet can be described as:
\[\mathbf{Z}=\sum_{\ell=1}^{L}\alpha_{\ell}\hat{\mathbf{A}}^{\ell}\mathbf{X} \mathbf{W}^{\ell}, \tag{18}\]
where \(\alpha_{\ell}\) are learnable attention weights and \(\sum_{\ell=1}^{L}\alpha_{\ell}=1\). DGCN adopts a representation ensemble similar to JKNet, and uses a transformation process in each layer similar to GCNII :
\[\mathbf{Z}=\sum_{\ell=1}^{L}\alpha_{\ell}\hat{\mathbf{A}}^{\ell}\mathbf{X} \left(\beta_{\ell}\mathbf{W}^{(\ell)}+\left(1-\beta_{\ell}\right)\mathbf{I}_{ n}\right). \tag{19}\]
\(\alpha_{\ell}\) and \(\beta_{\ell}\) are trainable weights, \(\mathbf{I}_{n}\) is the identity mapping. DGCN ensures that the performance of the deep network is at least not worse than the shallow network by adding identity mapping. In fact, its propagation process is the same as JKNet.
For this kind of operation that ensemble the representations of \(k\) layers, it corresponds to the \(k\)-th expansion of Eq. (6):
\[u_{i}^{(l)}=\sum_{k=1}^{K}\frac{\tilde{\mathcal{F}}_{\alpha,i}^{k}(\mathbf{0} )}{k!}\left[\mathbf{U}_{i}^{(l-1)}\right]^{k}, \tag{20}\]
\[\frac{\tilde{\mathcal{F}}_{\alpha,i}^{k}(\mathbf{0})}{k!}\left[\mathbf{U}_{i }^{(l-1)}\right]^{k}\Rightarrow\alpha_{k}\hat{\mathbf{A}}^{k}\mathbf{X} \mathbf{W}^{k}.\]
Since the \(k\)-th order term of the expansion will contain high-dimensional tensors, we do not show the specific correspondence here. But we can know that the \(k\)-th term in Eq. (6) can aggregate k-hop information, and the coefficient items \(\frac{\tilde{\mathcal{F}}_{\alpha,i}^{k}(\mathbf{0})}{k!}\) are fitted by the trainable weight matrix.
In section 4.3, we proved that the transformations and identity mappings of GCNII can effectively reduce the error, so DGCN that applies a similar transformations method will theoretically perform better than JKNet. The experimental results also verify our presumptions.
### Discussion on Non-linearity
In the above subsections, we did not pay attention to non-linearity when analyzing these models. Here we discuss the role of activation functions in our framework. Recall what we stated in section 3, when we embed the formula of the obtained iterative solution into the Hilbert space, the mapping function needs to be injective. In [20], the authors discussed the graph convolution and proved its injectivity. This implies that adding an activation function (ReLU) satisfies our assumption. On the other hand, our network parameters need to fit the function \(\tilde{\mathcal{F}}\), and the non-linear activation can help the network better fit the non-linear functions.
## 5 On Designing Deep GNNs
In section 4, we obtained a unified framework that establishes the connection between the GNN models and the Taylor expansions of Eq. (4). However, the popular message-passing GNNs, such as GCN and GAT, are known to suffer from the performance degradation problem, which limits the deepening of their network structure. In this section, we use our framework to understand the performance degradation problem and explore a methodology to improve GNNs.
### Performance Degradation Problem
Several works study this performance degradation phenomenon from different perspectives, including generalization capability, low pass filtering on graphs, optimization of GNN, etc. Here we use the relationship between the GNN and Eq. (6) to analyze the reasons for the performance degradation of the classical GNNs.
We know that after enough iterations of Eq. (3), \(q^{l}\) will converge to the optimal solution \(q^{*}\). However, the propagation formula of GNN is not identical to Eq. (6) (Hilbert space embedding of Eq. (3)), but more similar to its first-order approximation. This means each layer of the GNN introduces an extra noise \(\epsilon^{(l)}\) when updating \(q^{(l)}\):
\[\hat{q_{i}}^{(l+1)}\left(z_{i}\right)+\epsilon_{i}^{(l+1)}=F_{i}\left(z_{i}, \hat{q_{j}}^{(l)}\left(z_{j}\right)\right),\quad j\in\mathcal{N}(i), \tag{21}\]
where \(F_{i}\) denotes the approximate form of Eq. (3), and \(\hat{q_{i}}\) denotes the iterative result1 of the GNNs. As the network deepens (the number of iterations increases), the convergence rate of \(q^{(l)}\) gradually slows down, but the accumulation of errors \(\epsilon^{(l)}\) increases. Therefore, the performance of baseline GNNs degrades greatly when stacking more layers. For concreteness, we limit our discussion here. We refer readers to the appendix for a more detailed description.
Footnote 1: Technically, the iteration of the GNN acts on the embedding vector of \(q_{i}\), here we simply replace it with \(q_{i}\).
### Coupling Graph Neural Network
Based on the established framework, we have some approaches to improve the performance of the GNN model from a high-level view. Using GNN for semi-supervised node classification can be regarded as a marginal distribution estimation of latent variables on PGMs. Thus, we can improve GNN from the perspective of the iterative solution of PGM. A deeper network corresponds to more iterations in Eq. (4); hence it can obtain better results. But this still requires the propagation process of the network to fit Eq. (6) as closely as possible. The approximation to Eq. (6) will introduce errors, causing performance degradation after the network reaches a certain depth.
Intuitively, Eq. (6) can be considered as the upper bound of the performance that variants of GNN can achieve. However, the computational cost of the propagation process and network structure which completely follows Eq. (6), is unacceptable. To this end, We need to design a computationally efficient approximation of Eq. (6) with as little error as possible. We propose a novel GNN model, Coupling Graph Neural Network (CoGNet), based on our established framework. Formally, we define the \(\ell\)-th layer of CoGNet as:
\[\begin{split}&\mathbf{P}^{(\ell+1)}=\mathcal{G}(\mathbf{H}^{ \ell},\mathbf{H}^{\ell-1})(\lambda_{\ell}\mathbf{W}_{\ell}+(1-\lambda_{\ell} )\mathbf{I}_{n}),\\ &\mathcal{G}(\mathbf{H}^{\ell},\mathbf{H}^{\ell-1})=\gamma_{\ell }\hat{\mathbf{A}}\mathbf{H}^{(\ell)}+(1-\gamma_{\ell})\mathbf{H}^{(\ell-1)}, \end{split} \tag{22}\]
where \(\lambda\) and \(\gamma\) are layer-specific learnable weights, and \(\mathbf{H}^{(\ell+1)}=\text{ReLU}(\mathbf{P}^{(\ell+1)})\).
In fact, the propagation process of CoGNet is equivalent to the second-order Taylor expansion of Eq. (4), but it is more accurate than GCNII. See the appendix for more detailed proof. We use the coupling of the two representations \(\mathcal{G}(\mathbf{H}^{\ell},\mathbf{H}^{\ell-1})\) as an approximation to Eq. (6), which reduces computational cost with small approximation errors. Note that in deep network layers, we use the initial representation as the coupling term. On the other hand, the learnable weights \(\lambda\) and \(\gamma\) enable the network to fit better the propagation process of Eq. (6). In addition, following the idea of GCNII, we also introduce identity mapping to enhance the deep network performance further.
The propagation process of CoGNet, compared with the vanilla GCN, introduces additional storage costs. Therefore, we follow the idea of the Reversible Transformer [14] and design reversible coupling, which recovers the representation of the previous layer in the backward pass without the need for checkpoints. For the large graph input, the reversible coupling can effectively reduce the model's memory requirements.
## 6 Experiment
In this section, we conduct extensive semi-supervised node classification experiments to evaluate CoGNet. We test the performance of CoGNet on the citation networks and natural language processing (NLP) downstream tasks.
### Citation Networks
#### 6.1.1 Dataset and Baselines
We use three standard benchmark citation datasets: Cora, Citeseer, and Pubmed for semi-supervised node classification. Their statistics are summarized in the appendix. We use the same fixed training/validation/testing split as [12] on three datasets. We evaluate our model against the following baselines:
* Graph convolutions: Chebyshev [13], GCN [11], GAT [15], SGC [20], APPNP [16], Graph U-Net [17].
* Deep GNNs: JKNet [21], GCNII [2], DAGNN [18], DGCN [2].
* Regularized GNN: Dropedge [14], GraphMix [23], GRAND [15]. Same as [15], we report the results of these methods with GCN as the backbone.
We use CoGNet-S to denote a shallow CoGNet with less than 4 layers. We further propose the CoGNet++ model, which applies two strategies, Consistency Regularization [15] and Dropedge [14], on CoGNet.
#### 6.1.2 Experiment Results
We summarize the performance on citation datasets in Table 1. We run CoGNet 100 times and report the mean and standard deviation of these 100 runs.
The upper part of Table 1 shows the variants of GNN, which are all shallow models. We can observe that CoGNet-S outperforms most baseline models. Specifically, the accuracy of the CoGNet compared with GCN and GAT has been greatly improved. In addition, we also compare the performance of popular deep GNN models, such as JKNet, GCNII, DAGNN, etc. We observe CoGNet outperforms all baseline deep GNN models. On the other hand, the deep CoGNet has a significant improvement over its shallow model, indicating that it can effectively alleviate the performance degradation problem of GNNs.
To better compare the performance of CoGNet and other deep GNNs, we plot the performance of the deep models with various numbers of layers in Fig. 1. We can observe that for the same number of network layers, CoGNet performs best in most cases. This result verifies the superiority of the propagation process of our model. It is worth noting that for the performance of shallow layers, models such as JKNet and GCNII have no advantage over GCN, but CoGNet can achieve better results than them in shallow networks.
We also apply the popular regularization strategy on graph-based models to our model, proposing CoGNet++. We can observe that it achieves state-of-the-art performance on Cora and Pubmed datasets, and it also achieves competitive results on the Citeseer dataset.
### NLP Tasks
Many NLP problems can be naturally expressed as graph structures, such as constructing sentences as graphs with words as nodes and constructing text as graphs composed of
paragraphs and entities. Therefore, neural graph-based models have emerged in recent years and have received increasing attention. However, shallow networks limit their performance. For example, in multi-hop Question Answering (QA), the two-layer graph convolutional network makes the nodes only access two-hop information and invisible for long-range knowledge. We only need to replace the GNN in the model [23, 24] with the CoGNet, which makes the network acquire a wider node perception field while deepening network layers and thus reach better results. We conduct experiments to test the performance of CoGNet on two NLP tasks, including text classification and multi-hop QA. Specific parameters of the experiment and more detailed illustrations refer to the appendix.
### Datasets
To thoroughly compare our model with the GCN, we test it on five widely used datasets of text classification: 20NG, MR, Ohsumed, R8, and R52. Their statistics are summarized in the appendix. HotpotQA is a dataset for explainable multi-hop QA.
#### Text Classification
Text classification is a fundamental task in NLP. [23] constructs a corpus-level heterogeneous graph containing document nodes and word nodes and uses a 2-layer GCN for classification. The results in Table 2 demonstrate a significant improvement of CoGNet over GCN and SGC in text classification.
#### Multi-hop QA
Multi-hop text-based QA is a popular topic in NLP. Many models explicitly find the inference path of the question with the help of graphs. For this case, multi-layer GNNs can obtain multi-hop information to assist inference. DFGN [24] constructs a graph of the entities mentioned in the query and then dynamically combines them with the help of a GAT. The two-layer GAT in DFGN can only reach two hops of information. Deepening the network depth can obtain multi-hop information, but the accuracy of downstream tasks is affected by the deep GNN. Table 3 reveals the test performance on the Hotpot QA dataset. We can observe that GCN has the worst performance, while shallow CoGNet (CoGNet-S) has a significant improvement over DFGN (GAT). On the other hand, deepening the CoGNet can achieve better results.
## 7 Conclusion and Future Work
In this paper, we developed a unified theoretical framework to understand the GNN baselines as well as various deep
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Model & Cora & Citeseer & Pubmed \\ \hline Chebyshev & 81.2 & 69.8 & 74.4 \\ GCN & 81.4 & 70.9 & 79.1 \\ GAT & 83.3 & 72.6 & 78.5 \\ SGC & 81.0 & 71.9 & 78.9 \\ APPNP & 83.3 & 71.8 & 80.1 \\ Graph U-Net & 84.4 & 73.2 & 79.6 \\ \hline JKNet & 81.1 & 69.8 & 78.1 \\ GCNII & 85.5 & 73.4 & 80.2 \\ DAGNN & 84.4 & 73.3 & 80.5 \\ DGCN & 84.8 & 72.7 & 80.0 \\ \hline Dropedge & 82.8 & 72.3 & 79.6 \\ GraphMix & 83.9 & 74.5 & 81.0 \\ GRAND & 85.4 & \(\mathbf{75.4}\) & 82.7 \\ \hline CoGNet-S & \(84.4\pm 0.7\) & \(72.4\pm 0.6\) & \(79.6\pm 0.9\) \\ CoGNet & \(85.6\pm 0.5\) & \(73.7\pm 0.7\) & \(80.5\pm 0.5\) \\ CoGNet++ & \(\mathbf{86.5\pm 0.3}\) & \(75.2\pm 0.4\) & \(\mathbf{82.9\pm 0.4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of semi-supervised classification accuracy results on Cora, Citeseer, and Pubmed.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Model & 20NG & R8 & R52 & Ohsumed & MR \\ \hline TextGCN & 86.3 & 97.1 & 93.5 & 68.4 & 76.8 \\ +SGC & \(\mathbf{88.5}\) & 97.2 & 94.0 & 68.5 & 75.9 \\ +CoGNet & \(88.3\) & \(\mathbf{97.4}\) & \(\mathbf{94.1}\) & \(\mathbf{69.1}\) & \(\mathbf{77.2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of test accuracy on text classification.
Figure 1: Semi-supervised node classification performance with various depths from 4 to 32.
GNN models using a graphical model representation. Specifically, we obtained an iterative solution of variational inference on Markov random fields, and the propagation operation of GNNs can be represented as approximate forms of it. We also proposed a theoretically motivated and powerful GNN which performs well on both shallow and deep network layers. An interesting direction for future work is to establish the connection between the approximate sampling methods, and the graph neural network to pursue a faster and more powerful sample-based GNN [3, 10, 11]. To complete the picture, understanding and improving the general GNN with the help of other variational methods would also be interesting.
|
2302.11378 | Quantifying the common genetic variability of bacterial traits | The study of common heritability, or co-heritability, among multiple traits
has been widely established in quantitative and molecular genetics. However, in
bacteria, genome-based estimation of heritability has only been considered very
recently and no methods are currently available for considering
co-heritability. Here we introduce such a method and demonstrate its usefulness
by multi-trait analyses of the three major human pathogens \textit{Escherichia
coli}, \textit{Neisseria gonorrhoeae} and \textit{Streprococcus pneumoniae}. We
anticipate that the increased availability of high-throughput genomic and
phenotypic screens of bacterial populations will spawn ample future
opportunities to understand the common molecular basis of different traits in
bacteria. | T. Tien Mai, Gerry Tonkin-Hill, John A. Lees, Jukka Corander | 2023-02-22T13:52:40Z | http://arxiv.org/abs/2302.11378v1 | # Quantifying the common genetic variability of bacterial traits
###### Abstract
The study of common heritability, or co-heritability, among multiple traits has been widely established in quantitative and molecular genetics. However, in bacteria, genome-based estimation of heritability has only been considered very recently and no methods are currently available for considering co-heritability. Here we introduce such a method and demonstrate its usefulness by multi-trait analyses of the three major human pathogens _Escherichia coli_, _Neisseria gonorrhoeae_ and _Streproococcus pneumoniae_. We anticipate that the increased availability of high-throughput genomic and phenotypic screens of bacterial populations will spawn ample future opportunities to understand the common molecular basis of different traits in bacteria.
\({}^{(1)}\) Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
\({}^{(2)}\) Parasites and Microbes, Wellcome Sanger Institute, Cambridgeshire, UK.
\({}^{(3)}\) Oslo Centre for Biostatistics and Epidemiology, Department of Biostatistics, University of Oslo, Norway.
\({}^{(4)}\) MRC Centre for Global Infectious Disease Analysis, Department of Infectious Disease Epidemiology, Imperial College, UK.
\({}^{(5)}\) Department of Mathematics and Statistics, University of Helsinki, Finland.
\({}^{(6)}\)European Molecular Biology Laboratory, European Bioinformatics Institute EMBL-EBI, Hinxton, UK.
email: [email protected]
Keywords: Co-heritability; Heritability; antimicrobial resistance; genetic relatedness; invasiveness; carriage duration.
## 1 Introduction
Understanding heritability of traits is one of the cornerstones of both quantitative and molecular genetics, representing a rich history that spans more than a century of research (Lynch and Walsh, 1998), (Burger, 2000). More recently, the genetic relationships between different traits have received substantial attention in human molecular genetics, for example leading to the identification of important relationships such as the one between height and coronary artery disease, as well as between ulcerative colitis and childhood obesity (Bulik-Sullivan et al., 2015). While methods for genome-wide association studies (GWAS) originally developed in human genetics have been successfully adapted to the field of bacterial genomics (Lees et al., 2016; Collins and Didelot, 2018; Earle et al., 2016), genome-wide estimation of heritability in bacteria has only more recently received attention (Lees et al., 2017; Mai et al., 2021). Thus far, identification of genetic correlations between traits in bacteria has received little attention. Hence, there is an opportunity to broaden our understanding of the genetic correlations of important bacterial traits, including antibiotic resistance, transmission rate, carriage duration, virulence and gene expression, to name a few.
Compared to studies in human genetics, genetic association studies in bacteria present unique methodological challenges due in part to the extensive variation in gene content and sequence diversity even within members of the same species (Tettelin et al., 2005). The clonal reproduction mode of bacteria further implies that linkage disequilibrium (LD) typically stretches much further than in sexually reproducing species and consequently many genomic regions may be associated with any given trait (Chen and Shapiro, 2015).
These issues have been addressed through the use of short reference independent sequence features as a replacement for the SNP-based analyses common in human genetics (Lees et al., 2016; Jaillard et al., 2018). Simulations, linear mixed models (LMMs) and penalised multiple regression have also been used to account for the highly clonal nature of bacterial genome datasets (Lees et al., 2016; Collins and Didelot, 2018; Lees et al., 2020). Lineage level effects are often reported when very high LD obscures locus-specific associations (Earle et al., 2016).
These advances have led to significant discoveries between genetic variation in bacteria and important traits such as antibiotic resistance (Earle et al., 2016), carriage duration (Lees et al., 2017) and virulence (Chaguza et al., 2020; Weinert et al., 2015). However, investigations of genetic correlations between such traits have so far mostly been restricted to broader scale comparisons of lineages, as exemplified by the identification of an association between antimicrobial resistance and carriage duration in _Streptococcus pneumoniae_(Lehtinen et al., 2017).
There exists a large body of literature for investigating genetic correlations between traits in humans and in animals in general. These include those based on Mendelian randomisation, which focuses on investigating previous variants found to be significantly associated with a phenotype, to investigate causal relationships between risk factors and disease. Other approaches include restricted maximum likelihood (REML) and polygenic scores which typically make use of linear mixed models (LMMs) to account for all observed SNPs. A comprehensive discussion of these approaches can be found in Van Rheenen et al., (van Rheenen et al., 2019). For many complex human traits such as height, heritability is spread over many sites with each contributing a small effect (Yang et al., 2010; Speed et al., 2017; Speed and Balding, 2019). Conversely, in bacteria, many phenotypes of interest are the result of a small number of variants with larger effect sizes, with a notable example being the resistance conferring mutations in the penicillin binding proteins of _S. pneumoniae_(Chewapreecha et al., 2014). For this reason, in bacterial genomics, penalised regression techniques present an attractive alternative to the more common LMM approach as it has been shown to better account for the clonal population structure of bacteria (Saber and Shapiro, 2020; Lees et al., 2020).
Here, we develop a high-dimensional penalised linear regression model to investigate genetic correlations between bacterial phenotypes. We demonstrate the accuracy and advantages of the approach by investigating the association between resistance to different antibiotic classes, carriage duration and virulence in a selection of _Streptococcus pneumoniae_, _Escherichia coli_ and _Neisseria gonorrhoeae_ data sets.
## 2 Results
### Overview
To investigate the genetic covariance and coheritability between two phenotypic traits of interest, we built off of previous work which used elastic net regression to generate accurate phenotypic prediction models and more accurately estimated heritability in related populations (Lees et al., 2020). Here, we model each phenotype as a linear combination of genetic variants with independent error terms to account for environmental and unmeasured genetic effects. The genetic covariance between the two traits is then obtained by subtracting the covariance of the inferred error terms from the covariance between the two traits of interest (Methods). This can then be standardised to obtain the genetic correlation with the resulting scale being between -1 indicating perfect negative correlation and 1 indicating perfect positive correlation between the estimated genetic effects. Similar to previous studies in human genetics, in addition to genetic correlation, we also consider the covariance of the causal effects (Shi et al., 2017; Bulik-Sullivan et al., 2015). This allows for the separation of causal correlations from those that may be driven by linkage disequilibrium (LD) and is particularly important in analysing highly clonal datasets such as those commonly found in bacterial genomics. Using this approach, we examined the genetic correlations between resistance to different
antibiotics across a number of bacterial species as well as virulence and carriage duration in _Streptococcus pneumoniae_.
Antibiotic resistance, virulence and carriage duration are genetically correlated in _Streptococcus pneumoniae_
To examine the ability of our approach to identify meaningful genetic correlations between bacterial phenotypes, we initially considered two large _S. pneumoniae_ datasets. The Maela dataset consists of 3069 whole genomes taken from an infant cohort study in a refugee camp on the Thailand-Myanmar border (Chewapreecha et al., 2014). This dataset has been used in genome wide association studies to identify genetic loci associated with antibiotic resistance pneumococcal carriage duration (Lees et al., 2017, Chewapreecha et al., 2014). Using our approach, we investigated the genetic correlation between the resistance to five antibiotics and carriage duration. Figure 1 indicates that pneumococcal carriage duration is positively correlated with resistance particularly in the case of Erythromycin. We found a similar correlation in the cases of Penicillin, Tetracycline and co-trimoxazole although there was a higher degree of uncertainty in these estimates. This is consistent with a recent study that investigated the association between resistance and carriage duration in the Maela dataset (Lehtinen et al., 2017). Interestingly, we found that there appears to be no covariance between the causal effects of carriage duration and resistance to any of the antibiotics considered. While laboratory studies have shown that resistance to antibiotics leads to reduced growth rates in _S. pneumoniae_(Rozen et al., 2007, Trzcinski et al., 2006) these results suggest that it is not the fitness costs of resistance that drive the association with carriage duration. Rather the fitness benefits of resistance are likely to be greater for those lineages with a longer duration of carriage due to the increased probability that they encounter antimicrobial treatment.
In addition to the correlation with carriage duration, we found that there were strong genetic correlations between resistance to each of the antibiotics considered (Figure 1). In the cases of tetracycline resistance, erythromycincin resistance, and chloramphenicol resistance, we also observed correlations between the causal effects. To investigate whether these correlations were unique to this dataset, we ran the same analysis on a large subset of the Global Pneumococcal Sequencing project consisting of genomes sampled in South Africa for which accurate resistance and virulence phenotypic information was known (Lo et al., 2019, Gladstone et al., 2019). Figure 2 shows that a very similar profile of genetic correlations between resistance to the same antibiotics was present in this dataset, which indicates that our results are consistent across multiple distinct locations. Although resistance to these antibiotics is thought to be mediated by different mechanisms, the corresponding resistance genes can be found in a single cassette ICESp23FST81. This cassette can be inherited both vertically and horizontally and varies in which genes it harbours: _tetM_ for tetracycline; _ermB_ or _mel_ and _mef_ for erythromycincin; _cat_ for chloramphenicol (Croucher et al., 2011). Thus, the causal correlation identified is likely to be driven by the presence and absence of this cassette which is not in strong LD with the rest of the genome.
The South African dataset also included accurate information on which genomes were isolated from invasive disease cases allowing us to consider the genetic correlation between resistance and virulence. We found the virulence was negatively genetically correlated with resistance although there was considerable uncertainty in these estimates. Similar to resistance and carriage duration, we did not observe a strong covariance between the causal effects of virulence and resistance. The anti-correlation between virulence and resistance has been observed previously in isolates of _S. pneumoniae_ from China where resistance was observed at much lower frequencies in isolates taken from children with meningitis (Wang et al., 2019). A similar finding has also been made in _Klebsiella pneumoniae_ where common virulence genes are rarely found together with antimicrobial resistance genes (Holt et al., 2015). A potential explanation for this is that, in contrast to carriage duration, resistance is less beneficial to those pneumococcal lineages that commonly cause invasive disease. In _S. pneumoniae_, these lineages are often rarely found in carriage studies indicating
they may encounter antimicrobial treatment less frequently than the common multidrug resistant lineages (Chaguza et al., 2016, Gladstone et al., 2019).
Figure 1: **Maela** data. Heritability in diagonals, genetic relatedness in the upper diagonals, genetic correlation in lower diagonals.
Figure 2: **Streptococcus pneumoniae data.** Heritability in diagonals, genetic relatedness in the upper diagonals, genetic correlation in lower diagonals.
Genetic correlation between resistance to different antibiotics is observed in multiple bacterial species
We also observed genetic correlations between resistance phenotypes in both _Escherichia coli_ and _Neisseria gonorrhoeae_. Using a dataset of 1509 _Escherichia coli_ isolates taken from an 11 year systematic hospital survey of bacteremia associated isolates in England, we explored whether there were identifiable correlations between the resistance profiles of each isolate (Kallonen et al., 2017). Figure 3 demonstrates that with the exception of Amoxicillin there was a positive genetic correlation between the resistance phenotypes. However, we observed considerable uncertainty in these estimates and thus can not rule out correlations between the other resistance profiles.
We also investigated the genetic correlations between antibiotic resistance phenotypes in 1595 _N. gonorrhoeae_ isolates collected from the USA, Canada and England (Unemo et al., 2016; Grad et al., 2016; Demczuk et al., 2015; De Silva et al., 2016; Schubert et al., 2018). This identified strong positive genetic correlations between the resistance phenotypes to Tetracycline, Ciprofloxacin and Penicillin (Figure 4). Similar to _S. pneumoniae_, _N. gonorrhoeae_ is thought to have different resistance mechanisms to each of these antibiotics. However, unlike in _S. pneumoniae_ we did not find a causal association between the effects of each antibiotic. Rather it is likely that the genetic correlation is driven by the two major modern gonococcal lineages with a multidrug resistance lineage being common in high-risk sexual networks and a multisusceptible lineage often associated with lower risk heterosexual networks (Sanchez-Buso et al., 2019). Strikingly, we also observed a negative genetic correlation between azithromycin and tetracycline or ciprofloxacin albeit with a greater level of uncertainty. This is consistent with the very rare instances of resistance to the common dual therapy (injectable ceftaxone plus oral azithromycin) suggesting that the fitness costs of gaining resistance to azithromycin in addition to certain other antibiotics is sufficiently high to prevent such isolates from proliferating (Sanchez-Buso et al., 2019; Grad et al., 2016).
Figure 3: **Ecoli data**. Heritability in diagonals, genetic covariance in the upper diagonals, genetic correlation in lower diagonals.
## 3 Discussion
The increasingly common use of genome wide association studies in bacterial genomics has led to vast improvements in our understanding of the links between bacterial genomes and important phenotypes such as antibiotic resistance and virulence. Here, we have developed a method to estimate the coheritability between traits in bacterial populations. Building on our earlier work using elastic net regression to fit whole genome models to estimate regression slopes (Lees et al., 2020), and our approaches to more accurately estimate heritability in related populations (Mai et al., 2021), we are able to use the covariance between fitted predictor values to estimate shared genetic contributions between pairs of traits. By looking at causal correlations, we have demonstrated that it is possible to distinguish when two phenotypes are likely to be the result of similar genetic mutations or when population structure and epidemiology could be driving the genetic correlation. This is highlighted by the observed associations between resistance, virulence and carriage duration in _S. pneumoniae_. Here, statistical genetics approaches are particularly powerful as these phenotypes can be difficult to measure in lab models, whereas they can be more readily quantified in the natural population. We show that consistent with their known causal genetic mechanisms, typically point mutations or genes which are not co-localised, these phenotypes are unlikely to be driven by the same genetic features.
In _S. pneumoniae_ we found evidence of co-heritability between tetracycline resistance, erythromycinycin resistance, and chloramphenicol resistance - though the strength and significance of this finding varied between populations. Although the causal elements themselves are not directly coheritable, the molecule they can be found on is inherited both vertically and horizontally. From a more methodological perspective, the SNPs on these elements would typically all be associated with each phenotype, and shared when multiple resistances are present (Lees et al., 2016). The replication of results across distinct pneumococcal datasets in Thailand and South Africa suggests that the method is robust to the impacts of population structure and sampling biases. We also demonstrated weaker genetic correlations between resistance to antibiotics in _N. gonorrhoeae_ and _E. coli_. Unlike in _S. pneumoniae_, these were found to not have causal correlations and are likely to be the result of the significant resistance associated with population structure observed in the
Figure 4: **Neisseria gonorrhoeae data**. Heritability in diagonals, genetic covariance in the upper diagonals, genetic correlation in lower diagonals.
species.
In summary, our method is capable of quantifying the shared genetic variability related to variance in pairs of phenotypes, in closely related bacterial populations. We did not explicitly model LD or relatedness, nor look at accessory genome variation or more complex genetic variation. Additionally, our findings at this point are largely observational. Nevertheless, we believe this will be a useful first step into further research on shared genomic bases between traits in pathogenic bacteria. To enable other researchers to easily conduct similar analyses, we have also provided an accompanying R package on GitHub ([https://github.com/tienmt/coher](https://github.com/tienmt/coher)).
## Acknowledgments
TTM is supported by the Norwegian Research Council grant number 309960 through the Centre for Geophysical Forecasting at NTNU. JAL acknowledges funding from the MRC Centre for Global Infectious Disease Analysis (reference MR/R015600/1), jointly funded by the UK Medical Research Council (MRC) and the UK Foreign, Commonwealth & Development Office (FCDO), under the MRC/FCDO Concordat agreement and is also part of the EDCTP2 programme supported by the European Union.
|
2303.11737 | Uncertainty estimation on frequency shift of Brillouin light scattering | Better knowing the precision on the measured value of the Brillouin peak
frequencies is essen-tial in order to use it to deduce parameters related to
studied materials. Modern methods for evaluating uncertainties are based on the
recommendations of the Guide to the expression of uncertainty in measurement.
After checking the agreement between the measured 15.7 GHz shift on a Brillouin
signal on Poly-(methyl methacrylate) and the expected value, the elementary
un-certainty terms are evaluated in two groups, by statistical methods and by
other means. We de-scribe the general principle of operation of Brillouin light
scattering setup with a high power laser and evaluate the elementary terms of
uncertainty, according to international standards en-acted for metrology. The
global uncertainty on frequency shift is then calculated and we find
+-1.20x10-2 at 2 {\sigma}. | Patrice Salzenstein | 2023-03-21T10:55:49Z | http://arxiv.org/abs/2303.11737v1 | # Uncertainty estimation on frequency shift of Brillouin light scattering
###### Abstract
Better knowing the precision on the measured value of the Brillouin peak frequencies is essential in order to use it to deduce parameters related to studied materials. Modern methods for evaluating uncertainties are based on the recommendations of the Guide to the expression of uncertainty in measurement. After checking the agreement between the measured 15.7 GHz shift on a Brillouin signal on Poly-(methyl methacrylate) and the expected value, the elementary uncertainty terms are evaluated in two groups, by statistical methods and by other means. We describe the general principle of operation of Brillouin light scattering setup with a high power laser and evaluate the elementary terms of uncertainty, according to international standards enacted for metrology. The global uncertainty on frequency shift is then calculated and we find \(\pm\)1.20x10\({}^{-2}\) at 2 \(\sigma\).
## 1 Introduction
Brillouin light scattering (BLS) is gradually gaining popularity in various industrial applications and in laboratories. This work is about BLS and focus on the discussion about the associated uncertainty.
BLS is the inelastic scattering of light through sound waves. BLS is then a good way to study the elastic properties of materials. It is a non-contact, non-destructive method, and relatively easy to implement with appropriate means. We propose here to come back to the main steps, which allow having specific instrumentation in order to estimate the speed of propagation of phononic waves in materials. Of course, we realize that there are necessary aspects of theoretical knowledge. It is not a question here of going back to all the theories in order to understand the matter. This would require going too deeply in a systematical description. To better frame the subject that we are discussing here, we remind about the main contributions to detect sound waves, by brothers Curie [1], and inelastic light scattering in materials thanks to their excitation by sound waves with Brillouin [2]. The whole instrumental aspect has importance as modern benches for Brillouin diffusion need the appropriate technology. Among other necessities, there are interferometry technologies. In addition, such a Brillouin scattering bench needs conventional optical components, and primarily a sufficiently powerful laser.
Stimulated Brillouin scattering is a nonlinear process. It can also occur in optical fibers. It manifests itself concretely through the creation of a backward propagating Stokes wave, which carries most of the input power, once the one reaches Brillouin threshold [3]. The magnetic properties of materials are via their magnetic excitations - magnons, thanks to Brillouin scattering. As with phonons, magnons concern surface and bulk excitations [4]. Indeed, Brillouin inelastic light scattering spectroscopy is widely used for the study of phonons but also magnons in materials. This technique has become an essential tool [5, 6]. It is of course complementary to inelastic light scattering Raman spectroscopy [7]. Kojima shows that those techniques have become essential for studying materials science [8]. Grimsditch and Ramdas have made precise measurements with Brillouin scattering in the early seventies on Diamond [9]. It is also useful to recall the differences between Brillouin scattering and another well-known technique, Raman spectroscopy. The latter type of spectroscopy is used to determine the chemical composition and molecular structure of the transmission medium, while Brillouin scattering can be used to measure the elastic behavior of a material. A more systematic method was implemented based on a tandem Fabry-Perot interferometer by Sandercock
[10] in 1970 and then by Lindsay, Anderson and Sandercock [11], and Dil et al [12]. Hillebrands [13] and Scarponi et al [14] improved corresponding instrumentation techniques.
Our objective is to assess with a consistent metrological approach the uncertainties of BLS. This paper provides an experimental part and the determination of the uncertainties associated with the determination of peaks corresponding to the shift between the frequency of the signal refracted by a material and to the laser serving as an interrogation signal. The knowledge of this value, as well as the parameters of the studied materials, can also provide the value of the speed of the corresponding phononic waves, when intrinsic characteristic data of the evaluated materials are known. To lead the discussion on the uncertainty associated with the BLS, we rely on the standards of metrology.
The work is organized as follows: section 2 presents materials and methods, section 3, results and verification. Section 4 consists in a discussion about the uncertainty.
## 2 Materials and Methods
The analysis mainly consists in a method of detection of the refracted light emitted by a material under test, i. e. the Device under Test (DUT). This material can be isotropic or anisotropic. One of the key of the measure is the Tandem Fabry-Perot double interferometer. Detected peaks are shifted from the wavelength of the laser. Those offset frequencies depend on the properties of the material of the DUT. This paper aims to lead an estimation of the uncertainty obtained on the frequency shift that can lead later to parameters of the material like phase velocity of transverse and longitudinal waves, deduced from BLS. Estimating the uncertainty requires knowing the contribution of the different fixed parameters like the optical index, the wavelength, the diffusion angle, the density of the material, and the longitudinal and shear modulus, but especially fluctuation of the source, mechanical stability of the setup, and environmental parameters in the room. For this uncertainty estimation, we use a similar method like in optics and microwaves based on the requirement delivered by the _Bureau International des Poids et Mesures_ (BIPM) in the guide "Evaluation of measurement data - Guide to the expression of uncertainty in measurement (GUM)" [15].
We will focus in this part on the principle of Brillouin light scattering. BLS using a 532 nm powerful Class 4 laser up to 600 mW is efficient to reveal spin wave or acoustic signals, at frequencies from few Giga Hertz to more than a hundred of Giga Hertz. Fluctuations of refractive index in a medium enables the detection and analysis of laser light scattered, thanks to BLS setup [11, 12]. The double interferometer is visible on Figure 1. The general principle is to send the signal generated by the laser focusing it on the part of the sample that we want to characterize. The photons arrive in the material or in the thin layer and interact with the lattice or more generally with the material.
Figure 1: Photography of the double Fabry-Pérot interferometer.
Light helps to create phonons. These phonons propagate with speeds that may be different depending on whether the mode is transverse or longitudinal. It depends on the nature of the material, as it can be isotropic or anisotropic. The phonons in turn create light, which are shifted in frequency relatively to the wavelength of the laser. The BLS precisely consists in analysis of the refracted light emitted by a material [11, 13].
Tandem Fabry-Perot interferometer produces peaks shifted from the frequency of the laser to characteristic frequencies depending on the material. Figure 2 gives the typical set-up used for the measurement, showing the typical setup (a) and a picture of the system (b).
We calibrated the bench with part of the laser signal, used as the bench reference. Inside the commercial bench developed by the Swiss company "The time Stable", the light goes with six passages through two different interferometers. Each pair of mirrors is very precisely aligned during the calibration procedure.
It is necessary to calibrate accurately the instrument. It is sensitive to mechanical vibrations, temperature and hygrometry. Alignment process requires an alignment of the two cavities. Each of the two cavities consists in a pair of parallel mirrors. Tandem interferometer produces two series of absorption peaks with respect to a flat noisy intensity level. We then obtain a curve providing the number of absorbed photons versus frequency.
We detail in section 4 the different contributions of some of those elements to the global uncertainty on the measured signal.
## 3 Results and verification
Experiments reproduce known Brillouin scattering peaks of bulk materials and some thin films.
Typical Brillouin scattering stimulation reveals acoustic or spin waves frequencies in the range between 3 and 150 GHz, although this is generally limited to around thirty Giga Hertz. In this section, we provide some examples of spectra with detected photons versus frequency shift for isotropic or anisotropic materials.
As a way of calibration, we can characterized a bulk material such as for example Poly-(methyl methacrylate) (PMMA). This isotropic material exhibit well-defined peaks.
From BLS we can deduce parameters of the material like phase velocity of transverse and
Figure 2: **(a):** Typical setup for BLS. JRS TFP2 is a commercial Tandem Fabry-Pérot interferometer. BLS: Brillouin Light Scattering. DUT: device under test. M: mirror. FP: Fabry-Pérot. P: prism. PD: photodetector. E: electronics. CU: computer unit. **(b):** The commercial Tandem Fabry-Pérot interferometer is inside the box on the right side of this picture.
longitudinal waves: Knowing \(n\) (optical index), \(\lambda\) (wavelength), \(\theta\) (diffusion angle), v (phase velocity of transverse or longitudinal waves), \(\rho\) (density of the material), c\({}_{11}\) and c\({}_{44}\) (longitudinal and shear modulus), the Brillouin frequency v\({}_{\rm B}\) is given by (i):
\[\nu_{{}_{B}}=\frac{2nv}{\lambda_{0}}\cdot\sin\left(\frac{\theta}{2}\right)\]
As a concrete example of an isotropic material, we check PMMA, given by the curve on figure 3.c. with \(\lambda\)=532nm, n=1.49, c\({}_{11}\)=9 GPa, \(\rho\)=1.19x10\({}^{3}\) kg/m\({}^{3}\), \(\theta\)=180\({}^{\circ}\), v\({}_{\rm L}\)=(c\({}_{11}\)/ \(\rho\))\({}^{-1/2}\)=2750 m/s, we can check that v\({}_{\rm B}\)=15.7 GHz. We demonstrated a good agreement between predicted value for frequency peak and its measured value.
This curve of measurements, given in Fig. 3, is primarily for illustrative purposes. We will focus in the next section on how we can trust the measured frequency values.
Note that for a material like sapphire, which is isotropic, the peaks will depend on the orientation of the DUT sample to be measured. In this case, it would be useful to take an interest in the curves of slowness in the space of the k-vectors, corresponding to the orientation of the sample with respect to the laser signal sent to such DUT.
## 4 Discussion about the uncertainty
In this section, we aim to lead an estimation of the uncertainty on the frequency shift induced by BLS.
Before going into more details in how we may estimate the uncertainty, it is useful to think about the approach in its determination. In the scientific community, it is important to underline that a debate exists as to whether there is a true value. Thomas von Clarmann et al offer the benefit of a critical discussion on the error concept versus the uncertainty concept [16]. Jong Wha Lee et al [17] compare the realist view of measurement and uncertainty versus the instrumentalist view of measurement when quantities are not natural attributes of the world that exist independently of the human perception. They show that a clear understanding of the two views is critical for understanding the guide "Evaluation of measurement data - Guide to the expression of uncertainty in measurement (GUM)" [15].
Estimating the uncertainty requires the knowledge of the contribution of the different fixed parameters, such as the optical index, the wavelength, the diffusion angle, the density of the material, and the longitudinal and shear modulus, but especially fluctuation of the source, mechanical stability of the setup, and environmental parameters in the room. From the equation given in the previous part, we see that the phase velocity of the transversal or longitudinal waves linearly depends on v\({}_{\rm B}\) (the Brillouin frequency), n (the optical index), \(\lambda\) (the wavelength), Q (the
Figure 3: BLS spectrum for PMMA with peaks at 15.7 GHz. Frequency shift expressed in Giga Hertz is on horizontal axis. Vertical axis corresponds to the number of detected photons; it can be in an arbitrary unit.
diffusion angle).
The estimation of uncertainty follow the modern way of performing it [18]. For this uncertainty estimation, we use a similar method like in optics [19 - 21] and microwaves [22, 23] based on the requirement delivered by the Bureau International des Poids et Mesures (BIPM) in its guide "Evaluation of measurement data - Guide to the expression of uncertainty in measurement (GUM)", available in reference [15].
The uncertainty in the result of a measurement consists of several components, which may be easily grouped into two main categories according to the way in which their numerical value is estimated.
### Statistical contributions
Following the guidelines, the first category is called A-type. It corresponds to contributions evaluated by statistical methods such as reproducibility, repeatability. We can point out here the variations of results versus time, or with various operators. There is also a question of finding a good compromise between measuring fast enough from few minutes to few hours, and increasing the resolution by having enough samples, as we choose to have at least 2048 samples.
Repeatability A1 is the variation in measurements obtained by one person on the same item and under the same conditions. Repeatability conditions include the same measurement procedure, the same observer, the same measuring instrument, used under the same conditions, repetition over a short period of time, the same location. As the operator does not change, this term is chosen to be zero.
For the reproducibility A2, the same operator perform the measurements. There is no changes caused by differences in operator behavior. All components and devices are dedicated to the instrument, and none of them is replaced. This term is chosen to be zero.
Frequency resolution A3 of the measurement depends on the number of samples, i. e. the difference between two measurement frequency points along the horizontal axis. For 2048 points on a 30 GHz scale, we have a 15 MHz interval. The characteristic peak of Brillouin is a Lorentzian distribution [24], also known as a Cauchy distribution, which is a probability density function. Lorentzian, noted L(f) versus the frequency shift of the optical signal is given in the following expression (ii):
\[L(f)=\left(\frac{\gamma}{\pi}\right)*\left[\frac{1}{\gamma^{2}+(\mathrm{f}- \mathrm{f}0)^{2}}\right]=\left(\frac{1}{\pi\gamma}\right)*\frac{1}{1+(\mathrm{ f}-\mathrm{f}0)^{2}/\gamma^{2}}\]
(ii)
where \(\gamma\) is half of the width at half maximum (FWHM): \(\gamma\)=FWHM/2, \(\mathrm{f}_{0}\) is the true value of the Brillouin frequency peak. For a given true value, with a peak of great smoothness, we will have in the worst configuration 3 points, which will allow us to approximate a curve in the form of Lorentzian. BLS on PMMA, Nylon or glass show well-defined peaks for isotropic materials. Considering a rectangular distribution, we deduce that A3=0.0005/3=2.89x10\({}^{-4}\).
Finally, statistical contribution is:
(iii) A = \(\sqrt{(}\mathrm{\Sigma A_{i}}^{2})\)
We assume that it is not negligible, even when we track peaks up to 150 GHz from the optical carrier at a 532 nm wavelength, regarding the contributions of other elementary terms we describe in the next sub-section. The more samples there are, the higher the peak is. However, it has no influence on the precision of the frequency shift. Concerning the contribution of the elementary
terms of statistical type, we can see the impact of the resolution on the uncertainty of a measured peak, but there is still a risk of not detecting a peak if the sampling frequency is too low.
### Contributions evaluated by other mean
Second family of uncertainties contributions is for those assessed by other means. They are called B-type and depend on various components and temperature control. Experience with or general knowledge of the behavior and properties of relevant materials and instruments determines them.
Frequency references of the 5 MHz or 10 MHz type possibly ensures the traceability of the BLS method to national standards [25, 26]. Indeed, it is then possible to have the best references in terms of frequency stability to connect them to additional measuring devices such as oscilloscopes or any means of frequency measurement.
When there is no traceability or calibration certificates, we refer to manufacturer's specifications, data provided in calibration and other certificates, or uncertainties assigned to reference data taken from handbooks. Such terms are called B\({}_{\mbox{R}}\). BLS is not referenced to a standard, as the method is intrinsic. So the data provided in calibration and other certificates, noted B\({}_{\mbox{R}}\), are not applicable. It results that we can keep zero as a good approximation of BR.
Other elementary terms are noted BL: the components in B-type category should be characterized by quantities, which may be considered as approximations to the corresponding variances.
We find here the contribution of the laser, noted BL\({}_{1}\), as it mainly contributes with its uncertainty on the wavelength, but also with its uncertainty relative to the beam diameter given at 1.7 \(\pm\)0.2 mm, according to the manufacturer Laser quantum Ltd., to the pointing stability less than 2 \(\upmu\)rad\({}^{\circ}\)C, and the beam angle less than 1 mrad. It may contribute to geometrical error, especially in the double interferometer. Some cosine error can then occur. The laser beam and the axis of displacement are not completely parallel [27, 28]. If we call A the angle between the two axis (beam axis and displacement axis) we have then an elementary term of error e\({}_{\mbox{A}}\) = L(cosA - 1), which is approximatively equal to - LxA\({}^{2}\)/2 as A\(<<\)1. For a 1 mm distance, we have A up to 10\({}^{-4}\) and e\({}_{\mbox{A}}\) up to 8x10\({}^{-11}\). It leads only to an is negligible elementary term. Another contribution to this term is called Abbe error [29, 30]. It corresponds to the magnification of angular error over distance. An angular error of 1 degree corresponds to a positional error of over 1.745 cm, equivalent to a distance-measurement error of 1.745%. In our case, we have an error up to 0.000085%. This elementary term of error of Abbe is dominating on the error of parallelism. Assuming a rectangular distribution of this term, we deduce BL\({}_{1}\)=4.91x10\({}^{-7}\).
Contribution of the laser to the noise is noted BL\({}_{2}\). Relative Intensity Noise (RIN) of lasers are related to the ratio between the average of the square of the fluctuation optical power (\(\delta\varphi\)) on the the square of the average optical power \(\varphi_{0}^{2}\).
(iv) RIN(\(\omega\))=\(\left|\,\delta\varphi\,\right|\,^{2}>/\,\varphi_{0}^{2}\)
where \(\omega\) is the pulsation. RIN generally present a floor until the Fourier frequency, which is equal to the relaxation frequency of the laser. Then the noise decreases. This relaxation frequency is generally in the range of one Mega Hertz. Datasheet of the Torus 532 nm laser (Laser quantum Ltd.) indicates a RIN no worse than -135 dB/Hz at 10 GHz. Using a Fabry-Perot interferometer (JRS Scientific Instruments) the torus laser typically shows high spectral purity with side bands \(<\)-110 dB compared with the central mode. This laser is set to operate in normal conditions between 15 to 35\({}^{\circ}\)C. We can consider that BL\({}_{2}\)=5.77x10\({}^{-11}\).
PD contribution is BL\({}_{3}\). Datasheet of Hamamatsu H10682-210 indicates specifications in the
range of -20 to +50\({}^{\circ}\)C, with count sensitivity respectively at wavelengths of 500 nm and 600 nm typically between 4.6 x 10\({}^{5}\) and 1.3 x 10\({}^{5}\) s\({}^{-1}\). pW\({}^{-1}\). We assume it is not limitative for photons detected during measurements. This contribution is negligible for the effect on the frequency shift.
We consider the contribution of temperature, noted BL4. The temperature varying less than 0.1\({}^{\circ}\)C during measurement, despite it can vary from 0.5 degrees during a whole day. Temperature variation in the laboratory is in the range 21 - 25 \({}^{\circ}\)C. The maximum variation is \(\pm\)2\({}^{\circ}\)C. It is compatible with what is written in the datasheet of the double interferometer : "The temperature of the environment should be regulated to better than \(\pm\)2\({}^{\circ}\)C over a 24 hour period." Its influence in terms of noise is e\({}_{\textrm{Temp}}\)=10xLog (298/296) =0.0292dB. This distribution is rectangular. It is important to clarify that these variations are slow variations, and that the double interferometer is stable against temperature changes. We deduce that BL4=3.89x10-3.
Contribution on the value of the wavelength relies on environmental conditions such as pressure and humidity. We call it BL5. Under normal laboratory measurement conditions, the contribution of small pressure variations and of relative humidity remain negligible. The measurements do not show any dependence to those environmental contributions. This term BL5 is systematically negligible.
Resolution of instruments is noted BL6. It is determined with a rectangular distribution by the value read on each voltmeter for power meter. Resolution is then no worse than 5x10-7. BL6=5x10-7/3=2.89x10-7.
Contribution of the electronics and especially the use of automatic/manual range is noted BL7. We deduce from knowledge of experimenter that this influence is no more than 2.5x10-3. BL7=0.0025/3=1.44x10-3.
Contribution of vibrations due to the environment is noted BL8. We do not operate in case of known vibration source in the environment and the table is enough robust to prevent diffusion of vibration. Pneumatic legs support the optical table. The spectrometer can then operate, but it is only isolated against building vibrations and not against vibrations introduced directly into the table. We operate in safe conditions and avoid any vibration due to components placed on the table. This term can be negligible in normal operating conditions.
The total contribution of BL=\(\Sigma\)BLi is the arithmetic sum of each elementary contribution. It is determine to be BL=4.91x10-7+5.77x10-11+3.89x10-3+2.89x10-7+1.44x10-3=5.33x10-3.
### Estimation of the global uncertainty
Uncertainty at a 1 \(\sigma\) interval of confidence is calculated as follows:
(v) \[\textrm{u}_{\textrm{c}}=\sqrt[]{(\textrm{A}^{2}+\textrm{BR}^{2}+\textrm{BL}^{2})}\]
We deduce from equation (v) that uncertainty at 1 sigma, noted uc, is better than \(\sqrt[]{[(2.89x10^{-4})^{2}+0^{2}+(5.33x10^{-3})^{2}]}\). Its leads to a global uncertainty of \(\pm\)5.34x10-3 at 1 \(\sigma\).
For convenience, and to keep an operational uncertainty in case of degradation or drift of any elementary term of uncertainty, it is wise to slightly degrade the global uncertainty. This is why we choose to keep U = \(\pm\)1.20x10-2 at 2 \(\sigma\) for a common use, corresponding to a voluntary degradation of the uncertainty evaluation assuming that uc \(<\) 6.00x10-3. This final uncertainty is defined at 2 \(\sigma\)
according to the empirical rule as 68.27% at 1 \(\sigma\) is not enough, but 95.45% at 2 \(\sigma\) is more efficient for a normal distribution in statistics.
## 5 Conclusion
We first remind that Brillouin Scattering Stimulation of bulk materials and some thin films is non-intrusive and relatively easy to perform. We note that further improvements are still in progress especially for sending light through the sample to be characterized, through its backside. The main result our work, is that after checking the agreement between the measured 15.7 GHz shift on a Brillouin signal on PMMA and the expected value, we make an estimation of the uncertainty on the frequency of the shifted signal corresponding to the speed of phonons inside a material, according to the standards of metrology. The uncertainty is assumed to be of \(\pm\)1.20x10\({}^{-2}\) at 2 \(\sigma\) for an accurate measurement.
|
2307.12911 | On the impact of the Migdal effect in reactor CE$ν$NS experiments | The search for coherent elastic neutrino nucleus scattering (CE$\nu$NS) using
reactor antineutrinos represents a formidable experimental challenge, recently
boosted by the observation of such a process at the Dresden-II reactor site
using a germanium detector. This observation relies on an unexpected
enhancement at low energies of the measured quenching factor with respect to
the theoretical Lindhard model prediction, which implies an extra observable
ionization signal produced after the nuclear recoil. A possible explanation for
this additional contribution could be provided by the so-called Migdal effect,
which however has never been observed. Here, we study in detail the impact of
the Migdal contribution to the standard CE$\nu$NS signal calculated with the
Lindhard quenching factor, finding that the former is completely negligible for
observed energies below $\sim 0.3\,\mathrm{keV}$ where the signal is
detectable, and thus unable to provide any contribution to CE$\nu$NS searches
in this energy regime. To this purpose, we compare different formalisms used to
describe the Migdal effect that intriguingly show a perfect agreement, making
our findings robust. | M. Atzori Corona, M. Cadeddu, N. Cargioli, F. Dordei, C. Giunti | 2023-07-24T16:09:15Z | http://arxiv.org/abs/2307.12911v3 | # On the impact of the Migdal effect in reactor CE\(\nu\)NS experiments
###### Abstract
The search for coherent elastic neutrino nucleus scattering (CE\(\nu\)NS) using reactor antineutrinos represents a formidable experimental challenge, recently boosted by the observation of such a process at the Dresden-II reactor site using a germanium detector. This observation relies on an unexpected enhancement at low energies of the measured quenching factor with respect to the theoretical Lindhard model prediction, which implies an extra observable ionization signal produced after the nuclear recoil. A possible explanation for this additional contribution could be provided by the so-called Migdal effect, which however has never been observed. Here, we study in detail the impact of the Migdal contribution to the standard CE\(\nu\)NS signal calculated with the Lindhard quenching factor, finding that the former is completely negligible for observed energies below \(\sim 0.3\,\mathrm{keV}\) where the signal is detectable, and thus unable to provide any contribution to CE\(\nu\)NS searches in this energy regime. To this purpose, we compare different formalisms used to describe the Migdal effect that intriguingly show a perfect agreement, making our findings robust.
## I Introduction
Coherent elastic neutrino-nucleus scattering (CE\(\nu\)NS) is a pure weak neutral current low-energy process predicted by Freedman in 1973 [1] where the neutrino interacts with the nucleus as a whole. Namely, all the nucleons within the nucleus respond coherently to the neutrino interaction, leading to a higher cross section than other low-energy processes involving neutrinos. The remarkable observations of CE\(\nu\)NS from the COHERENT Collaboration [2; 3; 4] have opened a new era to test our knowledge of the Standard Model (SM) of particle physics and has posed an exciting technological challenge to develop innovative detectors capable of spotting the extremely tiny nuclear recoils produced as a single outcome of the interaction [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. These experiments play a crucial role in advancing our knowledge of neutrino interactions and for their implications for fundamental physics [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36].
CE\(\nu\)NS requires a high neutrino flux to produce signal events above experimental backgrounds. Among the different neutrino sources available, in this study, we focus on CE\(\nu\)NS produced from reactor antineutrinos and observed with germanium detectors. There are three main germanium detectors currently operating, namely NCC-1701 [13; 14] (also referred to as Dresden-II), CONUS [37] and \(\nu\)GEN [12], located 10.39 m, 17.1 m and 11 m away from 2.96 GW\({}_{\mathrm{th}}\), 3.9 GW\({}_{\mathrm{th}}\), 3.1 GW\({}_{\mathrm{th}}\) commercial reactors, respectively. Interestingly, the Ricochet experiment at the ILL site, 8.8 m away from the core of the 58.3 MW\({}_{\mathrm{th}}\) research nuclear reactor, is also aiming to measure CE\(\nu\)NS down to the sub-100 eV nuclear energy recoil regime [38].
In particular, the recent first observation of CE\(\nu\)NS at the Dresden-II reactor [14] has gained a lot of attention due to the broad impact of such a result on current and future CE\(\nu\)NS searches, and the physics that can be extracted within the SM and beyond [39; 40; 41; 42]. As a matter of fact, this measurement is highly affected by the knowledge of the germanium quenching factor (QF) at low nuclear recoil energies. The QF quantifies the reduction of the ionization yield produced by a nuclear recoil with respect to an electron recoil of the same energy. Indeed, the CE\(\nu\)NS observation by Dresden-II depends crucially on the two new QF measurements reported in Ref. [43]. They have been obtained from photo-neutron source measurements, so-called YBe, and from iron-filtered monochromatic neutrons, so-called Fef [43]. However, these two QF determinations are in contrast with and significantly higher than the standard Lindhard prediction with the parameter \(k=0.157\)[44] and other independent experimental measurements (see e.g. Ref. [45] for a recent measurement and the summary plot in
Fig. 1). Moreover, CONUS data disfavours quenching parameters above \(k=0.27\)[9] and a recent low-energy determination of the QF finds a good agreement with the Lindhard theory with a parameter \(k=0.162\pm 0.004\) (stat+sys) [46].
Among the possible solutions to explain the increase of the QF at very low energies, it has been proposed that the Lindhard model might be not sufficient to encode the full behavior of the QF in the low-energy regime. In particular, it has been suggested [43] that the Migdal effect may play a crucial role. This yet unobserved process1 has been first proposed in 1941 [48] and recently taken into serious consideration in the context of dark matter searches [49; 50; 51; 52; 53; 54; 55; 56] and neutrino physics [57; 58]. The Midgal effect might happen after a nuclear recoil is induced by a neutral particle, i.e., a neutron, a neutrino, or dark matter, due to the displacement between the recoiling nucleus and the electronic cloud. Indeed, the atomic electrons do not follow immediately the motion of the recoiling nucleus. Thus, in order to restore equilibrium, extra ionization can be injected into the detector. In Refs. [59; 60] there have been attempts to include the Migdal effect by means of some phenomenological parameters originally introduced to go beyond the standard Lindhard theory [61]. However, the approach exploited in these studies neglects the microscopic physics of the Migdal process that can instead be accounted for using existing formalisms.
Footnote 1: On the same day our manuscript has been made public on arXiv, another one appeared on this topic [47]. In this work, the authors reported the first direct search for the Migdal effect in liquid xenon using nuclear recoils produced by tagged neutron scatters. They did not observe a signal consistent with predictions, supporting the findings of our work.
The goal of this paper is to perform a detailed characterization of the Migdal effect in a germanium detector searching for CE\(\nu\)NS at a reactor site. To do so, we use different approaches to describe the microscopic physics of the phenomenon to quantify the robustness of our findings. Finally, we discuss to which extent the Migdal effect plays a role in the observation of CE\(\nu\)NS at the Dresden-II reactor and thus, whether it can be claimed as an explanation of the anomalous enhancement of the QF at low energies.
## II Ce\(\nu\)NS
The differential CE\(\nu\)NS cross section for a neutrino of flavor \(\ell\) that scatters off a nucleus \(\mathcal{N}\) with \(Z\) protons and \(N\) neutrons is given by [1; 62; 63; 64; 65]
\[\frac{d\sigma_{\nu\epsilon\mathcal{N}}}{dT_{\mathrm{nr}}}=\frac{ G_{\mathrm{F}}^{2}M}{\pi}\left(1-\frac{MT_{\mathrm{nr}}}{2E_{\nu}^{2}}\right)\times\] \[\left[g_{V}^{p}\left(\nu_{\ell}\right)ZF_{Z}\left(\left|\vec{q} \right|^{2}\right)+g_{V}^{n}NF_{N}\left(\left|\vec{q}\right|^{2}\right)\right] ^{2}, \tag{1}\]
where \(T_{\mathrm{nr}}\) is the nuclear recoil energy, \(E_{\nu}\) is the neutrino energy, and the term inside the square brackets is the so-called weak charge of the nucleus \(\mathcal{Q}_{W}\). In Eq. (II) \(G_{\mathrm{F}}\) is the Fermi constant and \(M\) the nuclear mass. For Ge (\(Z\)=32) we use \(N\)=(38, 40, 41, 42, 44) corresponding to the natural abundances of 0.2057 (\({}^{70}\)Ge), 0.2745 (\({}^{72}\)Ge), 0.0775 (\({}^{74}\)Ge), 0.3650 (\({}^{74}\)Ge), 0.0773 (\({}^{76}\)Ge) [66]. \(F_{Z}\left(\left|\vec{q}\right|^{2}\right)\) and \(F_{N}\left(\left|\vec{q}\right|^{2}\right)\) are, respectively, the proton and neutron form factors that parameterize the loss of coherence for increasing values of the momentum transfer \(\left|\vec{q}\right|\). In the low-energy regime that characterizes reactor neutrino experiments, the form factors are practically equal to one, so basically the result does not depend on the parametrization used as well as on the values of the proton and neutron distribution radii. The neutrino-nucleon couplings are computed using the radiative corrections as described in Refs. [64; 67] that yield to \(g_{V}^{p}(\nu_{e})=0.0382\) and \(g_{V}^{n}=-0.5117\).
For a CE\(\nu\)NS experiment located at a reactor site, only the \(\bar{\nu}_{e}\) flavor contributes. The antineutrino spectrum \(dN_{\bar{\nu}_{e}}/dE_{\nu}\) is obtained by combining the expected spectra for \(E_{\nu}>2\) MeV from Ref. [68] with the low energy part determined in Ref. [69]. This parameterization is referred to as HMVE, and it has been shown [39] that the usage of other parameterizations [70; 71; 72] does not significantly affect the results of the Dresden-II analysis.
When an isolated nuclear recoil occurs, the electron-equivalent (ee) nuclear recoil energy observed in the detector, \(E_{\mathrm{det}}\), is expressed as
\[E_{\mathrm{det}}=f_{Q}(T_{\mathrm{nr}})T_{\mathrm{nr}}, \tag{2}\]
where \(f_{Q}(T_{\mathrm{nr}})\) is the energy-dependent parameterization of the QF. The theoretical CE\(\nu\)NS event rate is therefore
\[\frac{dR}{dE_{\mathrm{det}}}= N_{T}(\mathrm{Ge})\int_{E_{\nu}^{\mathrm{min}}}^{E_{\nu}^{ \mathrm{max}}}dE_{\nu}\frac{dN_{\bar{\nu}_{e}}}{dE_{\nu}}\frac{d\sigma_{\bar{ \nu}_{e}\mathcal{N}}}{dT_{\mathrm{nr}}}\times\] \[\left(f_{Q}+T_{\mathrm{nr}}\frac{df_{Q}}{dT_{\mathrm{nr}}} \right)^{-1}, \tag{3}\]
where \(N_{T}(\text{Ge})\) is the number of target atoms in the detector, \(E_{\nu}^{\text{min}}(T_{\text{nr}})\simeq\sqrt{MT_{\text{nr}}/2}\) and \(E_{\nu}^{\text{max}}\simeq 10\) MeV. The last term in Eq. (3) is \(dT_{\text{nr}}/dE_{\text{det}}\), which is needed to express the rate in terms of the electron-equivalent nuclear recoil energy defined in Eq. (2).
## III The Migdal effect
Despite the very intense \(\bar{\nu}_{e}\) flux, the search for CE\(\nu\)NS from reactor neutrinos is very challenging due to the tiny signal produced and the still-under-debate behavior of the quenching factor at low energies. In this scenario, it is crucial to characterize the impact of the Migdal effect to prevent potential misinterpretations of experimental data. We first calculate the Migdal rate using the formalism of Ibe _et al._[49], which considers the target as composed of isolated atoms. Moreover, this formalism relies on the dipole approximation that allows one to write the Migdal transition matrix element, \(M_{fi}\), in the form
\[M_{fi} =\langle\psi_{f}|e^{-im_{e}\vec{v}\cdot\sum_{i=1}^{Z}\vec{r}_{i}} |\psi_{i}\rangle\simeq\] \[\simeq-im_{e}\vec{v}\cdot\langle\psi_{f}|\sum_{i=1}^{Z}\vec{r}_{i }|\psi_{i}\rangle\] \[\equiv-im_{e}\vec{v}\cdot\vec{D}_{fi}, \tag{4}\]
where \(\vec{r}_{i}\) is the position operator of the \(Z\) electrons, \(\vec{D}_{fi}\) is the dipole matrix element, \(m_{e}\) is the electron mass, \(\vec{v}\) is the nuclear recoil velocity, while \(\psi_{f}\) and \(\psi_{i}\) are the wavefunctions of the final and the initial atomic states in the nucleus rest frame. The final state wavefunctions are boosted to the rest frame of the recoiling nucleus by a Galilean transformation and are computed using the Dirac-Hartree-Fock method.
Under these assumptions, the differential cross section for the Migdal effect can be written as
\[\left(\frac{d\sigma_{\bar{\nu}_{e}\cdot\mathcal{N}}}{dT_{\text{nr }}}\right)^{\text{Ibe}\;et\;al.}_{\text{Migdal}}= \frac{G_{\text{F}}^{2}M}{\pi}\left(1-\frac{MT_{\text{nr}}}{2E_{ \nu}^{2}}\right)\mathcal{Q}_{W}^{2}\times\] \[\times\left|Z_{\text{ion}}(q_{e})\right|^{2}, \tag{5}\]
where \(\left|Z_{\text{ion}}(q_{e})\right|\) is the ionization rate of an individual electron in the target with momentum \(q_{e}\). It is defined as
\[\left|Z_{\text{ion}}(q_{e})\right|^{2}=\frac{1}{2\pi}\sum_{n,\ell}\int dT_{e} \frac{d}{dT_{e}}p_{q_{e}}^{c}\left(n\ell\to T_{e}\right), \tag{6}\]
where \(p_{q_{e}}^{c}(n\ell\to T_{e})\) are the ionization probabilities for an atomic electron with quantum numbers \(n\) and \(\ell\) that is ionized with a final energy \(T_{e}\). It should be noticed that very similar results are expected if one relies on the probabilities calculated in Ref. [73] with an independent approach. Indeed, the authors have shown a very good agreement with the results obtained with the Ibe _et al._ formalism that is also used in this work, demonstrating that the addition of semi-inclusive ionization probabilities is not significant at low recoil energies for atomic germanium.
The double differential cross section for the \(n\ell\) state contribution as a function of both the electron and the nuclear recoil energy is hence
\[\left(\frac{d^{2}\sigma_{\bar{\nu}_{e}\cdot\mathcal{N}}}{dT_{ \text{nr}}dT_{e}}\right)^{\text{Ibe}\;et\;al.}_{n\ell}= \frac{G_{\text{F}}^{2}M}{\pi}\left(1-\frac{MT_{\text{nr}}}{2E_{ \nu}^{2}}\right)\mathcal{Q}_{W}^{2}\times\] \[\frac{1}{2\pi}\,\frac{d}{dT_{e}}p_{q_{e}}^{c}(n\ell\to T_{e}). \tag{7}\]
If the nuclear recoil is followed by a Migdal emission, the total energy deposit of the event in the detector is
\[E_{\text{det}}=f_{Q}T_{\text{nr}}+T_{e}+E_{n\ell}, \tag{8}\]
where the first term is the nuclear recoil energy deposit, while \(T_{e}\) and \(E_{n\ell}\) account for the extra energy injected in the detector, \(E_{n\ell}\) being the atomic de-excitation energy for Ge [49]. We evaluate the theoretical event rate as a function of the detected energy, which is given by
\[\left(\frac{dR}{dE_{\text{det}}}\right)^{\text{Ibe}\;et\;al.}_{ \text{Migdal}} =N_{T}(\text{Ge})\sum_{n,\ell}\int_{E_{\text{in}}^{\text{min}}}^{E_ {\nu}^{\text{max}}}dE_{\nu}\frac{dN\bar{\nu}_{e}}{dE_{\nu}}\] \[\int dT_{e}\int_{T_{\text{nr}}^{\text{min}}}^{T_{\text{nr}}^{ \text{max}}}dT_{\text{nr}}\left(\frac{d^{2}\sigma_{\bar{\nu}_{e}\cdot \mathcal{N}}}{dT_{\text{nr}}dT_{e}}\right)^{\text{Ibe}\;et\;al.}_{n\ell}\] \[\delta(E_{\text{det}}-f_{Q}T_{\text{nr}}-T_{e}-E_{n\ell}), \tag{9}\]
where we have imposed energy conservation using the Dirac \(\delta\) and \(T_{\text{nr}}\) is now constrained within the values \(T_{\text{nr}}^{\text{min}}\) and \(T_{\text{nr}}^{\text{max}}\) given by [57]
\[\frac{\left(T_{e}+E_{n\ell}\right)^{2}}{2M}\leq T_{\text{nr}}\leq\frac{\left(2E _{\nu}-\left(T_{e}+E_{n\ell}\right)\right)^{2}}{2(M+2E_{\nu})}. \tag{10}\]
The rate in Eq. (9) represents the Migdal contribution summed over all the possible \(n\ell\) atomic states. The total predicted event rate is thus given by the sum of Eq. (3) and Eq. (9).
Migdal photo absorption
The formalism described so far to compute the dipole matrix element for the Migdal rate relies on the assumption that the target atom is isolated. While this assumption is acceptable for noble elements, as for argon or xenon detectors [58], it is expected to be less valid in semiconductors, where solid-state effects should be considered. However, developing a first-principle theory that goes beyond the isolated atom approximation is challenging because of the many-body effects that need to be taken into account. Remarkably, the formalism developed in Ref. [52] relates the photoabsorption cross section \(\sigma_{\gamma}\) to the dipole matrix element, necessary to compute the Migdal ionization rate, without requiring any many-body calculation. This scheme will be referred to as Migdal photoabsorption approximation (MPA). One of the major advantages of MPA is that the photoabsorption cross section is experimentally known, such that the Migdal rate suffers from very small uncertainties [52], well below the precision required in this work. MPA has been so far adopted in the context of dark matter searches, where the power of the formalism has been proved by comparing it to other computations for silicon and xenon [49, 50]. However, MPA has never been exploited in the context of neutrino scattering. Here, for the first time, we use it in this context and we compare its predictions with the formalism of Ibe _et al._ for germanium detectors. Explicitly, we derive the Migdal contribution to the CE\(\nu\)NS cross section under MPA as
\[\left(\frac{d^{2}\sigma_{\bar{\nu}_{e}\text{-}\mathcal{N}}}{dT_{ \text{nr}}dE_{r}}\right)^{\text{MPA}}_{\text{Migdal}}= \frac{G_{\text{F}}^{2}M}{\pi}\left(1-\frac{MT_{\text{nr}}}{2E_{ \nu}^{2}}\right)\mathcal{Q}_{W}^{2}\times\] \[\frac{1}{2\pi^{2}\alpha_{\text{EM}}}\frac{m_{e}^{2}}{M}\frac{T_ {\text{nr}}}{E_{r}}\sigma_{\gamma}^{\text{Ge}}(E_{r}), \tag{11}\]
where \(\alpha_{\text{EM}}\) is the fine structure constant and \(E_{r}\) is the energy deposit due to atomic excitation or ionization such that \(E_{\text{det}}\leq f_{Q}T_{\text{nr}}+E_{r}\). The photoabsorption cross section \(\sigma_{\gamma}^{\text{Ge}}(E_{r})\) for Ge has been taken from Refs. [74, 75] for \(E_{r}\geq 10\) eV\({}_{\text{ee}}\). The theoretical event rate as a function of \(E_{\text{det}}\) is obtained by integrating over all possible nuclear recoil and neutrino energies and imposing energy conservation.
It should be noticed that depending on the crystal scale that one is able to probe, other effects that account for the response of multiple atoms at once, should be considered as they have been proved to highly enhance the Migdal rate [76, 77, 78, 79]. However, in current CE\(\nu\)NS reactor experiments, the typical nuclear recoil energy \(T_{\text{nr}}\) for Ge is of the order of 1 keV. Thus, the momentum transfer is \(|\vec{q}|\simeq\sqrt{2MT_{\text{nr}}}\sim 10\) MeV with a corresponding de Broglie wavelength of about 20 fm. The latter is much smaller than the scale of the interparticle spacing in the crystal so, in this work, we can safely neglect these effects.
## V Results and discussion
In order to compare the impact of the Migdal contribution to the standard CE\(\nu\)NS rate, we consider a 1 kg Ge detector located 10 m away from a 3 GW\({}_{\text{th}}\) reactor power plant whose \(\bar{\nu}_{e}\) spectrum is given by the HMVE parameterization. This specific configuration resembles that of current CE\(\nu\)NS reactor experiments, like Dresden-II, CONUS and \(\nu\)GEN.
Under these assumptions, in Fig. 1 we show the theoretical CE\(\nu\)NS rate obtained with Eq. (3) as a function of the detected energy, adopting for the QF the standard Lindhard parameterization. We also show the Migdal rate determined with the Ibe _et al._ formalism as in Eq. (9), isolating the contributions from the different \(n\) shells, obtained by summing over all the contributions from different orbital angular momenta in the initial state \(\ell\). We compare this result with the Migdal prediction under the MPA scheme, finding intriguingly that the two formalisms give practically identical results in the energy range considered.
Figure 1: CE\(\nu\)NS theoretical differential rate for a Ge detector located 10 m away from a 3 GW\({}_{\text{th}}\) reactor using the Lindhard QF (black solid line). We also show the Migdal rate obtained under the Ibe _et al._ formalism (solid blue line), highlighting also the contributions of the different \(n=2,3,4\) atomic shells (dashed curves), and the rate obtained with the MPA formalism (solid red line).
the Migdal contribution is completely subdominant with respect to the CE\(\nu\)NS one for energies below \(\sim 0.6\) keV\({}_{\rm ee}\), with the most significant contribution given by the \(n=2,\,3\) shells. Above \(\sim 0.6\) keV\({}_{\rm ee}\) it starts to dominate2, and it could provide the possibility to observe CE\(\nu\)NS above this threshold, even if being so small it would require extremely low levels of background.
Footnote 2: Note that a similar trend is also found in Ref. [58], where a comparison between the CE\(\nu\)NS rate and the Migdal contribution using the formalism of Ibe _et al._ has been evaluated for xenon and argon detectors in a reactor site. We have used their publicly available code [58] to verify our results.
We now focus on the Dresden-II science case. Here, as already stated in the introduction, the Migdal effect has been invoked as a possible explanation of the enhancement measured in the Fef and YBe quenching factors at low energies that in turn enabled the observation of CE\(\nu\)NS in the Dresden-II data. In the top panel of Fig. 2, we show the Dresden-II reactor ON (Rx-ON) data along with the standard CE\(\nu\)NS predictions obtained with three different QFs, namely Lindhard, Fef and YBe. To derive these spectra we used all the experimental information on the Dresden-II detector, including energy-smearing effects, following Refs. [14; 39; 40]. In the bottom panel of Fig. 2 we show the same spectra for the three QFs but we compare them to the Dresden-II data residuals after background subtraction [14]. It is evident that only the Fef and marginally the YBe QFs fit the excess and lead to a statistically significant CE\(\nu\)NS observation for \(E_{\rm det}\lesssim 0.3\) keV\({}_{\rm ee}\). In the same figure, we also show the Migdal contribution using the MPA formalism. It is clear that adding the Migdal contribution to the CE\(\nu\)NS Lindhard prediction is not sufficient to explain the CE\(\nu\)NS Fef or YBe predictions, given that the former is completely negligible with respect to the CE\(\nu\)NS signal. Moreover, we find that the neutrino-electron scattering (\(\nu\)ES) [39; 80] overcomes the Migdal rate for \(E_{\rm det}\gtrsim 0.7\) keV\({}_{\rm ee}\), as shown in the top panel of Fig. 2. Overall, both Migdal and \(\nu\)ES rates are so small that with the current experimental precision can be overlooked in SM CE\(\nu\)NS searches. However, in some scenarios of physics beyond the SM, like non-standard properties of neutrinos, their contribution could be significantly enhanced and thus they need to be taken into account [39; 40] to derive meaningful limits.
In the inset of Fig. 2, for comparison purposes, we show a review of the existing data from germanium detectors searching for CE\(\nu\)NS at a reactor site, i.e. Dresden-II [14], \(\nu\)GEN [12] and CONUS (C1 Run-1) [9]. Interestingly, despite the fact that CONUS and \(\nu\)Gen have reached a much lower background level compared to Dresden-II, they have not detected CE\(\nu\)NS yet. Having a conservative threshold of \(\sim 0.3\) keV\({}_{\rm ee}\), future data will be needed to confirm the Dresden-II observation of CE\(\nu\)NS, which is localized below this threshold. Nevertheless, despite the low background level reached, the Migdal contribution is so small that it could be safely neglected also in experiments like \(\nu\)GEN and CONUS, which show a good agreement with the expected background. Similar conclusions are expected also for silicon detectors like CONNIE [10; 15] that operate in a similar energy range.
Figure 2: Expected number of CE\(\nu\)NS events in the Dresden-II detector obtained using different quenching factors, i.e. Fef (blue line), YBe (cyan line) and Lindhard (purple line). The Migdal contribution corresponds to the red curve, while the neutrino electron scattering (\(\nu\)ES) contribution is given by the green line. In the top panel, we compare these curves with the Dresden-II reactor ON data, while in the bottom panel we show the Dresden-II data residuals after background subtraction. The inset shows a comparison of Dresden-II reactor ON (Rx-ON) [14], \(\nu\)GEN [12] and CONUS (C1 Run-1) [9] data, all rescaled to the same units.
To conclude, we have shown that the Migdal contribution is orders-of-magnitude subdominant in the region of interest for reactor CE\(\nu\)NS searches with germanium detectors, independently of the formalism used to model the Migdal effect. Thus, the enhancement of the quenching factor at low energies found in Ref. [14] that enabled the observation of CE\(\nu\)NS at the Dresden-II site requires a different explanation than the standard Migdal effect.
###### Acknowledgements.
The authors are thankful to Simon Knapen for a fruitful discussion on the topic. The work of C. Giunti is supported by the research grant "The Dark Universe: A Synergic Multimessenger Approach" number 2017X7X85K under the program "PRIN 2017" funded by the Italian Ministero dell'Istruzione, Universita e della Ricerca (MIUR).
|
2306.14091 | Sterile Neutrino Portal Dark Matter with $Z_3$ Symmetry | In this paper, we consider the sterile neutrino portal dark matter with $Z_3$
symmetry. This model further extends the canonical type-I seesaw with a fermion
singlet $\chi$ and a scalar singlet $\phi$. Under the $Z_3$ symmetry, the dark
sector transforms as $\chi\to e^{i2\pi/3}\chi, \phi\to e^{i2\pi/3}\phi$, while
the standard model particles and the sterile neutrino $N$ transform trivially.
Besides the interactions as $y_{N} \phi \bar{\chi}N$ and $\lambda_{H\phi}
(H^\dag H) (\phi^\dag \phi)$ allowed in the $Z_2$ symmetry, the $Z_3$ symmetry
also introduces two new terms, i.e., $y_\chi \phi \overline{\chi^{c}} \chi$ and
$\mu\phi^3/2$. These new interactions induce additional semi-annihilation
processes as $\chi\chi\to N\chi$ and $\phi\phi\to h\phi$ for WIMP dark matter.
We then perform a comprehensive analysis of the phenomenology of this $Z_3$
symmetric model. Viable parameter space is explored under the constraints from
dark matter relic density, Higgs invisible decay, indirect and direct detection
for both fermion and scalar dark matter. We find that the semi-annihilation
channels $\chi\chi\to N\chi$ and $\phi\phi\to N\chi$ can lead to quite
different phenomena from the $Z_2$ symmetric model, which provides a viable
pathway to distinguish these two kinds of model. | An Liu, Zhi-Long Han, Yi Jin, Honglei Li | 2023-06-25T01:41:40Z | http://arxiv.org/abs/2306.14091v1 | # Sterile Neutrino Portal Dark Matter with \(Z_{3}\) Symmetry
###### Abstract
In this paper, we consider the sterile neutrino portal dark matter with \(Z_{3}\) symmetry. This model further extends the canonical type-I seesaw with a fermion singlet \(\chi\) and a scalar singlet \(\phi\). Under the \(Z_{3}\) symmetry, the dark sector transforms as \(\chi\to e^{i2\pi/3}\chi,\phi\to e^{i2\pi/3}\phi\), while the standard model particles and the sterile neutrino \(N\) transform trivially. Besides the interactions as \(y_{N}\phi\bar{\chi}N\) and \(\lambda_{H\phi}(H^{\dagger}H)(\phi^{\dagger}\phi)\) allowed in the \(Z_{2}\) symmetry, the \(Z_{3}\) symmetry also introduces two new terms, i.e., \(y_{\chi}\phi\overline{\chi}^{c}\chi\) and \(\mu\phi^{3}/2\). These new interactions induce additional semi-annihilation processes as \(\chi\chi\to N\chi\) and \(\phi\phi\to h\phi\) for WIMP dark matter. We then perform a comprehensive analysis of the phenomenology of this \(Z_{3}\) symmetric model. Viable parameter space is explored under the constraints from dark matter relic density, Higgs invisible decay, indirect and direct detection for both fermion and scalar dark matter. We find that the semi-annihilation channels \(\chi\chi\to N\chi\) and \(\phi\phi\to N\chi\) can lead to quite different phenomena from the \(Z_{2}\) symmetric model, which provides a viable pathway to distinguish these two kinds of model.
Introduction
The identity of particle dark matter (DM) and the explanation for the tiny mass of neutrinos remain outstanding questions in particle physics, garnering attention as crucial topics in current research. Their common origin presents the possibility of future exploration into new physics beyond the standard model. The Weakly Interacting Massive Particle (WIMP) is the most promising dark matter candidate [1]. However, this scenario, such as the extensively studied Higgs portal [2; 3; 4] and \(Z^{\prime}\) portal model [5; 6; 7; 8], usually suffer stringent constraints from direct detection. Therefore, a new interaction portal for the WIMP dark sector should be considered.
Sterile neutrinos \(N\) are introduced to generate the tiny neutrino mass via the canonical Type-I seesaw mechanism [9; 10]. For a proper mixing angle with active neutrinos, the keV scale sterile neutrino can serve as a decaying dark matter [11; 12]. Then the radiative decay \(N\to\nu\gamma\) leads to an observable signature at X-ray telescopes [13], which is able to explain the tentative 3.5 keV line signal [14]. However, the parameter space for sterile neutrino dark matter now is tightly constrained [15]. If the sterile neutrinos \(N\) are charged under the dark group, the lightest sterile neutrino becomes a stable dark matter. In this scenario, the tree-level Type-I seesaw is also forbidden by the dark group, then light neutrino mass could be generated via the radiative mechanism [16; 17; 18; 19].
On the other hand, the electroweak scale sterile neutrino \(N\) is an ideal messenger between the dark sector and the standard model [20; 21; 22; 23; 24; 25; 26; 27; 28]. This is facilitated through the new Yukawa coupling \(y_{N}\phi\bar{\chi}N\), which enables the secluded channel \(\phi\phi/\chi\chi\to NN\), providing an additional annihilation pathway for the WIMP dark matter. In particular, this scenario features a relatively small nucleon scattering cross section, and permits the indirect detection of observable gamma-ray signals [29; 30; 31; 32; 33; 34]. For an electroweak scale dark matter annihilating via the sterile neutrino portal, it is still allowed by direct detection and is hopefully probed by indirect detection in the near future. Meanwhile, the sterile neutrino portal dark matter produced through the freeze-in mechanism is also extensively studied [35; 36; 37; 38; 39; 40; 41; 42].
The interactions between the dark sector and the standard model particles are typically governed by the dark group, such as the well-studied \(Z_{2}\)[43] or \(U(1)_{B-L}\) symmetry [44]. Although the simplest \(Z_{2}\) symmetry has demonstrated success in dark matter with a simplified phenomenology, more sophisticated dark groups, e.g. \(Z_{N}(N\geq 3)\)[45], \(A_{4}\)[46] and \(SU(2)\) symmetry [47], are also options. For instance, an alternative explanation for the observed relic abundance of dark matter through semi-annihilation may be achieved by introducing the next simplest \(Z_{3}\) symmetry, leading to a lower bound on the direct detection cross section [48; 49].
In this paper, we consider the sterile neutrino portal dark matter with \(Z_{3}\) symmetry [50; 51]. This model
includes a fermion singlet \(\chi\) and a scalar singlet \(\phi\), both of which transform non-trivially under the exact \(Z_{3}\) symmetry as \(\chi\to e^{i2\pi/3}\chi,\phi\to e^{i2\pi/3}\phi\). The standard model particles and the sterile neutrinos are not charged under the imposed \(Z_{3}\) symmetry. Compared with the \(Z_{2}\) symmetry, the \(Z_{3}\) symmetry allows two new interaction terms, i.e., \(y_{\chi}\phi\overline{\chi^{e}}\chi\) and \(\mu\phi^{3}/2\), which would lead to new annihilation channels of dark matter. The semi-annihilation of fermion dark matter via the process \(\chi\chi\to N\chi\) in the framework of effective field theory has been considered in Ref. [50]. Focusing on the self-interaction of dark scalar \(\phi\), Ref. [51] studies the non-thermal production of dark matter \(\chi\) by the late time decay \(\phi\to\chi\nu\). In this paper, we perform a comprehensive analysis of WIMP dark matter for both scalar and fermion scenarios. The non-thermal production of dark matter will be considered in a separate paper [52].
This paper is structured into several sections. In Sec. II, we introduce the sterile neutrino portal dark matter model with \(Z_{3}\) symmetry. In Sec. III, we illustrate the evolution of the dark matter abundance under certain scenarios. Then we perform a random scan to obtain the viable parameter space for correct relic abundance. In Sec. IV, we calculate the branching ratio of Higgs invisible decay. In Sec. V, we explore the indirect detection constraints of dark matter. In Sec. VI, we compute the direct detection cross section of dark matter. Finally, in Sec. VII, we provide concluding remarks for our study.
## II The model
This model further extends the Type-I seesaw with a dark sector under the \(Z_{3}\) symmetry. Sterile neutrinos \(N\) are introduced to generate tiny neutrino mass. In this paper, we consider the electroweak scale \(N\) in order to accommodate WIMP dark matter. The dark sector consists of a scalar singlet \(\phi\) and a fermion singlet \(\chi\). Under the dark \(Z_{3}\) symmetry, the dark sector transforms as \(\chi\to e^{i2\pi/3}\chi,\phi\to e^{i2\pi/3}\phi\), while the standard model particles and the sterile neutrinos transform trivially. The lightest particle in the dark sector serves as dark matter. In this paper, both fermion and scalar dark matter will be considered.
The Yukawa interaction takes the form of
\[-\mathcal{L}_{Y}=\left(y_{\nu}\overline{L}\tilde{H}N+y_{N}\phi \bar{\chi}N+h.c.\right)+y_{\chi}\phi\overline{\chi^{e}}\chi, \tag{1}\]
where \(L\) is the left-handed lepton doublet and \(H\) is the Higgs doublet with \(\tilde{H}=i\sigma_{2}H^{*}\). Light neutrino mass is generated by the Type-I seesaw as
\[m_{\nu}=-\frac{v^{2}}{2}y_{\nu}m_{N}^{-1}y_{\nu}^{T}, \tag{2}\]
where \(v=246\) GeV is the vacuum expectation value of the Higgs field. For electroweak scale sterile neutrinos, the mixing angle with light neutrino \(\theta\) is at the order of \(\sqrt{m_{\nu}/m_{N}}\sim 10^{-6}\), which is far below current collider limits [53].
The scalar potential under the exact \(Z_{3}\) symmetry is
\[V=-\mu_{H}^{2}H^{\dagger}H+\mu_{\phi}^{2}\phi^{\dagger}\phi+\lambda_{H}(H^{ \dagger}H)^{2}+\lambda_{\phi}(\phi^{\dagger}\phi)^{2}+\lambda_{H\phi}(H^{\dagger }H)(\phi^{\dagger}\phi)+\left(\frac{\mu}{2}\phi^{3}+h.c.\right), \tag{3}\]
where all the parameters are taken to be real. After the electroweak symmetry breaking, the physical mass of dark scalar \(\phi\) is \(m_{\phi}^{2}=\mu_{\phi}^{2}+\lambda_{H\phi}v^{2}/2\). The scalar potential in Equation (3) must have a finite minimum to prevent unbounded energy, which requires [49]
\[\lambda_{H}>0,\quad\lambda_{\phi}>0,\quad\lambda_{H\phi}+2\sqrt{\lambda_{H} \lambda_{\phi}}>0. \tag{4}\]
Meanwhile, the stability of electroweak vacuum sets an upper bound on the cubic coupling \(\mu\) as
\[\mu\leq 2\sqrt{\lambda_{\phi}}m_{\phi}, \tag{5}\]
in the limit of small \(\lambda_{H\phi}\). In order to maintain the validity of perturbation theory, \(|\lambda_{\phi}|\leqslant\pi\) and \(|\lambda_{H\phi}|\leqslant 4\pi\) should be further satisfied. In the following studies, we assume \(\mu<3m_{\phi}\), which is also allowed by the unitarity constraints [54].
## III Relic density
Figure 1: The dominant annihilation channels of fermion dark matter. Panel (a) is for the secluded channel \(\chi\chi\to NN\), while panels (b)-(d) are for the semi-annihilation channel \(\chi\chi\to N\chi\).
In this section, we first discuss the annihilation channels of dark matter. As shown in Figure 1, there are two dominant annihilation channels for the fermion dark matter \(\chi\). One is the secluded channel \(\chi\chi\to NN\), which also exists in the \(Z_{2}\) symmetry model. The other one is the semi-annihilation channel \(\chi\chi\to N\chi\)[55], which is induced by the new Yukawa coupling \(y_{\chi}\phi\overline{\chi^{c}}\chi\) under the \(Z_{3}\) symmetry. In principle, there is also the scalar semi-annihilation channel \(\chi\chi\to\phi h\), however the annihilation cross section is \(p\)-wave suppressed for the Majorana-like Yukawa coupling \(y_{\chi}\phi\overline{\chi^{c}}\chi\)[56; 57]. For the scalar dark matter \(\phi\), there are four kinds of annihilation channels as depicted in Figure 2. Apart from the extensively studied Higgs portal channels \(\phi\phi\to\) SM, the secluded channel \(\phi\phi\to NN\) is also allowed. Meanwhile, the cubic term \(\mu\phi^{3}\) and the Yukawa coupling \(y_{\chi}\phi\overline{\chi^{c}}\chi\) induce two additional semi-annihilation channels \(\phi\phi\to N\chi\) and \(\phi\phi\to h\phi\) if kinematically allowed. Comparing with the simplest \(Z_{3}\) scalar singlet dark matter [54], the fermion channel \(\phi\phi\to N\chi\) is unique in this model. Therefore, the sterile neutrino portal semi-annihilation channels \(\chi\chi\to N\chi\) and \(\phi\phi\to N\chi\) provide a viable pathway to distinguish from the other models. It is notable that
Figure 2: The dominant annihilation channels of scalar dark matter. Panel (a) and (b) denote the annihilation channels to SM final states. Panel (c) is for the secluded channel \(\phi\phi\to NN\). Panels (d)-(f) are for the fermion semi-annihilation channel \(\phi\phi\to N\chi\). Panels (g)-(i) are for the scalar semi-annihilation channel \(\phi\phi\to h\phi\).
when masses of the dark sector are nearly degenerate, the co-annihilation channels as \(\phi\chi\to h\chi/\phi N\) are also possible. For simplicity, we do not consider such co-annihilation channels in this paper.
As the WIMP dark matter candidate, it is in thermal equilibrium at the very beginning, and then decouples from the thermal bath when the temperature is low enough. Defining the variable \(z=m_{\rm DM}/T\), the evolution of fermion dark matter abundance \(Y_{\chi}\) is determined by the Boltzmann equation
\[\frac{{\rm d}Y_{\chi}}{{\rm d}z}=-\frac{\lambda}{z^{2}}\langle\sigma v\rangle_ {\chi\chi\to NN}\left(Y_{\chi}^{2}-(Y_{\chi}^{\rm eq})^{2}\right)-\frac{ \lambda}{2z^{2}}\langle\sigma v\rangle_{\chi\chi\to N_{\chi}}\left(Y_{\chi}^{2 }-Y_{\chi}^{\rm eq}Y_{\chi}\right), \tag{6}\]
where \(\lambda\) is defined as \(\lambda\equiv\sqrt{\pi g_{*}/45}m_{\rm DM}M_{\rm Pl}\). Here, \(g_{*}\) is the effective number of degrees of freedom of the relativistic species and \(M_{\rm Pl}=1.2\times 10^{19}\) GeV is the Planck mass. The sterile neutrino \(N\) is assumed in thermal equilibrium [58]. Similarly, the evolution of scalar dark matter is calculated as
\[\frac{{\rm d}Y_{\phi}}{{\rm d}z} = -\frac{\lambda}{z^{2}}\langle\sigma v\rangle_{\phi\phi\to{\rm SM }}\left(Y_{\phi}^{2}-(Y_{\phi}^{\rm eq})^{2}\right)-\frac{\lambda}{2z^{2}} \langle\sigma v\rangle_{\phi\phi\to h\phi}\left(Y_{\phi}^{2}-Y_{\phi}^{\rm eq }Y_{\phi}\right)\] \[-\frac{\lambda}{z^{2}}\langle\sigma v\rangle_{\phi\phi\to NN }\left(Y_{\phi}^{2}-(Y_{\phi}^{\rm eq})^{2}\right)-\frac{\lambda}{2z^{2}} \langle\sigma v\rangle_{\phi\phi\to N_{\chi}}\left(Y_{\phi}^{2}-\frac{(Y_{ \phi}^{\rm eq})^{2}}{Y_{\chi}^{\rm eq}}Y_{\chi}\right).\]
The thermally averaged cross sections \(\langle\sigma v\rangle\) are calculated with MicrOMEGAs [59].
Figure 3 and Figure 4 present the evolution of the dark matter relic abundance via various annihilation channels during the early universe. According to the general definition, the lighter of fermion \(\chi\) and scalar \(\phi\) is the dark matter candidate. For illustration, we set the dark matter mass \(m_{\rm DM}=500\) GeV, the other heavier particle mass \(m_{\rm Heavier}=800\) GeV, and the sterile neutrino mass \(m_{N}=180\) GeV. The secluded channel \(\chi\chi\to NN\) only involves the Yukawa coupling \(y_{N}\), whose impacts on the abundance is shown in panel (a) of Figure 3. The contribution of semi-annihilation channel \(\chi\chi\to N\chi\) is turned off simply by
Figure 3: The evolution of fermion dark matter abundance in different major annihilation channels. The orange horizontal lines correspond to the Planck observed abundance for \(m_{\rm DM}=500\) GeV.
Figure 4: Same as Figure 3, but for scalar dark matter.
setting \(y_{\chi}=0\). To obtain correct relic abundance, \(y_{\chi}\sim{\cal O}(0.1)\) is required when \(\chi\chi\to NN\) is the only annihilation channel. For the process \(\chi\chi\to N\chi\), both the Yukawa coupling \(y_{N}\) and \(y_{\chi}\) contribute. We then fix \(y_{N}=0.2\), and show the impact of \(y_{\chi}\) in panel (b). The observed relic abundance is reproduced with \(y_{\chi}\sim{\cal O}(1)\). Since \(\chi\chi\to NN\) is also kinematically allowed, it is clear that when \(y_{\chi}\ll y_{N}\), the relic abundance is actually determined by the secluded channel.
For the scalar dark matter \(\phi\), we first show the impact of \(\lambda_{H\phi}\) on the canonical Higgs portal annihilation channels in panel (a) of Figure 4. Contributions of other kinds of annihilation channels are forbidden by fixing \(y_{\chi}=y_{N}=0\) and \(\mu=0\) GeV. These Higgs portal channels are efficient to obtain the desired abundance with \(\lambda_{H\phi}\gtrsim{\cal O}(0.01)\). The contribution of scalar semi-annihilation \(\phi\phi\to h\phi\) is depicted in panel (b) while setting \(\lambda_{H\phi}=0.05\). A relatively large cubic coupling \(\mu\gtrsim 100\) GeV is required to make this channel the dominant one. In panels (c) to (f), we consider the secluded channel \(\phi\phi\to NN\) and fermion semi-annihilation channel \(\phi\phi\to N\chi\). Similar to the fermion dark matter scenario, correct abundance is achieved with \(y_{N}\sim{\cal O}(0.1)\) or \(y_{\chi}\sim{\cal O}(1)\) when \(\phi\phi\to NN\) or \(\phi\phi\to N\chi\) is the dominant annihilation channel respectively. Different from the fermion dark matter, the \(s\)-channel of semi-annihilation \(\phi\phi\to N\chi\) is induced by the cubic term \(\mu\phi^{3}\) but not the Yukawa coupling \(y_{\chi}\phi\overline{\chi}^{c}\chi\). The contributions of the \(s\)- and \(t/u\)-channels are then separately shown in panels (d) and (e). Since the contribution of \(\phi\phi\to N\chi\) is suppressed by the final states phase space in the benchmark points, a larger contribution is possible with lighter \(\chi,N\), which is illustrated in panel (f).
Generating the appropriate cosmological relic density, as determined with high accuracy by the Planck experiment \(\Omega_{\rm DM}h^{2}=0.120\pm 0.001\)[60], is a crucial prerequisite for a viable dark matter candidate. With a seesaw related mixing angle \(\theta\sim\sqrt{m_{\nu}/m_{N}}\), the lifetime of sterile neutrino \(\tau_{N}\) would be longer than \({\cal O}(0.1)\) s for \(m_{N}<1\) GeV, which is excluded by Big Bang Nucleosynthesis [61]. So we assume \(m_{N}>1\) GeV in this paper, and then perform a random scan to explore the following dark sector parameter space:
\[y_{\chi,N}\in[10^{-4},1],\lambda_{H\phi}\in[10^{-6},1],\mu/m_{\phi}\in[0,3],m_ {\chi,\phi}\in[1,10^{3}]\;\text{GeV}. \tag{8}\]
Samples with correct relic density in the \(3\sigma\) range of the Planck value are kept for later study. Survival samples are then classified by the dominant annihilation channels. The results are shown in Figure 5 and Figure 6 for the case of fermion and scalar dark matter respectively. When \(\chi\chi\to NN\) is the dominant channel, it is clear that a lower bound on \(y_{N}\) exists, which is approximately \(y_{N}\gtrsim(m_{\chi}/10^{4}\;\text{GeV})^{1/2}\). Including the contribution of semi-annihilation channel \(\chi\chi\to N\chi\) would allow \(y_{N}\) to be about two orders of magnitude smaller. Similarly, the lower limit \(y_{\chi}\gtrsim m_{\chi}/10^{4}\) GeV should be satisfied when \(\chi\chi\to N\chi\) is the dominant channel. It is also clear that for large enough \(y_{\chi}\), i.e., \(y_{\chi}\gtrsim 0.3\), the semi-annihilation channel
will always have the largest contribution. These two channels are well separated in the \(y_{N}-y_{\chi}\) plane, namely \(\chi\chi\to N\chi\) is the leading one when \(y_{\chi}\gtrsim 0.6y_{N}\). As shown in Figure 1 (b), there is a \(s\)-channel for the \(\chi\chi\to N\chi\) annihilation. Therefore, the contribution of this channel is enhanced when \(m_{\phi}\simeq 2m_{\chi}\), which leads to the deep cusp in panel (d) of Figure 5. Apart from the resonance region, the larger the ratio \(m_{\phi}/m_{\chi}\) is, the bigger the factor \(\sqrt{y_{\chi}y_{N}}\) is required.
For the scalar dark matter, \(\lambda_{H\phi}\lesssim 0.1\) is enough to realize the correct relic density when 10 GeV \(\lesssim m_{\phi}\lesssim 10^{3}\) GeV. The sharp dip around \(m_{\phi}\sim m_{h}/2\) corresponds to the on-shell production of \(h\) in the \(s\)-channel, where \(\lambda_{H\phi}\) can be as small as \(10^{-4}\). The SM channel dominant samples via the Higgs portal distribute mainly on the upper edge of the allowed region of \(\lambda_{H\phi}\). For the scalar semi-annihilation dominant samples, \(\lambda_{H\phi}\gtrsim 10^{-3}\) is required. The lower limit on \(\lambda_{H\phi}\) for the \(h\phi\) dominant channel grows as \(m_{\phi}\) increases. So the contribution of the \(\phi\phi\to h\phi\) channel is only important in the range of \([10^{2},10^{3}]\) GeV.
Figure 5: Distributions of samples with correct relic density for fermion dark matter. The purple and blue points denote that the dominant annihilation channel is \(\chi\chi\to NN\) and \(\chi\chi\to N\chi\) respectively.
Figure 6: Distributions of samples with correct relic density for scalar dark matter. The purple, blue, yellow and green points denote that the dominant annihilation channel is \(\phi\phi\to NN\), \(\phi\phi\to N\chi\), \(\phi\phi\to\) SM, and \(\phi\phi\to h\phi\) respectively.
Since the \(h\phi\) channel also involves the cubic coupling \(\mu\), the \(h\phi\) dominant samples have the largest value of \(\mu\lambda_{H\phi}\) in the allowed region. For the secluded channel \(\phi\phi\to NN\) and the semi-annihilation channel \(\phi\phi\to N\chi\), lower limits on \(y_{N}\) are required, which is similar to the fermion dark matter. However, due to additional contributions from the \(\phi\phi\to\) SM and \(\phi\phi\to h\phi\) channels, \(y_{N}\) can be as small as \(10^{-4}\) when \(m_{\phi}\) is above \(\mathcal{O}(10)\) GeV, which is different from the fermion dark matter. In panels (d) and (e) of Figure 6, we show the corresponding parameters that are involved in the \(\phi\phi\to N\chi\) channel. Approximate lower limits on the factors \(\sqrt{y_{\chi}y_{N}}\) and \(\mu y_{N}\) exist for \(\phi\phi\to N\chi\) dominant samples. To make sure this semi-annihilation channel is kinematically allowed, \(m_{\chi}/m_{\phi}\lesssim(2m_{\phi}-m_{N})/m_{\phi}\lesssim 2\) should be satisfied, which is depicted in panel (f). There is also a fake upper limit \(m_{\chi}/m_{\phi}<10\) for the \(\phi\phi\to h\phi\) dominant samples, because it happens to be \(m_{\chi}^{\text{max}}\simeq 10^{3}\) GeV and \(m_{\phi}^{\text{min}}\simeq 10^{2}\) GeV.
## IV Higgs invisible decay
In principle, the light sterile neutrino could induce additional Higgs decay mode \(h\to\nu N\) when \(m_{N}\lesssim m_{h}\). However, with the seesaw predicted Yukawa coupling \(y_{\nu}\sim\sqrt{2m_{\nu}m_{N}}/v\sim\mathcal{O}(10^{-6})\), the corresponding decay width is heavily suppressed. Meanwhile, for a light enough dark matter, it can contribute to the Higgs invisible decay. The corresponding branching ratio has been constrained by the ATLAS experiment with [62]:
\[\mathrm{Br}_{\text{inv}}=\frac{\Gamma_{\text{inv}}}{\Gamma_{\text{inv}}+\Gamma _{\text{SM}}}<0.11, \tag{9}\]
where \(\Gamma_{\text{SM}}\simeq 4\) MeV is the standard Higgs width. The theoretical Higgs invisible decay widths into the dark matter are given by [43]:
\[\Gamma(h\to\phi\phi) = \frac{\lambda_{H\phi}^{2}v^{2}}{8\pi m_{h}}\sqrt{1-\frac{4m_{ \phi}^{2}}{m_{h}^{2}}}, \tag{10}\] \[\Gamma(h\to\bar{\chi}\chi) = \frac{m_{h}(\lambda_{H\chi}^{eff})^{2}}{8\pi}\left(1-\frac{4m_{ \chi}^{2}}{m_{h}^{2}}\right)^{3/2}, \tag{11}\]
where the one-loop effective \(h\bar{\chi}\chi\) coupling is
\[\lambda_{H\chi}^{eff} = \lambda_{H\phi}\frac{y_{N}^{2}}{16\pi^{2}}\frac{m_{N}}{(m_{\phi}^ {2}-m_{N}^{2})^{2}}\left(m_{\phi}^{2}-m_{N}^{2}+m_{N}^{2}\log\frac{m_{N}^{2}}{ m_{\phi}^{2}}\right). \tag{12}\]
Figure 7 shows the theoretical branching ratios of Higgs invisible decay induced by dark matter and various constraints. For fermion dark matter, the branching ratio of Higgs invisible decay is below \(10^{-10}\), which is because of the loop suppression of effective Higgs coupling \(\lambda_{H\chi}^{eff}\). The predicted branching ratio is proportional to the coupling \(\lambda_{H\phi}\). For maximum \(\text{Br}_{\text{inv}}\simeq 10^{-10}\), the dominant annihilation channel is found
Figure 7: Branching ratios of invisible Higgs decay induced by fermion (left panels) and scalar dark matter (right panels). In panels (a)-(d), the purple, slate blue, and red points are excluded by ATLAS search of Higgs invisible decay [62], direct detection of LZ [64], and indirect detection [31], respectively. The blue points satisfy all current constraints but are within the reach of the future LZ experiment [65], meanwhile the green points are unconstrained. In panels (e) and (f), we also classify the samples according to the dominant annihilation channels. Labels of the samples are the same as in Figure 5 and Figure 6.
to be \(NN\). Meanwhile, for extremely tiny \(\text{Br}_{\text{inv}}\lesssim 10^{-22}\), the \(N\chi\) channel is the dominant one. Because the future HL-LHC could only probe \(\text{Br}_{\text{inv}}\gtrsim 0.033\)[63], this negligible tiny branching ratio induced by fermion dark matter is thus far beyond the reach of direct collider measurement. As will show later, most samples in the light dark matter region below about 60 GeV are excluded by indirect search.
The scenario for scalar dark matter is quite different, where \(\text{Br}_{\text{inv}}\) could be the dominant decay channel of Higgs when \(\lambda_{H\phi}>0.1\). Under current constraints, the region with \(m_{\phi}\lesssim 53\) GeV and \(\text{Br}_{\text{inv}}>0.11\) is excluded by the ATLAS experiment, which corresponds to \(\lambda_{H\phi}\gtrsim 10^{-2}\) disallowed in this region. According to panel (f), most excluded samples in this region are dominantly annihilation via the \(\phi\phi\to\text{SM}\) channel. For \(m_{\phi}\gtrsim 10\) GeV, the direct detection experiment as LZ [64] sets a more stringent constrain than the Higgs invisible decay, where \(\text{Br}_{\text{inv}}\gtrsim 10^{-3}\) and \(\lambda_{H\phi}\gtrsim 4\times 10^{-4}\) might be excluded. Such a small branching ratio is also beyond the reach of future HL-LHC [63]. As for scalar dark matter lighter than 10 GeV, the Higgs invisible decay leads to more strict constrain than direct detection. The annihilation channels of samples in the region which escapes the limits from Higgs invisible decay and direct detection are \(\phi\phi\to NN\) and \(\phi\phi\to N\chi\). Although this region is also tightly constrained by indirect detection, there are still some samples that satisfy all current constraints. We have checked that most of these light-mass survived samples annihilate via \(\phi\phi\to N\chi\) with the special requirement \(2m_{\phi}\lesssim m_{N}+m_{\chi}\). So if HL-LHC discovers a relatively large \(\text{Br}_{\text{inv}}\), the dark matter candidate should be a scalar with mass around a few GeV, meanwhile the sterile neutrino is also at the GeV scale.
## V Indirect detection
The indirect detection experiments aim to search for various types of particles produced in dark matter annihilation. The differential flux arising from the annihilation of dark matter is calculated as
\[\frac{d\Phi}{dE}=\frac{1}{4\pi}\frac{\langle\sigma v\rangle}{2m_{ \text{DM}}^{2}}\frac{dN}{dE}\cdot\int_{\Delta\Omega}d\Omega\int\rho_{\text{DM }}^{2}(s)ds, \tag{13}\]
where \(\rho_{\text{DM}}\) is the dark matter density of the observed object. The energy spectrum \(dN/dE\) describes the distribution of observed particles from dark annihilation. According to previous studies, current indirect limits could constrain dark matter mass below about 50 GeV [32]. In this region, the dominant annihilation channels for both fermion and scalar dark matter are \(NN\) and \(N\chi\) final states, after imposing the constraint from Higgs invisible decay. The resulting spectrum \(dN/dE\) depends on both the masses \(m_{\text{DM}},m_{N}\) and decay modes of sterile neutrino \(N\). With light \(m_{N}<m_{W}\), the three-body decay width via off-shell \(W\) and \(Z\) can be estimated as
\[\Gamma_{N}\approx\frac{G_{f}^{2}}{192\pi^{3}}|\theta_{\alpha}|^{2}m_{N}^{5}, \tag{14}\]
Figure 8: Exclusion limits of indirect detection experiments. In panels (a) and (b), the yellow and orange curves represent the bounds of the antiproton-to-proton flux ratio from AMS-02 [66] and gamma-rays in the Milky Way dSphs from FermiLAT [67] with typical thermal annihilation cross section \(\langle\sigma v\rangle=2.2\times 10^{-26}\) cm\({}^{3}\)s\({}^{-1}\). In panels (c) to (f), the blue and purple curves illustrate the limits on annihilation cross section from Fermi-LAT and H.E.S.S. [31]. The yellow line shows the sensitivity of future CTA experiment for the \(W^{+}W^{-}\) annihilation modes [68]. Other labels are the same as in Figure 7.
where \(\theta_{\alpha}(\alpha=e,\mu,\tau)\) describes the mixing angle between sterile and active neutrinos for different flavors. For a heavier sterile neutrino, the two-body decays \(N\to W^{\pm}\ell^{\mp},Z\nu,h\nu\) are the dominant channels. The continuum spectra of muon flavor \(N\) is similar to the electron flavor scenario, while the tau flavor \(N\) produces a slightly stronger gamma-ray spectrum [31]. In this paper, we consider an electron flavor \(N\) for a conservative study.
Using the observed spectrum, the exclusion limits on dark matter annihilation can be derived by performing a likelihood analysis, although the large astrophysical uncertainties would affect these limits. In Figure 8, we show the indirect detection constraints from the antiproton observations of AMS-02 and the gamma-rays observations of Fermi-LAT [32]. For light \(m_{N}\sim\mathcal{O}\)(GeV), the Fermi-LAT experiment could exclude \(m_{\text{DM}}\lesssim 60\) GeV. Meanwhile for heavier \(m_{N}\), the AMS-02 result would exclude \(m_{\text{DM}}\lesssim 80\) GeV. The combination of these two limits excludes most samples in the region below \(m_{\text{DM}}\lesssim 50\text{GeV}\). It is notable that these two limits in Figure 8 (a) and (b) are obtained with fixed thermal annihilation cross section \(\langle\sigma v\rangle=2.2\times 10^{-26}\) cm\({}^{3}\)s\({}^{-1}\). In panel (c)-(f), we show the theoretically predicted annihilation cross section for indirect detection, where the cross sections of the semi-annihilation process as \(\chi\chi/\phi\phi\to N\chi\), \(\phi\phi\to h\phi\) are multiplied by a factor of \(1/2\). Since the annihilation cross section today can be much smaller than the thermal target \(\langle\sigma v\rangle=2.2\times 10^{-26}\) cm\({}^{3}\)s\({}^{-1}\), we require that the samples are excluded by indirect detection when the corresponding cross sections are also above the Fermi-LAT limit on \(\langle\sigma v\rangle\).
In the low mass region below 50 GeV, the two annihilation channels of fermion dark matter lead to quite different results. For the \(\chi\chi\to NN\) dominant samples, the corresponding annihilation cross sections are at the typical value \(\langle\sigma v\rangle=2.2\times 10^{-26}\) cm\({}^{3}\)s\({}^{-1}\), so these samples are excluded by indirect detection. However for the \(\chi\chi\to N\chi\) dominant samples, due to the existence of \(s\)-channel contribution via the dark scalar \(\phi\), the annihilation cross section could be much smaller than \(2.2\times 10^{-26}\) cm\({}^{3}\)s\({}^{-1}\) when \(2m_{\chi}\simeq m_{\phi}\). In this special scenario, the \(\chi\chi\to N\chi\) dominant samples also satisfy the indirect constraints. For the scalar dark matter, although there are three annihilation modes in these low mass region, the \(\phi\phi\to\text{SM}\) channel is tightly constrained by Higgs invisible decay and direct detection.
Similar to the fermion dark matter, most of the \(\phi\phi\to NN\) dominant samples could be excluded by indirect detection in the low mass region, while some of the \(\phi\phi\to N\chi\) dominant samples are still allowed. Although there is no on-shell \(s\)-channel contribution of the \(\phi\phi\to N\chi\) channel, we find that the allowed samples satisfy \(2m_{\phi}\lesssim m_{\chi}+m_{N}\) as shown in Figure 9. So these samples fall into the forbidden region, where the non-relativistic velocity of dark matter today can not overcome the mass splitting \(m_{\chi}+m_{N}-2m_{\phi}\)[69]. The scalar dark matter annihilates into SM particles also has on-shell \(s\)-channel contributions when \(2m_{\phi}\simeq m_{h}\). Meanwhile, the scalar semi-annihilation \(\phi\phi\to h\phi\) meets the forbidden condition when \(m_{\phi}\lesssim m_{h}\). The resulting annihilation cross sections of these two kinds are much smaller than the typical
value \(\langle\sigma v\rangle=2.2\times 10^{-26}\) cm\({}^{3}\)s\({}^{-1}\), so these samples are hard to probe by indirect detection.
For dark matter around the TeV scale, the most stringent constraint is from H.E.S.S. observation [70]. Currently, no samples could be excluded by H.E.S.S.. If there is no positive signal at future direct detection experiments, the scalar semi-annihilation \(\phi\phi\to h\phi\) would be excluded. So in this heavy mass region, we only consider the sterile neutrino portal annihilation channels \(\phi\phi/\chi\chi\to NN\). Because the photon spectrum from \(NN\) final state is similar to \(W^{+}W^{-}\) final state, we also show the future limits from the Cherenkov Telescope Array (CTA) [68], which will cover most samples above 200 GeV.
## VI Direct detection
In this model, the dark matter scatters off the atomic nucleus elastically via the \(t\)-channel exchange of Higgs boson \(h\). For scalar dark matter, this scattering happens at tree-level, while for fermion dark matter, it is induced at one-loop level [43]. The dark matter direct detection experiments measure the nuclear recoil energy and set constraints on the dark matter-nucleon scattering cross section. Until now, no concrete signal is observed by direct detection experiments, such as PandaX-4T [71], XENONnT [72], and LZ [64]. In this paper, we consider the most stringent limit from LZ and Darkside-50 [73] at present and the future projected limit from LZ [65]. For light dark matter below about 10 GeV, the Darkside-50 experiment sets the most stringent limit, which excludes \(\sigma_{\rm SI}\gtrsim 10^{-43}\) cm\({}^{2}\). For heavier dark matter, the LZ limit is the most tight one, where the minimum is at \(m_{\text{DM}}=30\) GeV with \(\sigma_{\rm SI}=5.9\times 10^{-48}\) cm\({}^{2}\).
Within the context of the Higgs-portal effective scenarios [74], the spin-independent cross section for
Figure 9: Annihilation cross section for indirect detection. The horizontal dash-dotted line is the minimum value of the Fermi-LAT limit in Figure 8
dark matter collision with nucleon can be expressed as
\[\sigma^{\phi n}_{\rm SI} =\frac{\lambda_{H\phi}^{2}}{\pi m_{h}^{4}}\frac{m_{n}^{4}f_{n}^{2}}{ (m_{\phi}+m_{n})^{2}}\;, \tag{15}\] \[\sigma^{\chi n}_{\rm SI} =\frac{(\lambda_{H\chi}^{eff})^{2}}{\pi m_{h}^{4}}\frac{m_{n}^{4} m_{\chi}^{2}f_{n}^{2}}{(m_{\chi}+m_{n})^{2}}\;. \tag{16}\]
The nucleon mass is denoted as \(m_{n}\), and the parameter \(f_{n}\simeq 0.3\) are used to parameterize the Higgs-nucleon interactions [4]. The effective coupling \(\lambda_{H\chi}^{eff}\) is calculated in Equation (12).
The scanned results are shown in Figure 10 for both fermion and scalar dark matter. Because the scattering cross section for fermion dark matter is suppressed by the one-loop factor, the predicted values are usually far below current experimental limits. Within the parameter space scanned in Equation (8), even the future projection of the LZ experiment could not have a positive signature when taking into account the limits from indirect detection. We also find that samples with relatively large scattering cross section \(\sigma_{\rm SI}\gtrsim 10^{-50}\) cm\({}^{2}\) are dominant by the semi-annihilation channel \(\chi\chi\to N\chi\). In principle, increasing the maximum values of related coupling \(\lambda_{H\phi}\) and \(y_{N}\) to the perturbation limits could lead to the predicted scattering cross section above current limits [43]. These detectable samples are expected to annihilate via \(\chi\chi\to N\chi\) channel in this model.
As for the scalar dark matter, the coupling \(\lambda_{H\phi}\) induces quite a large scattering cross section with correct relic density. The direct detection experiments, such as LZ, could now exclude samples with \(\lambda_{H\phi}\gtrsim 4\times 10^{-4}\). As already discussed in Section IV, the Higgs invisible decay has excluded \(\lambda_{H\phi}\gtrsim 10^{-2}\) for \(m_{\phi}<m_{h}/2\). It is clear in panel (d) of Figure 10 that the exclusion limit from Higgs invisible decay is more stringent than the direct limit from Darkside-50, therefore we do not consider the Darkside-50 limit in this study. For \(m_{\phi}\gtrsim 10\) GeV, the direct limit from LZ is over two orders of magnitudes tighter than the Higgs invisible limit. Since the indirect detection has already excluded most samples with \(m_{\phi}\lesssim 50\) GeV, the future projection of LZ is also hard to probe the allowed samples in this region. Together with panel (f) of Figure 10, the only viable region of \(\phi\phi\to\) SM dominant samples is the narrow resonance region at \(m_{\phi}\lesssim m_{h}/2\). The predicted cross section of SM dominant samples can be as small as \(\sigma_{\rm SI}\sim 10^{-49}\) cm\({}^{2}\), which is also beyond the reach of future LZ.
For heavier scalar dark matter above 100 GeV, only the direct detection experiments set the corresponding limit now. The \(\phi\phi\to\) SM dominant samples lead to the largest predicted cross section \(\sigma_{\rm SI}\simeq 10^{-45}\) cm\({}^{2}\), which is disfavored by the current LZ limit when \(m_{\phi}<1\) TeV. It is still possible for \(\phi\phi\to\) SM dominant samples with \(m_{\phi}\) above 1 TeV [75], but is out of the parameter space we scanned in Equation (8). The new contribution of the semi-annihilation \(\phi\phi\to h\phi\) channel would induce a smaller scattering cross section. Since this semi-annihilation channel also involves the cubic coupling, the stability and unitarity bounds
Figure 10: The predicted spin-independent cross section and various exclusion limits. The purple and red lines are the current exclusion limits from Darkside-50 and LZ respectively. The yellow line is the projected LZ limit. Other labels are the same as in Figure 7.
\(\mu<3m_{\phi}\) then lead to a lower bound for the predicted cross section. For instance, the minimum predicted cross section is about \(3\times 10^{-47}\) cm\({}^{2}\) with \(m_{\phi}\sim 130\) GeV. Once \(\phi\phi\to h\phi\) is kinematically allowed, this lower bound on \(\sigma_{\rm SI}\) increases as \(m_{\phi}\) is larger. Under the current LZ limit, some of the \(\phi\phi\to h\phi\) dominant samples are still allowed. In the future, the projected LZ limit could probe all the \(\phi\phi\to h\phi\) dominant samples. So if there is still no positive signature at future LZ, the allowed samples will be dominant by \(\phi\phi\to NN\) and \(\phi\phi\to N\chi\) channels. We then expect observable signatures at indirect detection experiments from these two channels for most of the allowed parameter space.
## VII Conclusion
Besides generating tiny neutrino mass via the type-I seesaw mechanism, the electroweak scale sterile neutrino \(N\) can also mediate the interaction between the dark sector and the standard model. Beyond the simplest \(Z_{2}\) symmetry, we extend the sterile neutrino portal dark matter with \(Z_{3}\) symmetry in this paper. We introduce a scalar singlet \(\phi\) and a fermion singlet \(\chi\) to the dark sector. Under the dark \(Z_{3}\) symmetry, the dark sector transforms as \(\chi\to e^{i2\pi/3}\chi,\phi\to e^{i2\pi/3}\phi\), while the standard model particles and the sterile neutrinos transform trivially. In this paper, we consider WIMP dark matter for both scalar and fermion scenarios. The \(Z_{3}\) symmetry introduces two new interactions, i.e., \(y_{\chi}\phi\overline{\chi^{c}}\chi\) and \(\mu\phi^{3}/2\), which lead to semi-annihilation channels as \(\chi\chi\to N\chi\) and \(\phi\phi\to h\phi\).
For the fermion dark matter \(\chi\), the annihilation channels are secluded \(\chi\chi\to NN\) and the semi-annihilation \(\chi\chi\to N\chi\). Because the effective \(h\bar{\chi}\chi\) coupling is induced at the one-loop level, the contribution of fermion dark matter to Higgs invisible decay is negligible tiny. The resulting dark matter-nucleon scattering cross section is also beyond future experimental reach. Currently, the indirect detection could exclude most of the samples with \(m_{\chi}\lesssim 50\) GeV. In the future, the CTA experiment is expected to probe the high mass region. However, due to the \(s\)-channel contribution of dark scalar \(\phi\) to the semi-annihilation \(\chi\chi\to N\chi\), the corresponding annihilation cross section is much smaller than the usual thermal value \(\langle\sigma v\rangle=2\times 10^{-26}\) cm\({}^{3}\)s\({}^{-1}\) when \(2m_{\chi}\simeq m_{\phi}\). In this special scenario, even the indirect detection can not have a positive signature.
For the scalar dark matter \(\phi\), there are four kinds of annihilation channels, i.e., the Higgs portal \(\phi\phi\to\) SM, the secluded channel \(\phi\phi\to NN\), and the semi-annihilations \(\phi\phi\to N\chi,h\phi\). The direct Higgs portal interaction \(\lambda_{H\phi}(H^{\dagger}H)(\phi^{\dagger}\phi)\) generates observable signatures from Higgs invisible decay, indirect and direct detection. Under these constraints, the Higgs portal \(\phi\phi\to\) SM dominant samples are only viable at the resonance region \(m_{\phi}\lesssim m_{h}/2\) for dark scalar below 1 TeV. The semi-annihilation \(\phi\phi\to h\phi\) might be the dominant one in the range of \([10^{2},10^{3}]\) GeV. This channel predicts a lower bound on dark matter-nucleon
scattering cross section, which can be fully detected by the future LZ experiment. Meanwhile, the secluded \(\phi\phi\to NN\) and semi-annihilation \(\phi\phi\to N\chi\) channels can easily satisfy current bounds and are promising at indirect detection experiments. Although the light mass region is tightly constrained, we find that when the forbidden relation \(2m_{\phi}\lesssim m_{\chi}+m_{N}\) is satisfied, the semi-annihilation channel \(\phi\phi\to N\chi\) also has a suppressed annihilation cross section for indirect detection.
Compared with the \(Z_{2}\) symmetric model, the new interactions introduced in the \(Z_{3}\) symmetric model enlarge the viable parameter space. For instance, light dark matter below about 50 GeV is entirely excluded by indirect detection in the \(Z_{2}\) symmetric model. However, in the \(Z_{3}\) symmetric model, the semi-annihilation \(\chi\chi\to N\chi\) and \(\phi\phi\to N\chi\) are still allowed under certain circumstances. Therefore, once light dark matter is discovered, the \(Z_{3}\) symmetric model would be preferred.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China under Grant No. 11805081 and 11635009, Natural Science Foundation of Shandong Province under Grant No. ZR2019QA021 and ZR2022MA056, the Open Project of Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology under Grant No. NLK2021-07.
|
2310.09040 | Optimal Scheduling of Electric Vehicle Charging with Deep Reinforcement
Learning considering End Users Flexibility | The rapid growth of decentralized energy resources and especially Electric
Vehicles (EV), that are expected to increase sharply over the next decade, will
put further stress on existing power distribution networks, increasing the need
for higher system reliability and flexibility. In an attempt to avoid
unnecessary network investments and to increase the controllability over
distribution networks, network operators develop demand response (DR) programs
that incentivize end users to shift their consumption in return for financial
or other benefits. Artificial intelligence (AI) methods are in the research
forefront for residential load scheduling applications, mainly due to their
high accuracy, high computational speed and lower dependence on the physical
characteristics of the models under development. The aim of this work is to
identify households' EV cost-reducing charging policy under a Time-of-Use
tariff scheme, with the use of Deep Reinforcement Learning, and more
specifically Deep Q-Networks (DQN). A novel end users flexibility potential
reward is inferred from historical data analysis, where households with solar
power generation have been used to train and test the designed algorithm. The
suggested DQN EV charging policy can lead to more than 20% of savings in end
users electricity bills. | Christoforos Menos-Aikateriniadis, Stavros Sykiotis, Pavlos S. Georgilakis | 2023-10-13T12:07:36Z | http://arxiv.org/abs/2310.09040v1 | Optimal Scheduling of Electric Vehicle Charging with Deep Reinforcement Learning Considering End Users Flexibility
###### Abstract
The rapid growth of decentralized energy resources and especially Electric Vehicles (EV), that are expected to increase sharply over the next decade, will put further stress on existing power distribution networks, increasing the need for higher system reliability and flexibility. In an attempt to avoid unnecessary network investments and to increase the controllability over distribution networks, network operators develop demand response (DR) programs that incentivize end users to shift their consumption in return for financial or other benefits. Artificial intelligence (AI) methods are in the research forefront for residential load scheduling applications, mainly due to their high accuracy, high computational speed and lower dependence on the physical characteristics of the models under development. The aim of this work is to identify households' EV cost-reducing charging policy under a Time-of-Use tariff scheme, with the use of Deep Reinforcement Learning, and more specifically Deep Q-Networks (DQN). A novel end users flexibility potential reward is inferred from historical data analysis, where households with solar power generation have been used to train and test the designed algorithm. The suggested DQN EV charging policy can lead to more than 20% of savings in end users electricity bills.
DEEP Reinforcement Learning, Deep Q-Networks (DQN), SMART Grid, RESIDENTIAL LOAD SCHEDULING, DEMAND RESPONSE, ELECTVE CHICLE, SOLAR POWER
## 1 Introduction
The main focus of an electric power system is to ensure security of supply with the least possible cost [1]. On a residential level, the penetration of decentralized energy resources, such as photovoltaic systems (PV), local energy storage and electric vehicles (EV), have increased the difficulty in load and generation forecasting, and have consequently created the need for higher flexibility on both the demand and the supply side. In addition, this necessity is expected to grow more in the next decade, in line with the ongoing penetration of EV on a residential level. Based on the latest IEA's Global EV Outlook, EV global electricity demand can account for 1,100 TWh, around 4% of total global demand by 2030 [2]. Demand response (DR) can offer such flexibility in residential, low voltage networks by controlling large flexible household loads, such as EV, through the end users' home energy management systems (HEMS). Incentivizing end users to shift their consumption in different hours can be beneficial for network operators to tackle voltage, frequency and demand-related issues. In the meantime, consumers can decrease their electricity bills or be rewarded in other ways (incentives) by the network operators when agreeing to participate on a DR event.
Smart meters are a key component of such grid monitoring systems, since real data can be used to train AI models that can accurately analyze aggregated consumption signals of domestic appliances [3], infer battery's State of Charge (SoC) [4], identify consumption patterns and optimize the scheduling of residential resources in demand response applications [5, 6]. Reinforcement Learning (RL) and nature-inspired algorithms, such as genetic algorithm (GA) and particle swarm optimization (PSO), are commonly used for optimally scheduling residential energy resources given the lower modelling complexity and the lack of linearity in the problem formulation [7, 8]. However, determining optimal solutions with model-based approaches can become challenging when stochastic variables, such as end users consumption behavior, are considered [9]. In this case, system model information is required, modeling complexity increases and there is a lack of scalability and adaptability in the designed algorithmic solutions. On the contrary, Reinforcement Learning shows a more dynamic character than nature-inspired methods, since RL can be updated while in operation and continuously learn from past experiences.
Many authors have investigated and evaluated the performance of Reinforcement Learning in residential DR applications, since model-free RL methods can be used without the need of explicit mathematical formulation to model end users' consumption habits [10]. More specifically, the use of Deep Reinforcement Learning for EV scheduling has attracted increasing interest in the last few years. Both works [11, 12] use a Deep Q-learning (DQN) algorithm to optimize EV charging scheduling for residential DR, considering past electricity prices. Even if both works show high performance on electricity costs reduction, end users commuting habits and therefore
their charging flexibility is modeled as a normal distribution with randomly selected variables. Similarly, in work [13], which aims at scheduling multiple energy resources, including EV, residential consumption patterns are based on Gaussian probability density functions with consumer preferences not inferred from historical data. In work [14] end user driving preferences have been considered known in an attempt to identify a cost-reducing, long-term charging policy for a plug-in EV with batch, fitted Q-Iteration algorithm. On the contrary, in work [15] electricity costs and user discomfort have been jointly minimized with a DQN algorithm, where end user feedback is included in the rewards and sensor-based human activity in the RL states. Despite the clear contributions of this work, authors did not consider the contribution of renewable power generation and therefore consumption's carbon footprint in the energy model. The latter has been introduced in work [16] where DQN, double DQN and dueling double DQN are compared for a household energy management system that controls Heating, Ventilation, and Air Conditioning (HVAC), PV, EV and energy storage. EV charging has been based on end users' daily satisfaction, as obtained from historical data.
The above literature review indicates that there has been an increasing interest in EV scheduling optimization for DR applications with the use of Deep RL. However, in the majority of existing works, end users charging availability and flexibility are being modeled as stochastic variables without any correlation with historical consumption data. Furthermore, the contribution of solar power generation on EV optimal charging scheduling has been rarely witnessed in the literature reviewed.
In this work a novel framework for EV charging cost minimization that considers end user flexibility and solar power generation, is proposed with the use of DQN. The contributions of this paper can be summarized as follows:
* Provides a thorough residential EV load scheduling model through solar power generation, expanding on top of the rather limited existing research on this area. Different households and days with solar power generation are included in the training and test data sets.
* Utilizes a flexibility potential reward inferred from real household measurements. Instead of modeling end users charging habits with a probability density function, as in works [11, 14], a charging availability index is integrated in the Deep RL reward function, reflecting the probability for a user to charge their EV at each time interval.
* Formulates a constrained EV load scheduling problem, considering battery's technical characteristics and end user driving daily patterns. Both aspects are integrated into the Deep RL environment as EV Battery SoC monitoring and daily EV consumption rewards, respectively.
The structure of this paper is as follows. Section 2 presents the methodology followed to formulate the problem from an energy and RL perspective. Section 3 presents the experimental setup and compares the proposed DQN method with metered data. Section 4 concludes the paper.
## 2 Methodology
This work focuses on optimally charging the electric vehicle of residential EV owners. The energy system under examination consists of a household connected to the main grid, a battery electric vehicle (BEV), an EV slow charger, rooftop solar PV panels and a'residual load' consisting of the cumulative consumption from all other appliances. Real-time optimal EV charging is formulated as a 15-minute (discrete) time step optimization problem, aiming at distributing the charging load to optimal time intervals throughout the day. At each time step \(t\) of the 24-hour modeling horizon (\(T\) = 96), the aim is to define whether it is beneficial for the end user to charge their EV as well as estimate the amount of energy required, based on electricity retail tariffs and user daily driving habits, without jeopardizing their convenience.
User convenience can be described through a charging availability index that takes into consideration the historical daily EV charging pattern. The charging availability index expresses the frequency for an EV user to charge their vehicle at each 15-minute interval, as inferred by analysis performed on smart metered data from households in Austin, Texas, US found in the open-source data set of Pecan Street [17]. Fig. 1 shows the charging availability profile for an indicative household of the dataset. The high flexibility index from 17.00 to 21.00 indicates the user's preference to charge their BEV in the respective time slots. The quantiles of the charging probability density function are computed and specific thresholds are being selected to represent low, medium and high preference periods for EV charging.
In a Reinforcement Learning context, mechanisms of the optimization task need to be defined as an environment, which produces observations, and rewards an optimizer (agent), in the form of a Deep Neural Network, depending on its actions. Optimal EV charging task is formulated as a Markov Decision Process (MDP) defined by the tuple \((\mathcal{S},\mathcal{A},R,P)\). \(\mathcal{S}\) signifies the state space, i.e. the observations that the agent will evaluate to choose an action.
Fig. 1: Charging availability index for household #4373 in Austin, Texas. This index shows the historical EV charging frequency of user #4373 throughout the day (%).
The actions are obtained in the set \(\mathcal{A}=\{1,0\}\), meaning that the agent can, at any given time step \(t\), choose either to charge the EV or remain idle. The environment will then evaluate the action of the agent, depending on specific criteria and assign a respective reward. This process is iterated until the agent learns to choose the actions that maximize its rewards by learning the optimal state-action pairs \((s,a)\) at any timepoint \(t\). The agent then receives the next state _s'_ and the same routine is repeated through the training phase. A high-level overview of this approach is illustrated in Fig. 2.
### State
At each time step \(t\), the environment produces a vector \(s_{t}=(p_{t},P^{PV}_{t},P^{non-EV}_{t},P^{EV}_{t,run},SoC_{t},t)\) that is passed to the optimization agent. The information included in the state space is the following [18]: (1) \(p\), represents the Time-of-Use (ToU) electricity tariff at time \(t\), based on the load distribution (On-Peak, Mid-Peak, Off-Peak); (2) \(P^{PV}_{t}\) is the power generated by the rooftop solar PV; (3) \(P^{non-EV}_{t}\) denotes the total residual (non-EV) load; (4) \(P^{EV}_{t,run}\) shows the running EV power consumption, starting from the initialization of the episode (t=1) until the current time step \(t\); (5) \(SoC_{t}\) indicates the State of Charge at time step \(t\), which (6) holds information about the current time step of the episode.
In this work, the data are split on a daily basis to formulate episodes. Before the episode starts, the environment calculates the amount of power that the end user consumed to charge their EV (\(P^{EV}_{day}\)) using historical consumption data. Given that the number of EV battery charging cycles per day is unknown, it is assumed that the historically consumed daily power corresponds to a full charging cycle [18] and starting SoC, \(SOC_{start}\), is calculated as in Equation (1):
\[SoC_{start}=1-\eta\frac{P^{EV}_{day}}{4E_{batt}} \tag{1}\]
where \(\eta\) is the charging efficiency of the battery pack and \(E_{batt}\) is the rated battery capacity in kWh. Equation (2) ensures that optimized EV daily power consumption will remain similar (\(\pm 5\%\)) to the historical consumption \(P^{EV}_{day}\) after any load shifting actions on that specific episode (day).
\[P^{EV}_{day}\in[0.95\sum_{t=1}^{T}P^{EV}_{t},1.05\sum_{t=1}^{T}P^{EV}_{t}] \tag{2}\]
### Action
At every state \(s_{t}\) the agent should select an action as follows:
\[\alpha^{EV}_{t}=\{1,0\},\forall\alpha\in A,\forall t\in T \tag{3}\]
where \(\alpha^{EV}_{t}\) equals to 1 when the optimization algorithm suggests the BEV to charge and 0 when it is preferable to remain idle and not charge.
In addition, RL actions are constrained by the technical characteristics and the physical properties of the EV battery pack, which are described as follows:
\[SoC_{start}\leq SoC_{t}\leq SoC_{max},\forall t\in T \tag{4}\]
\[SoC_{t+1}=SoC_{t}+\frac{\eta P^{EV}_{t}}{4E_{batt}},\forall t\in T \tag{5}\]
\[P^{EV}_{t}=\begin{cases}3.3kW,&SoC_{min}\leq SoC_{t}\leq 0.9\\ 1.5kW,&SoC_{t}>0.9\end{cases},\forall t\in T \tag{6}\]
where \(SoC_{start}\), \(SoC_{max}\) are the starting and maximum BEV State of Charge, \(P^{EV}_{t}\) expresses the charging power consumption at a given time step \(t\) and \(SoC_{t}\) expresses the State of Charge at time step \(t\), based on the charging activity as described in Equation (6).
### Rewards
Each new state \(s_{t+1}\) depends on the current state \(s_{t}\), on the selected action \(\alpha^{EV}_{t}\), as well as on the reward \(r\) associated with this action. Through the assignment of rewards, the agent can evaluate the quality of each action and learn by that. In this work, the aim is to decide which is the most cost-effective EV charging strategy considering end user flexibility potential, without violating battery's technical constraints (SoC) and user's daily driving habits (power consumption).
To translate the aforementioned modeling process into rewards, the sub-rewards of daily power consumption (\(r_{1}\)), flexibility potential (\(r_{2}\)), cost minimization (\(r_{3}\)) and EV Battery SoC monitoring (\(r_{4}\)) have been formulated as follows:
\[r_{1}=\begin{cases}1,&\alpha^{EV}_{t}=1\ \&\ P^{EV}_{t,run}\leq 1.05P^{EV}_{day} \\ -10,&\alpha^{EV}_{t}=1\ \&\ P^{EV}_{t,run}\geq 1.05P^{EV}_{day}\\ -0.5,&\alpha^{EV}_{t}=0\ \&\ P^{EV}_{t,run}\leq 1.05P^{EV}_{day}\\ 1,&\alpha^{EV}_{t}=0\ \&\ P^{EV}_{t,run}\geq 1.05P^{EV}_{day}\end{cases} \tag{7}\]
\[r_{2}=\begin{cases}-2,&\alpha^{EV}_{t}=1\ \ \&\ U^{flex}_{t}\leq Q_{f}(0.25)\\ -1,&\alpha^{EV}_{t}=1\ \ \&\ U^{flex}_{t}\leq Q_{f}(0.50)\\ 0,&\alpha^{EV}_{t}=0\\ 1,&\alpha^{EV}_{t}=1\ \ \&\ U^{flex}_{t}\leq Q_{f}(0.75)\\ 2,&\alpha^{EV}_{t}=1\ \ \&\ U^{flex}_{t}>Q_{f}(0.75)\end{cases} \tag{8}\]
Fig. 2: Overview of the proposed DQN framework for optimal EV charging. A neural network is trained by the Agent to choose an action based on a state of the environment. A reward characterizes the optimality of the action.
where \(U_{t}^{flex}\) represents the user charging flexibility potential, which expresses the historical consistency of the user to charge their EV on a time step \(t\). \(Q_{f}(X)\) is the BEV charging flexibility quantile, as obtained from historical data analysis. \(U_{t}^{flex}\) and quantiles \(Q_{f}(X)\) are illustrated in Fig. 1. As can be seen in Equation (8), the more likely it is for the end user to charge their EV on a time step \(t\), the highest the reward of such action should be provided by the agent. The cost minimization sub-reward (\(r_{3}\)) is given by:
\[r_{3}=\begin{cases}2,&\alpha_{t}^{EV}=1\ \ \&\ C_{t}\leq Q_{c}(0.25)\\ 1,&\alpha_{t}^{EV}=1\ \&\ C_{t}\leq Q_{c}(0.50)\\ 0,&\alpha_{t}^{EV}=0\\ -1,&\alpha_{t}^{EV}=1\ \ \&\ C_{t}\leq Q_{c}(0.75)\\ -2,&\alpha_{t}^{EV}=1\ \ \&\ C_{t}>Q_{c}(0.75)\end{cases} \tag{9}\]
where the electricity consumption cost \(C_{t}\) on a time step \(t\) is defined as:
\[C_{t}=r_{t}\cdot(\alpha_{t}^{EV}\cdot P_{t}^{EV}+P_{t}^{non-EV}-P_{t}^{PV}) \tag{10}\]
The electricity cost quantiles \(Q_{c}(X)\) and the daily average electricity cost for household #4373, as obtained from the Pecan Street data set [17], are shown in Fig. 3. \(Q_{c}(X)\) has been calculated excluding all negative cost periods, to avoid setting extremely low reward thresholds. From Equation (9) it can be seen that the greater the cost on a time step \(t\) the worse the assigned reward will be by the agent.
Last but not least, EV Battery SoC monitoring sub-reward \(r_{4}\) negatively penalizes any charging action leading to a SoC above 100 %.
\[r_{4}=\begin{cases}-10,&\alpha_{t}^{EV}=1\ \ \&\ SoC_{t}\geq 1\\ 0,&otherwise\end{cases} \tag{11}\]
After each sub-reward is being calculated, the cumulative total reward \((\mathcal{R})\) is computed as follows:
\[R=\delta_{1}\cdot r_{1}+\delta_{2}\cdot r_{2}+\delta_{3}\cdot r_{3}+\delta_{4 }\cdot r_{4} \tag{12}\]
where \(\delta_{k}\) is the weight factor of each sub-reward \(k\). In this work, it is assumed that all proposed rewards weigh the same, i.e. \(\delta_{k}=0.25\ \ \forall k\in\{1,..,4\}\).
## 3 Results
accounts for 76.5 kW when the metered consumption was 76.1 kW. As shown in Fig. 5, EV charging is being shifted from high-price time periods (e.g. 19:00 - 21:30) to lower price periods not only with high penetration of solar power generation but also with a high flexibility potential, as in periods 14:00 - 15:30 and 11:00 - 12:30. The electricity cost savings for the metered and the proposed DQN EV solutions over the test days are shown in Table 2. It can be seen that the proposed, controlled solution leads to around 9.8 % of average daily savings when compared to uncontrolled EV power consumption. These savings can reach up to 21 % (day 09/09/2018) depending on the test case.
time zones within the day can reduce peak power consumption and therefore, reduce the stress on the side of the power distribution network.
## 4 Conclusions
In this paper, intraday EV load scheduling optimization has been formulated as a MDP problem, when considering end users flexibility potential and solar PV power generation. A DQN EV load optimization model has been proposed, using real measurements from households in the area of Austin, Texas. A novel set of rewards has been formulated to account for end user flexibility potential, as inferred from historical user consumption data, while respecting EV battery's technical constraints (SoC) and user's daily habits (daily power consumption). Experimental results show that the suggested DQN EV charging solution can lead to above 20% of electricity costs savings while reducing the stress on power distribution networks during Peak hours.
## 5 Acknowledgements
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 955422.
|
2310.19253 | Flow-based Distributionally Robust Optimization | We present a computationally efficient framework, called $\texttt{FlowDRO}$,
for solving flow-based distributionally robust optimization (DRO) problems with
Wasserstein uncertainty sets while aiming to find continuous worst-case
distribution (also called the Least Favorable Distribution, LFD) and sample
from it. The requirement for LFD to be continuous is so that the algorithm can
be scalable to problems with larger sample sizes and achieve better
generalization capability for the induced robust algorithms. To tackle the
computationally challenging infinitely dimensional optimization problem, we
leverage flow-based models and continuous-time invertible transport maps
between the data distribution and the target distribution and develop a
Wasserstein proximal gradient flow type algorithm. In theory, we establish the
equivalence of the solution by optimal transport map to the original
formulation, as well as the dual form of the problem through Wasserstein
calculus and Brenier theorem. In practice, we parameterize the transport maps
by a sequence of neural networks progressively trained in blocks by gradient
descent. We demonstrate its usage in adversarial learning, distributionally
robust hypothesis testing, and a new mechanism for data-driven distribution
perturbation differential privacy, where the proposed method gives strong
empirical performance on high-dimensional real data. | Chen Xu, Jonghyeok Lee, Xiuyuan Cheng, Yao Xie | 2023-10-30T03:53:31Z | http://arxiv.org/abs/2310.19253v4 | # Flow-based Distributionally Robust Optimization
###### Abstract
We present a computationally efficient framework, called FlowDRO, for solving flow-based distributionally robust optimization (DRO) problems with Wasserstein uncertainty sets while aiming to find continuous worst-case distribution (also called the Least Favorable Distribution, LFD). The requirement for LFD to be continuous is so that the algorithm can be scalable to problems with larger sample sizes and achieve better generalization capability for the induced robust algorithms. To tackle the computationally challenging infinitely dimensional optimization problem, we leverage flow-based models, continuous-time invertible transport maps between the data distribution and the target distribution, and develop a Wasserstein proximal gradient flow type of algorithm. In theory, we establish the equivalence of the solution by optimal transport map to the original formulation, as well as the dual form of the problem through Wasserstein calculus and Brenier theorem. In practice, we parameterize the transport maps by a sequence of neural networks progressively trained in blocks by gradient descent. Our computational framework is general, can handle high-dimensional data with large sample sizes, and can be useful for various applications. We demonstrate its usage in adversarial learning, distributionally robust hypothesis testing, and a new mechanism for data-driven distribution perturbation differential privacy, where the proposed method gives strong empirical performance on real high-dimensional data.
###### Contents
* 1 Introduction
* 1.1 Proposed: Flow-DRO
* 1.2 Motivating example: Why continuous density for LFD?
* 1.3 Flow-based generative models
* 1.4 Applications
* 2 Framework
* 2.1 Dual formulation and Wasserstein proximal problem
* 2.2 Solving the Wasserstein proximal problem by transport map
* 2.3 Connection to existing Wasserstein DRO
* 2.3.1 Reduction in the case of discrete reference measure
* 2.3.2 Connection to the dual formulation of WDRO
* 3 Theory
* 3.1 Preliminaries
* 3.2 \(\mathcal{W}_{2}\)-differentials
* 3.3 First-order condition of LFD problem
* 3.4 First-order condition of proximal problem
* 4 Algorithm: Flow-DRO
* 4.1 Flow-based neural network parametrization of transport map
* 4.2 Block-wise progressive training algorithm
* 4.3 Generative model for sampling from LFD
* 5 Applications
* 5.1 Adversarial learning with distributional attack
* 5.2 Robust hypothesis testing
* 5.3 Differential privacy
* 6 Numerical Examples
* 6.1 Comparison with existing DRO methods
* 6.1.1 WDRO with Gaussian smoothed discrete LFD
* 6.1.2 Sinkhorn DRO
* 6.1.3 Results
* 6.2 Adversarial distributional attack
* 6.2.1 CIFAR10 against point-wise attacks
* 6.2.2 MNIST trajectory illustration
* 6.3 Data-driven differential privacy
* 6.3.1 MNIST raw digit classification
* 6.3.2 MNIST missing digit detection
* 7 Summary and Discussion
* A Proofs
## 1 Introduction
Distributionally Robust Optimization (DRO) is a fundamental problem in optimization, serving as a basic model for decision-making under uncertainty and in statistics for addressing general minimax problems. It aims to identify a minimax optimal solution that minimizes an expected loss over the worst-case distribution within a pre-determined set
of distributions (i.e., an uncertainty set). DRO arises from various applications, including robust hypothesis testing [1, 2], boosting [3], semi-supervised learning [4], fair classification [5], clustering [6], and so on; see [7] for a more complete review. Inherently, DRO leads to an infinite dimensional problem, and thus it faces a significant computational challenge in most general settings. Despite the existing efforts to solve DRO that allow analytic or approximate solutions, current approaches still have limited scalability in solving high-dimensional, large-sample problems with general risk functions. In this work, we aim to address the computational challenge using a new neural network flow-based approach; the connection with existing approaches is further discussed in Section 2.3.
The basic setup for DRO is given below. Let \(\mathcal{X}=\mathbb{R}^{d}\) be the data domain. Assume a real-valued _risk function_\(\mathcal{R}(P;\phi)\) taking as inputs a \(d\)-dimensional distribution \(P\) (with a finite second moment) and a measurable decision function \(\phi\in\Phi\) in a certain function class (problem specific and possibly parametric). Assume a pre-specified scalar loss function \(r:\mathcal{X}\times\Phi\rightarrow\mathbb{R}\) so that
\[\mathcal{R}(P;\phi)=\mathbb{E}_{x\sim P}[r(x;\phi)]. \tag{1}\]
Some examples of the decision function \(\phi\) and loss function \(r\) include \(\phi\) being a multi-class classifier and \(r\) being the cross-entropy loss, and \(\phi\) being a scalar test function and \(r\) being the logistic loss. We are interested in solving the following minimax problem:
\[\min_{\phi\in\Phi}\max_{Q\in\mathcal{B}}\ \mathcal{R}(Q;\phi). \tag{2}\]
In (2), \(\mathcal{B}\) is a pre-defined uncertainty set that contains a set of (possibly continuous) distributions that are variations from a _reference distribution_\(P\); this is known as the distributionally robust optimization (DRO) problem [8]. In particular, we are interested in Wasserstein DRO or WDRO (see, e.g., the original contribution [9]), where the \(\mathcal{B}\) is the Wasserstein uncertainty set centered around the reference distribution induced by Wasserstein distance. WDRO receives popularity partly due to its data-driven uncertainty sets and no parametric restriction on the distributional forms considered.
The worst-case distribution that achieves the saddle point in (2) is called the Least Favorable Distribution (LFD) (also called the "extreme distributions" in prior works, e.g., [9]). In this work, we consider the problem of finding LFD for a given algorithm \(\phi\), which is useful in various practical settings such as generating _worst scenarios_ to test the algorithm and develop robust algorithms.
### Proposed: Flow-DRO
In this paper, we propose a _computational_ framework, a flow-based neural network called FlowDRO to find the worst-case distributions (LFDs) for DRO or solve the inner maximization of minimax problem (2). In particular, FlowDRO can efficiently compute worst-case distributions for various high-dimensional problems, thanks to the strong representation power of neural network-based generative models. The main idea is to connect the WDRO problem through Lagrangian duality to a function optimization problem with Wasserstein proximal regularization. This connection enables us to adapt the recently developed computationally efficient Wasserstein proximal gradient flow [10, 11] to develop computationally efficient _flow-based models_ parametrized by neural networks. Our framework can be viewed as a generative model for LFDs. It is thus suitable for many statistical and machine learning tasks, including adversarial learning, robust hypothesis testing, and differential privacy, leading to computationally efficient solutions and performance gain, as we demonstrated using numerical examples.
Our main contributions are:
* Develop a new Wasserstein proximal gradient descent approach to find worst-case distributions (or Least Favorable Distributions, LFDs) in WDRO by re-formulating the problem into its Wasserstein proximal form using Lagrangian duality. We introduce an alternative way to represent the LFDs through the _optimal transport maps_ from a continuous reference measure to induce continuous LFD and use data to estimate.
* Algorithm-wise, we adopt a new neural-network generative model approach to find LFD, called FlowDRO. The proposed neural network-based method can be scalable to larger sample sizes and high dimensionality, overcoming the computational challenges of previous WDRO methods. FlowDRO parameterize LFD by a transport map represented by a neural network, which can learn from training samples and automatically generalizes to unseen samples and efficiently generate samples from the LFDs, which can be used in various applications; we demonstrate its versatility in various applications and demonstrate the effectiveness of FlowDRO on various applications with high-dimensional problems (including images) from adversarial attack and differential privacy using numerical results.
* Theoretically, we approach the problem in a different route, relying on the tools of optimal transport: we derive the equivalence between the original \(\mathcal{W}_{2}\)-proximal problem and the transport-map-search problem making use of Brenier theorem enabled by considering continuous distributions. Our theory also shows that the first-order condition of our \(\mathcal{W}_{2}\)-proximal problem using Wasserstein calculus leads to an optimality condition of solving the Moreau envelope without assuming the convexity of the objective. As a by-product, we recover the closed-form expression of the dual function involving the Moreau envelope of the (negative) loss, consistent with existing work, and highlight the computational advantages of using our alternative optimal transport map reformulation.
To the best of our knowledge, FlowDRO is the first work that finds the worst-case distributions in DRO using flow-based models. However, we would like to emphasize that our approach is general and does not rely on neural networks; one can potentially use an alternative representation of the transport maps (e.g., [12]) in low-dimensional and small sample settings for stronger learning guarantees.
### Motivating example: Why continuous density for LFD?
One may quickly realize that finding LFD is an infinite-dimensional optimization problem that is particularly challenging in high dimensions and general risk functions. A useful observation that occurs in such infinite-dimensional optimization problem is that the worst-case distribution solution of the WDRO problem (2) turns out to be discrete, as shown in the original paper [9] and various follow-up works including [2] for the distributionally robust hypothesis test. This particular solution structure does help to overcome the computational challenge caused by the infinite-dimensional optimization problem.
However, the discrete nature of LFD, as coming from the WDRO formulation, is not a desirable property in practice, as explained in the following. First, there is a significant computational challenge and the method is not scalable to large dataset: the discrete WDRO formulation will require solving a Linear Program (LP) with the number of decision variables to be \(\mathcal{O}(n^{2})\), where \(n\) is the total number of training data points and the complexity of solving an LP is typically quadratic on the number of the decision variable. Such computational complexity for problems with thousands
of training data points can be prohibitive (e.g., the MNIST handwritten digit example in our later section uses \(\sim\)5000 samples per class). So typically, the current WDRO formulation usually can only be used to find discrete LFDs for small sample settings (e.g., [2, 6]). Second, the discrete LFD will limit the _generalization_ capability of the resulting algorithm. In machine learning applications, when we develop a robust detector (binary classifier) using DRO [1, 2], the LFD is discrete with a support on the training data set, as shown in Fig. 1(a). As a result, the optimal detector is also _only defined_ on the support of training data points. Such an optimal detector does not generalize in that, given a new test sample, if it does not coincide completely with one of the training data points, we cannot directly apply the optimal detector to the test data. An ad-hoc approach could be to "interpolate" the optimal detector on the training samples by convolving with a smoothing kernel (such as a Gaussian kernel); however, this will lose the property of the original minimax optimality of the detector. It would be better to seek continuous worst-case distributions (LFDs) when we solve the minimax problem. Thus, we may want to add a constraint in the formulation and consider the uncertainty set as the intersection of the Wasserstein uncertainty set and the set of continuous functions.
Now, suppose for the above consideration, we would like to find _continuous worst-case distribution_ instead. However, if one restricts \(\mathcal{P}\) in the minimax problem (2) to be the Wasserstein uncertainty set _intersecting all continuous distribution functions_, that will lead to an even more difficult infinite dimensional problem involving distribution functions, and the (discrete) solution structure property no longer holds. This brings out the main motivation of our paper: we will introduce _a neural network (NN) approach to solve minimax problem_ leveraging the strong approximation power of NN and that they _implicitly regularize_ the solution to achieve continuous density. To carry out the plan, we need a carefully designed NN architecture and training scheme leveraging the recent advances in _normalizing flow_ to represent distribution functions. Recently, there have also been works considering entropy regularized Wasserstein uncertainty sets, called the Sinkhorn DRO problems [13], which lead to continuous LFDs with kernel-type solutions, but it is more suitable for low dimensional problems due to nature of the kernel solutions.
### Flow-based generative models
Recently, diffusion models and closely related flow-based models have drawn much research attention, given their state-of-the-art performance in image generation; see [11] for a complete summary. Flow-based generative models enjoy certain advantages in computing the data generation and the likelihood and have recently shown competitive empirical performance. They can be understood as continuous-time models that gradually transform the input distribution \(P\) into a target distribution \(Q\). These models are popularized in the context of normalizing flow, where the target distribution \(Q=\mathcal{N}(0,I_{d})\), the standard multivariate Gaussian [14]. They can be largely categorized into discrete-time flows [15, 16, 17] and continuous-time flows [18, 19, 20, 10]. The discrete-time flows can be viewed as Euler approximation of the underlying continuous-time probability trajectory, where the continuous-time flows are based on neural ordinary differential equation (NeuralODE) [21] to learn the probability trajectory directly.
We remark that different from both normalizing flow and flow between arbitrary pre-specified pairs of distributions, our flow model tries to learn the worst-case distribution \(Q^{*}\) that maximizes certain risk functions. Compared with other flow-based generative models, such as the traditional settings of normalizing flow or pre-specified target distributions, FlowDRO does not choose a target distribution _a-priori_, which is instead learned by maximizing the objective function.
We also note that different from training generative adversarial networks (GAN) [22] that may also generate worst
case samples, our flow-based approach can be more stable during training as it involves neither auxiliary discriminators nor inner loops. Compared with recent works on flow-based generative models[10, 11], where only KL divergence was considered for the loss function, we consider general loss as motivated by various applications.
### Applications
FlowDRO can also directly benefit several applications, which can be formulated as DRO problems, as we present in more detail in section 5. First, in the case of an adversarial attack, our flow model is an _attacker_ that can find the distribution causing the most disruption to existing systems. This is especially important for engineering system design. For example, in power systems, we are interested in understanding the resiliency of a power network. Given limited historical observations, we are interested in finding out whether there is any unseen scenario that may cause a catastrophic consequence to the system. Finding such scenarios can be useful for evaluating engineering systems and improving network resiliency. Second, in the case of differential privacy (DP), our flow model acts as a _distributional perturbation mechanism_ to dataset queries. Upon finding the worst-case distribution around the data distribution over queries, we can provide much protection against potential data disclosure and/or privacy loss. This is extremely useful in high-stakes settings where sensitive information must be protected. We also note that the existing framework of DP is largely not data-driven. Specifically, DP mechanisms often take the simple approach of adding i.i.d. noise to each dimension of queries, and the noises are unrelated to data. Nowadays, there is growing interest in developing data-driven mechanisms by exploiting the query structure or the data distribution, which may bring potential gains to performances. However, finding such optimal perturbation subject to the privacy constraint poses a computational challenge, which we try to address through the proposed FlowDRO.
The rest of the paper is organized as follows. Section 2 formally introduces the framework of solving for the
Figure 1: Comparison of WDRO and FlowDRO on the 1D example following [2, Figure 1]. **Left of (a) and (b):** Empirical distributions of two sets of _training_ (shown in (a)) and _test_ (shown in (b)) samples from \(\mathcal{N}(0,1)\) and \(\mathcal{N}(2,1.2)\). **Right of (a) and (b):** Least-favorable distributions (LFD) found by WDRO and FlowDRO, where LFDs are within the \(W_{2}\) ball (4) with radius \(\varepsilon=0.1\). As expected, the LFDs overlap more with each other than the empirical distributions do. Note that WDRO solves a convex problem to obtain the LFD by moving the probability mass on _discrete training samples_. In particular, WDRO does not generalizes to a test sample unless it coincides exactly with some of the training samples. In comparison, FlowDRO yields a one-to-one continuous-time transport map that can be directly applied to both training and test samples. The resulted LFD is also continuous, as it is the push-forward distribution by the transport map on the underlying continuous data distribution.
worst-case distribution, along with theoretical analyses. Section 4 describes the FlowDRO method and the concrete training algorithm. Section 5 considers several important applications for which FlowDRO can be used. Section 6 shows numerical results of FlowDRO on high-dimensional problems. Section 7 concludes the work with discussions. All proofs are delegated to the appendix.
## 2 Framework
Below, we focus on Wasserstein-2 (\(\mathcal{W}_{2}\)) in this work, and extensions to \(\mathcal{W}_{p}\) with other \(p\) are left to future extension. Let \(\mathcal{X}=\mathbb{R}^{d}\), and denote by \(\mathcal{P}_{2}(\mathcal{X})\) the space of all distributions on domain \(\mathcal{X}\) that has a second moment, that is, \(\mathcal{P}_{2}(\mathcal{X}):=\{P,\,\int_{\mathcal{X}}|x|^{2}dP(x)<\infty\}\). Define \(\mathcal{P}_{2}^{r}(\mathcal{X}):=\{P\in\mathcal{P}_{2},\,P\ll\mathrm{Leb}\}\), that is, all distributions in \(\mathcal{P}_{2}(\mathcal{X})\) that also have continuous densities (absolute continuous with respect to the Lebesgue measure). We may omit \((\mathcal{X})\) in the notation \(\mathcal{P}_{2}\) and \(\mathcal{P}_{2}^{r}\).
### Dual formulation and Wasserstein proximal problem
The \(\mathcal{W}_{2}\)-distance between two distributions in \(\mathcal{P}_{2}\) is defined by
\[\mathcal{W}_{2}^{2}(\mu,\nu):=\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{d} \times\mathbb{R}^{d}}\|x-y\|^{2}d\pi(x,y), \tag{3}\]
where \(\Pi(\mu,\nu)\) denotes the family of all joint distributions with \(\mu\) and \(\nu\) as marginal distributions, called the couplings of \(\mu\) and \(\nu\). For any given \(\nu\in\mathcal{P}_{2}\), the functional \(\mathcal{W}_{2}^{2}(\cdot,\nu)\) maps from \(\mathcal{P}_{2}\) to \([0,\infty)\), by the following lemma.
**Lemma 2.1**.: _For any \(\mu,\nu\in\mathcal{P}_{2}\), \(\mathcal{W}_{2}(\mu,\nu)<\infty\)._
Let \(\mathcal{B}_{\varepsilon}(P)\) be the \(\mathcal{W}_{2}\)-ball in \(\mathcal{P}_{2}\) around the reference distribution \(P\) of radius \(\varepsilon>0\), namely
\[\mathcal{B}_{\varepsilon}(P)=\{Q\in\mathcal{P}_{2},\,\mathcal{W}_{2}(Q,P)\leq \varepsilon\}. \tag{4}\]
As has been explained in the introduction, we will focus on the case where \(P\) has (continuous) density, that is \(P\in\mathcal{P}_{2}^{r}\). We focus on the inner loop (the "max") of the min-max problem (2) where the uncertainty set \(\mathcal{B}=\mathcal{B}_{\varepsilon}(P)\). For fixed decision function \(\phi\), we cast the maximization as a minimization by defining \(V(x):=-r(x;\phi)\). The central problem we aim to solve in the paper is to find the LFD, which can be equivalently written as the following, called the _LFD problem_:
\[\min_{Q\in\mathcal{B}_{\varepsilon}(P)}\ \mathbb{E}_{x\sim Q}V(x),\quad\{\texttt{ LFD problem}\}. \tag{5}\]
The idea is to convert the uncertainty set constraint as a regularization term of the original objective function by introducing a Lagrangian multiplier. Then, we can leverage this connection to build a Wasserstein gradient flow type of algorithm to solve the LFD problem.
Dual form and proximal problem.The constrained minimization (5) is a trust region problem. It is well known that in vector space, trust-region problem can be solved by a proximal problem where the Lagrangian multiplier defined
through \(\lambda>0\) corresponds to the radius \(\varepsilon\)[23]. Specifically, consider the _dual form_ of the LFD problem (5), which can be written as
\[\sup_{\lambda\geq 0}\;G(\lambda):=\min_{Q\in\mathcal{P}_{2}}\mathbb{E}_{x\sim Q}V (x)+\lambda(\mathcal{W}_{2}^{2}(P,Q)-\varepsilon^{2}),\quad\{\text{dual form}\}. \tag{6}\]
We restrict ourselves to the case when \(\lambda>0\), and introduce the change of variable
\[\lambda=\frac{1}{2\gamma},\quad\gamma>0.\]
After dropping the constant term \(\lambda\varepsilon^{2}\) in (6), we obtain the following Wasserstein _proximal problem_
\[\min_{Q\in\mathcal{P}_{2}(\mathcal{X})}\mathbb{E}_{x\sim Q}V(x)+\frac{1}{2 \gamma}\mathcal{W}_{2}^{2}(P,Q),\quad\{\text{proximal problem}\}. \tag{7}\]
The \(\mathcal{W}_{2}\)-proximal problem can be viewed as the Moreau envelope (or the Moreau-Yosida regularization) in the Wasserstein space [24]. Similar to the vector-space case, we will have a correspondence between (5) and (7), see Remark 3.9, which will be introduced in Section 3 after we derive the first-order optimality conditions of the two problems.
Explicit form of dual function.It has been pointed out in several prior works that the dual form can be reformulated using the Moreau envelope of the (negated) loss function under different scenarios [7, 9, 25]. Specifically, the explicit expression of the dual form (6) is written as
\[\sup_{\lambda\geq 0}\;G(\lambda):=\mathbb{E}_{x\sim P}\inf_{z}\left[V(z)+ \lambda\|z-x\|^{2}\right]-\lambda\varepsilon^{2}. \tag{8}\]
Assuming \(\lambda>0\), the dual function \(G\) in (8) can be equivalently written as
\[G\left(\frac{1}{2\gamma}\right)=\mathbb{E}_{x\sim P}\;u(x,\gamma)-\frac{ \varepsilon^{2}}{2\gamma}, \tag{9}\]
where \(u(x,\gamma)\) is the Moreau envelope of \(V\) defined as
\[u(x,t):=\inf_{z}\left[V(z)+\frac{1}{2t}\|z-x\|^{2}\right],\quad t>0. \tag{10}\]
This form of the dual function echos the observation that the Wasserstein proximal operator for the functional in the form of \(\varphi(\mu)=\int Vd\mu\) can be solved by the proximal operator (Moreau envelope) of \(V\), as has been pointed out in the PDE literature, see e.g. [26].
We will recover the same explicit form of the dual function under certain conditions in Section 3 where the Moreau envelope has unique minimizer \(z\) for each \(x\), see Corollary 3.10. Meanwhile, from the computational perspective, the Moreau envelope \(u(x,\gamma)\) may still be challenging to solve for general loss function in high dimensions, among other algorithmic challenges. We further discuss this and the connections to previous studies of the dual form in Section 2.3. Instead of using the dual form (8), we propose the solve the dual problem (equivalently the \(\mathcal{W}_{2}\)-proximal problem (7)) by parameterizing a transport map \(T:\mathbb{R}^{d}\to\mathbb{R}^{d}\) possibly by a neural network, to be detailed in the next section.
### Solving the Wasserstein proximal problem by transport map
We show that the problem (7) that minimizes over \(Q\), can be solved by minimizing over the transport map \(T\), which will pushforward \(P\) to \(Q\). (Recall that for \(T:\mathcal{X}\to\mathcal{X}\), the _pushforward_ of a distribution \(P\) is denoted as \(T_{\#}P\), such that \(T_{\#}P(A)=P(T^{-1}(A))\) for any measurable set \(A\).) This reformulation is rooted in the Monge formulation of the Wasserstein distance.
When \(P\in\mathcal{P}_{2}^{r}\), the Brenier theorem allows a well-defined and unique optimal transport (OT) map from \(P\) to any \(\mu\in\mathcal{P}_{2}\). For completeness, we include the argument as follows. We denote by \(T_{P}^{\square}\) the OT map from \(P\) to \(\square\in\mathcal{P}_{2}\), which is defined \(P\)-a.e., and \((T_{P}^{\mu})_{\#}P=\mu\). Given any \(\mu\in\mathcal{P}_{2}\), for any \(T:\mathbb{R}^{d}\to\mathbb{R}^{d}\) s.t. \(T_{\#}P=\mu\), \((\mathrm{I}_{\mathrm{d}},T)_{\#}P\) is a coupling of \(P\) and \(\mu\), and thus
\[\mathcal{W}_{2}^{2}(\mu,P)\leq\mathbb{E}_{x\sim P}\|x-T(x)\|^{2}. \tag{11}\]
The problem of minimizing the r.h.s. of (11) over all \(T\) that pushforwards \(P\) to \(\mu\) is known as the Monge Problem. _By Brenier Theorem, when \(P\in\mathcal{P}_{2}^{r}\), the OT map attains the minimum of the Monge problem_, that is,
\[\mathcal{W}_{2}^{2}(\mu,P)=\mathbb{E}_{x\sim P}\|x-T_{P}^{\mu}(x)\|^{2}. \tag{12}\]
We introduce the following transport map minimization problem corresponding to the \(W_{2}\)-proximal problem (7)
\[\min_{T:\mathcal{X}\to\mathcal{X},\,T_{\#}P\in\mathcal{P}_{2}(\mathcal{X})} \mathbb{E}_{x\sim P}\left(V\circ T(x)+\frac{1}{2\gamma}\|x-T(x)\|^{2}\right). \tag{13}\]
The formal statement of the equivalence between (7) and (13) is by applying Proposition 2.2 with \(\varphi(\mu):=\mathbb{E}_{x\sim\mu}V(x)\), which is assumed to be finite for any \(\mu\in\mathcal{P}_{2}(\mathcal{X})\), and \(\lambda=1/2\gamma>0\). The proof follows a similar argument as in [10, Lemma A.1] and is included in Appendix A for completeness.
**Proposition 2.2** (Equivalent solution by transport map).: _Suppose \(\varphi:\mathcal{P}_{2}(\mathcal{X})\to(-\infty,\infty)\), \(P\in\mathcal{P}_{2}^{r}(\mathcal{X})\), and define \(\mathcal{T}_{2}:=\{T:\mathcal{X}\to\mathcal{X},\,T_{\#}P\in\mathcal{P}_{2}( \mathcal{X})\}\). For any \(\lambda>0\), the following two problems_
\[\min_{\mu\in\mathcal{P}_{2}(\mathcal{X})}L_{\mu}(\mu)=\varphi(\mu)+\lambda \mathcal{W}_{2}^{2}(P,\mu), \tag{14}\]
\[\min_{T\in\mathcal{T}_{2}}L_{T}(T)=\varphi(T_{\#}P)+\lambda\mathbb{E}_{x\sim P }\|x-T(x)\|^{2}, \tag{15}\]
_satisfy that_
_(a) If \(T^{*}\) is a minimizer of (15), then \((T^{*})_{\#}P\) is a minimizer of (14)._
_(b) If \(\mu^{*}\) is a minimizer of (14), then the OT map from \(P\) to \(\mu^{*}\) minimizes (15)._
_In both cases, the minimum \(L_{\mu}^{*}\) of (14) and the minimum \(L_{T}^{*}\) of (15) equal._
We will solve (13) by parametrizing the transport map \(T\) by a flow network on \([0,\gamma]\) and learn \(T\) by setting (13) as the training objective. Details will be introduced in section 4.
### Connection to existing Wasserstein DRO
The dual form (8) has been derived in several works under different settings [7, 9, 25] - noting that we define \(V\) to be the negative loss, thus (8) is "sup-inf", while the dual of the original LFD problem is "inf-sup". Below we discuss the connection with our framework.
#### 2.3.1 Reduction in the case of discrete reference measure
We show a connection of our problem to the known result in the literature (see, e.g., [7]): when the reference distribution \(P\) is discrete (rather than having a density, i.e., a continuous distribution considered in our setting), [7] proved a "strong duality" result (16). Here we show that the dual form (6) will end up being in the same as the dual form therein (which is equivalent to (8)), and the argument is via (13) which illustrates the role played by the transport map \(T\). This is an interesting connection because the dual form in [7, Theorem 7] plays a role in reducing the original complex infinite-dimensional problem to a finite-dimensional problem to solve the discrete LFD [7, 9]. However, such reduction only happens when the center of the uncertainty set \(P\) is discrete; when \(P\) is not discrete rather than continuous, the case considered in our paper, we need to develop an alternative computational scheme.
When \(P\) is an empirical distribution (thus discrete), we denote \(P=\hat{P}\) and
\[\hat{P}=\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}},\]
for a dataset \(\{x_{i}\}_{i=1}^{n}\). We first restate [7, Theorem 7] using our notations (\(p=2\) in \(\mathcal{W}_{p}\)):
\[\sup_{Q\in\mathcal{B}_{s}(\hat{P})}\mathbb{E}_{x\sim Q}r(x,\phi)=\inf_{\lambda \geq 0}\,\left\{\mathbb{E}_{x\sim\hat{P}}\sup_{z}\left[r(z;\phi)-\lambda\|z-x \|^{2}\right]+\lambda\varepsilon^{2}\right\}. \tag{16}\]
Note that the dual form (the r.h.s. of (16)) is equivalent to (8) replacing \(P\) to be \(\hat{P}\) (and swapping to "sup-inf").
Recall the dual form (6) where we take \(P=\hat{P}\). After dropping the constant term \(\lambda\varepsilon^{2}\), the following proposition gives the explicit expression of the dual function \(G(\lambda)\). We believe similar arguments have appeared in the literature and we include a proof for completeness.
**Proposition 2.3** (Dual form for discrete \(P\)).: _Given \(\lambda>0\), suppose \(\forall i=1,\ldots,n\), \(\inf_{z}\left[V(z)+\lambda\|x_{i}-z\|^{2}\right]\) attains its minimum at some point \(z_{i}\in\mathbb{R}^{d}\), then_
\[\min_{Q\in\mathcal{P}_{2}}\,\mathbb{E}_{x\sim Q}V(x)+\lambda\mathcal{W}_{2}^{ 2}(\hat{P},Q)=\mathbb{E}_{x\sim\hat{P}}\inf_{z}\left[V(z)+\lambda\|x-z\|^{2} \right]. \tag{17}\]
As a result, we have
\[G(\lambda)=\mathbb{E}_{x\sim\hat{P}}\inf_{z}\left[V(z)+\lambda\|x-z\|^{2} \right]-\lambda\varepsilon^{2}.\]
This dual function is equivalent to the dual form on the r.h.s. of (16), recall that \(V(x)=-r(x;\phi)\).
It will be illustrative to derive the r.h.s. of (17) formally from the transport-map-search problem (13): with \(P=\hat{P}\), we obtain
\[\min_{T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}}\frac{1}{n}\sum_{i=1}^{n}\left( V\circ T(x_{i})+\lambda\|x_{i}-T(x_{i})\|^{2}\right). \tag{18}\]
Since \(x_{i}\) are discrete points, the effective variable are \(z_{i}:=T(x_{i})\), that is, the minimization is equivalent to
\[\min_{\{z_{i}\}_{i=1}^{n},\,z_{i}\in\mathbb{R}^{d}}\frac{1}{n}\sum_{i=1}^{n} \left(V(z_{i})+\lambda\|x_{i}-z_{i}\|^{2}\right).\]
This minimization is decoupled for the \(n\) points \(z_{i}\), and the minimization of each \(z_{i}\) This gives that
\[\min_{T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}}\mathbb{E}_{x\sim\hat{P}} \left[V(T(x))+\lambda\|x-T(x)\|^{2}\right]=\mathbb{E}_{x\sim\hat{P}}\inf_{z} \left[V(z)+\lambda\|x-z\|^{2}\right].\]
#### 2.3.2 Connection to the dual formulation of WDRO
Prior works have also attempted to use the dual formulation to evaluate the _objective value_ under the worst-case distribution \(L:=\max_{Q\in\mathcal{B}}\ \mathcal{R}(Q;\phi)\). Among others, the dual form (8) was derived in [7] for discrete reference measure, and in [25] under a so-called "Interchangeability Principle" which essentially assumes
\[\mathbb{E}_{x\sim P}\sup_{z}F(x,z)=\sup_{T}\mathbb{E}_{x\sim P}F(x,T(x)) \tag{19}\]
for all measurable function \(F\) and applied to \(F(x,z)=r(z;\phi)-\lambda\|z-x\|^{2}\) (Their setup adopted Wasserstein uncertainty set with transport cost \(c\), and \(c(x,y)=\|x-y\|^{2}\) in the \(\mathcal{W}_{2}\)-distance here). Note that the pointwise sup \(\sup_{z}F(x,z)\) in this case corresponds to the (negative) Moreau envelope of \(V\).
This approach is helpful to directly evaluate the objective function value under the worst case distribution, and thus, can help to develop a robust algorithm \(\phi(\theta)\) with respect to its parameter \(\theta\). However, the approaches along this line of thought have a few notable limitations both in computation and in theory. First, it is well understood that general functions do not admit explicit formulas for their proximal operators, that is, finding the Moreau envelop, namely finding the inner pointwise supermum problem \(\sup_{z\in\mathcal{X}}\left\{r(z;\phi)-\lambda\|z-x\|^{2}\right\}\) does not have a closed-form solution, e.g., when the objective \(r\) is non-linear and non-convex, see a recent discussion in [27]. In addition, computational challenges arise in evaluating the expectation \(\mathbb{E}_{x\sim P}\) in (8). When the reference distribution \(P\) is not discrete, the expectation may not have a closed-form expression or one may have to rely on sampling from \(P\) and perform a Sample Average Approximation (SAA), and the accuracy of SAA in high dimension relies on processing a large number of data samples. At last, even the formulation (8) can be useful for finding robust algorithms that minimize the worst-case loss, the LFD cannot be identified using the formulation, and one cannot sample from the LFD, which is desirable for applications such as adversarial scenario generation.
Theoretically, while the assumption (19) allows in [25] a direct reduction to the dual form, this interchangeability assumption is left abstract and generally difficult to verify: in this context, the \(T\) at supremum corresponds to the optimal transport map between \(P\) and the LFD which may not always exist. Our theory in this work handles these entities in a different route, primarily relying on the theoretical tools of optimal transport: We will derive the dual form (8) in Section 3 by showing that the first-order condition of the \(\mathcal{W}_{2}\)-proximal problem in Wasserstein calculus leads to an optimality condition of solving the Moreau envelope (Corollary 3.10); To justify the algorithm based on parameterizing the transport map, we derive the equivalence between the distribution-search problem (the original \(\mathcal{W}_{2}\)-proximal problem) and the transport-map-search problem in Proposition 2.2 making use of the Brenier theorem. These theoretical analyses utilize the continuous densities of the relevant distributions.
Theory
In this section, we derive the first-order optimality condition for the LFD problem (5) and the proximal problem (7), when considering the primal formulation to find LFD. Although the derivation is elementary, such characterization seems to not exist in the literature as far as we know, and the characterization may shed some insights into the Wasserstein space nature of the problem. Moreover, the first-order conditions also help to establish the dual form of the LFD problem.
### Preliminaries
Notations.To state the main result, we first introduce some necessary notations. For a distribution \(P\) on \(\mathbb{R}^{d}\), define the second moment \(M_{2}(P):=\int_{\mathbb{R}^{d}}\|x\|^{2}dP(x)\). Given \(\mu\in\mathcal{P}_{2}\), the \(L^{2}\) space denoted by \(L^{2}(\mu)\) is for the vector fields \(v:\mathbb{R}^{d}\to\mathbb{R}^{d}\). For \(u,v:\mathbb{R}^{d}\to\mathbb{R}^{d}\), the inner-product
\[\langle u,v\rangle_{\mu}:=\int_{\mathbb{R}^{d}}u(x)^{T}v(x)d\mu(x),\]
and the \(L^{2}\)-norm is defined as
\[\|u\|_{\mu}^{2}=\int_{\mathbb{R}^{d}}\|u(x)\|^{2}d\mu(x).\]
We will use \(v\in L^{2}(\mu)\) as a (small) displacement field; that is, we will consider the perturbation of \(\mu\) to \((\mathrm{I}_{\mathrm{d}}+v)_{\#}\mu\). By Lemma 3.1, if \(v\in L_{2}(\mu)\), then \((\mathrm{I}_{\mathrm{d}}+v)_{\#}\mu\) remains in \(\mathcal{P}_{2}\).
**Lemma 3.1**.: _If \(\mu\in\mathcal{P}_{2}\), \(T\in L^{2}(\mu)\), then \(T_{\#}\mu\in\mathcal{P}_{2}\)._
We introduce notations of the following key functionals on \(\mathcal{P}_{2}\),
\[\varphi(\mu):=\int_{\mathbb{R}^{d}}V(x)d\mu(x),\quad\psi(\mu):=\frac{1}{2} \mathcal{W}_{2}^{2}(\mu,P). \tag{20}\]
Then the LFD problem can be written as
\[\min_{Q\in\mathcal{P}_{2},\,\psi(Q)\leq\varepsilon^{2}/2}\varphi(Q).\]
Because \(\mathcal{P}_{2}\) lies inside the manifold of all distributions over \(\mathbb{R}^{d}\), the notion of calculus and convexity of \(\varphi\) and \(\psi\) in \(\mathcal{P}_{2}\) are very different from the case in vector space, though it is reasonable to expect certain results of the optimization in vector space to find the analog here. The analysis here centers around the (sub)differential of \(\varphi\) and \(\psi\) in \(\mathcal{P}_{2}\), which has been systematically studied in the analysis literature, see Sections 9 and 10 of [28]. Our argument essentially follows the constructions in [28], and we simplify the notions and make the theoretical argument self-contained.
### \(\mathcal{W}_{2}\)-differentials
Recall that \(\varphi\) defined in (20) is a linear function of \(\mu\); however, being linear generally does not imply that the functional is "convex" on \(\mathcal{P}_{2}\). Specifically, the convexity in \(\mathcal{P}_{2}\) needs to be defined along geodesics (or general geodesics). As a simple example, \(\mu_{0}=\delta_{x_{0}}\) and \(\mu_{1}=\delta_{x_{1}}\), then the geodesic from \(\mu_{0}\) to \(\mu_{1}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\) will consists the
Dirac measure \(\mu_{t}=\delta_{x_{t}}\), \(t\in[0,1]\) where \(x_{t}\) lies on the geodesic from \(x_{0}\) to \(x_{1}\) in \(\mathbb{R}^{d}\) namely the line connecting the two points. For any \(t\in[0,1]\), \(\varphi(\mu_{t})=V(x_{t})\). Then, unless the function \(V\) is convex, the functional \(\varphi(\mu)\) will not be convex along the geodesic from \(\mu_{0}\) to \(\mu_{1}\).
We first introduce a lemma concerning the behavior of \(\varphi\) when the distribution is perturbed in \(\mathcal{P}_{2}\). For \(\varphi(\mu)=\int Vd\mu\), we introduce the following assumption on the potential \(V\) (without assuming its convexity).
**Assumption 3.2** (\(L\)-smooth loss).: \(V\) is \(L\)-smooth on \(\mathbb{R}^{d}\) for some \(L>0\), meaning that \(V\) is \(C^{1}\) on \(\mathbb{R}^{d}\) and \(\nabla V\) is \(L\)-Lipschitz.
**Lemma 3.3** (Strong differential of \(\varphi\)).: _Under Assumption 3.2, \(\varphi:\mathcal{P}_{2}\to(-\infty,\infty)\). At any \(\mu\in\mathcal{P}_{2}\), \(\nabla V\in L^{2}(\mu)\) and \(\varphi\) has strong \(\mathcal{W}_{2}\)-differential_
\[\nabla_{\mathcal{W}_{2}}\varphi(\mu)=\nabla V,\quad\mu\text{-a.e.},\]
_in the sense that \(\forall v\in L^{2}(\mu)\), \(\|v\|_{\mu}=1\), and \(\delta\to 0+\),_
\[\varphi((\mathrm{I}_{\mathrm{d}}+\delta v)_{\#}\mu)=\varphi(\mu)+\delta\langle \nabla V,v\rangle_{\mu}+o(\delta). \tag{21}\]
For \(\psi(\mu)=\frac{1}{2}\mathcal{W}_{2}^{2}(\mu,P)\), where \(P\in\mathcal{P}_{2}^{r}\) is fixed, the \(\mathcal{P}_{2}\) calculus is more conveniently derived in a neighborhood of \(\mu\in\mathcal{P}_{2}^{r}\). It is known that the \(\mathcal{W}_{2}\) differential (both sub- and super-differential) of \(\psi\) at \(\mu\in\mathcal{P}_{2}^{r}\) has the expression as \((\mathrm{I}_{\mathrm{d}}-T_{\mu}^{P})\), see e.g. [28, Corollary 10.2.7] where the subdifferential is defined not in the "strong" sense. Here, we give a lemma on the strong super-differential of \(\psi\) (i.e. strong subdifferential of \(-\psi\)), which suffices for our purpose.
**Lemma 3.4** (Strong super-differential of \(\psi\)).: _Let \(P\in\mathcal{P}_{2}\) be fixed, for any \(\mu\in\mathcal{P}_{2}^{r}\), the optimal transport map \(T_{\mu}^{P}\) is defined \(\mu\)-a.e., and the functional \(-\psi\) has strong \(\mathcal{W}_{2}\)-subdifferential at \(\mu\),_
\[-(\mathrm{I}_{\mathrm{d}}-T_{\mu}^{P})\in\partial_{\mathcal{W}_{2}}(-\psi)(\mu),\]
_in the sense that \(\forall v\in L^{2}(\mu)\), \(\|v\|_{\mu}=1\), and \(\delta\to 0+\),_
\[\psi((\mathrm{I}_{\mathrm{d}}+\delta v)_{\#}\mu)\leq\psi(\mu)+\delta\langle \mathrm{I}_{\mathrm{d}}-T_{\mu}^{P},v\rangle_{\mu}+o(\delta). \tag{22}\]
One remark is that, in the above lemma, we only need \(P\in\mathcal{P}_{2}\) and no need to have density. The unique existence of the optimal transport map \(T_{\mu}^{P}\) needs \(\mu\) to have density.
### First-order condition of Lfd problem
We will analyze the first-order condition around a local minimum of the LFD problem based on the relations (21) and (22). While (21) holds at any \(\mu\in\mathcal{P}_{2}\), (22) requires \(\mu\in\mathcal{P}_{2}^{r}\), Thus, we assume the minimizer \(Q\) of the TR problem has density.
**Assumption 3.5** (Minimizer of Lfd problem in \(\mathcal{P}_{2}^{r}\)).: The problem (5) attains a (local) minimum at \(Q\in\mathcal{P}_{2}^{r}\).
_Remark 3.6_.: In our theory we do not use the assumption \(P\in\mathcal{P}_{2}^{r}\) explicitly, however, if \(P\) does not have density, then usually the minimizer \(Q\) will not have density, e.g., in the discrete LFD considered in [7, 9]. Thus we assume \(P\) has density so that Assumption 3.5 can be reasonable.
**Theorem 3.7** (First-order condition of LFD problem).: _Let \(P\in\mathcal{P}_{2}\) be fixed, under Assumptions 3.2 and 3.5, at a local minimizer \(Q\) of (5) which is in \(\mathcal{P}_{2}^{r}\),_
1. \(\mathcal{B}_{\varepsilon}\) _constraint not tight: If_ \(\mathcal{W}_{2}(Q,P)<\varepsilon\)_, then_ \(\nabla V=0\)_,_ \(Q\)_-a.e._
2. \(\mathcal{B}_{\varepsilon}\) _constraint tight: If_ \(\mathcal{W}_{2}(Q,P)=\varepsilon\)_, then either_ \(\nabla V=0\)_,_ \(Q\)_-a.e. or_ \(\exists\lambda>0\)_, s.t.,_ \[\nabla V+\lambda(\mathrm{I}_{\mathrm{d}}-T_{Q}^{P})=0,\quad Q\text{-a.e.}\] (23)
Note that the statement of the proposition implies that \(T_{Q}^{P}=\mathrm{I}_{\mathrm{d}}+\frac{1}{\lambda}\nabla V\), when \(\lambda>0\), and otherwise \(\nabla V=0\), which takes the form of _complementarity condition_.
### First-order condition of proximal problem
For any \(\gamma>0\), the first-order condition of the Wasserstein proximal problem (7) is derived in the following proposition.
**Theorem 3.8** (First-order condition of proximal problem).: _Let \(P\in\mathcal{P}_{2}\) be fixed, under Assumption 3.2, for \(\gamma>0\), suppose the problem (7) attains a (local) minimum at \(Q\in\mathcal{P}_{2}^{r}\), then_
\[0=\nabla V+\frac{1}{\gamma}(\mathrm{I}_{\mathrm{d}}-T_{Q}^{P}),\quad Q\text{-a.e.} \tag{24}\]
_Remark 3.9_ (Correspondence between of LFD problem and proximal problem).: We can see that the condition (24) matches the first order condition (23) (when Wasserstein ball constraint is tight) by setting \(\gamma=1/\lambda\).
The \(\mathcal{W}_{2}\)-proximal problem has been studied in Section 10.1 of [28], and in particular, Lemma 10.1.2 derived a first-order condition (in terms of strong subdifferential of \(\varphi\)) at a minimizer. In our case, the strong \(\mathcal{W}_{2}\)-differential of \(\varphi\) exists at \(Q\) and thus the subdifferential uniquely exists, i.e., \(\partial_{\mathcal{W}_{2}}\varphi=\{\nabla V\}\). Then the conclusion of [28, Lemma 10.1.2] directly implies (24). We include a direct proof of the proposition for completeness.
The first-order condition of the \(\mathcal{W}_{2}\)-proximal problem allows us to prove the explicit expression of the dual form (8), technically with small enough \(\gamma\) s.t. the Moreau envelope of \(V\) has unique minimizer in the \(\inf_{z}\).
**Corollary 3.10** (Dual form).: _Let \(P\in\mathcal{P}_{2}\) be fixed, under Assumption 3.2, for \(0<\gamma<\frac{1}{2L}\), suppose the proximal problem (7) attains a local minimum at \(Q\in\mathcal{P}_{2}^{r}\). Then, the Moreau envelope \(u(x,\gamma)\) defined in (10) is solved at an unique minimizer \(z^{*}\) for each \(x\), \(Q\) is a global minimum of the_ proximal problem (7)_, and the dual function \(G\) defined in_ dual form (6) _has the expression as in (9)._
_Remark 3.11_ (Interpretation of the optimal transport map).: When the optimal transport map from \(P\) to \(Q\) also exists, it can be interpreted as the map from \(x\) to \(z^{*}\), which solves (the unique minimizer of) the Moreau envelope as well as a Backward Euler scheme to solve the continuous-time gradient flow. Specifically, when \(P\in\mathcal{P}_{2}^{r}\), the optimal transport map \(T_{P}^{Q}\) is defined \(P\)-a.e., and \(T_{Q}^{P}\circ T_{P}^{Q}=\mathrm{I}_{\mathrm{d}}\), \(P\)-a.e. By Theorem 3.8, we have (24), which implies that
\[T_{P}^{Q}=\mathrm{I}_{\mathrm{d}}-\gamma\nabla V\circ T_{P}^{Q},\quad P\text{- a.e.} \tag{25}\]
By a similar argument as in the proof of Corollary 3.10, \(z=T_{P}^{Q}(x)\) solves the unique minimizer of the Moreau envelope \(u(x,\gamma)=\inf_{z}\left[V(z)+\frac{1}{2\gamma}\|z-x\|^{2}\right]\). To view the map \(T_{P}^{Q}\) as a Backward Euler scheme to solve the \(\mathcal{W}_{2}\)-proximal gradient descent: Suppose we use \(T_{P}^{Q}\) to pushforward from the current distribution \(P_{k}=P\) to the next distribution \(P_{k+1}=Q\), then each point \(x_{k}\) is moved to \(x_{k+1}\) by \(T_{P}^{Q}\), i.e. \(x_{k+1}=T_{P}^{Q}(x_{k})\), then (25) gives that
\[x_{k+1}=x_{k}-\gamma\nabla V(x_{k+1}), \tag{26}\]
which is a Backward Euler scheme to integrate the continuous-time gradient descent ODE \(\dot{x}(t)=-\nabla V(x(t))\) with step size \(\gamma\).
## 4 Algorithm: Flow-DRO
In this section, we present a neural network flow-based approach to solve the LFD problem by representing the optimal transport maps by ResNet blocks [29]. It should be noted that our framework does not rely on neural networks, and there can be other ways to represent the transport map (e.g., kernel representation). For high-dimensional data and with sufficient training data, neural networks tend to have competitive performance due to their expressiveness power. Below, in section 4.1, we first parametrize the transport map \(T\) in (13) as the solution map of a NeuralODE [21]. In section 4.2, we present the block-wise progressive training algorithm of the proposed flow model. In section 4.3, we explain how FlowDRO can be used as an adversarial generative sampler.
### Flow-based neural network parametrization of transport map
Consider a density evolution (i.e., flow) \(\rho(x,t)\) such that \(\rho(x,0)=P\) at \(t=0\), and \(\rho(x,t)\) approaches \(Q^{*}\) as \(t\) increases, where \(Q^{*}\) is the minimizer of (7) (unknown a priori). Below, we interchangeably refer \(\rho(x,t)\) both as the marginal distribution of \(x(t)\) and its corresponding density function. Given the initial distribution \(\rho(x,0)=P\), such a flow is typically non-unique. We consider when the flow is induced by an ODE of \(x(t)\) in \(\mathbb{R}^{d}\):
\[\dot{x}(t)=f(x(t),t), \tag{27}\]
where \(x(0)\sim P\). Note that by the Liouville equation (the continuity equation) (see, e.g., [11]), the marginal distribution \(\rho(x,t)\) of \(x(t)\) satisfies
\[\partial_{t}\rho+\nabla\cdot(\rho f)=0.\]
We choose to parametrize \(f(x(t),t)\) in (27) by a neural network \(f(x(t),t;\theta)\) with trainable parameters \(\theta\in\Theta\) (using continuous-time NeuralODE [21]). As a result, at any time \(t>0\), the \(\theta\)-parametrized solution map \(T_{s}^{t}\) over an arbitrary time interval \([s,t)\) can be expressed as
\[T_{s}^{t}(x;\theta)=x+\int_{s}^{t}f(x(s^{\prime}),s^{\prime};\theta)ds^{\prime },x(s)=x. \tag{28}\]
Without loss of generality, we assume the flow map is within the unit interval \(t\in[0,1)\). Using (28), the problem of
finding \(T\) in (13) thus reduces to training \(\theta\) in the following problem:
\[\min_{\theta\in\Theta}\mathbb{E}_{x\sim P}\left(V\circ T_{0}^{1}(x;\theta)+\frac {1}{2\gamma}\|x-T_{0}^{1}(x;\theta)\|^{2}\right). \tag{29}\]
There are two main benefits of parametrizing \(T\) as a flow model with parameters \(\theta\). First, flow models are continuous in time so that we can directly control the amount of perturbation added to samples from \(P\). Namely, let \(\varepsilon_{t}=\mathbb{E}_{x\sim P}\|T_{0}^{t}(x;\theta)-x\|_{2}^{2}\) be the empirical \(W_{2}\) distance between \(P\) and \((T_{0}^{t})_{\#}P\). We can directly control \(\varepsilon_{t}\) by varying \(t\in[0,1)\), and because the underlying neural network \(f(x(t),t;\theta)\) is smooth, the changes of \(\varepsilon_{t}\) are also gradual and controllable. In practice, these gradually transformed samples \(T_{0}^{t}(x;\theta)\) can be directly compared against those obtained by other additive baselines, where numerical results are presented in section 6. Second, compared to other popular generative models such as GAN [22], the proposed flow model based on NeuralODE can be simpler and easier to train. This is because our objective (29) involves no additional discriminators to guide the training of \(T(\cdot;\theta)\), and therefore no additional inner loops are required.
We also note a close connection between training \(\theta\) in (29) and training continuous normalizing flow (CNF) models with transport-cost regularization [17, 19, 30]. In CNF, the problem is to train \(\theta\) so that \(T_{0}^{1}(\cdot;\theta)_{\#}P\) is close to the isotropic Gaussian distribution \(P_{Z}=\mathcal{N}(0,I_{d})\). To do so, the CNF objective minimizes the KL-divergence \(\mathrm{KL}(T_{0}^{1}(\cdot;\theta)_{\#}P||P_{Z})\) up to constants, upon utilizing the instantaneous change-of-variable formula [21]. To ensure a smooth and regularized flow trajectory, the transport cost \(\frac{1}{2\gamma}\|T_{0}^{1}(x;\theta)-x\|_{2}^{2}\) is also commonly used as a regularization term. Hence, the only difference between training our FlowDRO and a transport-regularized CNF model lies in the expression of the _first term_ in (29): our FlowDRO minimizes \(\mathbb{E}_{x\sim P}\left(V\circ T_{0}^{1}(x;\theta)\right)\), which is guided by \(V\) dependent on the loss function \(r\) and decision function \(\phi\), while CNF minimizes the KL-divergence between \(T_{0}^{1}(\cdot;\theta)_{\#}P\) and \(P_{Z}\).
### Block-wise progressive training algorithm
We propose a block-wise progressive training algorithm of minimizing (29) with respect to the network parameters \(\theta\). We build on the JKO-iFlow method in [10], originally developed for training normalizing flows. The convergence of JKO-type \(\mathcal{W}_{2}\) proximal GD for learning a generative model (a special case when the loss function is the KL divergence between the data density and the multi-variate Gaussian distribution) is shown in [11].
Specifically, we would learn \(K\) optimal transports block-wise, where the \(k\)-th transport \(T_{0}^{1}(\cdot,\theta_{k})\) is parametrized by \(\theta_{k}\). After training, the final optimal transport map \(T_{\mathrm{final}}\) is approximated by \(T_{K}\circ\cdots\circ T_{1}\) for \(T_{k}:=T_{0}^{1}(\cdot;\hat{\theta}_{k})\) with trained parameters \(\hat{\theta}_{k}\); here for two mappings \(T_{1},\,T_{2}:\mathcal{X}\rightarrow\mathcal{X}\), \(T_{2}\circ T_{1}(x)=T_{2}(T_{1}(x))\). To perform block-wise progressive training, we first train \(\theta_{1}\) using (29) with the penalty parameter \(\gamma=\gamma_{1}\). The expectation is taken over \(x(0)\sim P\), the data distribution. Using the trained parameters \(\hat{\theta}_{1}\), we could thus compute the push-forward distribution \(P(1)=(T_{1})_{\#}P\). This push-forward operation is done empirically by computing \(x(1)=T_{1}(x(0)),x(0)\sim P\) using the first trained flow block. Then, we continue training \(\theta_{2}\) using (29) with \(\gamma=\gamma_{2}\), where the expectation is taken over \(x(1)\sim P(1)\). In general, starting at \(P(0)=P\), we are able to train the \((k+1)\)-th block parameters \(\theta_{k+1}\) with \(\gamma=\gamma_{k+1}\) given previous \(k\) blocks, where the expectation is taken over \(x(k)\sim P(k)\).
This leads to a block-wise progressive training scheme of the proposed FlowDRO, as summarized in Algorithm 1. Note that the regularization parameters \(\{\gamma_{k}\}\) indirectly control the amount of perturbation, which is represented by the radius \(\varepsilon\) in the uncertainty set (4). Smaller choices of \(\gamma\) induce greater regularization and hence allow less
perturbation of \(P\) by the flow model, whereas larger choices of \(\gamma\) impose less regularization on the amount of transport. Regarding the specification of these regularization parameters, we note that the desired specification varies across different problems, but setting an even choice (i.e., \(\gamma_{k}=\gamma\)) or changing by a constant factor (i.e., \(\gamma_{k}=c\gamma_{k-1}\) for \(c>0\)) typically work well in practice. To further improve the empirical performance, one can also consider adaptive step size using the time reparametrization technique [10], which is an attempt to encourage a more even amount of \(W_{2}\) transport cost by individual blocks.
The block-wise progressive training scheme has several benefits when compared to training a single large model with parameters \(\theta\) using (29). First, FlowDRO helps reduce the memory and computational load during training, as has been identified in [10] when training normalizing flows block-wise. This feature allows the use of larger batch sizes and more accurate numerical ODE integrators when integrating \(f(x(t),t;\theta_{k})\) at block \(k\). In our experiments, we break \([0,1)\) into three fixed-stage Runge-Kutta-fourth-order steps [31] to compute the numerical integration at each block. Second, FlowDRO training is adaptive: one can always terminate after training a specific number of blocks depending on current performance, and different blocks are allowed to differ in architecture.
We also discuss the computational complexity of Algorithm 1. We do so in terms of the number of function evaluations of the network \(f(x(t),t;\theta)\) when computing (29), as this is the most expensive step. Suppose at block \(k\), we break the integral of \(f(x(t),t;\theta_{k})\) over \([0,1)\) into \(S\geq 1\) smaller pieces \(\{[t_{i},t_{i+1})\}_{i=0}^{S-1}\). Let the integral on each piece be numerically estimated by the fixed-stage Runge-Kutta fourth-order method. As a result, it takes \(O(4NS)\) evaluation of \(f(x(t),t;\theta_{k})\) on \(N\) samples per block. The total computation is on the order of \(O(4SKN)\) when training \(K\) blocks. Note that the overall computation is linear in the number of samples and thus scalable to large datasets.
### Generative model for sampling from LFD
We now show how FlowDRO can be conveniently used to generate samples from LFD. This means that we can generate samples from the worst-case distribution \(Q^{*}\), which is found as the push-forward distribution by FlowDRO. Specifically, let \(T_{\hat{\theta}}\) be a trained FlowDRO model composed of \(K\) small blocks. Recall that \(Q^{*}=(T_{\hat{\theta}})_{\#}P\) where \(P\) is the data distribution. Therefore, generating samples from the LFD \(Q^{*}\) is straightforward: one first obtains a new sample from \(X\sim P\) and then computes \(\tilde{X}=T_{\hat{\theta}}(X)\sim Q^{*}\). It remains to sample \(X\sim P\) to build the sampler. To do so, one can train an alternative generic flow model [10, 18]\(T_{\rm gen}\) between \(P\) and \(P_{Z}\), where \(P_{Z}\) is the standard multivariate Gaussian \(\mathcal{N}(0,I_{d})\), which is easy to sample from.
As a result, we can build the sampler from LFD as \(T_{\rm adv}=T_{\hat{\theta}}\circ T_{\rm gen}\). This means we can first sample from multivariate Gaussian \(Z\sim P_{Z}\), propagate it through the generic generative model \(T_{\rm gen}\) to obtain a sample from \(P\), and then propagate the sample through the map \(T_{\hat{\theta}}\) to obtain a sample from the LFD \(Q^{*}\), i.e., \(\tilde{X}=T_{\hat{\theta}}(T_{\rm gen}(Z))\sim Q^{*}\)
Figure 3 illustrates the idea.
Meanwhile, we can also perform conditional generation, which is useful for classification problems. Suppose \(X=(X_{\mathrm{sub}},Y)\) where \(Y\in[C]\) is a discrete label for \(X_{\mathrm{sub}}\). To generate \(X_{\mathrm{sub}}\) with its corresponding \(Y\), we can follow the suggestion in [17] to train \(T_{\mathrm{gen}}\): let \(P_{\mathrm{sub}}\) be the distribution of \(X_{\mathrm{sub}}\). Then, train a flow model to map between \(P_{\mathrm{sub}}|Y\) and \(H|Y\), where \(H|Y\) is a pre-specified Gaussian mixture in \(\mathcal{P}_{2}^{r}(\mathcal{X})\). Hence, we can sample \(X_{\mathrm{sub}}\) with label \(c\in[C]\) by sampling from the corresponding \(H|Y=c\) and mapping through \(T_{\mathrm{gen}}\). The sample \(X_{\mathrm{sub}}\) can then be passed through \(T_{\hat{\theta}}\) get the sample with its corresponding label \(c\) from LFD.
## 5 Applications
We consider several applications that can be formulated as DRO problems so that our proposed FlowDRO can be used to find the worst-case distribution.
### Adversarial learning with distributional attack
It has been widely known that state-of-the-art machine learning models are often adversarially vulnerable. Under small but carefully crafted perturbations to the inputs, the models can make severely wrong predictions on the adversarial examples [32, 33]. Adversarial training thus refers to the defense strategy in the clean training dataset augmented with adversarial examples, upon which retraining increases the robustness of the model on new adversarial examples.
Figure 3: Construction of the proposed sampler from LFD. After training the proposed FlowDRO \(T_{\hat{\theta}}\), we train a separate generic flow model \(T_{\mathrm{gen}}\) to map between the noise distribution \(P_{Z}\) (a multivariate Gaussian \(\mathcal{N}(0,I_{d})\)) and the data distribution \(P\). The full sampler \(T_{\mathrm{adv}}=T_{\hat{\theta}}\circ T_{\mathrm{gen}}\).
Figure 2: An illustration of the FlowDRO framework, which learns a sequence of invertible optimal transport maps that pushes the underlying population density \(P\) to a target LFD \(Q^{*}\); the maps are learned from finite training samples. The handwritten digits represent samples in each stage that show the gradual (continuous) transition of samples.
Finding suitable adversarial examples before retraining is a critically important step. Most methods, such as the widely-used FGSM [34] and PGD [35], are based on the _point-wise_ attack. We can show that the solution of the \(W_{2}\) trust-region problem (5) more effectively "disrupts" a fixed decision function \(\phi\) than the solution induced by the transport map of point-wise attack. Specifically, let \(\phi\in\Phi\) be a fixed decision function. Given \(x\in\mathcal{X}\), we define \(T_{\text{point}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) as the transport port associated with the following point-wise perturbation problem:
\[T_{\text{point}}(x):=x+\delta_{x}^{*},\ \delta_{x}^{*}=\arg\max_{\|\delta_{x} \|_{2}\leq\varepsilon}r(x+\delta_{x},\phi). \tag{30}\]
Denote \(Q^{*}_{\text{point}}=(T_{\text{point}})_{\#}P\) as the push-forward distribution by \(T_{\text{point}}\) on the data distribution \(P\). Denote \(Q^{*}_{\text{dist}}\) as the solution of the \(W_{2}\) trust-region problem (5), which is our objective of interest. We thus have the following result in terms of reaching _higher risk_ under \(Q^{*}_{\text{dist}}\), the proof is in Appendix A.
**Proposition 5.1**.: _For a fixed decision function \(\phi\), we have \(\mathcal{R}(Q^{*}_{\text{dist}},\phi)\geq\mathcal{R}(Q^{*}_{\text{point}},\phi)\)._
To extend beyond point-wise attacks, several works have also considered _distributional_ attacks on the input distribution. For example, [36] uses the Wasserstein distance to measure the difference between input and adversarial distribution. It then proposes to solve a Lagrangian penalty formulation of the distributional attack problem by stochastic gradient methods with respect to the inputs \(x\). Additionally, [37] shows the generality of such distributional attack methods by subsuming different point-wise attack methods under the distributional attack framework under a new Wasserstein cost function. While these works share the similar goal of solving for adversarial distributions, the proposed solutions do not solve for a continuous-time transport map as we intend to do, whose push-forward distribution of \(P\) yields the worst-case distribution.
We now formally introduce the adversarial learning problem under the current DRO framework, using image classification as a canonical example [34]. Let \(X=(X_{\text{img}},Y),X\sim P\) be an image-label pair with raw image \(X_{\text{img}}\) and its label \(Y\in[C]\). The decision function \(\phi\) is typically chosen as a \(C\)-class classifier taking \(X_{\text{img}}\) as the input, and the loss function \(r(X,\phi)=-\log(\phi(X_{\text{img}})_{Y})\) is the cross-entropy loss. To find an alternative distribution \(Q^{*}\) on which the risk is high, it is conventional to keep \(Y\) the same and perturb the corresponding \(X_{\text{img}}\). Thus, for a given image-label distribution \(P\), let \(P_{\text{img}}=\{X_{\text{img}}:X=(X_{\text{img}},Y),X\sim P\}\). As a result, the \(W_{2}\) ball \(\mathcal{B}_{\varepsilon}(P)\) with radius \(\varepsilon\) around the data distribution \(P\) is defined as
\[\mathcal{B}_{\varepsilon}(P)=\{Q\in\mathcal{P}_{2}(\mathcal{X}):W_{2}^{2}(Q_{ \text{img}},P_{\text{img}})\leq\varepsilon^{2}\}. \tag{31}\]
Now, let \(\Phi\) be the set of \(C\)-class classifiers on images \(X_{\text{img}}\). The DRO problem under \(\mathcal{B}_{\varepsilon}(P)\) in (31) is
\[\min_{\phi\in\Phi}\max_{Q\in\mathcal{B}_{\varepsilon}(P)}\mathbb{E}_{X\sim Q} [-\log(\phi(X_{\text{img}})_{Y})]. \tag{32}\]
### Robust hypothesis testing
The goal of hypothesis testing is to develop a detector which, given two hypotheses \(H_{0}\) and \(H_{1}\), discriminates between the hypotheses using input data while reaching a small error probability. In practice, true data distribution often deviates from the assumed nominal distribution, so one needs to develop robust hypothesis testing procedures to improve
the detector's performance. The seminal work by [38] considers the problem in term of using \(\epsilon\)-contamination sets, which are all distributions close to the base distributions in total variation. Later, [39] considers uncertainty sets under the KL-divergence and develops robust detectors for one-dimensional problems. More recently, [1] developed data-driven robust minimax detectors for non-parametric hypothesis testing, assuming the uncertainty set is a Wasserstein ball around the empirical distributions. In addition, [40] derives the optimal detector by considering Sinkhorn uncertainty sets around the empirical distributions. Compared to robust detectors under Wasserstein uncertainty sets, the Sinkhorn-based method is applicable even if the test samples do not have the same support as the training samples.
We follow the notations in [1] to introduce the problem. Given data \(X\in\Omega\), we test between \(H_{0}:X\sim Q_{0},Q_{0}\in\mathcal{B}_{0,\varepsilon}(P_{0})\) and \(H_{1}:X\sim Q_{1},Q_{1}\in\mathcal{B}_{1,\varepsilon}(P_{1})\), where \(\mathcal{B}_{i,\varepsilon}(P_{i})\) denotes the \(W_{2}\) ball of radius \(\varepsilon\) as in (4) around the corresponding data distribution \(P_{i}\). Then, we find a measurable scalar-valued detector \(\phi:\Omega\to\mathbb{R}\) to perform the hypothesis test. Specifically, for a given observation \(X\in\Omega\), \(\phi\) accepts \(H_{0}\) and rejects \(H_{1}\) whenever \(\phi(X)<0\) and otherwise rejects \(H_{0}\) and accepts \(H_{1}\). In this problem, the risk function \(\mathcal{R}((Q_{0},Q_{1}),\phi)\) is defined to provide a convex upper bound on the sum of type-I and type-II errors. Specifically, consider a so-called _generating function_\(f\) that is non-negative, non-decreasing, and convex. The risk is thus defined as
\[\mathcal{R}((Q_{0},Q_{1}),\phi)=\mathbb{E}_{x\sim Q_{0}}[f\circ(-\phi)(x)]+ \mathbb{E}_{x\sim Q_{1}}[f\circ\phi(x)]. \tag{33}\]
Examples of the generating function \(f\) to defined (33) include \(f(x)=\exp(t)\), \(f(x)=\log(1+\exp(t))\), \(f(x)=(t+1)_{+}^{2}\), and so on. As a result of \(\mathcal{R}\) in (33), the robust hypothesis testing can be formulated as the following DRO problem
\[\min_{\phi:\Omega\to\mathbb{R}}\ \max_{Q_{i}\in\mathcal{B}_{1,\varepsilon}(P_{i }),i=0,1}\ \mathbb{E}_{x\sim Q_{0}}[f\circ(-\phi)(x)]+\mathbb{E}_{x\sim Q_{1}}[f\circ\phi (x)]. \tag{34}\]
Note that solving the inner maximization of (34) requires finding a pair of worst-case distributions \(Q_{0}^{*}\) and \(Q_{1}^{*}\). However, using the change-of-measure technique [1, Theorem 2], we can solve an equivalent problem of finding \(Q^{*}\) within a \(W_{2}\) ball round the data distribution \(P=P_{1}+P_{2}\) to fit our original formulation (2).
### Differential privacy
Established by [41, 42], differential privacy (DP) offers a structured method to measure how well individual privacy is secured in a database when collective data insights are shared as answers to the query. In short, DP upholds robust privacy assurances by ensuring that it is nearly impossible to determine an individual's presence or absence in the database from the disclosed information. These can be realized by introducing random perturbations to the query function output before release.
To be precise, consider datasets \(D,D^{\prime}\in\mathcal{D}^{n}\) where each consists of \(n\) rows, and \(\mathcal{D}\) is the space where each datum lies. We say \(D\) and \(D^{\prime}\) are neighboring datasets if they differ in exactly a single element (i.e., in the record of one individual), and denote \(D\simeq D^{\prime}\). An output of the query function \(q:\mathcal{D}^{n}\to\Omega\) is given based on the dataset. A randomized mechanism \(M:\mathcal{D}^{n}\to\Omega\), which maps a dataset to a random output under the probability space \((\Omega,\mathcal{F},\mathbb{P})\), imparts randomness to the answer to the query by perturbing \(q(D)\). Differentially private randomized mechanisms secure privacy by ensuring that the outputs of \(M\) from neighboring datasets are nearly indistinguishable.
The most represented standard for DP is \((\epsilon,\delta)\)-DP [42] (without causing confusion, here \(\epsilon\) is not related to the radius
of the uncertainty set \(\varepsilon\)). Given \(\epsilon,\delta\geq 0\), a randomized mechanism \(M\) is \((\epsilon,\delta)\)-differentially private, or \((\epsilon,\delta)\)-DP, if
\[\mathbb{P}(M(D)\in A)\leq e^{\epsilon}\mathbb{P}(M(D^{\prime})\in A)+\delta\]
for any \(D\simeq D^{\prime}\in\mathcal{D}^{n}\) and \(A\in\mathcal{F}\). When \(\delta=0\), we simply say that \(M\) is \(\epsilon\)-DP. Besides, numerous variants of DP with rigorous definitions such as \(f\)-DP [43], Renyi DP [44], and Concentrated DP [45] have been established and studied; for a comprehensive overview, see [46].
The randomized mechanisms exhibit a clear trade-off: the more they secure privacy, the more they sacrifice statistical utility [47]. Therefore, the constant focus of research has been to design mechanisms that minimize the perturbation and thus the loss of utility (based on specific criteria such as \(l_{p}\) cost) while ensuring a certain level of privacy. Below, we borrow the notion of DP to conceptualize the design of a privacy protection mechanism as a DRO problem and propose the potential applicability of our FlowDRO as a data-dependent distributional perturbation mechanism.
DP can be understood as a hypothesis-testing problem [48, 49, 50, 43]. Consider an adversary trying to differentiate between neighboring datasets \(D\) and \(D^{\prime}\) based on the mechanism output. In this context, the hypothesis testing problem of interest is
\[H_{0}:X\stackrel{{ d}}{{=}}M(D)\sim Q_{0}\quad\text{vs.}\quad H_ {1}:X\stackrel{{ d}}{{=}}M(D^{\prime})\sim Q_{1} \tag{35}\]
where \(X\in\Omega\) is a single perturbed observation. The harder this test is, the more difficult it is to distinguish between neighboring datasets, which implies that strong privacy is ensured. Consider testing (35) with a decision function \(\phi:\Omega\rightarrow[0,1]\), and denote the type-I and type-II errors as \(\alpha_{\phi}=\mathbb{E}_{X\sim Q_{0}}\phi(X)\) and \(\beta_{\phi}=\mathbb{E}_{X\sim Q_{1}}(1-\phi(X))\). Then, a mechanism is \((\epsilon,\delta)\)-DP if and only if \(\alpha_{\phi}+e^{\epsilon}\beta_{\phi}\geq 1-\delta\) and \(e^{\epsilon}\alpha_{\phi}+\beta_{\phi}\geq 1-\delta\) for any \(D\simeq D^{\prime}\) and decision function \(\phi\) that is a deterministic function of \(X\)[48, Theorem 2.4; 49, Theorem 2.1].
Now, we first set up an optimization problem with the risk function measuring indistinguishability between \(Q_{0}\) and \(Q_{1}\) in (35), given the restricted level of perturbation and the neighboring datasets \(D\) and \(D^{\prime}\). Consider a risk function \(\mathcal{R}((Q_{0},Q_{1}),\phi)\) representing the ease of (35) with a decision function \(\phi:\Omega\rightarrow[0,1]\). To ensure strong privacy with a randomized mechanism, even in the "worst-case scenario" with a powerful discriminator, one should make it difficult to distinguish \(Q_{0}\) and \(Q_{1}\) by bringing the two distributions closely together, thereby reducing the risk function. Hence, finding such a pair of indistinguishable distributions with perturbation levels controlled by the Wasserstein-2 distance reduces to
\[\min_{Q_{i}\in\mathcal{B}_{i,\epsilon}(P_{i}),i=0,1}\max_{\phi\in\Phi}\ \mathcal{R}((Q_{0},Q_{1}),\phi) \tag{36}\]
where \(\mathcal{B}_{i,\epsilon}(P_{i})\) denotes the \(\mathcal{W}_{2}\) ball of radius \(\varepsilon\) as in (4) around the corresponding data distribution \(P_{i}\).
In this context, the risk function can be chosen based on which measure reflects the indistinguishability of outputs from neighboring datasets. For instance, fundamentally under the \(f\)-DP criterion, one must first consider the most powerful test for a given level \(\alpha\), that is, the decision function that minimizes \(\beta_{\phi}\). The corresponding problem is formulated as finding \(\min_{\phi}\,\beta_{\phi}\) subject to \(\alpha_{\phi}\leq\alpha\). Therefore, using the Lagrange multiplier and the change-of-measure technique, our DRO formulation (36) becomes
\[\max_{Q_{i}\in\mathcal{B}_{i,\epsilon}(P_{i}),i=0,1}\min_{\phi}\max_{\lambda \geq 0}-\mathbb{E}_{x\sim Q_{0}+Q_{1}}\left[\frac{dQ_{1}}{d(Q_{0}+Q_{1})}[x] \phi(x)-\lambda\left(\frac{dQ_{0}}{d(Q_{0}+Q_{1})}[x]\phi(x)-\alpha\right) \right]. \tag{37}\]
In our experiments, we will utilize \(\alpha_{\phi}\) and \(\beta_{\phi}\) themselves as performance measures by replacing them with sample average approximations.
The conventional and straightforward method to privatize a query function is to apply a calibrated additive noise. In this case, the \(i\)-th uncertainty set in (36) is \(\mathcal{B}_{i,\varepsilon}(P_{i})=\{Q_{i}:M(D)\sim Q_{i},M(D)=q(D)+\xi_{i},q(D) \sim P_{i},D\in\mathcal{D}^{n}\}\), where \(\xi_{i}\) with \(\mathbb{E}\|\xi_{i}\|_{2}\leq\varepsilon\) is a random noise following certain distributions from a specific family. We call such a mechanism that adds noise of a certain distribution an _additive perturbation mechanism_ (APM). Typical noise distributions used in APM include the Laplace [41] and Gaussian distributions [51].
In contrast, based on the formulation (36), we aim to introduce distributional perturbation with our FlowDRO to provide a more flexible mechanism. Consequently, we want to ensure the mechanism outputs are indistinguishable with less perturbation than additive mechanisms. We refer to the corresponding mechanism as the _distributional perturbation mechanism_ (DPM), and illustrate its comparison with APM in Figure 4. We remark that the proposed FlowDRO allows the DPM to apply an arbitrary amount of perturbation to the original distribution of queries. Thus, we can apply DPM at arbitrary precision by controlling the perturbation to satisfy the privacy constraints with reasonable utility.
## 6 Numerical Examples
We conduct experiments to examine the effectiveness of FlowDRO on high-dimensional data. First, in section 6.1, we compare our proposed FlowDRO with existing DRO methods in solving robust hypothesis testing problems to showcase their differences. Then, in section 6.2, we use FlowDRO to perform the adversarial attack on pre-trained image classifiers and compare against existing point-wise attack methods. In section 6.3, we use FlowDRO as the DPM in differential privacy settings and compare it against APM under different noise distribution specifications. In all examples, we assume the decision function \(\phi\) is pre-trained on the data distribution \(P\) and fixed, so the goal is to find the worst-case distribution \(Q^{*}\in\mathcal{B}_{\varepsilon}(P)\) defined in (4) and compare what FlowDRO found against that by other methods.
Figure 4: Comparison between APM and DPM for differential privacy. APM adds random noises \(\xi\)_independently_ to queries, whereas DPM (through the use of proposed FlowDRO) considers the data distribution \(P\) defined over _all_ queries to find a worst-case distribution \(Q^{*}\) within \(\mathcal{B}_{\varepsilon}(P)\).
### Comparison with existing DRO methods
In particular, we quantitatively compared the proposed FlowDRO with WDRO and Sinkhorn DRO (SDRO). The WDRO method for this problem is based on [2], and the SDRO method is based on [13].
#### 6.1.1 WDRO with Gaussian smoothed discrete LFD
We explain how the WDRO method works for this problem. For class \(k=1,2\), let \(x_{k}^{i}\) be the \(i\)-th training sample from classes \(k\). Suppose there are \(n_{1}\) training samples in class 1 and \(n_{2}\) training samples in class 2. Denote \(\{x^{i}\}_{i=1}^{n}\) as the collection of \(n=n_{1}+n_{2}\) samples from both classes. Then, given radius \(\{\varepsilon_{1},\varepsilon_{2}\}\) and training samples \(\{x^{i}\}\), WDRO solves the following finite-dimensional convex program [2, Lemma 2] to find the LFDs supported on \(\{x^{i}\}\):
\[\max_{p_{1},p_{2}\in\mathbb{R}_{+}^{n}},\quad\quad\sum_{l=1}^{n} \min\{p_{1}^{l},p_{2}^{l}\} \tag{38}\] \[\gamma_{1},\gamma_{2}\in\mathbb{R}_{+}^{n\times n}\] subject to \[\sum_{l=1}^{n}\sum_{m=1}^{n}\gamma_{k}^{lm}||x^{l}-x^{m}||_{2} \leq\varepsilon_{k},\quad k=1,2\] \[\sum_{m=1}^{m}\gamma_{1}^{lm}=\frac{1}{n_{1}}\text{ and }\sum_{m=1}^{n} \gamma_{2}^{lm}=0,\quad 1\leq l\leq n_{1}\] \[\sum_{m=1}^{n}\gamma_{1}^{lm}=0\text{ and }\sum_{m=1}^{n} \gamma_{2}^{lm}=\frac{1}{n-n_{1}},\quad n_{1}+1\leq l\leq n\] \[\sum_{l=1}^{m}\gamma_{k}^{lm}=p_{k}^{m},\quad 1\leq m\leq n, \quad k=1,2\]
Note that (38) has \(O(n^{2})\) decision variables, so the complexity of solving this linear program is on the order of \(O(n^{4})\). Thus, solving (38) is computationally infeasible for large sample sizes (e.g., when \(n\sim 10^{3}\)). As a result, instead of using all training samples from both classes, which consist of \(\sim 10^{4}\) samples, we first perform \(K\)-means clustering on each class separately and then use \(n=2K\) cluster centroids as data input \(\{x^{i}\}\) into (38). We let the radius \(\varepsilon_{k}=\hat{W}_{2}(P,Q)\), which is the empirical \(W_{2}\) distance computed by the trained FlowDRO evaluated on the training set. Then, to sample from the _discrete_ LFDs obtained from (38), we follow [2, Section 3.4] to use kernel smoothing with the Gaussian kernel under bandwidth \(h\). In particular, the smoothed LFD for class \(k\) becomes a Gaussian mixture with \(n\) components, where the \(i\)-th component \(\mathcal{N}(x^{i},h^{2}I)\) is chosen with probability \(p_{k}^{i}\). We let \(h=\sqrt{\hat{W}_{2}^{2}(P,Q)/d}\) so that on average, the \(W_{2}\) distance between original latent codes \(\{x^{i}\}\) and samples from the smoothed LFD is \(\hat{W}_{2}^{2}(P,Q)\).
#### 6.1.2 Sinkhorn DRO
In SDRO, the LFD [13, Remark 4] (which is derived using a similar approach as in [25]) is known to be continuous with respect to a pre-specified continuous distribution. However, unlike the WDRO mentioned above, where the LFD after kernel smoothing is equivalent to a Gaussian mixture with \(n\) components, sampling from the SDRO LFD is generally challenging. This is because the density function of the LFD depends on the loss function \(r(x;\phi)\), which can be non-convex with respect to parameters of \(\phi\), especially when \(\phi\) is a neural network.
Nevertheless, we can approximate the worst-case risk under the LFD upon utilizing its dual program [13, Eq (I)]:
\[\min_{\lambda\geq 0}\left\{\lambda\rho+\frac{\lambda\epsilon}{n}\sum_{i=1}^{n} \log\mathbb{E}_{z\sim P_{Z}}\left[\exp\bigl{(}(r(z;\phi)-\lambda\|x_{i}-z\|^{2} )/(\lambda\epsilon)\bigr{)}\right]\right\}. \tag{39}\]
In (39), \(\rho>0\) is the radius of the Sinkhorn ball with regularization parameter \(\epsilon\geq 0\) (under the Sinkhorn distance given in [13, Definition I]), where the Sinkhorn ball centers at the data distribution \(P\). The expectation is taken over \(P_{Z}\), which can be any continuous distribution over \(\mathcal{X}\); we choose \(P_{Z}=\mathcal{N}(0,I_{d})\).
We explain how to approximate (39) in practice. For general loss function \(r(x;\phi)\), analytically evaluating the expectation is challenging, so we use Monte Carlo approximation--for each \(x_{i}\) in the test set, we choose \(300d\) samples \(z(x_{i})\sim P_{Z}\) to evaluate the expectation. To solve for \(\lambda^{*}\) as the minimizer, we could either use simple grid search or bisection search as proposed in [13]. In addition, we note that because \(r(x;\phi)\) may not be convex in parameters of \(\phi\), the optimal value of (39) can only serve as an upper bound on the worst-case risk under the LFD.
#### 6.1.3 Results
We compare the performance of the above two approaches with ours in finding the LFD given binary MNIST digits (i.e., digits from classes 0 and 8). We evaluate the performance by comparing the performance of a pre-train binary classifier \(\phi\) under samples from each of the LFDs: a low prediction accuracy and high risk on samples from the LFD means a better solution. We specifically train a LeNet [52] as the classifier on the original training images from these two classes, so that the loss function \(r(x;\phi)\) is the cross-entropy loss. Given \(\phi\), we then train FlowDRO for three blocks by setting \(\gamma_{k}\equiv 5\). We do so in the latent space with dimension \(d=16\) of a pre-trained auto-encoder.
Table 1 shows the risk and accuracy of the pre-trained LeNet \(\phi\) on the LFDs obtained by different methods; samples from the LFDs are decoded by the auto-encoder before passing into \(\phi\). We see that the worst-case distribution by FlowDRO induces higher risk and lower accuracy than both SDRO and WDRO trained with different training samples, indicating that FlowDRO finds a more effective LFD than these baselines.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline & Risk = \(\mathbb{E}_{x\sim Q^{*}}[r(x;\phi)]\) (\(\uparrow\)) & Accuracy = \(\mathbb{E}_{x\sim Q^{*}}[100\cdot\mathbb{1}(Y=\arg\max_{j}\phi(x)_{j})]\) (\(\downarrow\)) \\ \hline FlowDRO & 7.96 & 0.51\% \\ WDRO (\(n=20\)) & 5.13 & 51.60\% \\ WDRO (\(n=40\)) & 5.45 & 49.10\% \\ WDRO (\(n=60\)) & 5.02 & 53.10\% \\ WDRO (\(n=100\)) & 5.50 & 49.60\% \\ WDRO (\(n=200\)) & 5.31 & 50.20\% \\ SDRO & 5.75 & N/A \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test performance of a pre-trained LeNet classifier \(\phi\) on LFDs \(Q^{*}\) of binary MNIST digits (encoded so \(d=16\)), where the LFDs are found by FlowDRO, WDRO, and SDRO. More effective LFDs are reflected in higher risks and lower accuracies by \(\phi\). Accuracy on samples from the SDRO LFD is not available because sampling from this LFD is highly challenging. We control the empirical \(W_{2}\) distance to be the same across different methods for a fair comparison. For WDRO, \(n\) indicates the number of samples for solving (38).
### Adversarial distributional attack
We consider two sets of experiments in this section. The first example finds the distributional perturbation of CIFAR-10 images by FlowDRO, where we compare the effectiveness of our distributional attack against the widely-used projected gradient descent (PGD) baselines under \(\ell_{2}\) and \(\ell_{\infty}\) perturbation [35]. The second example finds the distributional perturbation of MNIST digits by FlowDRO.
#### 6.2.1 CIFAR10 against point-wise attacks
We describe the setup, introduce the comparison metrics and present the comparative results.
_Setup_. Given a pre-trained image classifier \(\phi\) and a test image \(X_{\mathrm{test},\mathrm{img}}\) with labels \(Y_{\mathrm{test}}\), the goal of adversarial attack as introduced in section 5.1 is to find a perturbed image \(\tilde{X}_{\mathrm{test},\mathrm{img}}\) of \(X_{\mathrm{test},\mathrm{img}}\) so that \(\phi\) makes an incorrect classification on image \(\tilde{X}_{\mathrm{test},\mathrm{img}}\). For this task, instead of performing a point-wise attack given individual \(X_{\mathrm{test},\mathrm{img}}\), our FlowDRO finds a continuous flow \(T\) that gradually transports the distribution of raw images to an adversarial worst-case distribution, on which the classifier \(\phi\) makes incorrect classification and induces high classification losses.
Regarding training specifics, we pre-train a VGG-16 classifier [53]\(\phi\) with cross-entropy loss on the set of clean CIFAR-10 images, and then train three FlowDRO flow blocks with \(\gamma_{k}\equiv 10\) using Algorithm 1. We train FlowDRO in the latent space of a variational auto-encoder as proposed by [54], where the latent space dimension \(d=192\). The architecture of the FlowDRO model on CIFAR10 consists of convolutional layers of 3-128-128-256, followed by convolutional transpose layers of 256-128-128-3. The kernel sizes and strides in the CIFAR10 attacker are 3-3-3-3-4-3 and 1-2-1-1-2-1. We use the softplus activation with \(\beta=20\). Each block is trained for 15 epochs using a batch size of 500, with the Adam optimizer [55] with a constant learning rate of 1e-3.
_Comparison metric_. Denote \(P_{\mathrm{test}}\) as the distribution of raw image-label pairs in the test set. We evaluate the effectiveness of adversarial attack by FlowDRO and PGD on the pre-trained classifier \(\phi\). Specifically, given test images \(X_{\mathrm{test},\mathrm{img}}\) with labels \(Y_{\mathrm{test}}\), we find adversarial samples \(\tilde{X}_{\mathrm{test},\mathrm{img}}\) using different attack mechanisms, where we fix _identical_ amounts of Wasserstein-2 perturbation measured by \(\mathbb{E}_{X_{\mathrm{test}}\sim P_{\mathrm{test}}}\|X_{\mathrm{test}, \mathrm{img}}-\tilde{X}_{\mathrm{test},\mathrm{img}}\|^{2}\) to ensure a fair comparison. Then, given \(Q^{*}_{\mathrm{test}}\) defined by the set of \((\tilde{X}_{\mathrm{test},\mathrm{img}},Y_{\mathrm{test}})\), we evaluate \(\phi\) on \(Q^{*}_{\mathrm{test}}\) based on the sample average of
\[\mathcal{R}(Q^{*}_{\mathrm{test}},\phi) =\mathbb{E}_{X\sim Q^{*}_{\mathrm{test}}}[r(\phi(X_{\mathrm{img}} ),Y)], \tag{40}\] \[\mathrm{Accuracy}(Q^{*}_{\mathrm{test}},\phi) =\mathbb{E}_{X\sim Q^{*}_{\mathrm{test}}}[100\cdot\mathbb{1}(Y= \arg\max_{j}\phi(X_{\mathrm{img}})_{j})]. \tag{41}\]
Hence, under the same amount of perturbation to find \(Q^{*}_{\mathrm{test}}\), a higher risk (40) and a lower accuracy (41) indicate a more effective adversarial attack on \(\phi\). We also evaluate (40) and (41) on the clean test data distribution \(P_{\mathrm{test}}\) for reference.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline & Clean data & Attack by FlowDRO & Attack by PGD-\(\ell_{2}\) & Attack by PGD-\(\ell_{\infty}\) \\ \hline Risk of \(\phi\) in (40) & 2.03 & 32.32 & 6.22 & 10.51 \\ Accuracy of \(\phi\) in (41) & 87.02 & 24.22 & 61.44 & 41.57 \\ \hline \end{tabular}
\end{table}
Table 2: Risk and accuracy of a pre-trained VGG-16 classifier \(\phi\) on clean test data distribution \(P_{\mathrm{test}}\) and adversarially perturbed data distribution \(Q^{*}_{\mathrm{test}}\) by FlowDRO and by PGD under \(\ell_{2}\) and \(\ell_{\infty}\) perturbation. For a fair comparison, we control the same amount of \(W_{2}\) perturbation on the test distribution by different attackers.
Results.Table 2 quantitatively compares the risk and accuracy of the pre-trained classifier \(\phi\) on CIFAR10. We notice that under the same amount of \(W_{2}\) perturbation between raw and perturbed images, \(\phi\) on the adversarial distribution found by FlowDRO yields significantly larger risk and lower accuracy. Hence, we conclude that FlowDRO performs much more effective attacks than the PDG baselines. Meanwhile, Figure 5 visualizes the qualitative changes to test images \(X_{\mathrm{test,img}}\) by FlowDRO and PGD, where the proposed FlowDRO also induces more meaningful contextual changes to the input image. Lastly, Figure 6 visualizes the gradual changes of \(X_{\mathrm{test,img}}\) over blocks and their integration steps by FlowDRO, demonstrating the continuous deformation by our trained flow model on test images \(X_{\mathrm{test,img}}\).
#### 6.2.2 MNIST trajectory illustration
We now apply FlowDRO on finding the worst-case distribution, given a pre-trained LeNet classifier [52]\(\phi\). On this example, we focus on providing more insights into the behavior of FlowDRO without comparing it against other baselines. We train FlowDRO using Algorithm 1 for three blocks with \(\gamma_{k}\equiv 1\), on the latent space of an auto-encoder with latent dimension \(d=16\). The architecture of the flow model consists of fully connected layers of d-256-256-d with softplus activation.
Figure 7 visualizes the gradual and smooth perturbation of test images \(X_{\mathrm{test,img}}\) by FlowDRO. We notice the
Figure 5: Raw and adversarial samples found by FlowDRO and by PGD-\(\ell_{2}\). Captions show prediction by the pre-trained classifier \(\phi\) on raw input images \(X_{\mathrm{test,img}}\) before attack and adversarial samples \(\tilde{X}_{\mathrm{test,img}}\) after attack. FlowDRO results in more meaningful contextual changes of the raw images.
cost-effectiveness and interpretability of FlowDRO. First, the T-SNE embedding in Figure 6(a) shows that FlowDRO tends to push digits around the _boundary_ of certain digit clouds to that of other digit clouds, as such changes take the least amount of transport cost but can likely induce a great increase of the classification loss by \(\phi\). Second, changes in the pixel space in Figure 6(b) show that visible perturbation is mostly applied to the foreground of the image (i.e., actual digits), as changes in the foreground tend to have a higher impact on the classification by \(\phi\).
### Data-driven differential privacy
In this section, we demonstrate the benefit of our FlowDRO DPM in privacy protection. We specifically focus on the examples of image recognition based on MNIST, where the decision function \(\phi\) is specified as pre-trained classifiers. We mainly compare DPM against two APM baselines: APM under Gaussian noise (APM-G) and APM under Laplacian noise (APM-L).
Figure 6: Trajectory of FlowDRO adversarial attacks on different \(X_{\mathrm{test,img}}\) (shown as columns) to \(\tilde{X}_{\mathrm{test,img}}\). We visualize the changes as rows over three FlowDRO blocks, each of which breaks \([0,1)\) into three evenly spaced sub-intervals, resulting in nine integration steps along the perturbation trajectory. Captions on the top and bottom indicate predictions by the pre-trained \(\phi\) on raw \(X_{\mathrm{test,img}}\) and final perturbed adversarial \(\tilde{X}_{\mathrm{test,img}}\).
#### 6.3.1 MNIST raw digit classification
We first describe the precise DP setup and comparison metrics and then show the results against the baselines. This example directly follows from the adversarial attack example on MNIST in section 6.2.2. Specifically, the decision function \(\phi\) is a pre-trained LeNet classifier on raw MNIST digits with 10 classes, and we train three continuous flow blocks on the class of all digits using Algorithm 1.
_DP setup_. We describe the following components of a hypothesis-testing-based DP framework: (1) the definition of neighboring datasets \(D\) and \(D^{\prime}\), where the two datasets differ in exactly one record; (2) the choice of the query function \(q\) taking the datasets as inputs; (3) the privacy-protection randomized mechanism \(M_{\varepsilon}\) applied to queries, under the constraint that \(Q\in\mathcal{B}_{\varepsilon}(P)\) for \(\mathcal{B}_{\varepsilon}(P)\) defined in (4); (4) the hypothesis testing problem with the decision function \(\phi\) to distinguish between \(D\) and \(D^{\prime}\). Notation-wise, we assume \(X\sim P\) is a pair \(X=(X_{\mathrm{img}},Y)\), where \(X_{\mathrm{img}}\) is the raw image and \(Y\in\{0,\dots,9\}\) is the corresponding label.
For (1), we let each dataset \(D\) contain one image-label pair \(X\sim P\) so that two datasets \(D\) and \(D^{\prime}\) are naturally neighbors in terms of \(X\). In other words, \(D\) and \(D^{\prime}\) contain digits either from the same class or from different classes. For (2), given \(D=\{X\}\), we let the query function \(q(D)=X_{\mathrm{img}}\) so that it returns the raw image of the image-label pair \(X\). For (3), the privacy-protection randomized mechanism \(M_{\varepsilon}\) either applies our trained FlowDRO model to \(q(D)\) or adds random Gaussian or Laplacian noises to \(q(D)\), both under the pre-specified amount of perturbation controlled
Figure 7: FlowDRO perturbation of MNIST digits over blocks and integration steps. Figure (a) visualizes the perturbation trajectories from digits 0 to 8 under 2D T-SNE embedding. Figure (b) shows the trajectory in pixel space, along with the corresponding \(W_{2}\) distance between original and perturbed images over integration steps.
by \(\varepsilon\). For (4), given a privacy-protected image \(M_{\varepsilon}(D)\) with the (unknown) label \(Y\), we consider the following sets of \(M_{\varepsilon}(D)\):
\[\begin{split}\text{supp}(M_{\varepsilon}(D))&= \left\{\begin{array}{ll}\varepsilon&\text{if }\varepsilon\leq\varepsilon\\ \varepsilon&\text{if }\varepsilon\leq\varepsilon\end{array}\right.\end{split} \tag{44}\]
Figure 8: Differential privacy example of raw MNIST digit recognition. We control the amount of \(W_{2}\) perturbation by DPM, APM-G, and APM-L to be identical for a fair comparison. Figures (a)-(c) visualize privacy-protected queries \(M_{\varepsilon}(D)\) by DPM, APM-G, and APM-L over different \(\varepsilon\). Figure (d) examines the corresponding type-I and type-II errors defined in (45) by these mechanisms.
hypotheses depending on labels \(k\in\{0,\dots,9\}\):
\[H_{0}(k):Y\neq k\text{ and }H_{1}(k):Y=k. \tag{42}\]
Hence, the goal of a randomized mechanism \(M_{\varepsilon}\) in this case is not to let the classifier \(\phi\) correctly classify the true class of a privacy-protected test image \(M_{\varepsilon}(D_{\mathrm{test}})\).
_Comparison metrics._ We measure the performance of different privacy-protecting randomized mechanisms \(M_{\varepsilon}\) at radius \(\varepsilon\) by the type-I and type-II errors of the classifier \(\phi\) on testing (42) over different classes \(k\). Recall the classifier \(\phi\) maps an arbitrary input image to a probability distribution over the 10 classes. Given a test dataset \(D_{\mathrm{test}}=\{X_{\mathrm{test}}\}\) with \(X_{\mathrm{test}}\sim P_{\mathrm{test}},\) we let \(\hat{Y}(M_{\varepsilon})=\arg\max_{j=0,\dots,9}\phi(M_{\varepsilon}(D_{ \mathrm{test}}))_{j}\) be the predicted class of \(M_{\varepsilon}(D_{\mathrm{test}})\) by \(\phi\). Then, for this particular setting, the type-I error \(\alpha(k,M_{\varepsilon})\) and type-II error \(\beta(k,M_{\varepsilon})\) are computed as
\[\alpha(k,M_{\varepsilon}) =\mathbb{P}(\hat{Y}(M_{\varepsilon})=k|Y\neq k) \tag{43}\] \[\beta(k,M_{\varepsilon}) =\mathbb{P}(\hat{Y}(M_{\varepsilon})\neq k|Y=k), \tag{44}\]
where the probability is taken over test image-label pairs \(X_{\mathrm{test}}=(X_{\mathrm{test,img}},Y_{\mathrm{test}})\) for \(X_{\mathrm{test}}\sim P_{\mathrm{test}}\). We then measure the performance of \(M_{\varepsilon}\) by taking the average of (43) and (44) over \(k\):
\[\alpha(M_{\varepsilon})=\sum_{k=0}^{9}\alpha(k,M_{\varepsilon})/10,\;\beta(M_{ \varepsilon})=\sum_{k=0}^{9}\beta(k,M_{\varepsilon})/10. \tag{45}\]
If a mechanism \(M_{\varepsilon}\) provides strong privacy, we should expect high values of \(\alpha(M_{\varepsilon})\) and \(\beta(M_{\varepsilon})\), as the classifier \(\phi\) would make high errors on privacy-protected images \(M_{\varepsilon}(D_{\mathrm{test}})\).
_Results._ Figure 9 shows the comparative results by the proposed FlowDRO DPM against the APM-G and APM-L baselines. Qualitatively, we notice in (a)-(c) that under the same amount of perturbation \(\varepsilon\), DPM induces meaningful contextual changes to the queries \(q(D)\) (i.e., changing a digit 0 to a digit 8). In contrast, the additive mechanisms only blur the queries slightly. Quantitatively, as shown in (d), such difference helps protect privacy against the decision function \(\phi\): the type-I and type-II errors of \(\phi\) under our proposed DPM are much higher than those of \(\phi\) under the additive perturbation mechanisms. As a result, our DPM is an empirically more effective privacy-protecting mechanism under the same amount of average perturbation as measured in \(\varepsilon\).
#### 6.3.2 MNIST missing digit detection
We consider an alternative setting that is a type of _membership inference attack_ problem [56] and can be viewed as a more natural DP task. In short, we construct _average_ images from digits of 9 classes, where the goal of the decision function \(\phi\), which is still a 10-class classifier, is to determine the class of the _missing_ digit based on a given average image. We follow the notations in section 6.3.1 when describing the setup and metrics.
_DP setup and comparison metric._ We define (1)-(4) in this new setting. For (1), we define a dataset \(D=\{X_{1},\dots,X_{9}:X_{i}=(X_{\mathrm{img},i},Y_{i})\sim Q,Y_{i}\neq Y_{j}\) if \(i\neq j\}\). Thus, the dataset \(D\) has precisely 9 random image-label pairs, one from
each distinct class. Given two datasets \(D\) and \(D^{\prime}\), they are neighbors in the sense that the set of labels \(\{Y_{i}\}\) in \(D\) and \(D^{\prime}\) differ by at most one entry. For (2), the query function \(q(D)=\sum_{i=1}^{9}X_{\mathrm{img},i}/9\), where the sum is taken pixel-wise so that \(q(D)\) returns an average image of the same dimension as raw images. For (3), the privacy-protection mechanism is the same as the previous one, but the privacy-protection mechanism is the same as the previous one.
Figure 9: Differential privacy example of MNIST missing digit detection. We present similar sets of figures as in Figure 8, where the main difference lies in the definition of dataset \(D\) and query function \(q(D)\), which returns an average image of images in \(D\).
\(M_{\varepsilon}\) either applies our trained FlowDRO model to the average image \(q(D)\) or adds random noises to it. For (4), given the true missing label \(Y(D)\) of the dataset \(D\), we then consider the following sets of hypotheses depending on the label \(k\in\{0,\ldots,9\}\):
\[H_{0}(k):Y(D)\neq k\text{ and }H_{1}(k):Y(D)=k. \tag{46}\]
In this new setup, we still evaluate the effectiveness of a randomized mechanism \(M_{\varepsilon}\) using (45), where the probabilities of type-I and type-II errors are taken over test datasets \(D_{\mathrm{test}}\), each of which contains nine random test image-label pairs \(X_{\mathrm{test}}\sim P_{\mathrm{test}}\).
We also explain how we train the classifier \(\phi\) and the flow model \(T\) in this new setting. The architecture of \(\phi\) is still based on convolutional layers, where the training data of \(\phi\) consists of \(\{q(D),Y(D)\}\), which are the set of raw average images \(q(D)\) and corresponding missing labels \(Y(D)\). We then train \(\phi\) using empirical risk minimization under the cross-entropy loss by sampling mini-batches of datasets \(D\). The classifier \(\phi\) is thus trained to determine the missing label \(Y(D)\) given the average image. To train the flow model \(T\) using Algorithm 1, we adopt the identical network architecture as in the previous MNIST examples and train three blocks given the classifier \(\phi\) with \(\gamma_{k}\equiv 2\).
_Results._ Figure 9 shows both qualitative and quantitative comparisons of our proposed DPM against APM-G and APM-L in this more challenging setting. The interpretations of results are similar to those in section 6.3.1. Specifically, we notice more contextual changes by DPM in subfigure (a) than APMs in subfigures (b) and (c), and the higher type-I and type-II errors in subfigure (d) demonstrate the benefit of DPM at protecting privacy against a pre-trained decision function \(\phi\).
## 7 Summary and Discussion
In this paper, we have presented a computational framework called FlowDRO for solving the worst-case distribution, the Least Favorable Distributions (LFD), in Wasserstein Distributionally Robust Optimization (WDRO). Specifically, the worst-case distribution is found as the push-forward distribution induced by our FlowDRO model on the data distribution, and the entire probability trajectory is continuous and invertible due to the use of flow models. We demonstrate the utility of FlowDRO in various applications of DRO, including adversarial attacks of pre-trained image classifiers and differential privacy protection through our distributional perturbation mechanism. FlowDRO demonstrates strong improvement against baseline methods on high-dimensional data.
There are a few future directions to extend the work. Here, we set aside the min-max exchange issue here for the following reasons. It has been shown in the original contribution [9] that when the reference measure (i.e., the center of the uncertainty set) is empirical distribution and thus discrete, the problem (2) has _strong duality_: one can exchange the min and max in the formulation and the solutions for the primal and the dual problems are the same when the loss function is convex-concave in the vector space. The results are shown leveraging the fact that the worst-case distributions for the Wasserstein DRO problem are discrete when the reference measure is discrete, thus reducing the infinite-dimensional optimization problem to a finite-dimensional minimax problem. Thus, one can invoke the standard minimax theorem (see, e.g., [57]). Here, since later on we restrict the LFD to be a continuous function, the strong duality proof in [9] no longer carries through, and one has to extend the minimax theorem (e.g., [58] and [57] using Kakutani theorem) for the most general version involving functionals that are geodesic convex on the manifold
of distribution functions; the proof is rather technical, and we leave it for future work. Second, theoretically, how to formalize our distributional perturbation mechanism on high-dimensional queries to make it satisfy a DP criterion is also an important question. Third, algorithm-wise, one can potentially develop an alternating algorithm to iterate between finding LFD for a given algorithm \(\phi\) and improving the algorithm by the outer minimization for given LFDs. However, the convergence of such an alternating minimization algorithm is unclear since one has to extend the notion of concave-convexity of a loss from vector space to general function space and the manifold of distribution functions. Lastly, our approach is general and does not rely on neural networks, and in future work, one can potentially extend to other alternative representation of the optimal transport maps that works particularly well for low-dimensional and small sample settings.
## Acknowledgement
This work is partially supported by an NSF CAREER CCF-1650913, NSF DMS-2134037, CMMI-2015787, CMMI-2112533, DMS-1938106, DMS-1830210, and the Coca-Cola Foundation. XC is also partially supported by NSF DMS-2237842 and Simons Foundation. The authors would like to thank the helpful discussion with Dr. Daniel Kuhn, Dr. Jose Blanchet, Dr. Arkadi Nemirovski, Dr. Alexander Shapiro, and Dr. Georgia-Ann Klutke.
|
2304.12299 | Developing a cost-effective emulator for groundwater flow modeling using
deep neural operators | Current groundwater models face a significant challenge in their
implementation due to heavy computational burdens. To overcome this, our work
proposes a cost-effective emulator that efficiently and accurately forecasts
the impact of abstraction in an aquifer. Our approach uses a deep neural
operator (DeepONet) to learn operators that map between infinite-dimensional
function spaces via deep neural networks. The goal is to infer the distribution
of hydraulic head in a confined aquifer in the presence of a pumping well. We
successfully tested the DeepONet on four problems, including two forward
problems, an inverse analysis, and a nonlinear system. Additionally, we propose
a novel extension of the DeepONet-based architecture to generate accurate
predictions for varied hydraulic conductivity fields and pumping well locations
that are unseen during training. Our emulator's predictions match the target
data with excellent performance, demonstrating that the proposed model can act
as an efficient and fast tool to support a range of tasks that require
repetitive forward numerical simulations or inverse simulations of groundwater
flow problems. Overall, our work provides a promising avenue for developing
cost-effective and accurate groundwater models. | Maria Luisa Taccari, He Wang, Somdatta Goswami, Jonathan Nuttall, Xiaohui Chen, Peter K. Jimack | 2023-03-05T15:05:13Z | http://arxiv.org/abs/2304.12299v1 | # Developing a cost-effective emulator for groundwater flow modeling using deep neural operators
###### Abstract
Current groundwater models face a significant challenge in their implementation due to heavy computational burdens. To overcome this, our work proposes a cost-effective emulator that efficiently and accurately forecasts the impact of abstraction in an aquifer. Our approach uses a deep neural operator (DeepONet) to learn operators that map between infinite-dimensional function spaces via deep neural networks. The goal is to infer the distribution of hydraulic head in a confined aquifer in the presence of a pumping well. We successfully tested the DeepONet on four problems, including two forward problems, an inverse analysis, and a nonlinear system. Additionally, we propose a novel extension of the DeepONet-based architecture to generate accurate predictions for varied hydraulic conductivity fields and pumping well locations that are unseen during training. Our emulator's predictions match the target data with excellent performance, demonstrating that the proposed model can act as an efficient and fast tool to support a range of tasks that require repetitive forward numerical simulations or inverse simulations of groundwater flow problems. Overall, our work provides a promising avenue for developing cost-effective and accurate groundwater models.
keywords: Deep neural operator, Groundwater flow, Surrogate modelling, Deep learning +
Footnote †: journal: Journal of LaTeX Templates
## 1 Introduction
The computational efficiency of existing numerical models for groundwater becomes an issue when dealing with large-scale or highly nonlinear systems, particularly where decision-making relies on real-time simulation inference. The nonlinear nature of many groundwater problems necessitates the use of an iterative process to solve equations, while the computational cost can escalate rapidly when re-calibrating the initial model to incorporate new observational data [1]. Consequently, the heavy computational burden of groundwater models can limit their implementation in decision-making processes, as is the case for the National Water Model (NWM) used to manage drought risk in the Netherlands, for example [2].
In recent years, the use of deep learning techniques for functional approximation has gained significant attention due to their potential in developing efficient low-fidelity models that can approximate expensive numerical methods in a wide range of applications [3; 4; 5; 6; 7]. One area where such approximations have
been successful is in using deep convolutional neural networks (CNNs) as surrogates for dynamic multiphase flow problems [8; 9; 10; 11]. The deep CNN-based surrogate models treat the problem as an image-to-image regression, where the input and output functions are represented as images and the deep CNNs learn the mapping between them. The resulting models are capable of accurately predicting pressure and saturation fields with highly heterogeneous aquifer conductivity fields at arbitrary time instances. However, employing deep CNNs as surrogate models has several challenges. It is limited to problems where the input and output functions are defined on a lattice grid, and the training data encompasses all grid values within the computational domain. The solution cannot be evaluated at any arbitrary query point lying within the trained domain. The accuracy of the model and its architecture both depend on the mesh resolution, meaning that the model must be retrained for different mesh resolutions to maintain its accuracy [12]. Furthermore, independent simulations need to be performed for every different domain geometry, input parameter set, or initial/boundary conditions (I/BCs).
For the generalization of the solution, we need to look into higher levels of abstraction to learn the mapping from an input functional space to an output functional space (and not a vector space as in functional regression). To that end, the universal approximation theorem for operators [13] is suggestive of the potential application of deep neural networks in learning nonlinear operators from data. The neural operators, introduced in 2019 in the form of deep operator networks (DeepONet)[14], learns the mapping between two infinite dimensional Banach spaces, providing a unique simulation framework for real-time prediction of multi-dimensional complex dynamics. Once trained, the DeepONet is discretization invariant, which means the same network parameters are shared across different parameterizations of the underlying functional data, and hence can be used to obtain the solution at any arbitrary spatial and temporal location (interpolation). Furthermore, a recent theoretical work [15] has shown that DeepONet can break the curse of dimensionality in the input space.
In this paper, we demonstrate through multiple problem setups that DeepONet provides fast and accurate inferences for both explicit as well as implicit operators, and hence can be employed as an efficient surrogate model in approximating various quantities of interest in the domain of subsurface flows. The readers can refer to section 3 for details of the method and an overview of related works. Specifically, we employ the DeepONet framework to design an efficient emulator model to estimate the impact of abstraction in the distribution of hydraulic heads in a heterogeneous confined aquifer. We demonstrate, for the first time, that DeepONet can be applied effectively to groundwater problems and we illustrate some of the potential benefits of this learning approach in the domain of subsurface flows. The proposed method can efficiently
learn solutions of both the forward and inverse problems, the latter being notoriously difficult and time-consuming with traditional methods. We successfully employ DeepONet for fast inference of a nonlinear system, which would require the use of an iterative methods using standard numerical solvers. Finally, we propose a modification to the vanilla DeepONet in order to successfully predict the distribution of spatially varying groundwater heads given a well is randomly positioned in the heterogeneous aquifer.
The paper is organized as follows. Section 2 discusses the problem statement for groundwater flows and the four experiments analysed in this study: \(i)\) mapping from the hydraulic conductivity field to the groundwater field, \(ii)\) the mapping from a pumping well location and the hydraulic conductivity field to the groundwater field, \(iii)\) learning an inverse mapping from the hydraulic head to the hydraulic conductivity field, and finally \(iv)\) a nonlinear problem with a head-dependent boundary condition. Section 3 summarizes the DeepONet approach and provides an overview of the model setup, along with training of the surrogate model. In section 4, we apply the neural operator to the four experimental setups discussed above while section 5 summarizes the findings and outlines the future directions.
## 2 Problem statement
This work focuses on learning the non-linear operator that is represented by the solution of the governing equation of groundwater flow employing a neural operator. In this section, we introduce the partial differential equation (PDE) governing the ground water flow and the computational limitations of a typical numerical solver. We then discuss four experiments which have been designed to illustrate the range of applicability of the proposed method.
### Governing Partial Differential Equation
The governing PDE that defines the movement of groundwater on a two-dimensional space combines the Darcy's Law and the principle of conservation of mass [16], and is written as:
\[S_{s}\frac{\partial h}{\partial t}-\nabla\cdot(K\nabla h)=q_{s}, \tag{1}\]
which is constrained by certain boundary conditions. In Equation 1, \(h\) is the hydraulic head [L], \(K\) is spatially varying hydraulic conductivity field [L/T], \(q_{s}\) is the volumetric flux of groundwater sources and sinks per unit volume [1/T], \(S_{s}\) is the specific storage [1/L], and t [T] is time. A table describing all the notations can be found in A.
The U.S. Geological Survey (USGS) finite-difference flow model, MODFLOW [16] has been broadly used for over the last 30 years by researches, consultants, and governments to efficiently simulate groundwater flow [17]. The Environment Agency, which is the national environmental regulator for England and Wales, uses MODFLOW to analyze the impacts of various scenarios on the hydrological and geohydrorological behavior of the principal aquifers [1]. The model enables the agency to investigate how the aquifer system responds to changes in water withdrawal rates, variations in recharge rates, and the introduction of new recharge or discharge sources. This information helps the agency make informed decisions about managing the country's water resources and protecting the environment. The computational time required to run a groundwater model with a spatial grid of 200m and a temporal resolution of one to three time steps per month, for a time period ranging from months to years, can be prohibitively expensive, especially when solving optimization problems with the aim of maximizing groundwater withdrawals given some constraints. Furthermore, solving an inverse problem for inferring aquifer material properties requires multiple simulations either to discover the missing physics or to calibrate the free parameters of the formulated inverse problem. Such computational burden motivates the development of a deep neural network based emulators to provide predictions with high accuracy while substantially reducing the computational costs at run time. In this work, we have considered a deep neural operator based surrogate model that is a viable alternative to numerically approximating the governing equation for multiple input functions, and thus efficiently forecast the impact of abstraction in an aquifer.
### Operator learning task
An operator, denoted by \(\mathcal{G}\), is a mathematical function that takes one or more functions as input and produces another function as output. Given an input function \(\mathbf{u}(\mathbf{x})\in\mathbb{R}^{d_{x}}\) and an output function \(\mathbf{v}(\mathbf{x})\in\mathbb{R}^{d_{y}}\), the operator \(\mathcal{G}\) is defined as \(\mathcal{G}:\mathbf{u}(\mathbf{x})\in\mathbb{R}^{d_{x}}\mapsto\mathbf{v}(\mathbf{x})\in \mathbb{R}^{d_{y}}\), where \(\mathbb{R}^{d_{x}}\) and \(\mathbb{R}^{d_{y}}\) represents the dimensionality of the inputs and the outputs, respectively and \(\mathbf{x}\) denotes the spatial and temporal coordinates which defines the output space. A PDE may be regarded as an operator: the input space consists of the functions required to specify the problem definition, such as initial and boundary conditions (ICs/BCs), forcing functions and coefficients (which may vary spatially and temporally). The output space is the Sobolev space on which the solution of the PDE lies. Our goal is to approximate the PDE introduced in 2.1 with a neural operator \(\mathcal{G}_{\mathbf{\theta}}\) where \(\mathbf{\theta}\) collectively represents the parameters of the neural operator, the weights, \(\mathbf{W}\) and the biases, \(\mathbf{b}\). The mathematical formulation of DeepONet is introduced in subsection 3.1. We demonstrate the effectiveness of approximating subsurface flows with a neural operator using four computational experiments which are introduced in the next section.
### Computational Experiments
The focus of this study is to develop a fast emulator for groundwater flow. We aim to demonstrate that the proposed framework can be used to efficiently estimate the pumping-induced change of the groundwater level, relative to the level before pumping, in a highly heterogeneous confined aquifer (Experiments \(E_{1}\) and \(E_{2}\)). Furthermore, we employ the framework for solving inverse problems (Experiment \(E_{3}\)), which require a large number of forward numerical simulations if a traditional numerical solver is employed. Finally, we solve a nonlinear system (Experiment \(E_{4}\)), where the solution is conventionally obtained through an iterative process, and hence is heavily time-consuming. This section introduces the four computational experiments (\(E_{1}\) - \(E_{4}\)) designed in the context of subsurface flow presented in subsection 2.1. A visual description of different experiments considered in this work is presented in Figure 1. Details of the data generation to consider heterogeneity within the aquifer are presented in B.
The description of the experiments, E# are as follows.
* **E1: Forward problem, \(\mathcal{G}_{\mathbf{\theta}}:K(\mathbf{x})\mapsto h(\mathbf{x})\)**: The goal of this experiment is to learn the solution operator, \(\mathcal{G}_{\mathbf{\theta}}\) that maps the spatially varying conductivity field, \(K(\mathbf{x})\) to the hydraulic head, \(h(\mathbf{x})\) at some subsequent timestep. In other words, the learning goal is to infer the distribution of hydraulic head in a heterogeneous confined aquifer with one fully penetrating well that starts pumping at a constant rate \(q_{s}\) at \(t=0\). The solution is the prediction of the distribution of hydraulic head \(h(\mathbf{x})\) at the time instance \(t=T\) given a spatially varying hydraulic conductivity field \(K(\mathbf{x})\) and under the assumption of no prescribed flows or heads along the boundary of the domain.
* **E2: Multiple input functions, \(\mathcal{G}_{\mathbf{\theta}}:[K(\mathbf{x}),x_{P}]\mapsto h(\mathbf{x})\)**: The goal of this experiment is to learn a solution operator to approximate the hydraulic head at a time \(T\), given spatially varying hydraulic conductivity \(K(\mathbf{x})\) and the location of the pumping well \(x_{P}\) as input functions.
* **E3: Inverse problem, \(\mathcal{G}_{\mathbf{\theta}}:[h(\mathbf{x},t),\mathcal{K}_{0}(\mathbf{x})]\mapsto\mathcal{ K}(\mathbf{x})\)**: The aim of this experiment is to learn an inverse operator that approximates the spatially varying hydraulic conductivity field \(K(\mathbf{x})\) given the hydraulic head on a domain at different time instances. Understanding that the problem does not have a non-trivial solution, we acknowledge that having more observations of the solution field increases the chances of finding a unique solution that the model converges to. To constrain the solution space in the inverse modeling process, we incorporate sparse observations of hydraulic conductivity \(K_{0}(\mathbf{x})\) as additional inputs.
* **E4: Nonlinear system, \(\mathcal{G}_{\mathbf{\theta}}:K(\mathbf{x})\mapsto h(\mathbf{x})\):** The scenario of this experiment is that a pumping well is located in the center of the domain and a head-dependent well is fixed at a different location within the domain. While a pumping well has specified flow boundaries,_i.e._, the flow is not a function of the head, the specified flow of the head-dependent well is calculated as a function of the hydraulic head. The goal of learning the operator \(\mathcal{G}^{NL}\) is to approximate the mapping between the hydraulic conductivity \(K(\mathbf{x})\) of the heterogeneous aquifer and the distribution of hydraulic head \(h(\mathbf{x})\) directly. In a traditional solver, nonlinearities are resolved using an iteration loop by repeatedly formulating and solving the governing equation using heads from the previous iteration until the residual of the governing equation is within a specified tolerance. The proposed neural operator-based solution eliminates the need for
## 3 Solution operator approximation methods
In this section, we introduce the architecture of the deep operator network, DeepONet and discuss about some of the recent works where neural operators have been employed to solve PDEs. Consider two separable
Figure 1: A schematic representation of the experiments under consideration in this work. The input/output functions and representative plots to demonstrate the task that the operator learns are shown.
Banach spaces, \(\mathbf{u}=\mathbf{u}(\Omega;\mathbb{R}^{d_{x}})\) and \(\mathbf{v}=\mathbf{v}(\Omega;\mathbb{R}^{d_{y}})\), where \(\Omega\) is a bounded open set in \(\mathbb{R}^{D}\), and \(\mathbb{R}^{d_{x}}\) and \(\mathbb{R}^{d_{y}}\) are the dimensionality of the inputs and the outputs, respectively. The nonlinear map \(\mathcal{G}\), arising from the solution of a time-dependent PDE (Equation 1) at some time \(T\), maps from \(\mathbf{u}\) to \(\mathbf{v}\). The objective is to approximate the nonlinear operator \(\mathcal{G}\) through a parametric mapping as \(\mathcal{G}:\mathbf{u}\times\mathbf{\Theta}\rightarrow\mathbf{v}\) or \(\mathcal{G}_{\mathbf{\theta}}:\mathbf{u}\rightarrow\mathbf{v}\), where \(\mathcal{G}_{\mathbf{\theta}}\) represents the parametric mapping and \(\mathbf{\Theta}\) is a finite-dimensional parameter space. The optimal parameters \(\mathbf{\theta}^{*}\) are found by training a neural operator using backpropagation on a dataset of \(\{\mathbf{u}_{j},\mathbf{v}_{j}\}_{j=1}^{N}\) generated on a discretized domain.
### Deep Operator Network
The universal approximation theorem for operators by proposed by Chen and Chen [13] states that shallow neural networks, of sufficient width, are capable of approximating any nonlinear continuous functional or operator to arbitrary accuracy. This theorem is based on a particular neural network model which is composed of two concurrent sub-networks and the outputs of the networks are combined by an inner product. Motivated by the universal approximation theorem, the deep operator Network (DeepONet) [14] was proposed to learn the mapping between Banach spaces with infinite dimensions. The DeepONet architecture consists of two deep neural networks (DNNs): the branch net encodes the input function, \(\mathbf{u}\), at fixed sensor points, \(\{x_{1},x_{2},\ldots,x_{m}\}\), while the trunk net encodes the information related to the spatio-temporal coordinates, \(\zeta=\{x_{i},y_{i},t_{i}\}\), at which the solution operator is evaluated to compute the loss function. The learning process takes place in a general setting, meaning that the sensor locations (\(x_{i=1}^{m}\)) at which the input functions, \(\mathbf{u}\) are evaluated don't have to be evenly spaced, but they must be consistent across all input function evaluations. The branch net takes \([\mathbf{u}(x_{1}),\mathbf{u}(x_{2}),\ldots,\mathbf{u}(x_{m})]^{T}\) as input and outputs \([b_{1},b_{2},\ldots,b_{q}]^{T}\in\mathbb{R}^{q}\), while the trunk network takes \(\zeta\) as input and produces \([t_{1},t_{2},\ldots,t_{q}]^{T}\in\mathbb{R}^{q}\) as output. These two subnetwork outputs are combined through a dot product to produce the desired result. A bias (\(b_{0}\in\mathbb{R}\)) is added in the final stage to increase expressiveness, resulting in \(\mathcal{G}(\mathbf{u})(\zeta)\approx\sum_{i=k}^{q}b_{k}t_{k}+b_{0}\). The optimized values of the trainable parameters \(\mathbf{\theta}\) can be obtained by minimizing a mean square error loss function. Figure 2 illustrates the architecture of the vanilla DeepONet proposed in [14].
### Related Works
DeepONet has shown remarkable success in diverse fields of applications like approximating irregular ocean waves [18], learning stiff chemical kinetics [19], bubble dynamics [20], microstructure evolution [21]_etc._, where the network is trained using large datasets for solving a forward mode problem. Additionally, some recent work has been focused on learning the mapping of multiple input functions to the solution field [22; 23]. Prior work of DeepONet in the area of subsurface flow problems has been to learn the mapping from the conductivity field to the hydraulic head in simple and complex geometries through data-driven [24] and physics informed approaches [25; 26; 27]. In both these works, the PDE governing the subsurface flow (Darcy's equation) has been employed as an application to demonstrate the framework proposed in the corresponding work. In another study, an operator level transfer learning framework [28] was proposed, where Darcy's equation was employed as an example to demonstrate the approach. The idea behind operator transfer learning is to train a source model with sufficient labeled data from a source domain under a standard regression loss and transfer the learned variables to a second target model, which is trained with very limited labeled data from a target (different but related) domain under a hybrid loss function that is the sum of the regression loss and a conditional embedding operator discrepancy loss. Furthermore, another operator-level transfer learning framework was proposed in [29], where Darcy's equation was solved on an L-shaped
Figure 2: Schematic representation of the network architecture of vanilla DeepONet employed in this work. In this work, we have considered a CNN as a branch net and a fully connected feed forward neural network as trunk net. The outputs of the branch and the trunk networks are combined through an inner product to approximate the solution operator.
domain (source) and transferred to an L-shaped domain with a hole. The implementation of a hybrid solver (HINTS) approach could directly handle the change in target geometry and does not require retraining of the operator. None of the works discussed above deal with the additional specific storage term in Equation 1 and is not dedicated to employing the operator for multiple scenarios in subsurface flows. Furthermore, for the first time, the operator network is designed to solve an inverse problem in **E3** to learn the hydraulic conductivity field, \(K(\mathbf{x})\), from the hydraulic head, \(h(\mathbf{x})\) and sparse observations of \(K(\mathbf{x})\).
In an independent work, Lanthaler et al. provide a theoretical analysis of the approximation and generalization errors of DeepONet [15]. They accomplish this by decomposing the error into encoding, approximation, and reconstruction errors and theorizing the lower and upper bounds of the total error. Their analysis indicates that the accuracy of DeepONet can deteriorate in the presence of discontinuities or sharp features that are not fixed in space, while DeepONet can accurately learn operators that produce discontinuities or sharp features fixed in space. This is in-line with our observation and we propose a modified architecture to deal with such scenarios. According to Hadorn [30], it struggles to learn sharp features for each location without increasing the basis functions from the trunk net. Unfortunately, increasing basis functions becomes infeasible for high dimensional problems. An effective modification to overcome this bottleneck of DeepONet to deal with translational invariance is to eliminate the invariance. Both Shift-DeepONet [30] and FlexDeepONet [31] add pre-transformation sub-networks to shift, rotate and scale the data. The input functions to the branch network are passed through these additional networks, which learns the re-scaling and re-centering of these functions. A transformation layer combines the learnt shift, rotation and scale parameters with the spatial coordinates of the evaluation points: the outputs of this layer are the inputs of the trunk network, such that the basis functions of the trunk net depend on the input functions. Similarly, another extension of DeepONet introduces two encoders one each to the inputs of the branch and the trunk network [32]. The embedded features are inserted into the hidden layer of both sub-networks using point-wise multiplication. This novel architecture appears to be more resilient than the conventional DeepONet architecture to vanishing gradient pathologies.
In the current work, we consider the vanilla version of the DeepONet as firstly introduced in [14] which has the benefit of a simpler architecture. In the later part of the work, we propose a modified version of DeepONet in order to overcome the limitations of the vanilla DeepONet in dealing with a source term which is not always defined at the same location, and leads to sharp gradients in the solution field. The next section presents the details of the employed network architectures.
### Network architecture and training
The branch net is considered as a convolution neural network (CNN) that takes as input the functions, \(\mathbf{u}\) evaluated on a lattice grid of size \(32\times 32\), which is consistent for all the experiments carried out in this work. For experiments **E1** and **E4**, we have a the CNN with one input channel, however, for **E2** and **E3** we have two input channels, where the second channel denotes the location of the well and sparse observations of the target hydraulic conductivity, respectively. The inputs to the trunk net are the coordinate of 128 evaluation points, which are randomly sampled in the domain and are distinct for each training sample. The details of the network architecture for all the experiments is provided in Table 1. A schematic representation of the network is shown in Figure 2
In the vanilla DeepONet architecture, the solution operator is approximated as the sum over the products of the outputs of the branch and the trunk net. However, for experiment \(E2\), we noticed that informing the trunk network about the location of the pumping well (input function) is key for good learning. For this reason, we propose a novel DeepONet architecture. As illustrated in Figure 3, each output of the pooling layers of the branch network is combined with the output of each layer of the trunk net. The tensor coming from the branch net is flattened and followed by a dense NN layer with _Sigmoid_ activation function. Given the fact that the resulting vector (whose weights can be interpreted as coefficients) has the same dimension of the corresponding hidden layer of the trunk net, the two vectors can be merged via an inner product.
\begin{table}
\begin{tabular}{l l l l l l} \hline & **Layer** & **Kernel Size** & **Width** & **Activation** & **Output** \\ \hline \multicolumn{5}{c}{Branch Network} \\ \hline
1 & Conv2D & \(5\times 5\) & 16 & ReLU & \(32\times 32\times 16\) \\
2 & Avg-Pool & \(2\times 2\) & & & \(16\times 16\times 16\) \\
3 & Conv2D & \(5\times 5\) & 8 & ReLU & \(16\times 16\times 16\) \\
4 & Avg-Pool & \(2\times 2\) & & & \(8\times 8\times 16\) \\
5 & Conv2D & \(5\times 5\) & 4 & ReLU & \(8\times 8\times 16\) \\
6 & Avg-Pool & \(2\times 2\) & & & \(4\times 4\times 16\) \\
7 & Conv2D & \(5\times 5\) & 4 & ReLU & \(4\times 4\times 16\) \\
8 & Avg-Pool & \(2\times 2\) & & & \(2\times 2\times 16\) \\
9 & Conv2D & \(5\times 5\) & 4 & ReLU & \(2\times 2\times 64\) \\
10 & Avg-Pool & \(2\times 2\) & & & Reshaped to \(1\times 64\) \\
11 & Fully connected & & 1024 & Tanh & \(1\times 256\) \\
12 & Fully connected & & 1024 & Tanh & \(1\times 512\) \\
13 & Fully connected & & 1024 & & \(1\times 150\) \\ \hline \multicolumn{5}{c}{Trunk Network} \\ \hline
14 & Fully connected & & 150 & ReLU & \(150\times 1\) \\
15 & Fully connected & & 150 & ReLU & \(150\times 1\) \\
16 & Fully connected & & 150 & ReLU & \(150\times 1\) \\
17 & Fully connected & & 150 & ReLU & \(150\times 1\) \\
18 & Fully connected & & 150 & ReLU & \(150\times 1\) \\ \hline \end{tabular}
\end{table}
Table 1: Architecture details of vanilla DeepONet employed for all the experiments (**E1**-**E4**).
The result propagates through the following layers of both the trunk net and, after being reshaped and concatenated to the output of the pooling layer, the branch net. We demonstrate that this architecture can accurately predict the high gradients of the hydraulic head in the experiment **E2**, for which the vanilla DeepONet gives a smoother prediction.
The implementation is carried out using the JAX framework on a single NVIDIA GeForce RTX 3080. For all test cases, the datasets consist of \(N_{train}=1000\) training data and \(N_{test}=200\) test data. The network is trained using the Adam optimizer [33] with an initial learning rate of \(5\times 10^{-4}\) which exponentially decays every 1000 iterations with a rate of 0.9 and a batch size of 100 for maximum \(10^{5}\) iterations. We monitor the loss after every 100 iterations and trigger an early stopping if the value of the loss for the test data does not decrease after \(2\times 10^{4}\) iterations.
## 4 Results
In this section, we demonstrate through the previously discussed experiments that DeepONet can be employed for approximating a range of groundwater flow simulation problems, accurately and efficiently. We also provide a comparative study of the two architectures: the vanilla DeepONet and the novel DeepONet
Figure 3: Schematic of the novel DeepONet architecture proposed for experiment **E2**. The architecture is specifically designed to take into account the varying locations of the pump. The basis functions approximated using the trunk net can be modified according to the position of the well.
architecture proposed in this paper. All models are trained on a few sparse points defined on the domain but are evaluated over the whole domain for the test data. The mean relative error (MSE) is used as an error metric which corresponds to the loss function used during training, calculated as the square difference between the target and the predicted fields for all the models and all the experiments. In C, we provide a comparative study of the accuracy of DeepONet with two other popular deep neural network architectures for experiments **E3** and **E4**.
### \(E1\): Forward problem for fixed well location
In this experiment, the location of the well is considered the same for the training and the testing dataset and the operator learns the mapping: \(\mathcal{G}_{\mathbf{\theta}}^{F1}:K(\mathbf{x})\mapsto h(\mathbf{x})\). Training the network takes 283 second for a total of 24,600 iterations, and the error metrics are computed as \(MSE_{train}=1.8\times 10^{-5}\) on \(N_{train}\) samples and \(MSE_{test}=2.5\times 10^{-4}\) on \(N_{test}\) samples. Figure 4 (top row) shows a typical comparison between the predicted and target values of the hydraulic head given a heterogeneous hydraulic conductivity field. Both the inputs and the outputs are normalized along each individual channel, that is the variables are re-scaled into the range \([0,1]\). As can be seen in the prediction plot, DeepONet can predict the pressure buildup and the sharp increase of hydraulic head around the well very accurately. We conducted multiple analyses by altering the number of neurons, layers, kernel size, batch-size, and training data, among other factors. We observed only slight changes in the error rate, and significant deterioration occurred only when the data was not normalized or the learning rate was high. Additionally, increasing the number of query points or sampling more frequently around the well did not improve the accuracy of predictions.
### \(E2\): Forward problem for varying well locations
In the next experiment, we further expand the network capabilities to learn the solution for unseen locations of the pumping well. According to the original formulation of the DeepONet, the branch network encodes the input functions, which are the hydraulic conductivity field and the position of the source term (the well). We observed that the vanilla architecture could not capture the sharp gradients located in the region near the pumping well when the well was shifted to several locations. The reader can refer to the last column of Figure 5 for visualization of vanilla DeepONet predictions for three representative test samples, where the location of the well (forcing term) is encoded as a binary map and concatenated to the hydraulic conductivity field as an additional channel of the existing CNN of the branch net. Through a set
Figure 4: Top row: Prediction of the hydraulic head, \(h(\mathbf{x})\) obtained from a trained vanilla DeepONet (Experiment \(E1\)) for a test sample with an unseen heterogeneous hydraulic conductivity field. The location of the well fixed at \((x,y)=(16,16)\), which is the same for the training and the testing dataset. Bottom Row: Prediction of the hydraulic head obtained using the vanilla DeepONet where the test sample comprises unseen heterogeneous hydraulic conductivity fields and unseen pumping well location (Experiment \(E2\)). The results shown in this plot are for the case where the input to the branch network is the hydraulic conductivity field (first column) and the input to the trunk network is the coordinates in which the network evaluates the solution and also the well location coordinates. The second and the third column denote the ground truth and the prediction of DeepONet for the hydraulic head, respectively.
of computational experiments, we explored different possible ways to encode the forcing term as an input to the branch net. Those included giving the location of the source term as the set of coordinates in the Cartesian coordinate system or as the distance (magnitude and angle) between each point of the domain and the location of the well or as a Gaussian function centered at the location of the well. Within this context, we modified the network architecture by using either a single network, employing two separate neural networks for each input function, and also using two parallel networks with connections between them. All these experiments lead to either very inaccurate predictions with a wrong determination of the extent and location of the pressure front, or to smoother predictions near the source terms as previously highlighted in the last column of Figure 5. D explores the reason for the lower prediction accuracy of the vanilla DeepONet for these test cases through the lens of the singular value decomposition (SVD).
Finally, we found that informing the trunk net of the location of the source term is necessary for good learning in this class of problem. Figure 4 (bottom row) shows the network prediction for one representative test case when the input to the trunk network is the concatenation of the coordinates in which the network evaluates the solution and the coordinates of the pumping well. Input to the branch network is the hydraulic conductivity field evaluated in the whole domain. Visual inspection of the results reveals that the predictions match the reference solutions very well for different distributions of \(K(\mathbf{x})\) and for varying locations of the pumping well. The error metric computed on the training and the testing dataset is computed to be \(2.4\times 10^{-5}\) and \(2.7\times 10^{-4}\), respectively. This approach however becomes inefficient in the case of more complex scenarios, such as multiple wells within the domain, different pumping rates of the wells, or features which are not localized at a single point (rivers and drains). Similarly to real applications, for which the modeler has a planned view of the groundwater system, we decide to encode the location of the source term as a binary map, which is concatenated to the hydraulic conductivity field as input to the branch net. As our experiments showed that informing the trunk network with the location of the forcing term is key for good learning when the location of the forcing term varies among the training data, we link the branch and the trunk net with the newly proposed architecture of DeepONet (Figure 3) described in 3.3. As Figure 5 shows for three representative test samples, the architecture that links the hidden layers of the two sub-networks significantly outperforms the vanilla DeepONet. Training takes 324 seconds and the resulting error metrics is computed to be equal to \(6.6\times 10^{-5}\) and \(2.6\times 10^{-4}\) on the training and testing dataset, respectively. It is interesting to note that the order of magnitude of the training and testing error is always the same for the cases of the forward problems, for which the visual results are also highly satisfactory. It is reasonable to conclude that the error obtained corresponds to the lowest bound of DeepONet for the given forward
problem.
### E3: Inverse problem
In this section, we explore the efficiency of the operator network for solving inverse problems. More specifically, we aim to assist with underground property characterization. Given that it is impossible to directly observe the whole underground system, the aim of the inverse analysis is to understand the heterogeneous aquifer properties (_i.e._, hydraulic conductivity field) using sparse observations of the conductivity field and some known information of the hydraulic head. We consider a test problem for which the information on the hydraulic head in the whole domain is available along with sparse observations of hydraulic conductivity. In Figure 6, we show a representative test case with the inputs and the prediction obtained from the
Figure 5: Predictions of the hydraulic head for unseen heterogeneous hydraulic conductivity fields (first column) and unseen location of the source term (the well). Predictions with vanilla DeepONet (last column) and proposed network architecture (third column) is compared with the target fields (second column) for three representative test samples. Input to the branch network is the hydraulic conductivity field and a binary map indicating the location of the well. Input to the trunk network is the coordinates in which the network evaluates the solution.
operator network. The error metrics computed on the test samples are obtained as \(1.21\times 10^{-1}\) when using 20% of the values of the hydraulic conductivity field (randomly sampled) as observation points. Beyond the comparison between the target reference fields (first column) and the simulated inverse results (fourth column), we also compare the input hydraulic head (second column) and the hydraulic head corresponding to the predicted conductivity as calculated with the traditional solver (third column). The operator network gives trustworthy predictions with accurate and consistent performance across the whole test dataset. The use of observational data informs the network and enhances its accuracy by 16% when compared to the case in which no measurements of hydraulic conductivity were made available. An additional improvement can be achieved by informing the network with the observation of the hydraulic heads at different timesteps. When the branch net additionally encodes the hydraulic head at three equally spaced time instances (instead of only at a single time step), the predictions have a further improvement, which on average is 4% more accurate. In the future, this approach could be further developed to guide efficient observational campaigns which would enhance the accuracy of the inverse models while minimizing the need for observations.
### E4: Nonlinear case
Many groundwater processes exhibit nonlinear behavior and hence requires the use of iterative solvers to obtain the solution of the hydraulic head. In this study, we show that the proposed data-driven method has the capability to resolve the system directly and save a lot on the computational cost required on the conventional iterative solver. The traditional finite-difference form of the groundwater flow equations can be written as: \(\mathbf{A}\mathbf{h}=\mathbf{b}\), where \(\mathbf{h}\) is the vector of head values at the end of time step, \(\mathbf{A}\) is the matrix of the
Figure 6: Representative text sample for the inverse analysis using vanilla DeepONet. The first column shows the unknown hydraulic conductivity field with the location of the observation points on top of it, which are encoded as extra input in addition to the hydraulic head in the whole domain (second column). The last column shows the simulated inverse results and the third column is the hydraulic head corresponding to the predicted conductivity as calculated with the traditional solver.
coefficients of head and \(\mathbf{b}\) is a vector of the constant terms [16]. In a nonlinear system, the individual entries in \(\mathbf{A}\) matrix is a function of the hydraulic head and the system of equations needs to be resolved through a nonlinear outer iteration loop. In our example, a pumping well is located in the center of the domain and a head-dependent well is fixed in another location in the domain (see B for full details of the nonlinearity). The hydraulic head predicted by the neural operator perfectly matches the target values in the whole domain for given hydraulic conductivity fields (considered as input to the branch net), unseen during training (Figure 4.4). The training process takes 430 seconds for a total of 44100 iterations and the error metrics computed on the training and the testing datasets are equal to \(3.1\times 10^{-5}\) and \(3.9\times 10^{-5}\), respectively.
## 5 Summary and Discussion
This paper presents the DeepONet framework as a surrogate model to efficiently and accurately calculate the state response of a groundwater system. The model is trained and tested in four experiments that demonstrate its capability to predict hydraulic head in a heterogeneous aquifer with a fixed pumping well, generalize to unseen pumping well locations, characterize aquifer properties (inverse analysis), and deal with nonlinear systems. The proposed model accurately learns both the forward and inverse relations between the spatially varying hydraulic conductivity and the hydraulic head fields very accurately. However, modifications to the original formulation of DeepONet are needed when the pumping well is placed at any location in the domain. To address this, the paper introduces a novel contribution by linking the input of the branch
Figure 7: Comparison between the ground truth and the prediction of the hydraulic head from DeepONet for unseen heterogeneous conductivity field for the nonlinear problem, **E4**.
network to the trunk network, allowing the network to accurately predict solutions for unseen well locations and hydraulic conductivity fields. By successfully implementing the neural operator on several examples, we demonstrate the capacity of the network to support a range of tasks that require repetitive forward numerical simulations of the groundwater model.
In the future, such a model will be extended to accommodate more complicated and realistic sub-surface problems. These could include more complex predictions from a wider range of abstraction rates and aquifer system geometry, properties, and boundaries, and the interaction with other surface water abstractions and discharges.
## Authors' contribution
**Maria Luisa Taccari**: conceptualization, methodology, software, validation, formal analysis, data curation, writing - original draft, writing - review & editing, visualization. **He Wang**: methodology, formal analysis, writing - review & editing, supervision. **Somdatta Goswami**: methodology, writing - review & editing, supervision, visualization. **Jonathan Nuttall**: conceptualization, methodology, software. **Chen Xiaohui**: supervision, funding acquisition. **Peter K. Jimack**: methodology, formal analysis, writing - review & editing, supervision.
## Code and data availability
All codes used to generate the datasets and train the model will be made available at [https://github.com/mlttac/DeepDnet_gwf](https://github.com/mlttac/DeepDnet_gwf).
## Acknowledgments
This work was supported by the Leeds-York-Hull Natural Environment Research Council (NERC) Doctoral Training Partnership (DTP) Panorama under grant NE/S007458/1. We would like to acknowledge the support provided by Deltares and we express our sincere gratitude to Bennie Minnema for his invaluable contribution to designing the nonlinear test case.
|
2303.16058 | Unmasked Teacher: Towards Training-Efficient Video Foundation Models | Video Foundation Models (VFMs) have received limited exploration due to high
computational costs and data scarcity. Previous VFMs rely on Image Foundation
Models (IFMs), which face challenges in transferring to the video domain.
Although VideoMAE has trained a robust ViT from limited data, its low-level
reconstruction poses convergence difficulties and conflicts with high-level
cross-modal alignment. This paper proposes a training-efficient method for
temporal-sensitive VFMs that integrates the benefits of existing methods. To
increase data efficiency, we mask out most of the low-semantics video tokens,
but selectively align the unmasked tokens with IFM, which serves as the
UnMasked Teacher (UMT). By providing semantic guidance, our method enables
faster convergence and multimodal friendliness. With a progressive pre-training
framework, our model can handle various tasks including scene-related,
temporal-related, and complex video-language understanding. Using only public
sources for pre-training in 6 days on 32 A100 GPUs, our scratch-built ViT-L/16
achieves state-of-the-art performances on various video tasks. The code and
models will be released at https://github.com/OpenGVLab/unmasked_teacher. | Kunchang Li, Yali Wang, Yizhuo Li, Yi Wang, Yinan He, Limin Wang, Yu Qiao | 2023-03-28T15:39:28Z | http://arxiv.org/abs/2303.16058v2 | # Unmasked Teacher: Towards Training-Efficient Video Foundation Models
###### Abstract
Video Foundation Models (VFMs) have received limited exploration due to high computational costs and data scarcity. Previous VFMs rely on Image Foundation Models (IFMs), which face challenges in transferring to the video domain. Although VideoMAE has trained a robust ViT from limited data, its low-level reconstruction poses convergence difficulties and conflicts with high-level cross-modal alignment. This paper proposes a training-efficient method for temporal-sensitive VFMs that integrates the benefits of existing methods. To increase data efficiency, we mask out most of the low-semantics video tokens, but selectively align the unmasked tokens with IFM, which serves as the **U**n**M**asked **T**eacher (**UMT**). By providing semantic guidance, our method enables faster convergence and multimodal friendliness. With a progressive pre-training framework, our model can handle various tasks including scene-related, temporal-related, and complex video-language understanding. Using only public sources for pre-training in **6 days** on **32 A100** GPUs, our scratch-built ViT-L/16 achieves state-of-the-art performances on various video tasks. The code and models will be released at [https://github.com/OpenGVLab/unmasked_teacher](https://github.com/OpenGVLab/unmasked_teacher).
## 1 Introduction
Video understanding has emerged as a critical skill for artificial intelligence systems to analyze and comprehend videos effectively. The progress in video understanding is currently driven by the Image Foundation Models (IFMs) [23, 32, 6, 62, 37], which are trained from massive datasets and adapted for different downstream tasks [18, 90, 99, 61]. However, IFMs tend to focus more on scenes and objects, disregarding the essential motion patterns and object interactions required for complex video understanding. The _true_ Video Foundation Models (VFMs) are underexplored due to the high computational costs and data scarcity.
While building VFMs on well-learned IFMs reduces training costs, it poses significant challenges in transferring knowledge from the image domain to the video domain. Firstly, due to limited video data and a substantial domain gap, video post-pretraining may undermine the generality inherited from IFMs [85]. Moreover, the strong spatial initialization offers a shortcut to perceive videos from scenes in single frames (_e.g_., "grass" in "horse riding"), which constrains VFMs from learning spatiotemporal relationships to recognize and localize temporal-related actions, such as "opening" and "closing" in Figure 2. Lastly, this paradigm is difficult to scale up as it requires well-prepared IFMs.
The recent success of VideoMAE [70, 25] offers a data-efficient way to learn effective spatiotemporal features from scratch, which handles complex temporal action recognition and detection tasks impressively. Nonetheless, its
Figure 1: **Comparison with SOTA methods. “ZS” and “FT” refer to “zero-shot” and “fine-tuned”. “T2V” means video-text retrieval. For Kinetics action recognition, [86] and [76] are excluded since they utilize model ensemble. With only public sources for pre-training, our approach achieves SOTA performances on scene-related, temporal-related and complex video-language benchmarks. Compared with CoCa [90], our method is much more environmentally friendly with 70\(\times\) reduction in carbon emissions.**
strong data efficiency and spatiotemporal modeling are traded by long pre-training (_e.g._, 2400 epochs on 160k videos). Besides, it is not well-suited for video-language tasks since the low-level pixel reconstruction task conflicts with high-level cross-modal alignment [67]. Additionally, the extra decoder that handles masked and unmasked tokens causes high memory costs due to global self-attention, making scaling up this paradigm also challenging.
In this paper, we present a training-efficient method for temporal-sensitive VFMs by integrating the benefits of previous methods. Rather than directly adapting public IFM, _e.g._, CLIP [62], we utilize them as **UnM**asked **T**eacher (**UMT**) to train vanilla ViT from scratch. We mask out most of the video tokens with low semantics and only align the unmasked tokens with a linear projection to the corresponding ones from the teacher. This approach not only inherits data efficiency from VideoMAE but also makes the learned video encoder multimodal-friendly (validated in Table 1). Moreover, training with only unmasked tokens without a decoder further saves GPU memory compared to VideoMAE, and the guidance from the teacher's semantically rich representation leads to faster convergence. Notably, the resulting model can handle both scene-related [38, 56] and temporal-related actions [30, 31] exceptionally well, while the alignment to CLIP features enables the model to be compatible with cross-modal learning.
To address various video tasks, we propose a progressive pre-training framework in Figure 2. In Stage 1, we only use video data for masked video modeling, resulting in a model that excels at video-only tasks. In Stage 2, we employ public vision-language data for multi-modality learning. This allows the model to conduct complex video-language tasks, such as video-text retrieval [83, 64] and video question answering [92, 81]. We use the UMT in both stages, significantly reducing the training sources and speeding up convergence. Thanks to readily-available image and language foundation models [62, 59, 53, 98, 16], our simple framework is easily scalable for video foundation models.
We conduct extensive experiments to verify the effectiveness and efficiency of our approach. As shown in Figure 1, with public sources (data/models) for pre-training, our method achieves state-of-the-art performances on various video tasks, including action recognition [38, 10, 11, 56, 30] (**90.6%** top-1 accuracy on K400), spatiotemporal localization [31] (**39.8** mAP on AVA), video-text retrieval [83, 1, 39, 64, 13] (**58.8** R@1 on MSRVTT) and video question-answering [92, 81, 91] (**47.1%** accuracy on MSRVTT). It is worth emphasizing that our method is much more environmentally friendly compared to CoCa [90], which uses 2,048 CloudTPUv4 chips for 5 days. In contrast, our pre-training requires **32 A100(80G)** GPUs within **6 days**, leading to a remarkable **70\(\times\)** reduction in carbon emissions.
## 2 Related Works
**Video foundation models.** The present Video Foundation Models (VFMs) are primarily based on well-prepared Image Foundation Models (IFMs) [90, 86, 2, 97, 45, 27, 47, 72, 85]. However, the strong spatial pre-training restricts their ability to learn spatiotemporal representations. Despite the impressive results demonstrated by Florence [93], CoCa
Figure 2: **Training-efficient framework for video foundation models. For general video understanding, we propose the _progressive pre-training_ with the unmasked teacher, which is _simple, scalable and reproducible_. The resulting models can not only handle scene-related and temporal-related actions well, but also conduct complex video-language understanding.**
[90], MTV [86], and UniFormerV2 [45] on video-only tasks [38, 10, 11], these models struggle to handle temporal-related actions [30, 65] and localize actions [31, 36]. As for video-language tasks, there have been promising explorations on model architecture [42, 71, 41] and learning paradigms [82, 95, 27, 47, 72]. Recently, InternVideo [76] introduces general VFMs through generative and discriminative learning. However, the dependence on CLIP pre-training and tremendous training costs make it difficult to scale up. In this paper, we propose an easily scalable framework for VFMs that is much more training-efficient.
**Masked vision modeling.** Inspired by the success of masked language modeling [53, 20], masked vision modeling has been proposed for vision transformers [23]. BeiT [7] is the first to propose a BERT-like mask-then-predict framework to recover the discrete tokens [63], while MAE [32] designs masked autoencoders to reconstruct normalized pixel values, which reduces memory consumption by processing only unmasked tokens in the encoder. Later works can be roughly divided into BeiT-style [22, 100, 77, 4, 60] and MAE-style [80, 14, 28, 35] with various target supervision, such as HOG descriptors [77] and momentum features [69]. For spatiotemporal learning, BEVT [75] and VideoMAE [70, 25] can be seen as extensions of BeiT and MAE, respectively. Recent works also indicate that CLIP features provide good guidance for mask modeling [78, 33, 60, 59, 84], but all of them actually perform worse than CLIP itself with elaborate fine-tuning [21]. In contrast, we demonstrate that in the video domain, our model with CLIP supervision clearly outperforms the teacher.
## 3 Method
In this section, we introduce our **UnMasked Teacher (UMT)** for masked video modeling and the progressive pre-training framework for temporal-sensitive video foundation models, as illustrated in Figure 2.
### Unmasked Teacher
As discussed in the introduction, directly adapting the public Image Foundation Model (IFM) to Video Foundation Model (VFM) is challenging [57, 45], thus we propose using IFM as a teacher to train a VFM from scratch. Given the limited data scale, we leverage mask modeling [32] to make good use of the video data. However, unlike VideoMAE [70], we selectively align the unmasked tokens with the teacher, removing an extra decoder for efficient training.
**Architecture.** We choose CLIP-ViT [62] as an unmasked teacher due to its rich semantics that are learned with language guidance, which is beneficial for our following multi-modality learning. To fully impart the teacher's knowledge, we maintain its spatial architecture to process each video frame individually. For our backbone, we apply the vanilla ViT without a class token. We employ spatiotemporal attention [8] to encourage all the unmasked tokens to interact with each other. For better alignment with the spatial teacher, we do not use temporal downsampling, thus the tokens can be aligned frame by frame.
**Masking.** Following VideoMAE, we use a high masking ratio (_e.g._, 80%) to cut down video redundancies. However, the aggressive random masking may only retain the background tokens, which contain insignificant information and hinder the teacher's knowledge transfer. To enhance target effectiveness, we apply the semantic masking [33] frame by frame, where the tokens with important clues are maintained at higher probabilities. Specifically, given the class token \(\mathbf{z}_{cls}{\in}\mathbb{R}^{1\times C}\) and the spatial tokens \(\mathbf{Z}{\in}\mathbb{R}^{L\times C}\) in the \(t\)-th frame of CLIP-ViT (\(L{=}H{\times}W\) is the token number and \(C\) is the token dimension), we calculate the attention score in the last self-attention [23] layer:
\[\mathbf{A} =\sum_{n=1}^{N}\mathbf{A}_{n}(Q_{n}(\mathbf{z_{cls}}),K_{n}( \mathbf{Z}))/N, \tag{1}\] \[\mathbf{A}_{n}(\mathbf{q},\mathbf{k}) =\mathrm{softmax}(\mathbf{q}\mathbf{k}^{T}/\sqrt{C/N}), \tag{2}\]
where \(N\) is the head number, and \(Q_{n}(\cdot)\) and \(K_{n}(\cdot)\) are the linear projections in the \(n\)-th head. The \(\mathbf{A}{\in}\mathbb{R}^{1\times L}\) represents the semantic importance of each token, and we select the unmasked tokens by a multinomial distribution based on \(\mathbf{A}\) to retain the informative objects in each frame. Moreover, we sparsely sample frames from the raw videos [74], which provides a more complicated action context due to the large frame stride. The strategy encourages the model to reason long-term spatiotemporal relationships among objects.
**Target.** For the teacher model, we input all \(L\) spatial tokens along with the class token, frame by frame. In contrast, for the student model, we only input the unmasked tokens, which are equal to \(L(1-r)T\) tokens, where \(r\) is the masking ratio and \(T\) is the frame number. To distill the rich semantics more effectively, we process the output teacher tokens using the pre-trained visual projection, which is designed to establish meaningful connections between visual and text embeddings. Additionally, we add a simple linear projection for the student model to align the token dimension. We select the corresponding unmasked token from the student and teacher, and compute the mean squared error (MSE) between the normalized pairs. Compared to low-level pixel reconstruction, token alignment requires a high-level understanding, which is beneficial for multi-modality learning.
### Progressive Pre-training
For general video understanding, it is vital for the foundation model to handle video-language tasks. However, directly training such a model from scratch is inefficient. For example, CoCa [90] utilizes 4.8B data to train 5 days on 2,048 CloudTPUv4 chips. Therefore, we introduce a training-efficient framework with progressive pre-training.
**Pre-training pipeline.** Figure 2 outlines our pipeline. In Stage 1, we train the ViT from scratch using only high-quality videos and guidance from Unmasked Teacher. The masked video modeling fully mines knowledge from the videos, resulting in a model that excels at video-only tasks. In Stage 2, we equip the pre-trained ViT with a text encoder and cross-modal decoder, initialized with the well-prepared language model. And we conduct multi-modality training with large-scale vision-text pairs, enabling the model to handle complex video-language tasks. It's worth noting that currently, open-source language models are larger and more diverse than vision models, making it easy to scale up our foundation models. For example, the largest OPT [98] has 175B parameters, while ViT-G [96] only has 1.8B.
**Pre-training objectives.** For both stages, we utilize Unmasked Teacher to perform Unmasked Token Alignment (**UTA**). In Stage 2, we employ three other popular objectives: **(i)** Video-Text Contrastive (**VTC**) learning, which aims to align the pooled unmasked video and text embeddings. We use the symmetric contrastive loss [5] to maximize the mutual information. **(ii)** Video-Text Matching (**VTM**) enhances cross-modal fusion by aligning the unmasked video and text tokens. We adopt the binary cross-entropy loss with hard negative mining [44, 41]. **(iii)** Masked Language Modeling (**MLM**) uses the cross-modal decoder to predict masked words from the other text and unmasked video tokens. We follow the BERT [19] strategy but mask 50% of the text tokens.
## 4 Experiments
### Implementation
**Datasets.** Unless otherwise stated, we use Kinetics-710 dataset [45] in Stage 1, which is a combination of Kinetics-400, 600 and 700 [38, 10, 11] and excludes any repeated or leaked videos. In Stage 2, we utilize image-text data for co-training [71, 41, 72], where images are treated as single-frame videos. We use three corpora as in [15]: **(i) 5M** Corpus comprises WebVid-2M [5] video-text pairs and CC3M [66] image-text pairs. **(ii) 17M** Corpus includes four other image-text datasets: COCO [49], Visual Genome [40], SBU Captions [58], and CC12M [12]. **(iii) 25M** Corpus uses a larger version of WebVid containing 10M video-text pairs.
**Settings.** In this paper, we consider two model configurations: ViT-B/16 [23] with BERT\({}_{base}\)[19] and ViT-L/16 with BERT\({}_{large}\). And CLIP-ViT-B/16 [62] and CLIP-ViT-L/14 are adopted as teachers for the base and large models, respectively. For Stage-1 pre-training, we follow most of the hyperparameter settings in VideoMAE [70]. However, we sparsely sample [74] 8 frames and use a masking ratio of 80%. By default, we train both models on 32 A100 with a batch size of 2048 for 200 epochs. The training on Kinetics-710 takes about **60** and **90** hours for ViT-B/16 and ViT-L/16, respectively. In Stage 2, we follow [41] to sample 4 frames and train for 10 epochs. Specifically, we mask 50% image and 80% video tokens. Both models are trained on 32 A100 with a batch size of 4096. The pre-training on 25M Corpus takes about **24** and **40** hours respectively for the base and large models. For more implementation details about training, please refer to the appendix.
### Ablation Study
We ablate the properties of UMT in both stages on both scene-related [38, 83] and temporal-related tasks [30, 41]. For single-modality learning, we pre-train ViT-B/16 for 200 epochs on ShSh V2 [30] or K400 [38] dataset. For multi-modality learning, we use K710 pre-trained models and further pre-train it for 10 epochs on 5M Corpus. Except for Table 1, where we use K400 pre-training.
**Target.** Table 1 presents a comparison of training targets. Compared with pixel reconstruction [70], our unmasked token alignment significantly improves the accuracy with only 36% memory cost. However, combining the two targets results in poor results on K400 and MSRVTT, indicating a conflict between low-level reconstruction and high-level alignment. Moreover, recovering the masked tokens has a detrimental effect, possibly due to the high masking ratio making high-level recovery too challenging. The results demonstrate our method is effective to learn temporal-sensitive and multimodal-friendly representation.
**Mask type, sampling method, and temporal downsampling.** Table 2 indicates that different masking strategies yield comparable results in ShShV V2. We contend that recognizing the category of "something" is not necessary for ShShth V2, but it requires deducing the intricate motion between objects, thus random masking suffices. However, it is critical for K400 to identify the scene and objects, mak
\begin{table}
\begin{tabular}{c c c|c|c c|c}
**[U]** & **[M]** & **MAE** & **Memory (G)** & **SSV2** & **K400** & **MSR** \\ \hline ✗ & ✗ & ✓ & 44.0 & 67.1 & 78.8 & 55.6 \\ ✓ & ✗ & ✓ & 52.5 & **70.2** & 83.9 & 64.5 \\ ✓ & ✓ & ✗ & 43.6 & 70.0 & 84.6 & 65.2 \\ \hline ✓ & ✗ & ✗ & **16.0** & **70.2** & **84.9** & **66.8** \\ \end{tabular}
\end{table}
Table 1: **Target design.** We benchmark ViT-B/16 in 32 A100 with a batch size of 2048. “[U]”, “[M]” and “MAE” refers to unmasked token alignment, masked token recovering and pixel reconstruction [70] respectively. The pixel reconstruction conflict with our unmasked token alignment, and hinder the following multimodal learning.
\begin{table}
\begin{tabular}{c c c|c c}
**Mask** & **Sampling** & **T-Down** & **SSV2** & **K400** \\ \hline Tube & Sparse & ✗ & **70.2** & \(84.3\) \\ Random & Sparse & ✗ & **70.2** & \(84.6\) \\ Semantic & Sparse & ✗ & **70.2** & **84.9** \\ Semantic & Dense & ✗ & 69.8 & 84.0 \\ Semantic & Sparse & ✓ & 69.5 & 84.6 \\ \end{tabular}
\end{table}
Table 2: **Mask type, sampling method and temporal downsampling.** Semantic masking [33] works best.
ing semantic masking advantageous for knowledge distillation. Moreover, sparse sampling without temporal downsampling is more appropriate for our approach.
**Aligned layers.** We try to align more layers in Figure 3, and the losses are averaged across multiple layers. Since the GPU memory and running speed are similar, we simply align the last 6 layers for the best results.
**Masking ratio.** Figure 4 shows that proper high ratios work better. When using a ratio of 95%, the performances dramatically drop since it is too challenging for token alignment. Conversely, when removing masks, the task is too easy to learn the token relationships in space and time. By default, we adopt the ratio of 80% for better trade-offs.
**Why does UMT work?** In Table 3, we investigate the crucial designs of our Unmasked Teacher. **(i) Spatiotemporal attention**: In the 2nd and 3rd parts, we compare the student with spatial attention and spatiotemporal attention during fine-tuning. Our results indicate that utilizing joint attention significantly enhances performance. Moreover, employing spatiotemporal attention during pre-training further improves performance (the 4th part), validating our assumption that joint attention encourages interaction among all unmasked tokens. **(ii) Masked modeling**: In the 4th part, we observe that masked modeling plays a crucial role. However, when using spatial attention during pre-training, masked modeling becomes detrimental. We argue that when processing each frame individually with a high mask ratio of 80%, the token alignment task becomes excessively challenging. **(iii) Teacher attention**: The 5th part shows that although CLIP-_ST_ achieves better performance after fine-tuning, directly applying it as the teacher model leads to a performance drop. We contend that without post-training in the video domain, CLIP-_ST_ may disrupt the representation learned in the image domain.
**Outperforming the CLIP teacher.** In the image domain, the prior research [21] has shown that, CLIP itself with fine-tuning surpasses existing CLIP-targeted MIM methods [78, 79, 33, 60]. However, Table 3 indicates that in the video domain, the student model (the 4th part) clearly outperforms the teacher, _i.e._, CLIP-_ST_ with our elaborate fine-tuning. We attribute the success to masked video modeling with spatiotemporal attention, which encourages the model to capture long-term dependencies among objects.
**Multi-modality masking ratios.** In Table 4, we first alter the masking ratios of the image and video data. Since we co-train image-text and video-text data with the same batch size, the GPU memory primarily depends on the video masking ratio. As expected, processing images requires a
\begin{table}
\begin{tabular}{c c c|c|c}
**Image** & **Video** & **Text** & **Memory (G)** & **MSR** & **SSV2** \\ \hline
50 & 60 & 50 & 35.6 & 66.9 & 80.6 \\ \hline
50 & 80 & 50 & 18.6 & **67.0** & **80.8** \\
75 & 80 & 50 & 18.6 & 66.5 & 80.6 \\
75 & 90 & 50 & 13.1 & 65.9 & 79.5 \\
90 & 95 & 50 & 12.1 & 65.7 & 79.2 \\ \hline
50 & 80 & 25 & 18.6 & 66.5 & 80.1 \\
50 & 80 & 75 & 18.6 & 66.6 & 78.2 \\ \end{tabular}
\end{table}
Table 4: **Different masking ratios for multi-modality pre-training.** We benchmark ViT-B/16 in 16 A100 with a batch size of 2048. We report the average text-to-video retrieval R@1,5,10 accuracy of MSRVTT and SSV2-label. Masking 50% image and 80% video tokens works best.
Figure 4: **Masking ratio.** We use the masking ratio of 0.8 for a better trade-off on both datasets.
\begin{table}
\begin{tabular}{c|c|c|c c}
**Teacher** & **Mask** & **PT Student** & **FT Student** & **SSV2** & **K400** \\ \hline \multicolumn{5}{l|}{_fine-tuning_ CLIP-_S_} & 57.4 & 82.0 \\ \multicolumn{5}{l|}{_fine-tuning_ CLIP-_ST_} & 68.0 & 82.5 \\ \hline CLIP-S & ✗ & ViT-S & ViT-_S_ & 54.5 & 82.4 \\ CLIP-S & ✓ & ViT-S & ViT-_S_ & 54.0 & 82.2 \\ \hline CLIP-S & ✗ & ViT-S & ViT-_ST_ & 68.0 & 83.7 \\ CLIP-S & ✓ & ViT-S & ViT-_ST_ & 67.2 & 83.4 \\ \hline CLIP-S & ✗ & ViT-ST & ViT-_ST_ & 69.1 & 84.6 \\ CLIP-S & ✓ & ViT-_ST_ & ViT-_ST_ & **70.3** & **85.2** \\ \hline CLIP-_ST_ & ✓ & ViT-_ST_ & ViT-_ST_ & 68.7 & 83.7 \\ \end{tabular}
\end{table}
Table 3: **Why does UMT work?** “\(S\)” and “\(ST\)” refers to spatial and spatiotemporal attention respectively. Spatiotemporal attention and mask modeling are vital for UMT.
Figure 3: **Aligned layers.** Since the GPU memory and running speed are similar, we align the last 6 layers.
lower masking ratio of 50%. Although higher masking ratios reduce memory consumption, the corresponding performances are lower. Additionally, masking too few (25%) or too many (75%) text tokens leads to inferior results.
**Multi-modality pre-training objectives.** For cross-modal retrieval, utilizing either VTC or VTM for visual-text pairs is necessary. In Table 5, all loss weights are set to 1. The 1st part reveals that VTM performs better than VTC. Besides, the 2nd part shows that combining VTC or MLM with VTM leads to a minor improvement, while integrating all three objectives significantly enhances the performance. Lastly, without our unmasked teacher alignment, the memory usage triples, while the performances drop.
### Single-modality tasks
We evaluate our method on two conventional video-only tasks: recognizing and localizing actions on six large-scale benchmarks, including the _Kinetics_ family (_i.e._, Kinetics-400, 600 and 700 [38, 10, 11]), _Moments in Time_ V1 [56] and _Something-Something_ V2 [30] for action recognition, and _AVA_ V2.2 [31] for spatiotemporal localization.
**Kinetics.** Table 6 reports the SOTA methods with supervised and self-supervised learning on K400. On one hand, our UMT with intermediate fine-tuning outperforms
\begin{table}
\begin{tabular}{|l|l|l|c|c|c|c c|} \hline \multicolumn{1}{|l|}{**Method**} & **Backbone** & **Extra data** & **Input Size** & **GFLOPs** & **Param** & **Top-1** & **Top-5** \\ \hline \multirow{9}{*}{} & SlowFast\({}_{101}\)[26] & R101+NL & & 80\(\times\)224\({}^{2}\) & 234\(\times\)3\(\times\)10 & 60 & 79.8 & 93.9 \\ & MViTv2-B [48] & MViTv2-B & & 32\(\times\)224\({}^{2}\) & 255\(\times\)1\(\times\)5 & 37 & 81.2 & 95.1 \\ & Uniformer-B[46] & UniFormer-B & IN-1K & 32\(\times\)224\({}^{2}\) & 259\(\times\)3\(\times\)4 & 50 & 83.0 & 95.4 \\ & TimeSformer-L[8] & ViT-B & IN-21K & 96\(\times\)224\({}^{2}\) & 2380\(\times\)3\(\times\)1 & 121 & 80.7 & 94.7 \\ & VideoSwin-L[50] & Swin-L & IN-21K & 32\(\times\)224\({}^{2}\) & 604\(\times\)3\(\times\)4 & 197 & 83.1 & 95.9 \\ \cline{2-10} & \multicolumn{1}{l|}{_Methods with web-scale data.__FLD, ALIGN and CLIP consist of image-text pairs. WTS collects video-text pairs._} & \multicolumn{1}{c}{} & & & & \\ \cline{2-10} & ViViT-H [2] & ViT-H & JFT-300M & 32\(\times\)320\({}^{2}\) & 3981\(\times\)3\(\times\)4 & 654 & 84.9 & 95.8 \\ & CoVe [97] & ViT-L & JFT-3B+SSV2+MiT+IN & 16\(\times\)448\({}^{2}\) & 5860\(\times\)3\(\times\)1 & 431 & 87.1 & - \\ & CoCa [90] & ViT-H+B+LIGN-1.8B & 16\(\times\)576\({}^{2}\) & N/A\(\times\)3\(\times\)4 & 1000+ & 88.9 & - \\ & MTV-H[86] & ViT-H+B+S+T & IN-21K+WTS-60M & 32\(\times\)280\({}^{2}\) & 6130\(\times\)3\(\times\)4 & 1000+ & 89.9 & 98.3 \\ & UniformerV2-L[45] & ViT-L & CLIP-400M+K710\({}^{2}\) & 64\(\times\)336\({}^{2}\) & 12550\(\times\)3\(\times\)2 & 354 & 90.0 & 98.4 \\ \hline \multirow{9}{*}{} & BEV1\({}_{800c}\)[75] & Swin-B & \multirow{2}{*}{IN-1K} & 32\(\times\)224\({}^{2}\) & 282\(\times\)3\(\times\)4 & 88 & 81.1 & - \\ & MaskFeat\({}_{1600c}\)[77] & MiViTv2-L & & 16\(\times\)224\({}^{2}\) & 377\(\times\)1\(\times\)10 & 218 & 84.3 & 96.3 \\ & ST-MAE-B\({}_{1600c}\)[25] & ViT-B & & 16\(\times\)224\({}^{2}\) & 180\(\times\)3\(\times\)7 & 87 & 81.3 & 94.9 \\ & ST-MAE-L\({}_{1600c}\)[25] & ViT-L & K600 & 16\(\times\)224\({}^{2}\) & 598\(\times\)3\(\times\)7 & 304 & 84.9 & 96.2 \\ & ST-MAE-L\({}_{1600c}\)[25] & ViT-L & K600\({}^{2}\) & 16\(\times\)224\({}^{2}\) & 598\(\times\)3\(\times\)7 & 304 & 86.5 & 97.2 \\ & VideoMAE-B\({}_{1600c}\)[70] & ViT-B & & 16\(\times\)224\({}^{2}\) & 180\(\times\)3\(\times\)5 & 87 & 81.5 & 95.1 \\ & VideoMAE-L\({}_{1600c}\)[70] & ViT-L & & 16\(\times\)224\({}^{2}\) & 597\(\times\)3\(\times\)5 & 305 & 85.2 & 96.8 \\ & VideoMAE-L\({}_{1600c}\)[70] & ViT-L & & 16\(\times\)320\({}^{2}\) & 3958\(\times\)3\(\times\)5 & 305 & 86.1 & 97.3 \\ \hline \multirow{9}{*}{} & UMT-B\({}_{800c}\) & ViT-B & \multirow{2}{*}{K710} & 8\(\times\)224\({}^{2}\) & 180\(\times\)3\(\times\)4 & 87 & 85.7 & 97.0 \\ & UMT-B\({}_{200c}\) & ViT-B & K710 & 8\(\times\)224\({}^{2}\) & 180\(\times\)3\(\times\)4 & 87 & 85.7 & 96.9 \\ & UMT-B\({}_{200c}\) & ViT-B & K710\({}^{2}\) & 8\(\times\)224\({}^{2}\) & 180\(\times\)3\(\times\)4 & 87 & 87.4 & 97.5 \\ & UMT-L\({}_{400c}\) & ViT-L & & 8\(\times\)224\({}^{2}\) & 596\(\times\)3\(\times\)4 & 304 & 88.9 & 98.3 \\ & UMT-L\({}_{200c}\) & ViT-L & K710 & 8\(\times\)224\({}^{2}\) & 596\(\times\)3\(\times\)4 & 304 & 89.1 & 98.2 \\ & UMT-L\({}_{200c}\) & ViT-L & K710\({}^{2}\) & 8\(\times\)224\({}^{2}\) & 596\(\times\)3\(\times\)4 & 304 & 90.3 & **98.7** \\ & UMT-L\({}_{200c}\) & ViT-L & K710 & 16\(\times\)224\({}^{2}\) & 1434\(\times\)3\(\times\)4 & 304 & **90.6** & **98.7** \\ \hline \end{tabular}
\end{table}
Table 6: **Comparison with the state-of-the-art methods on Kinetics-400.** For UMT, we use a masking ratio of 80%. The results using spatial resolution \(>\)224\({}^{2}\) are noted in blue. “\(\dagger\)” marks the results with intermediate fine-tuning.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \multicolumn{1}{|l|}{**Method**} & **Input** & **FLOPs** & **Param** & **MiT V1** \\ & **Size** & **(T)** & **(M)** & **Top-1** & **Top-5** \\ \hline \hline \multirow{2}{*}{} & SlowFast\({}_{101}\)[26] & 80\(\times\)224\({}^{2}\) & 7.0 & 60 & 81.8 & 95.1 & 71.0 & 89.6 \\ & MViTv2-L[48] & 40\(\times\)312\({}^{2}\) & 33.9 & 218 & 87.5 & 97.8 & 79.4 \\ \hline \multicolumn{1}{|l|}{_Methods with web-scale data._} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ CoVeR [97
the previous models that rely on web-scale pre-training, _e.g._, the UMT-L achieves 0.4% higher top-1 accuracy than MTV-H [86] with only 1/10 of the FLOPs and 1/3 of the parameters. On the other hand, our UMT surpasses its counterparts with masked video modeling, _e.g._, compared with VideoMAE [70] with 1600-epoch pre-training, the UMT-L with 400-epoch pre-training obtains 3.7% accuracy improvement. For K600 and K700, our UMT-L also obtains the SOTA performances (**90.5%** and **83.6%** see Table 7).
**Moments in Time.** As shown in Table 8, our UMT-L achieves **1.0%/1.7%** higher top-1/5 accuracy compared to the advanced UniFormerV2-L [45], while utilizing fewer FLOPs. Note that MiT is more challenging due to the large inter-class and intra-class variation, thus the results demonstrate the robustness and effectiveness of our method.
**Something-Something.** Distinct from previous benchmarks, this particular dataset requires complex and long-term modeling to accurately recognize temporal-related actions, such as "pretending to close something without actually closing it". Without any additional data, our UMT-L model outperforms the UniFormerV2-L [45] (74.4% _vs._ 73.0% in Table 9) which was specifically tailored for temporal modeling. Additionally, our approach achieves comparable performances to VideoMAE [70] with significantly fewer epochs. Intriguingly, VideoMAE performs worse when utilizing Kinetics for masked modeling, while our UMT performs even better. This demonstrates the versatility and adaptability of our method, which can be applied to diverse video domains with the same pre-training.
**AVA.** Table 10 presents the results of the action detection on AVA. Remarkably, our UMT achieves 2.0 mAP improvement over the advanced VideoMAE [70] with only K400 pre-training. Furthermore, our method achieves the impressive **39.8** mAP with K710 pre-training, showcasing its robust transferability for spatiotemporal understanding.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline
**Method** & **PT** & **Input** & **FLOPs** & **Param** & **AVA** \\ & **Data** & **Size** & **(G)** & **(M)** & **mAP** \\ \hline _supervised_ & & & & & \\ SlowFast [26] & K400 & 32\(\times\)224\({}^{2}\) & 138 & 53 & 23.8 \\ SlowFast [26] & K600 & 64\(\times\)224\({}^{2}\) & 296 & 59 & 27.5 \\ MViTv1-B [24] & K400 & 64\(\times\)224\({}^{2}\) & 455 & 36 & 27.3 \\ MViTv1-B [24] & K600 & 32\(\times\)224\({}^{2}\) & 236 & 53 & 28.7 \\ MViTv2-B [48] & K400 & 32\(\times\)224\({}^{2}\) & 225 & 51 & 28.1 \\ MViTv2-B [48] & K700 & 32\(\times\)224\({}^{2}\) & 225 & 51 & 31.3 \\ MViTv2-L [48] & n-218+k400 & 40\(\times\)312\({}^{2}\) & 2828 & 213 & 33.5 \\ \hline _self-supervised_ & & & & & \\ MaskFeat-L [77] & K400 & 40\(\times\)312\({}^{2}\) & 2828 & 218 & 36.3 \\ MaskFeat-L [77] & K600 & 40\(\times\)312\({}^{2}\) & 2828 & 218 & 37.8 \\ ST-MAE-L [25] & K400 & 16\(\times\)224\({}^{2}\) & 598 & 304 & 34.8 \\ ST-MAE-L [25] & K700 & 16\(\times\)224\({}^{2}\) & 598 & 304 & 37.3 \\ VideoMAE-B [70] & K400 & 16\(\times\)224\({}^{2}\) & 180 & 87 & 31.8 \\ VideoMAE-L [70] & K400 & 16\(\times\)224\({}^{2}\) & 597 & 305 & 37.0 \\ VideoMAE-L [70] & K700 & 16\(\times\)224\({}^{2}\) & 597 & 305 & 39.3 \\ \hline
**UMT-B** & K400 & 8\(\times\)224\({}^{2}\) & 180 & 87 & 32.7 \\
**UMT-B** & K710 & 8\(\times\)224\({}^{2}\) & 180 & 87 & 33.5 \\
**UMT-L** & K400 & 8\(\times\)224\({}^{2}\) & 596 & 304 & 39.0 \\
**UMT-L** & K710 & 8\(\times\)224\({}^{2}\) & 596 & 304 & **39.8** \\ \hline \hline \end{tabular}
\end{table}
Table 10: **Comparison with the state-of-the-art methods on AVA v2.2. All the self-supervised methods are with intermediate fine-tuning on the pre-training data.**
\begin{table}
\begin{tabular}{l|c|c c c c c} \hline \hline
**Method** & **Extra Data** & **\#F** & \begin{tabular}{c} **FLOPs** & **Param** \\ **(G)** \\ \end{tabular} & \begin{tabular}{c} **SSV2** \\ **(M)** \\ \end{tabular} &
\begin{tabular}{c} **Top-1 Top-5** \\ \end{tabular} \\ \hline _supervised_ & & & & & & \\ SlowFast\({}_{101}\)[26] & K400 & 32 & 106\(\times\)3 & 53 & 63.1 & 87.6 \\ TD\({}_{EN}\)[73] & IN-1K & 87 & 198\(\times\)3 & 88 & 69.6 & 92.2 \\ \hline TimeSformer-L [8] & IN-21K & 96 & 2380\(\times\)3 & 121 & 62.3 & - \\ MViTv1-B [24] & K400 & 64 & 455\(\times\)3 & 37 & 67.7 & 70.9 \\ MViTv2-B [48] & K400 & 64 & 225\(\times\)3 & 51 & 70.5 & 92.7 \\ UniFormer-B [46] & IN-1K+K400 & 32 & 259\(\times\)3 & 50 & 71.2 & 92.8 \\ ViViTt-L [2] & IN-21K+K400 & 32980\(\times\)3 & 612 & 65.9 & 89.9 \\ MTV-B [86] & IN-21K+K400 & 32 & 3999\(\times\)3 & 12 & 310 & 68.5 & 90.4 \\ VideoSwin-B [50] & IN-21K+K400 & 32 & 321\(\times\)3 & 88 & 69.6 & 92.7 \\ \hline CoFeR +448 [97] & IFT-3B+KMI & 16 & 5860\(\times\)3 & 431 & 69.9 & - \\ UniFormerV2-B [45] & CLIP-400M & 32 & 375\(\times\)3 & 163 & 70.7 & 93.2 \\ UniFormerV2-L [45] & CLIP-400M & 32 & 1718\(\times\)3 & 574 & 73.0 & 94.5 \\ \hline _self-supervised_ & & & & & \\ BEV\({}_{800c}\)[75] & IN-1K+K400 & 32 & 321\(\times\)3 & 88 & 70.6 & - \\ MaiFeat-L\({}_{1000c}\)[73] & K400 & 16 & 2828\(\times\)3 & 218 & 74.4 & 94.6 \\ ST-MAE-L\({}_{1000c}\)[25] & K400* & 16 & 598\(\times\)3 & 304 & 72.1 & 93.9 \\ ST-MAE-L\({}_{1000c}\)[25] & K600* & 16 & 598\(\times\)3 & 304 & 73.0 & 94.2 \\ ST-MAE-L\({}_{1000c}\)[25] & K700* & 16 & 598\(\times\)3 & 304 & 73.6 & 94.4 \\ VideoMAE-B\({}_{1000}\)[70] & K400* & 16 & 180\(\times\)6 & 87 & 69.7 & 92.3 \\ VideoMAE-B\({}_{2400c}\)[70] & & & & & & \\ VideoMAE-L\({}_{16000}\)[70] & & & & & & \\ VideoMAE-L\({}_{16000}\)[70] & & & & & & \\ VideoMAE-L\({}_{16000}\)[70] & & & & & & \\ VideoMAE-L\({}_{2400c}\)[70] & & & & & & \\ \hline
**UMT-B** & K710* & 8 & 180\(\times\)6 & 87 & 70.8 & 92.4 \\
**UMT-L** & & & & & & & \\
**UMT-L\({}_{1-400c}\) & - & & & & & & \\
**UMT-L\({}_{200c}\)** & K710* & 8 & 596\(\times\)6 & 304 & **74.7** & **94.7** \\ \hline \hline \hline \end{tabular}
\end{table}
Table 9:
### Multi-modality tasks
We further validate our model on two mainstream video-language tasks, including video-text retrieval (MSRVTT [83], DiDeMo [1], ActivityNet [39], LSMDC [64], MSVD [13] and Something-Something [41]) and video question-answering (ActivityNet-QA [92], MSRVTT-QA [81], MSRVT-MC [91] and MSVD-QA [81]).
**Zero-shot text-to-video retrieval.** Table 11 indicates that the UMT-B outperforms the top-performing models [72, 41, 15] by **0.9%**, **5.0%**, and **4.6%** R@1 on MSRVTT, DiDeMo, and ActivityNet, respectively. Moreover, our UMT-L achieves new state-of-the-art results among all the datasets, highlighting its remarkable robustness.
**Text-to-video retrieval.** Table 12 lists the fine-tuned results, where our UMT-L significantly outperforms previous methods pre-trained with large-scale pairs [54, 85, 76]. Specifically, our UMT-L achieves **58.8%** (+3.6%), **70.4%** (+9.2%), **66.8%** (+4.6%), **43.0%** (+9.0%), and **80.3%** (+21.9%) on MSRVTT, DiDeMo, ActivityNet, LSMDC, and MSVD, respectively. Besides, the strong results on the temporally-heavy SthSht V2 dataset (**73.3%** and **90.8%**) in Table 13 further supports our broad applicability.
**Video question-answering.** As shown in Table 14, our UMT outperforms the methods specifically designed for QA such as JustAsk [88], and achieves comparable performance with state-of-the-art models that pre-trained with large-scale pairs [89, 76, 87], which demonstrates its powerful capability of complex multimodal reasoning.
## 5 Conclusion
In this paper, we propose using the image foundation model as the unmasked teacher for masked video modeling.
\begin{table}
\begin{tabular}{l c|c c|c c|c c c|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**\#Pairs**} & \multicolumn{3}{c|}{**SSV2-label**} & \multicolumn{3}{c}{**SSV2-template**} \\ & & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline CLIP4Clip [55] & 400M & 43.1 & 71.4 & - & 77.0 & 96.6 & - & - & - & - \\ Singularity [41] & 17M & 47.4 & 75.9 & - & 77.6 & 96.0 & - & - & - & - \\ VINDLU [15] & 25M & 53.1 & 81.8 & - & 83.3 & **100** & - & - & **100** \\ \hline \multirow{3}{*}{**UMT-B**} & 5M & 63.1 & 87.1 & 92.3 & 87.3 & **100** & **100** & **100** \\ & 17M & 63.4 & 88.0 & 92.9 & 86.8 & 99.4 & **100** & **100** \\ & 25M & 64.2 & 88.2 & 92.7 & 87.9 & 99.4 & **100** \\ \hline \multirow{3}{*}{**UMT-L**} & 5M & 70.5 & 92.3 & 95.5 & 90.2 & 99.4 & **100** \\ & 17M & 73.1 & **93.2** & 96.4 & **90.8** & **100** & **100** \\ & 25M & **73.3** & 92.7 & **96.6** & **90.8** & **99.4** & **100** \\ \hline \hline \end{tabular}
\end{table}
Table 13: Text-to-video retrieval on the temporally-heavy SSV2-Label [41] and SSV2-Template datasets [41].
\begin{table}
\begin{tabular}{l c|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**\#Pairs**} & \multicolumn{3}{c|}{**MSRVTT**} & \multicolumn{3}{c|}{**DiDeMo**} & \multicolumn{3}{c|}{**ActivityNet**} & \multicolumn{3}{c|}{**LSMDC**} & \multicolumn{3}{c}{**MSVD**} \\ & & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline ClipBERT [42] & 5.4M & 22.0 & 46.8 & 59.9 & 20.4 & 48.0 & 60.8 & 21.3 & 49.0 & 63.5 & - & & - & - \\ Frozen [5] & 5M & 31.0 & 59.5 & 70.5 & 34.6 & 65.0 & 74.7 & & - & 15.0 & 30.8 & 39.8 & 33.7 & 64.7 & 76.3 \\ VIOLET [27] & 138M & 34.5 & 63.0 & 73.4 & 32.6 & 62.8 & 74.7 & & - & 16.1 & 36.6 & 41.2 & - & - \\ All-in-one [71] & 138M & 37.9 & 68.1 & 77.1 & 32.7 & 61.4 & 73.5 & 22.4 & 53.7 & 67.7 & - & - & - \\ LAVENDER [47] & 30M & 40.7 & 66.9 & 77.6 & 53.4 & 78.6 & 85.3 & - & - & 26.1 & 46.4 & 57.3 & 50.1 & 79.6 & 87.2 \\ Singularity [41] & 17M & 42.7 & 69.5 & 78.1 & 53.1 & 79.9 & 88.1 & 48.9 & 77.0 & 86.3 & - & & - & - \\
**OmmVIL** [72] & 17M & 47.8 & 74.2 & 83.8 & 52.4 & 79.5 & 85.4 & - & - & - & - & - \\ VINDLU [15] & 25M & 46.5 & 71.5 & 80.4 & 61.2 & 85.8 & 91.0 & 55.0 & 81.4 & 89.7 & - & - & - \\ CLIP4Clip [55] & 400M & 44.5 & 71.4 & 81.6 & 42.8 & 68.5 & 79.2 & 40.5 & 72.4 & 83.4 & 21.6 & 41.8 & 49.8 & 46.2 & 76.1 & 84.6 \\ CLIP-ViP [85] & 500M & 54.2 & 77.2 & 84.8 & 50.5 & 78.4 & 87.1 & 53.4 & 81.4 & 90.0 & 29.4 & 50.6 & 59.0 & - & - \\ InternVideo [76] & 646M & 55.2 & 79.6 & 87.5 & 57.9 & 82.4 & 88.9 & 62.2 & 85.9 & 93.2 & 34.0 & 53.7 & 62.9 & 58.4 & 84.5 & 90.4 \\ \hline \multirow{3}{*}{**UMT-B**} & 5M & 46.3 & 72.7 & 82.0 & 54.8 & 83.0 & 89.0 & 52.1 & 80.5 & 89.6 & 30.3 & 51.8 & 61.4 & 67.0 & 92.7 & 96.7 \\
**UMT-B** & 17M & 50.6 & 75.4 & 83.5 & 60.8 & 85.1 & 91.0 & 56.1 & 82.5 & 91.2 & 32.3 & 54.5 & 61.9 & 70.8 & 93.7 & 96.6 \\
25M & 51.0 & 76.5 & 84.2 & 61.6 & 86.8 & 91.5 & 58.3 & 83.9 & 91.5 & 32.7 & 54.7 & 63.4 & 71.9 & 94.5 & 97.8 \\ \hline \multirow{3}{*}{**UMT-L**} & 5M & 53.3 & 76.6 & 83.9 & 59.7 & 84.9 & 90.8 & 58.1 & 85.5 & 92.9 & 37.7 & 60.6 & 67.3 & 76.9 & 96.7 & 98.7 \\ & 17M & 56.5 & 80.1 & **87.4** & 66.6 & 89.9 & **93.7** & 66.6 & 88.6 & 94.7 & 41.4 & 63.8 & 72.3 & 78.8 & 97.3 & 98.8 \\ \cline{1-1} & 25M & **58.8** & **81.0** & **87.1** & **70.4** & **90.1** & 93.5 & **66.8** & **89.1** & **94.9** & **43.0** & **65.5** & **73.0** & **80.3** & **98.1** & **99.0** \\ \hline \hline \end{tabular}
\end{table}
Table 12: Text-to-video retrieval on MSRVTT, DiDeMo, ActivityNet,
Besides, we present a progressive pre-training framework for building environmentally friendly video foundation models, which handles both scene-related and temporal-related actions, as well as complex video-language understanding. We hope that our simple, scalable, and reproducible framework will facilitate further research on video foundation models for future AI systems.
|
2304.13187 | AI-assisted coding: Experiments with GPT-4 | Artificial intelligence (AI) tools based on large language models have
acheived human-level performance on some computer programming tasks. We report
several experiments using GPT-4 to generate computer code. These experiments
demonstrate that AI code generation using the current generation of tools,
while powerful, requires substantial human validation to ensure accurate
performance. We also demonstrate that GPT-4 refactoring of existing code can
significantly improve that code along several established metrics for code
quality, and we show that GPT-4 can generate tests with substantial coverage,
but that many of the tests fail when applied to the associated code. These
findings suggest that while AI coding tools are very powerful, they still
require humans in the loop to ensure validity and accuracy of the results. | Russell A Poldrack, Thomas Lu, Gašper Beguš | 2023-04-25T22:59:01Z | http://arxiv.org/abs/2304.13187v1 | # AI-assisted coding: Experiments with GPT-4
###### Abstract
Artificial intelligence (AI) tools based on large language models have acheived human-level performance on some computer programming tasks. We report several experiments using GPT-4 to generate computer code. These experiments demonstrate that AI code generation using the current generation of tools, while powerful, requires substantial human validation to ensure accurate performance. We also demonstrate that GPT-4 refactoring of existing code can significantly improve that code along several established metrics for code quality, and we show that GPT-4 can generate tests with substantial coverage, but that many of the tests fail when applied to the associated code. These findings suggest that while AI coding tools are very powerful, they still require humans in the loop to ensure validity and accuracy of the results.
artificial intelligence software engineering reproducibility
## 1 Introduction
Recent developments in artificial intelligence, particularly through large language models, have enabled the automated generation of computer code (Chen et al. 2021; Bubeck et al. 2023). In particular, GPT-4 has enabled human-level performance on a set of coding challenges that are outside of the training set of the model (Bubeck et al. 2023). In addition, automated coding assistants (particularly Github Copilot) have become integrated into commmon development environments and are widely used, with some evidence that they can significantly improve coding productivity. The performance of these models is also raising important questions regarding coding education, given that the current models can easily complete most coding problem sets using in introductory programming courses (Finnie-Ansley et al. 2022).
In the present paper we explore some of the implications of AI-assisted coding using GPT-4, in a more qualitative way than previous benchmarking assessments. First we examine the experience of interactive coding and debugging using the ChatGPT interface to GPT-4 on a set of data science coding problems. This experiment is meant to approximate the experience of a researcher with minimal expertise in prompt engineering, assessing the success and amount of effort
required to perform these coding tasks. Second, we assess the ability of GPT-4 (using the OpenAI API) to refactor and improve the quality of existing code. This experiment is meant to assess the degree to which AI coding assistants might improve coding quality when used by researchers. Third, we assess the ability of GPT-4 to write tests for its own code, using a set of test prompts from several scientific domains. We conclude with an overall assessment of the utility of AI coding assistants for scientific researchers.
A fully reproducible workflow for this manuscript is available at [https://github.com/poldrack/ai-coding-experiments](https://github.com/poldrack/ai-coding-experiments).
## 2 Coding with GPT-4
Our first set of experiments examined the ability of GPT-4 (via the ChatGPT interface) to generate usable code for a set of data science problems. The prompts were generated manually (by author RP) and are listed in Appendix 1. Each prompt was submitted in a separate chat session; the human examined the resulting code, and issued additional prompts to try to fix problems. If the human was not able to obtain working code within about 5 minutes of effort or less, the problem was deemed unsolved. The results of this experiment are primarily qualitative and subjective, but are meant to index the degree to which GPT-4 is a useful tool for a researcher with minimal prompt engineering skills.
Figure 1 shows the proportion of successful outcomes as a function of the number of prompts required. 72% (23/32)) of attempts were successful in relatively quickly solving the problem; 37.5% (12/32) were successful on the first prompt. In cases where additional prompting was required, a common problem was the use of outdated functions or datasets from Python libraries.
The causes of unsuccessful outcomes were varied. In some cases it was due to the outdated nature of the GPT-4 training data. For example, in one case (prompt #12) ChatGPT could not successfully implement a solution that was compatible with the latest version of PyTorch. In another case (prompt #27), the labels used to query the NOAA API were incorrect, and the correct labels were not easily identifiable upon further examination. In other cases, it was not immediately clear what was wrong with the code, and the researcher did not dig deeply enough to identify the root cause.
One of the examples highlights the continuing challenges that ChatGPT has with mathematical processing (as outlined by Bubeck et al. (2023)). Prompt #4 asked ChatGPT to generate code to simulate a drift diffusion model (a common model for response times in cognitive psychology) and then fit the model parameters using the EZ-diffusion model (Wagemmakers, Van Der Maas, and Grasman 2007), which is a closed-form solution to estimating these parameters using response time and accuracy statistics. The initial code generated by ChatGPT attempted to fit a diffusion model through numerical optimization. After being prompted to generate a closed-form solution based on the original paper, ChatGPT did so, but the formula bore little resemblance to the actual formula from the paper. This is an example of a "hallucination" which is commonly seen with LLMs (Ji et al. 2023), and highlights the ongoing need for automatically generated code to be validated by a human.
Figure 1: Proportion of successful code outcomes as a function of number of prompts. NS: not successful.
Another example also highlights the need for sophisticated domain knowledge in assessing the outputs of ChatGPT. Prompt #18 asked ChatGPT to implement a _hurdle model_, which is a statistical model for zero-inflated count data that combines a binary model with count model using a truncated distribution. In general, this model is fit by performing maximum likelihood estimation on the combined likelihoods of the binary and count models. ChatGPT generated a solution that separately estimated a binary model and a count model, and then combined the predictions from the two models; this incorrect approach can be found in a number of examples from Github. This model fit the test data nearly as well as a reference implementation of the hurdle model (implemented in R), but is incorrect in comparison to the reference implementation. This highlights the need for close attention to detail in the implementation of any numerical methods, as incorrect implementations present on Github can result in incorrect outcomes.
### Refactoring code using GPT4
The quality of research code is important for scientific reproducibility and transparency, as well as for code reusability and maintainability. In our initial explorations of code generated by GPT-4, we noted that the automatically generated code appeared to be substantially more readable than research code that we have encountered (or written) in the past. This led us to ask whether GPT-4 could improve the quality of existing code through _refactoring_(Fowler, 2019), by which we mean modifying code to make it more readable or maintainable without changing its behavior.
To assess this, we downloaded more than 2000 examples of Python code from Github using the Github Code Search API. Only one code file was allowed from any single repository. We further filtered these files, based in part on the criteria used by Chen et al. (2021) to select code for training of the OpenAI Codex model. Exclusion criteria included:
* Presence of non-Python code (as guessed by guesslang.Guess())
* Presence of non-English language in the code (according to pycld2.detect())
* Presence of potential markers of automatic generation (e.g. strings such as "autogenerated", "generated by Django", etc)
* y:" pattern)
* Lack of at least one function definition
* Number of GPT tokens greater than 2500 or less than 200
* Maximum line length > 1000
* Mean line length > 100
* Maximum file size > 1MB
The 274 code files passing these criteria were submitted to further analysis. Analysis of syntax errors and coding style was performed using the flake8 static code analysis package. Files were further excluded on the basis of any syntax errors identified by flake8.
The number of warning and error messages emitted by the flake8 limter was substantially reduced for the refactored code compared to the original (median 0.23 messages/line for original vs 0.09 messages/line for refactored, Cohen's d = 0.50; see Figure 2). While tools exist to perform automatic reformating to ensure standard-compliance, this shows that GPT-4 generates code that is substantially more standards-compliant than the average programmer; given that the files sampled from Github were heavily filtered, this result is probably an underestimate compared to the broader population of Python code on Github. Figure 3 provides an overview of which errors were most common in the original code and how their prevalence changed after refactoring.
We further examined a set of code quality metrics, which were computed using the radon Python package. Metrics extracted from these outputs for further analysis included:
* Logical lines of code
* Number of comments
* Mean cyclomatic complexity (a measure of the number of execution paths)
* Maintainability index (a holistic metric for code maintainability, based on a composite of several metrics including Halstead volume Halstead (1977), cyclomatic complexity, lines of code, and % of comments)
* Halstead "difficulty" (a metric of how difficult the code is to read, based on the number of distinct operators and the ratio of total number of operands to number of distinct operands)
* Halstead "bugs" (a metric meant to estimate the number of bugs in delivered code)
Comparisons between metrics for the original and refactored code are shown in Figure 4, and means, effect sizes, and p-values for the comparison (using a paired t-test) are shown in Table 1. Each of these metrics differed between the origin and refactored code (p <.05 after false discovery rate correction across all hypotheses). However, the effect sizes were all in the small to medium range, with Cohen's d values ranging from 0.13 to 0.33.
Figure 3: Prevalence of individual Flake8 warning/error codes for original github files and refactored files. Values are sorted by prevalence in the original github files.
Figure 2: Number of Flake8 messages (per line of code) for original github files and refactored files.
Figure 4: Code quality metrics computed for each original file and its refactored version.
### Automatically generated code and tests
Given the importance of validating AI-generated code using software tests, we next assessed the ability of GPT-4 to generate tests for its own code. We first used GPT-4 to generate 20 prompts for each of 5 different areas of research, using the following prompt: "Please generate 20 prompts to ask a chatbot to create Python code to solve a variety of {statistical and data science, physics, theoretical computer science, ecology, economics} problems."
For each of the generated problems, we created a prompt to generate the code along with tests for the resulting code, such as the following:
Write a Python program to simulate predator-prey interactions using the Lotka-Volterra equations, given the initial populations, growth rates, and interaction coefficients. Please embed the code within an explicit code block, surrounded by triple-backtick markers. Generate realistic values for any examples, and do not use input() commands. Create code that is modular and well-commented. Then, generate a set of pytest tests that exercise each of the functions, embedded in a separate code block.
We first examined whether each generated script would execute without failure; of the 100 generated scripts, 97 executed successfully. We then examined test coverage using the Coverage.py tool. As shown in Figure 5, the majority of files had test coverage of 100%, with 94% showing a coverage of at least 50% ad a minimum coverage of 40%. There was a weak but statistically significant negative relationship between the number of statements and the level of coverage (Spearman r = -0.23, p =.02). A median of three tests were generated for each file.
Running the tests required minor automated modification to the code, since the tests required importing the relevant functions but the names of the output files were not known to the LLM. After fixing this issue, 45 of the 100 tests completed successfully. The most common source of test failure was failure of a value assertion (47/100); in these cases, it was not immediately possible to determine whether the test or code was incorrect, without additional debugging. In ten cases, the test failed because the code did not raise the error expected by the test. In six cases, other errors were raised (Type, Index, Value, and ZeroDivision errors).
In summary, our analyses of automatic test generation demonstrate that GPT-4 can successfully generate testing code with good coverage, but that these tests fail often, requiring additional debugging to determine the root cause of the test failure.
### Conclusions
Our analyses demonstrate that GPT-4 has strong Python code generation abilities, confirming the results of Bubeck et al. (2023). At the same time, the prevalence of errors in the generated code suggests that humans must remain in the loop in order to ensure the accuracy of any resulting code. Our interactive prompting experiment showed that a relatively novice prompt engineer can successfully solve coding problems within a small number of prompts the majority of the time; however, a sizeable minority of problems would have required significant human debugging in order to solve. An open question is whether re-prompting in a new context may have led to more successful outcomes in these cases.
Comparisons of Python code refactored using GPT-4 to the original code demonstrated that GPT-4 improved the quality of the code, at least as measured by common metrics of software quality and standards compliance. It should be emphasized that these results do not assess the accuracy of the code; rather, they suggest that GPT-4 can help programmers achieve code that is cleaner and potentially more maintainable than the original. Given that GPT-4 refactoring did not eliminate all standards compliance issues, the combination of GPT-4 with other code formatting tools (such as black) would likely result in even further improvements.
The examination of test generation by GPT-4 demonstrated that it was able to generate tests with a high degree of test coverage, but those tests failed a majority of the time. Such test failures require additional human effort to diagnose,
\begin{table}
\begin{tabular}{l r r r r} \hline \hline & Median (github) & Median (GPT-4) & Cohens d & P-value (FDR) \\ \hline Maintainability index & 70.285 & 74.092 & 0.33 & \textless{}.001 \\ Halstead bugs & 0.081 & 0.068 & 0.13 & 0.045 \\ Halstead difficulty & 3.214 & 3.089 & 0.16 & 0.012 \\ flake8 messages per line & 0.237 & 0.089 & 0.50 & \textless{}.001 \\ Mean cyclomatic complexity & 3.462 & 3.284 & 0.18 & 0.006 \\ Number of comments & 7.81 & 7.086 & 0.08 & 0.196 \\ Logical lines of code & 46.022 & 43.372 & 0.27 & \textless{}.001 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Metric comparisons between original and refactored code.
since it is not immediately clear whether the failure is due to inaccurate code, and inaccurate test, or both. These results suggest that while GPT-4 is a very useful tool for generating testing frameworks for new code, the specific test examples should be designed and implemented by a human with domain expertise to ensure that the tests are accurately assessing the intended behavior for the code.
There has been substantial speculation regarding the continued role of human programmers in the face of AI coding tools. The present results suggests that even with the latest generation of AI systems (i.e. GPT-4), human involvement is essential to ensure validity and accuracy of the resulting code. This seems to be especially the case when programming of mathematical concepts is involved. The lack of confidence calibration of tools like GPT-4 means that they will present answers in the same way regardless of the degree of support for the answer.
The prompts used in the present research are almost certainly suboptimal, and thus may be underestimating the potential performance of the model. For example, recent work has shown that chain-of-thought prompting can substantially improve the perfomance of LLMs on complex problems requiring reasoning (Prystawski, Thibodeau, and Goodman 2022; Wei et al. 2023), and this seems to extend to coding as well1. Further work is needed to examine the degree to which such improved prompting techniques might improve the performance of LLMs on complex coding problems, and at this point our results should be taken as a lower bound on the performance of these models.
Footnote 1: [https://martinfowler.com/articles/2023-chatgpt-xu-hao.html](https://martinfowler.com/articles/2023-chatgpt-xu-hao.html)
### Acknowledgments
Thanks to Mark Chen, David Coats, and Noah Goodman for helpful comments and discussion during the development of this work.
|
2309.02076 | MeerKAT HI line observations of the nearby interacting galaxy pair NGC
1512/1510 | We present MeerKAT HI line observations of the nearby interacting galaxy pair
NGC 1512/1510. The MeerKAT data yield high-fidelity image sets characterised by
an excellent combination of high angular resolution (~20") and and sensitivity
(~0.08 Msun/pc^2), thereby offering the most detailed view of this well-studied
system's neutral atomic hydrogen content, especially the HI co-located with the
optical components of the galaxies. The stellar bulge and bar of NGC 1512 are
located within a central HI depression where surface densities fall below 1
Msun/pc^2, while the galaxy's starburst ring coincides with a well-defined HI
annulus delimited by a surface density of 3 Msun/pc^2. In stark contrast, the
star-bursting companion, NGC 1510, has its young stellar population precisely
matched to the highest HI over-densities we measure (~12.5 Msun/pc^2). The
improved quality of the MeerKAT data warrants the first detailed measurements
of the lengths and masses of the system's tidally-induced HI arms. We measure
the longest of the two prominent HI arms to extend over ~27 kpc and to contain
more than 30% of the system's total HI mass. We quantitatively explore the
spatial correlation between HI and far-ultraviolet flux over a large range of
HI mass surface densities spanning the outer disk. The results indicate the
system's HI content to play an important role in setting the pre-conditions
required for wide-spread, high-mass star formation. This work serves as a
demonstration of the remarkable efficiency and accuracy with which MeerKAT can
image nearby systems in HI line emission. | E. Elson, M. Głowacki, R. Deane, N. Isaacs, X. Ndaliso | 2023-09-05T09:21:33Z | http://arxiv.org/abs/2309.02076v1 | # MeerKAT HI-line observations of the nearby interacting galaxy pair NGC 1512/1510
###### Abstract
We present MeerKAT H i line observations of the nearby interacting galaxy pair NGC 1512/1510. The MeerKAT data yield high-fidelity image sets characterised by an excellent combination of high angular resolution (\(\sim 20^{\prime\prime}\)) and and sensitivity (\(\sim 0.08\) M\({}_{\sun}\) pc\({}^{-2}\)), thereby offering the most detailed view of this well-studied system's neutral atomic hydrogen content, especially the H i co-located with the optical components of the galaxies. The stellar bulge and bar of NGC 1512 are located within a central H i depression where surface densities fall below 1 M\({}_{\sun}\) pc\({}^{-2}\), while the galaxy's starburst ring coincides with a well-defined H i annulus delimited by a surface density of 3 M\({}_{\sun}\) pc\({}^{-2}\). In stark contrast, the star-bursting companion, NGC 1510, has its young stellar population precisely matched to the highest H i over-densities we measure (\(\sim 12.5\) M\({}_{\sun}\) pc\({}^{-2}\)). The improved quality of the MeerKAT data warrants the first detailed measurements of the lengths and masses of the system's tidally-induced H i arms. We measure the longest of the two prominent H i arms to extend over \(\sim 275\) kpc and to contain more than 30\(\%\) of the system's total H i mass. We quantitatively explore the spatial correlation between H i and far-ultraviolet flux over a large range of H i mass surface densities spanning the outer disk. The results indicate the system's H i content to play an important role in setting the pre-conditions required for wide-spread, high-mass star formation. This work serves as a demonstration of the remarkable efficiency and accuracy with which MeerKAT can image nearby systems in H i line emission.
## 1 Introduction
Interactions between galaxies serve as one of the strongest drivers of their evolution (Toomre and Toomre, 1972; Gehrz et al., 1983; Farouki and Shapiro, 1982; Moore et al., 1996; Hopkins et al., 2008). In the hierarchical formation scenario, interactions are ubiquitous and are therefore important to study in order to understand this crucial aspect of the evolutionary process. Neutral atomic hydrogen (H i) is a particularly useful tracer of galaxy interactions. Galaxies typically have H i disks that are a factor \(\gtrsim 2\) larger than their stellar disks (Swaters et al., 2002; Toribio et al., 2011; Wang et al., 2016), making H i more susceptible to long-range tidal forces. Furthermore, given the long rotation periods of the outer disk, the H i distribution and kinematics serve as a long-lasting record of the interaction history. During interactions, significant fractions of H i can be removed from regions centred on the optical disks of galaxies, and transported to much larger radii. In the local Universe there are many examples of multiple systems with disturbed gas morphologies consisting of long bridges and tails (e.g., Kregel and Sancisi, 2001; Nammba et al., 2021), as well as being accompanied by gas-rich companions such as dwarf galaxies and/or H i cloud complexes (Sancisi et al., 2008). These all serve as evidence of recent or ongoing interaction processes.
The enhanced imaging capabilities of new radio telescopes warrants the study of H i in galaxies spanning a range of intrinsic properties and environments. In this work, we utilise the unique imaging capabilities of the MeerKAT telescope to map the H i content of the nearby interacting galaxy pair NGC1512/1510. The system has been extensively studied at several wavelengths and is well-known to have a very extended H i distribution dominated by two tidally-induced H i arms. Given the system's southern declination, it is ideally-suited to be observed with MeerKAT. In this work, we produce new H i maps that benefit from an unprecedented combination of high angular resolution and surface brightness sensitivity, and use them to improve our understanding of the distribution of H i in NGC 1512/1510.
Optically, NGC 1512 is classified as type SB(a)a by de Vaucouleurs et al. (1976). It consists of a prominent bulge and bar enclosed in a starburst ring of major axis \(\sim 16\) arcsec. Deep optical imaging carried out by Hawarden et al. (1979) revealed the presence of extended arms and filaments in NGC 1512, and was taken to indicate the tidal effects of an ongoing gravitational interaction with NGC 1510 - a blue compact dwarf at a projected angular distance of \(\sim 5\) arcmin from NGC 1512. Single-dish H i observations of NGC1512/1510 were made by Hawarden et al. (1979) using the Parkes radio telescope. Their maps showed the H i emission to be centred on NGC 1512 and to extend to a radius of about 60 kpc. The first H i interferometric observations of the system we car
ried out by Koribalski & Lopez-Sanchez (2009) using the Australia Telescope Compact Array (ATCA). Their images offered the first spatially-resolved views of some parts of the system's extended H i distribution. In this work, we use our new data to generate a refined view of the spatial distribution of H i in NGC 1512/1510.
Ultra-violet observations of NGC 1512/1510 (Gil de Paz et al., 2007) also offer clear evidence of a dramatic interaction history. NGC 1512 has a structured, UV-bright disk extending far beyond its optical disk (Thilker et al., 2007). The UV disk highlights the high level of recent, high-mass star formation activity occurring due to the tidal interaction between the two galaxies. For \(\gtrsim 200\) stellar clusters in the extended disk, Koribalski & Lopez-Sanchez (2009) showed H i to be a good tracer of star formation activity, and stated that regions of higher H i surface density have higher star formation rates. Their result demonstrates how, even for interacting systems such as NGC 1512/1510, star formation can be linked to H i content over a large range of spatial scales. Similar conclusions have been drawn by other authors (e.g., Bigiel et al., 2010) for other nearby galaxies. Bacchini et al. (2019) presented empirical star formation laws of disk galaxies based on measurements of the volume densities of their gas and star formation rates. They announced the surprising discovery of an unexpected correlation between the volume densities of H i and star formation rate. In this work, we use our new high-resolution H i images of NGC 1512/1510 to carry out a quantitative study of the coincidence between H i and recent, high-mass star formation.
The layout of this paper is as follows. In Section 2 we present the details of the data acquisition and reduction processes. We discuss our assumed distance measure for the NGC 1512/1510 system in Section 3. Our various H i data products are presented and discussed in Section 4. In this section we also carry out a detailed study of the distribution of H i in NGC 1512/1510. A study of the links between H i content and star formation is given in Section 5. Finally, a summary of this work is offered in Section 6.
## 2 Data reduction and processing
The NGC 1512/1510 system was observed on 18 and 24 May 2019 as part of the 2019 call for 'open time' observing proposals on the MeerKAT radio telescope. The reader is referred to Jonas & MeerKAT Team (2016), Camilo et al. (2018) and Mauch et al. (2020) for detailed information regarding the telescope. Of the 64 MeerKAT dishes, 58/60 were used on 18/24 May, with each day yielding 5 hours 58 minutes of on-target observations. The data were taken using MeerKAT's L-band receiver which spans the frequency range 900 - 1670 MHz. The full bandwidth was split into 4096 channels each of width 208.98 kHz. At \(z=0\) the corresponding velocity width of a channel is \(dv\approx 44\) km s\({}^{-1}\). At 1420 MHz the MeerKAT field of view is approximately a degree in diameter. Thus, a single pointing centred on the optical position of NGC 1512 was sufficient to observe the entire NGC 1512/1510 system. J0440-4333 was used as a time-varying complex gain calibrator and was observed for slightly less than 3 minutes every 15 minutes. J0408-6545 was used as an absolute flux density and bandpass calibrator and was observed for approximately 10 minutes every 2.5 hours. Table 1 summarises the main aspects of the MeerKAT data used in this study.
From the full data set, a frequency window of approximately 30 MHz (150 channels) centred on 1415 MHz was split off and used for the purposes of this study. Calibration of the MeerKAT data was carried out using the processMeerKAT software developed and maintained at the Inter-University Institute for Data Intensive Astronomy. The software uses CASA (McMullin et al., 2007) tasks and helper functions to carry out the calibration. Standard cross-calibration, including delay calibration, bandpass calibration, and gain calibration, was applied to the 30 MHz data set. The excellent \(uv\) coverage offered by MeerKAT is shown in Figure 1. Baselines shorter than 1 km constitute 52.3 per cent of the \(uv\) data. The maximum baseline length is \(\sim 7.6\) km. Throughout this work, The Cube Analysis and Rendering Tool for Astronomy (CARTA, Comrie et al., 2020) was used to inspect the data in various ways.
The CASA task uvcontsub was used to subtract a 2nd order polynomial from the line-free channels of the \(uv\) data. The tclean task was used with a Briggs weighting scheme to image each channel of the continuum-subtracted cube. We first produced a dirty cube by imaging with zero clean iterations. The rms of the noise in a line-free channel of the dirty cube was measured and then used to generate a mask cube in which all emission below/above \(3\times\) rms was set to 0/1. A final cube was then generated by re-imaging the \(uv\) data and using the mask cube to clean down to a level of \(0.3\times\) rms. The clean task also restored the cleaned cube with a Gaussian approximation of the central lobe of the point spread function. Finally, we applied a primary beam correction to the cube. We experimented with a range of weighting parameters when producing our data cube. For this study, we selected the cube based on a robust value \(R=1.5\) as being the one that offers a
\begin{table}
\begin{tabular}{c c c} \hline parameter & value & units \\ \hline Target & NGC 1512/1510 & \\ Observation dates & 18, 24 May 2019 & \\ Total observation time & 15.8 & h \\ Total on-target time & 12 & h \\ Frequency range & \(900-1670\) & MHz \\ Central frequency & 1415 & MHz \\ Number of channels & 4096 & \\ Frequency resolution & 208.98 & kHz \\ Velocity resolution & 44.1 & km s\({}^{-1}\) \\ Bandpass/flux density calibrator & J0408-6545 & \\ Gain calibrator & J0440-4333 & \\ rms in a channel & 0.11 & mJy/bm \\ Angular resolution & \(26.8\times 20.1\) & \({}^{\prime\prime}\times^{\prime\prime}\) \\ Spatial resolution & \(1.52\times 1.14\) & kpc\(\times\)kpc \\ Pixel scale & 7.5 & \({}^{\prime\prime}\) \\ \hline \end{tabular}
\end{table}
Table 1: Details of MeerKAT observations.
Figure 1: Number of \(uv\) samples per 50 m baseline interval for the full 12 hours of on-target time. The number of samples per bin is given in units of \(10^{8}\). Baselines shorter than 1 km constitute 52.3 per cent of the \(uv\) coverage. The maximum baseline length is \(\sim 7.6\) km.
good balance between dynamic range and spatial resolution. The cube has a spatial resolution \(26.8^{\prime\prime}\times 20.1^{\prime\prime}\) and a pixel scale of \(7.5^{\prime\prime}\). The rms of the flux in a line-free channel is 0.11 mJy beam\({}^{-1}\) (corresponding to an H i mass surface density of \(\sim 0.08\) M\({}_{\sun}\) pc\({}^{-2}\)).
## 3 Distance estimate
We require an accurate distance measure for the NGC 1512/1510 system in order to convert observed fluxes to luminosities. The system was observed as part of the Legacy ExtraGalactic UV Survey (LEGUS, PI Calzetti, GO-13364). Sabbi et al. (2018) derived the distance from the tip of the red giant branch (TRGB) for each LEGUS galaxy. Sabbi et al. (2018) warn that due to severe star crowding in the central pointing of NGC 1512 it was not possible to determine the luminosity of the TRGB from the data. Therefore, they assumed the average values estimated from their south west pointing of NGC 1512 and NGC 1510. The TRGB distance estimate for NGC 1512 presented by Sabbi et al. (2018) is \(D=11.7\pm 1.1\) Mpc.
As part of the HIPASS Bright Galaxy Catalogue, Koribalski et al. (2004) present a systemic velocity measure of \(V_{\rm sys}=898\) km s\({}^{-1}\) for the NGC 1512/1510 system based on its integrated HI spectrum. Converting this velocity measure directly to a distance using Hubble's Law with \(H_{0}=74.03\) km s\({}^{-1}\) Mpc\({}^{-1}\)(Riess et al., 2019) yields a Hubble flow distance of 12.13 Mpc, which is within the error bounds of the TRGB distance estimate from Sabbi et al. (2018). However, closer agreement is achieved if the systemic velocity and equatorial coordinates of NGC 1512 are entered into the Cosmicflows-3 Distance-Velocity Calculator (Kourkchi et al., 2020) which takes into account relationships between the distances and velocities of galaxies based on smoothed versions of the velocity fields derived by the Cosmicflows program. This calculator yields a distance measure of 11.84 Mpc, which is very close to the TRGB estimate of \(11.7\pm 1.1\) Mpc from Sabbi et al. (2018). Hence, in this work, we adopt the Sabbi et al. (2018) distance measure for the entire NGC 1512/1510 system. We propagate the 1.1 Mpc uncertainty estimate to all quantities based on it.
## 4 Results
The NGC 1512/1510 system has a remarkable H i morphology. Our MeerKAT observations benefit from excellent \(uv\) coverage, yielding H i data products that are of high spatial resolution and which are also highly sensitive over a large dynamic range to structures spanning a range of angular scales. They provide us with a new detailed view of the H i in this interacting galaxy pair. In the sections that follow, we present and discuss several detailed measurements of the H i content of the system.
### Channel maps
Channel maps for our H i data cube are shown in Figure 2. In order to present the full dynamic range of the cube, we have boosted it by an additive constant of 1.12 M\({}_{\sun}\) pc\({}^{-2}\), thereby making all pixels positive (with a minimum mass surface density equal to 0 M\({}_{\sun}\) pc\({}^{-2}\)). Then, in order to highlight fainter emission, we have applied a square root colour stretch in Figure 2 to the boosted mass surface densities. The colour bar at the top of Figure 2 therefore represents H i mass surface density in units of M\({}_{\sun}\) pc\({}^{-2}\), colour stretched in the above-mentioned manner. Each channel spans a velocity range \(dv\approx 44\) km s\({}^{-1}\). The full velocity range spanned by the system's emission is \(\sim 310\) km s\({}^{-1}\). Despite the interaction between NGC 1512 and NGC 1510, the overall H i kinematics of the system are dominated by rotation.
### Global profile
Figure 3 shows as a black histogram the spatially-integrated H i profile of the NGC 1512/1510 system. Despite the coarse velocity resolution, a double-born profile is clearly visible. Our cube yields an integrated H i flux density \(S_{\rm int}=301.8\) Jy km s\({}^{-1}\). For a distance \(D=11.7\pm 1.1\) Mpc, this corresponds to a total H i mass \(M_{\rm HI}=9.74^{11.7}_{6.01}~{}\times~{}10^{9}\) M\({}_{\sun}\). Also shown in Figure 3, as a grey histogram, is the HIPASS integrated H i spectrum. To facilitate a direct comparison to our data, the HIPASS spectrum has been re-sampled from its native 13.2 km s\({}^{-1}\) velocity resolution to the 44 km s\({}^{-1}\) channels of our MeerKAT data cube. The integrated H i flux density from the HIPASS data is \(S_{\rm int}=259.3\pm 17.4\) Jy km s\({}^{-1}\)(Koribalski et al., 2004). The ATCA imaging from Koribalski & Lopez-Sanchez (2009) (based on a total of \(\sim 75\) hours of on-source time) yields \(S_{\rm int}=268\) Jy km s\({}^{-1}\).
### H i total intensity map
Our primary data product in this paper is a new H i total intensity map for NGC 1512/1510, possibly serving as the best such map ever produced for the system. We generated the map by spatially smoothing the H i data cube to a resolution twice that of its native resolution (i.e., to \(52^{\prime\prime}\times 40^{\prime\prime}\)), applying a \(3\sigma\) cut in order to remove the Gaussian-distributed noise, and then using the surviving voxels of the smoothed cube to produce a mask that was then applied to the full-resolution cube, which was then spectrally integrated. Figure 4 presents two views of the H i total intensity map. The top image uses a linear colour stretch of the H i surface densities to highlight the dense ridges of the extended spiral arms, while the bottom image uses a square-root stretch to show more detail at lower mass surface densities.
Immediately striking is the variety of structures present in our map. It reveals a highly complex H i morphology spanning surface densities from \(\sim 0.25\) M\({}_{\sun}\) pc\({}^{-2}\) to \(\sim 12.5\) M\({}_{\sun}\) pc\({}^{-2}\), making for a dynamic range of approximately 50. High-intensity H i emission is restricted mainly to the system's two prominent spiral arms which consist of well-defined central H i ridges with surface densities \(\gtrsim 3\) M\({}_{\sun}\) pc\({}^{-2}\), as well as compact clumps with H i surface densities \(\gtrsim 8\) M\({}_{\sun}\) pc\({}^{-2}\) spread over regions extending several kiloparsecs. The disk-like inner H i distribution centred on NGC 1512 is dominated by the system's two prominent spiral arms which can now be traced inwards all the way to the starbust ring of NGC 1512. Our map shows the eastern H i arm to be significantly longer (in a southern direction) than seen in the ATCA images presented in Koribalski & Lopez-Sanchez (2009). The other H i arm that extends south and then wraps around to the west is clearly seen to bifurcate at a declination of approximately \(-43^{\circ}30^{\prime}\). After this bifurcation, there is a thin H i bridge extending to the northwest, beyond which the arm then wraps through another \(\sim 90^{\circ}\) all the way to a point directly north of the optical disk of NGC 1512.
Our map highlights the established fact that the system's H i content is distributed over a large area. The angular separation between the optical centres of the two galaxies is approximately 5 arcmin, which corresponds to a physical separation of \(\sim 17\) kpc assuming \(D=11.7\) Mpc. The H i extent of the system, however, is larger than 100 kpc. Using the Wang et al. (2016)
Figure 2: Channel maps of the MeerKAT H i data cube. In order to present the full dynamic range of the cube, we have boosted the cube by an additive constant of 1.12 M\({}_{\sun}\) pc\({}^{-2}\), thereby making all pixels positive (with a minimum mass surface density equal to 0 M\({}_{\sun}\) pc\({}^{-2}\)). Then, we have applied a square-root stretch to the boosted mass surface densities. The colour bar at the top therefore represents H i mass surface density in units of M\({}_{\sun}\) pc\({}^{-2}\), stretched in the above-mentioned manner. Our total intensity map shown in Fig. 4 shows the flux distribution using a linear stretch. The black contour in each panel is at a level of 1 M\({}_{\sun}\) pc\({}^{-2}\) in the original cube. The rms H i mass surface density of a line-free portion of the original cube is 0.08 M\({}_{\sun}\) pc\({}^{-2}\). The line-of-sight velocity corresponding to each channel is shown in the top left of each panel. The optical positions are of NGC 1512 and NGC 1510 are represented by the upper and lower black crosses, respectively. Shown as a grey-filled ellipse in the bottom left of each panel is the \(26.8^{\prime\prime}\times 20.1^{\prime\prime}\) restoring beam of the cube. The solid white bar in each panel represents an angular length of approximately 15 arcmin which corresponds to a physical length of 50 kpc for \(D=11.7\) Mpc. Tick marks along each axis are separated by 10 arcmin.
H i mass-size relation to convert our measured total H i mass of \(M_{\rm HI}=9.74\times 10^{9}\) M\({}_{\sun}\) to an H i diameter yields \(D_{\rm HI}\sim 56.9\) kpc. Hence, the observed distribution of H i is approximately twice as large as what is expected for its H i mass.
Three H i clouds are located in the southern part of our total intensity map. Zoomed-in views of these clouds are shown in Figure 5. The shape of each cloud is elongated in a direction that is consistent with the overall spiral structure of the NGC 1512/1510 system. This suggests the existence of the clouds to be as a direct result of the system's interaction history. The first two have similar surface density distributions, peaking at values of \(\sim 1.5\) M\({}_{\sun}\) pc\({}^{-2}\). The cloud in the south-west corner is likely a tidal dwarf galaxy in the process of forming. Its mass of \(2.59\times 10^{8}\) M\({}_{\sun}\) is close to 3 per cent of the system's H i mass, and its mass surface densities peak at \(\sim 7\) M\({}_{\sun}\) pc\({}^{-2}\). Its high-density core is separated from the optical centre of NGC 1512 by approximately 79 kpc. The appearance of the cloud's emission in the channel maps of our data cube shows its kinematics to be consistent with the overall circular rotation of the total H i component.
For completeness, we show the intensity-weighted mean H i velocity field of the system in Figure 6. In spite of the coarse velocity resolution of our H i data cube, the velocity field exhibits clear signs of regular rotation. We do not attempt to fit a dynamical model to the velocity field.
### Expected H i content
We can contextualise our measurement of the total H i mass for NGC 1512/1510 by comparing it to the sum of the H i masses we expect each galaxy to have. A typical galaxy has its H i mass correlated to its stellar mass. Denes et al. (2014) determine scaling relations between the logarithm of the observed H i masses of galaxies and their magnitudes in six different wavebands. For the \(B\)-band, the relation is
\[\log M_{\rm HI}=(2.89\pm 0.11)-(0.34\pm 0.01)M_{\rm B}. \tag{1}\]
From the Gil de Paz et al. (2007) data presented in Koribalski & Lopez-Sanchez (2009) we know the \(B\)-band apparent magnitudes of NGC 1512 and NGC 1510 to be \(11.08\pm 0.09\) mag and \(13.47\pm 0.11\) mag, respectively. Our assumed distance \(D~{}=~{}11.7\pm 1.1\) Mpc then yields respective absolute magnitudes of \(-19.26^{-19.54}_{-18.95}\) mag and \(-16.87^{-17.17}_{-16.54}\) mag. Inserting these magnitudes into Equation 1 yields \(9.43^{+9.84}_{-0.03}\) and \(8.62^{9.01}_{-8.24}\) as the logarithms of the H i masses (in units of M\({}_{\sun}\)) of NGC 1512 and NGC 1510, respectively. Hence, the expected combined H i mass for the two galaxies is \(3.16^{+7.96}_{-12.25}\times 10^{9}\) M\({}_{\sun}\). This estimate is a factor \(\sim 3\) lower than our measurement of \(M_{\rm HI}~{}=~{}9.74^{+11.17}_{-8.01}\times 10^{9}\) M\({}_{\sun}\) for the total H i mass. Denes et al. (2014) consider a galaxy to be anomalous in terms of its H i content only if it differs by more than a factor \(\sim 4\) from the mean relation. Nevertheless, it seems clear that NGC 1512/1510 has an excess of H i mass.
### Inner H i distribution
The unique combination of high spatial resolution and good surface brightness sensitivity offered by our new imaging provides us with the clearest views of the system's central H i content. Here, we compare the H i and stellar content of the central parts of the NGC 1512/1510 system. The top left panel of Figure 7 shows the inner part of our H i total intensity map with H i mass surface density levels of 1, 3, 5, 8 M\({}_{\sun}\) pc\({}^{-2}\) represented by the dotted, dash-dot, dashed and solid lines, respectively. The top right panel shows the Spitzer 3.6 \(\mu\)m view of the evolved stellar populations of NGC 1512 and NGC 1510. Our new H i imaging clearly shows the entire stellar bulge of NGC 1512 to be contained within a central H i depression consisting of surface densities below 1 M\({}_{\sun}\) pc\({}^{-2}\), spanning a total area of \(\sim 11.6\) kpc\({}^{2}\). The H i has presumably been depleted by high levels of star formation. However, in stark contrast in this regard is NGC 1510 - its stellar component is clearly co-located with an H i over-density of very high mass surface densities. In fact, its H i surface densities of \(\sim 12.5\) M\({}_{\sun}\) pc\({}^{-2}\) are the highest seen in our entire map. The enhanced H i content of this region is surely linked to the star-bursting nature of NGC 1510.
The bottom left panel of Figure 7 shows the GALEX dark-ultraviolet (FUV) view of the region. While optical images of NGC 1512/1510 typically detect only the starburst ring of NGC 1512, and NGC 1510, the FUV image reveals a much more complex morphology related to the interaction history of the two galaxies. In the FUV image, the starburst ring of NGC 1512 is very prominent. Comparing it to the H i map, it is clear that the entire ring is associated with an H i annulus that is delimited by a surface density of \(\sim 3\) M\({}_{\sun}\) pc\({}^{-2}\). The semi-major axis of this elliptical H i feature is \(\sim 165^{\prime\prime}\) (9.4 kpc) and the total H mass contained with it \(\sim 2.4\times 10^{8}\) M\({}_{\sun}\). The regions of the starburst ring with the highest FUV fluxes are associated with higher H i surface densities. A prominent arm-like feature is located in the region between the centres of NGC 1512 and NGC 1510. In the FUV map it is seen as a string of bright clumps spanning an azimuthal angle range of nearly 180 degrees. It is clear that this FUV arm is closely associated with a corresponding H i arm with mass surface densities greater than 5 M\({}_{\sun}\) pc\({}^{-2}\). Furthermore, the cores of the FUV clumps are associated with clumps of higher H i mass surface densities. The correspondence between FUV and H i flux in the NGC 1512/1510 system is therefore notable, and is investigated in more detail in Section 5.
We use Spitzer 24 \(\mu\)m imaging to trace the dust-obscured star formation rates of the galaxies. In the bottom right panel of Figure 7, we show the 24 \(\mu\)m image of the NGC 1512/1510 system. The nuclear ring of NGC 1512 is expected to contain a significant amount of dust, and is indeed very bright in the 24 \(\mu\)m image. However, its spatial extent is very similar to that seen in the FUV
Figure 3: Spatially-integrated H i flux density as a function of line-of-sight velocity for the NGC 1512/1510 system based on our MeerKAT imaging (black) and HIPASS imaging (grey). The HIPASS spectrum has been re-sampled from its native 13.2 km s\({}^{-1}\) velocity resolution to the 44 km s\({}^{-1}\) channel size of our MeerKAT data cube. Shown in the top left of the panel is the total flux density for the galaxy based on the two data sets.
Figure 4: H i total intensity map generated from our MeerKAT data cube based on a linear colour stretch (top panel) and a square-root colour stretch (bottom panel). For each panel, the colour bar represents the H i mass surface density in units of M\({}_{\sun}\) pc\({}^{-2}\). Overlaid black contours are at levels of 1, 3, 5, 8 M\({}_{\sun}\) pc\({}^{-2}\). The full range of H i mass surface densities is \(\sim 0.5\cdot 12.5\) M\({}_{\sun}\) pc\({}^{-2}\). The optical positions of NGC 1512 and NGC 1510 are indicated by the black stars in the bottom panel. For each panel, each axis spans a physical length of \(\sim 141\) kpc assuming a distance of 11.7 Mpc.
image, and the entire ring seen at 24 \(\mu\)m is again contained within the H i flux annulus that is delimited by a surface density of 3 M\({}_{\sun}\) pc\({}^{-2}\).
An alternative explanation for the presence of the central H i depression surrounded by the H i annulus is that of an inner Linblad resonance associated with the stellar bar of NGC 1512. Very clear from the top right panel in Figure 7 is the fact that the H i annulus is located at the end of the stellar bar. It may be the case that the H i from the very inner portions of the galaxy has been concentrated into orbits that correspond to an inner Linblad resonance at the end of the bar. An inner ring that encircles a bar and falls inside spiral arms is often identified with inner 4:1 Linblad resonance. However, the low spectral resolution of our imaging prevents an investigations into such a possibility.
### Measurements of extended H i features
Having discussed the distribution of H i in the vicinity of the stellar components of NGC 1512 and NGC 1510, we use this section to present some detailed measurements of several H i features in the outer disk.
Numerical simulations serve as a useful means of understanding the processes driving galaxy interactions. Several investigators have shown that tidal perturbations of gas-rich disk galaxies result in rapid gas inflows, which occur as a results of a rapid transfer of angular momentum from the gas to the stars. Gas that is able to retain its angular momentum typically collects in extended disks which can consist of extended tails. Dynamical modelling of galaxy mergers is typically carried out by finding a simulation that closely matches the observed morphology and kinematics of an interacting system (e.g., Holincheck et al., 2016). Large-scale features as traced by H i observations most typically used to reconstruct the orbit trajectories and disturbed morphologies of pairs of interacting galaxies. The various length and mass measurements we present in this section can be used to constrain parameters of numerical simulations aimed at modelling the interaction history of the NGC1512/1510 system.
Figure 8 shows a saturated version of our H i total intensity map - the colour scale spans the mass surface density range 0 to 5 M\({}_{\sun}\) pc\({}^{-2}\). In this map, the high-density ridges of the system's prominent spiral arms are more clearly seen, which allows us to visually trace the path of each arm in order to roughly measure its length. The arms all originate from, or close to, the nuclear ring of NGC 1512. The first arm we consider is the one originating in the region between the optical components of NGC 1512 and NGC 1510. This arm initially extends west and then wraps around anti-clockwise until it breaks out of the main H i disk. The path we visually trace for it is the grey curve in Figure 8, and is measured to be 161 kpc in length. Our new MeerKAT imaging traces this arm over a larger azimuthal angle than existing image sets do. The most prominent spiral arm of the system emerges from the north east portion of the inner H i annulus seen in NGC 1512, and has its path coloured black in Figure 8. This arm is tightly wrapped through \(\sim 360\) degrees within the main H i disk but then breaks out to form the large southern spiral arm. The length of the black path we visually trace is 129 kpc. At the end of the black path (al
Figure 5: H i clouds seen in the southern part of our H i total intensity map. Note the different range of H i mass surface densities (represented in units of M\({}_{\sun}\) pc\({}^{-2}\) by the colour bar) shown in each panel. The H i mass of each is shown at the bottom of its panel.
Figure 6: Intensity-weighted mean H i velocity field of NGC 1512/1510. The colour bar represents the line-of-sight velocity in units of km s\({}^{-1}\). The thick contour marks the systemic velocity of 898 km s\({}^{-1}\) stated by Koribalski & López-Sánchez (2009). Contours are spaced by 40 km s\({}^{-1}\). The optical positions of NGC 1512 and NGC 1510 are indicated by the black stars.
most directly south of the stellar bulge of NGC 1512), this prominent H i arm bifurcates. This is clearly seen in our high-resolution MeerKAT maps, but not in existing H i maps. The two bifurcations are represented by the cyan and magenta curves in Figure 8, and have lengths of 73 kpc and 36 kpc, respectively. The arm then extends to the north west of the galaxies. The path of this remaining portion is represented by the brown curve, and is measured to be 75 kpc in length. The entire arm has a path length of approximately 129 kpc + 73 kpc +75 kpc = 277 kpc. This makes it an extremely extended feature that must be as a result of the gravitational interaction between NGC 1512 and NGC 1510. The lengths of the various arms we trace are presented in the top half of Table 2.
In order to contextualise our H i arm length measurements, we compare our measurements to those made by Honig & Reid (2015) for four nearby, nearly face-on, late-type spiral galaxies. Those authors measured the positions of many H ii regions within the galaxies to trace their arms, and then fit log-periodic spiral models to segments of each arm. For each spiral arm within a galaxy, they present in their Tables 2 to 5 a set of azimuthal ranges and corresponding mean radius measurements. The two prominent and symmetrical arms of NGC 3184 have lengths of \(\sim 12.5\) kpc and \(\sim 9.5\) kpc.
\begin{table}
\begin{tabular}{c c c c c} \hline feature & colour & length & mass & fractional mass \\ & & [kpc] & [\(10^{8}\) M\({}_{\odot}\)] & \\ \hline arm & grey & 161 & – & – \\ arm & black & 129 & – & – \\ arm & brown & 75 & – & – \\ arm & magenta & 36 & – & – \\ arm & cyan & 73 & – & – \\ polygon & grey & – & 10.9 & 0.11 \\ polygon & black & – & 28.6 & 0.29 \\ polygon & brown & – & 4.1 & 0.04 \\ polygon & orange & – & 2.5 & 0.03 \\ \hline \end{tabular}
\end{table}
Table 2: Measurements of H i arms and regions.
Figure 7: A multi-wavelength view of the inner portion of the NGC 1512/1510 system. Top left: Our new H i total intensity map shown with a linear colour stretch and with H i mass surface density levels of 1, 3, 5, 8 M\({}_{\odot}\) pc\({}^{-2}\) represented by the dotted, dash-dot, dashed and solid lines, respectively. These contours are shown in all panels. Top right: Spitzer 3.6 \(\mu\)m image. Bottom left: GALEX far-ultraviolet image. Bottom right: Spitzer 24 \(\mu\)m image. There exist clear correlations between the H i flux seen in our new high-resolution MeerKAT map and the starlight observed at various wavelengths.
The southern arm in NGC 628 extends \(\sim 35\) kpc. NGC 5194 is undergoing an interaction with NGC 5195. The shorter of its two prominent spiral arms spans \(\sim 20\) kpc while the longer arm that extends from the bulge of NGC 5194 all the way to NGC 5195 is \(\sim 40\) kpc in length. Admittedly, the study of Honig & Reid (2015) is based on H\(\alpha\) and \(B\)-band imaging. The spiral features they measure are not expected to be as extended as they might be if such a study were carried out using H i imaging. Nevertheless, it clear that the spiral arms we measure in the H i distribution of NGC 1512/1510 are particularly extended.
We used our visually-estimated lengths of the main spiral arms in the NGC 1512/1510 system to calculate the rate at which the spiral pitch angle of each arm is changing as a function of path length. These measurements are presented in Figure 9. The average rate at which the absolute value of the pitch angle changes with path length (\(\frac{d\theta}{dt}\)) is less than \(\sim 3.5^{\circ}\) kpc\({}^{-1}\) in all of the main arms. Consider the portion of the southern arm delimited by the black polygon in Figure 8, the highest values of \(\frac{d\theta}{dt}\) occur within the first \(\sim 30\) kpc, peaking at a value of \(\sim 7^{\circ}\) kpc\({}^{-1}\). The portion of the eastern arm delimited by the grey polygon in Figure 8 has \(\frac{d\theta}{dt}\approx 2^{\circ}\) kpc\({}^{-1}\) over the first \(\sim 60\) kpc, after which it slightly rises and then decreases to zero. The north-west arm (dimited by the brown polygon in Figure 8) has \(\frac{d\theta}{dt}\) close to zero over the first and last thirds of its path length, while \(\frac{d\theta}{dt}\approx 2^{\circ}\) kpc\({}^{-1}\) over the central third of its path length.
We also measure how much H i mass is contained within several visually-delimited portions of the NGC 1512/1510 system. These regions are also shown in Figure 8, and their associated masses are summarised in lower half of Table 2. We first consider the eastern H i arm from the point at which it emerges from the main H i disk of NGC 1512. This region is delimited by a grey polygon in Figure 8 and has an H i mass of \(~{}10.9\times 10^{8}\) M\({}_{\sun}\), which amounts to \(\sim 11\) per cent of the H i mass of the entire system. Next, we consider the southern H i arm, also from the point at which it emerges from the central H i disk, all the way to the end of its bifurcated portion extending to the western half of the image. This region, delimited by
Figure 8: H i total intensity map of NGC 1512/1510 shown using a linear colour stretch. Curves coloured black, grey, magenta, cyan and brown have been visually overlaid in order to roughly trace the main H i arms of the system. The lengths of the arms are indicated in units of kpc and are also presented in the top part of Table 2. Polygons coloured black, grey and brown have been visually overlaid to delimit the portions of the arms outside of the inner H i disk. The orange polygon delimits the H i emission of the tidal dwarf galaxy. The H i masses of the polygon-delimited regions are shown in units of \(10^{8}\) M\({}_{\sun}\). Also shown (in square brackets) are the masses of the polygon-delimited relative to the system’s total H i mass. These quantities are also shown in the bottom part of Table 2.
a black polygon in Figure 8, contains \(~{}28.6\times 10^{8}\) M\({}_{\sun}\) of H i, which is nearly 30 per cent of the system's total H i mass. The north west extension of this H i arm is the third region we consider, delimited by a brown polygon in Figure 8. It has an H i mass of \(~{}4.1\times 10^{8}\) M\({}_{\sun}\).
Overall, roughly equal amounts of H i are contained within the system's main H i disk and its spiral arms. The fact that so much mass is contained within the tidally-induced H i arms indicates the system to be in an advanced state of interaction.
Finally, we consider the cloud seen in the bottom right corner of our total intensity map, delimited by the orange polygon. This is likely a tidal dwarf galaxy in the process of forming. It has an H i mass of \(~{}2.5\times 10^{8}\) M\({}_{\sun}\), which is nearly 3 per cent of the system's H i mass. Its peak mass surface densities of \(\sim 8\) M\({}_{\sun}\) pc\({}^{-2}\) are as high as those seen in the dense ridges of the system's prominent spiral arms. Koribalski & Lopez-Sanchez (2009) use GALEX FUV and NUV images to demonstrate the presence of star formation in this tidal dwarf galaxy candidate. They estimate the FUV - NUV colours of the cloud in order to derive an average age of at least 150 Myr for the young stellar population.
## 5 H i and FUV correlation
While optical and infrared images of the system offer no obvious evidence of an interaction history between the two galaxies, the FUV image from GALEX certainly does. Figure 10 presents the full GALEX image of NGC 1512/1510. The system has a large FUV disk which, relative to the optical centre of NGC 1512, extends out to a radius of \(\sim 30\) kpc. Very noticeable is the spatial correlation between H i and FUV emission over a large range of scales.
Rather than we only the GALEX FUV map to trace the star formation, we include the Spitzer 24 \(\mu\)m map as a measure of the dust-obscured star formation rate, and combine it with the the FUV map according to the prescription offered as Equation D11 in Leroy et al. (2008) in order to produce a map of the total star formation in the system. Before combining the maps, we smoothed each one to the spatial resolution of our H i map, and re-grafted each of them to have the same 7.5 arcsec pixel scale. Our final total star formation rate map can therefore be directly compared to our H i map. All of the system's 24 \(\mu\)m emission is actually contained within the starburst ring of NGC 1512, and within NGC 1510 (as shown in Figure 7, bottom right panel). Therefore, the outer parts of our total star formation rate map reduce to the FUV map. In order to clearly display the complex distribution of star formation flux in the extended FUV arms of the NGC 1512/1510 system, we choose to show the GALEX FUV image in Figure 10 instead of the total SFR map. However, below, we do compare the total SFR map to our H i total intensity map.
We measure the fractional amounts of total star formation rate and H i flux corresponding to H i mass surface densities (\(\Sigma_{\rm HI}\)) below a particular limit, and then compare the quantities to each other. We use \(\Sigma_{\rm HI}\) limits of 0.0, 0.5, 1.0,..., 12.0, 12.5 M\({}_{\sun}\) pc\({}^{-2}\). This covers the full dynamic range of our H i map. We repeat the experiment twice: once by considering all spatial pixels within the maps, and again by excluding the regions close to the optical components of the galaxies. Figure 11 shows the results. The black solid curve represents the fractional amount of H i flux above a given \(\Sigma_{\rm HI}\). The large majority of the system's H i mass is associated with low \(\Sigma_{\rm HI}\) values. Approximately 40 per cent is observed at \(\Sigma_{\rm HI}\lesssim 3\) M\({}_{\sun}\). In Figure 4, we see this surface density to generally mark the edges of the high-intensity ridges of the spiral arms. The red solid curve in Figure 11 is the cumulative fractional mass curve for a uniform flux distribution. In order to create it, we measured the fractional area of our H i map below a given \(\Sigma_{\rm HI}\) value.
The grey solid curve in Figure 11 represents the cumulative fractional flux in our total star formation rate map. Very clear is the fact that over the range \(\Sigma_{\rm HI}\sim 2\) M\({}_{\sun}\) pc\({}^{-2}\) to \(\sim 6\) M\({}_{\sun}\) pc\({}^{-2}\) the grey solid curve traces the black solid curve much closer than it does the red solid curve. In other words, stellar flux is not uniformly distributed, it is tracing the H i distribution. However, at surface densities above \(\sim 6\) M\({}_{\sun}\) pc\({}^{-2}\), the grey solid curve begins to significantly depart from the black solid curve. This is due the extreme nature of the starburst ring in NGC 1512. Flux from the ring constitutes a large fraction of the system's total star formation rate. However, the system's highest H i mass surface densities are not correspondingly located in the starburst ring. Rather, as can be seen in Figure 4, the highest H i flux densities occur at the location of NGC 1510, the region between NGC 1512 and NGC 1510, as well as the large H i arm that extends to the south of NGC 1512. Within the starburst ring region of NGC 1512 there is presumably a high content of molecular hydrogen that is playing the dominant role in regulating the star formation activity.
In order to remove from our analysis the complexities associated with the nuclear ring, we mask it together with the region centred on NGC 1510. We repeat our cumulative flux measurements and present the results as dashed curves in Figure 11. For the extended FUV disk of the system, we see even stronger evidence of a correlation between star formation and H i mass. The grey dashed and black dashed curves trace one another closely over a larger range of H i mass surface densities. This is quantitative confirmation of the spatial correlation that is obvious from our comparison of the FUV and H i maps shown in Figure 10. Thus, for the extended star-forming disk of the NGC 1512/1510 system, it seems that high-mass star formation occurs where H i is located. In spite of the complex H i morphology of the NGC 1512/1510 system, the presence of H i is clearly a precondition for star formation. Similar results are found by Bigiel et al. (2010) for M83. They report that their "FUV and H i maps show a stunning spatial correlation out to almost 4 optical radii". They show the H i depletion time in the outer disk to be about 100 Gyr, and comment that it is likely the inefficient build up of molecular clouds that is the bottleneck for forming stars at large radii.
Figure 9: Rate of change of pitch angle with respect to path length for the main H i spiral arms of the NGC 1512/1510 system.
## 6 Summary
We have presented MeerKAT H i-line imaging of the nearby interacting galaxy pair NGC 1512/1510. The powerful combination of high spatial resolution (\(26.8^{\prime\prime}\times 20.1^{\prime\prime}\)) and flux sensitivity offered by MeerKAT data provides us with a new view of the H i content of the system, especially its distribution and its links to star formation activity.
Our primary data set is a new H i total intensity map in which we see the entire H i morphology to be dominated by the tidally-induced H i arms. Our map yields a total H i mass \(M_{\rm HI}=9.74^{1.17}_{80.1}\) M\({}_{\sun}\). For the first time, we are able to resolve the spatial distribution of H i mass near the centre of the system. The starburst ring seen in optical images of NGC 1512 is clearly associated with a corresponding H i annulus that is delimited by a mass surface density of \(\sim 3\) M\({}_{\sun}\) pc\({}^{-2}\), while the bright stellar bulge of NGC 1512 is co-located within a prominent central H i depression where H i mass is concentrated less than 1 M\({}_{\sun}\) pc\({}^{-2}\). NGC 1510, however, has its bright star-bursting core co-located with the highest H i surface densities we detect (\(\sim 12.5\) M\({}_{\sun}\) pc\({}^{-2}\)).
The improved spatial resolution of our new H i total intensity map allows us quantify the correlation between H i mass and
Figure 11: Distribution of fractional FUV and H i flux as a functions of H i mass surface density for the entire disk (solid curves) and outer disk (dashed lines) of NGC 1512. The black/grey curves represent the fraction of H i/FUV flux below a given surface density limit, while the red curves represent the corresponding fractional area of the H i distribution. For the outer disk of NGC 1512 there exists a notable correlation between star formation and H i content.
Figure 10: GALEX far-ultraviolet image showing the extended star-forming disk of the NGC 1512/1510 system. The FUV arms extend up to \(\sim 30\) kpc from the optical centre of NGX 1512. Overlaid in black are H i contours at the same levels as those shown in Fig. 4 and Fig. 7. Their exists a clear correlation between H i and FUV flux in the system. We quantify the correlation and show the results in Fig. 11.
emission from high-mass stars on \(\sim 1.5\) kpc length scales. When we exclude the extreme regions close to the optical components of the two galaxies, we find a clear correlation between H i and far-ultraviolet flux, similar to what is seen in other nearby galaxies. Hence, in spite of the history of interaction between NGC 1512 and NGC 1510, the H i within the system seems to be playing an important role in setting the pre-conditions required for high-mass star formation.
The maps we generate and show in this work demonstrate the raw imaging power of MeerKAT, and exemplify the studies that will soon be carried out routinely as parts of large nearby galaxy surveys, and at higher redshifts.
## 7 Acknowledgments
The MeerKAT telescope is operated by the South African Radio Astronomy Observatory (SARAO), which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. EE's research is supported by SARAO. The authors acknowledge the use of computing facilities of the Inter-University Institute for Data Intensive Astronomy (IDIA) for part of this work. MG acknowledges support from IDIA and was partially supported by the Australian Government through the Australian Research Council's Discovery Projects funding scheme (DP210102103. RPD acknowledges the South African Research Chairs Initiative of the Department of Science and Innovation and the National Research Foundation.
## 8 Data availability
Requests for access to the data products presented in this work may be submitted via email to the corresponding author.
|
2302.00513 | Exact Probabilistic Inference Using Generating Functions | Probabilistic programs are typically normal-looking programs describing
posterior probability distributions. They intrinsically code up randomized
algorithms and have long been at the heart of modern machine learning and
approximate computing. We explore the theory of generating functions [19] and
investigate its usage in the exact quantitative reasoning of probabilistic
programs. Important topics include the exact representation of program
semantics [13], proving exact program equivalence [5], and -- as our main focus
in this extended abstract -- exact probabilistic inference. In probabilistic
programming, inference aims to derive a program's posterior distribution. In
contrast to approximate inference, inferring exact distributions comes with
several benefits [8], e.g., no loss of precision, natural support for symbolic
parameters, and efficiency on models with certain structures. Exact
probabilistic inference, however, is a notoriously hard task [6,12,17,18]. The
challenges mainly arise from three program constructs: (1) unbounded
while-loops and/or recursion, (2) infinite-support distributions, and (3)
conditioning (via posterior observations). We present our ongoing research in
addressing these challenges (with a focus on conditioning) leveraging
generating functions and show their potential in facilitating exact
probabilistic inference for discrete probabilistic programs. | Lutz Klinkenberg, Tobias Winkler, Mingshuai Chen, Joost-Pieter Katoen | 2023-02-01T15:34:13Z | http://arxiv.org/abs/2302.00513v2 | # Exact Probabilistic Inference Using Generating Functions+
###### Abstract
Probabilistic programs are typically normal-looking programs describing posterior probability distributions. They intrinsically code up randomized algorithms and have long been at the heart of modern machine learning and approximate computing. We explore the theory of _generating functions_[20] and investigate its usage in the exact quantitative reasoning of probabilistic programs. Important topics include the exact representation of program semantics [14], proving exact program equivalence [6], and - as our main focus in this extended abstract - exact probabilistic inference.
In probabilistic programming, _inference_ aims to derive a program's posterior distribution. In contrast to approximate inference, inferring _exact_ distributions comes with several benefits [9], e.g., no loss of precision, natural support for symbolic parameters, and efficiency on models with certain structures. Exact probabilistic inference, however, is a notoriously hard task [13, 18, 17, 7]. The challenges mainly arise from three program constructs: (1) unbounded while-loops and/or recursion, (2) infinite-support distributions, and (3) conditioning (via posterior observations). We present our ongoing research in addressing these challenges (with a focus on conditioning) leveraging generating functions and show their potential in facilitating exact probabilistic inference for discrete probabilistic programs.
## 1 Inference in Probabilistic Programs
**State-of-the-Art.** Most existing probabilistic programming languages implement _sampling_-based inference algorithms rooted in the principles of Monte Carlo [16], thereby yielding numerical approximations of the exact results, see, e.g., [10]. In terms of semantics, many probabilistic systems employ _probability density function_ (PDF) representations of distributions, e.g., (\(\lambda\))PSI [7, 9], AQUA [12], Hakaru [17], and the density compiler in [3, 5]. These systems are dedicated to inference (with conditioning) for programs encoding (joint discrete-)continuous distributions. Reasoning about the underlying PDF representations, however, amounts to resolving complex integral expressions in order to answer inference queries, thus confining these techniques either to (semi-)numerical methods [3, 5, 12, 17] or exact methods yet limited to bounded looping behaviors [7, 9]. Dice[12] employs weighted model counting to enable exact inference for discrete probabilistic programs, yet is also confined to statically bounded loops. The tool Mora[5, 6] supports exact inference for various types of Bayesian networks, but relies on a restricted form of intermediate representation known as prob-solvable loops.
**The PGF Approach.** Klinkenberg et al. [14] provide a program semantics that allows for exact quantitative reasoning about probabilistic programs without conditioning. They exploit a denotational approach a la Kozen [15] and treat a probabilistic program as a _distribution transformer_, i.e., mapping a distribution over the inputs (the prior) into a distribution after execution of the program (the posterior). In [14], the domain of discrete distributions is represented in terms of _probability generating functions_ (PGFs), which are a special kind of generating functions [20]. This representation comes with several benefits: (a) it naturally encodes common, infinite-support distributions (and variations thereof) like the geometric or Poisson distribution in compact, _closed-form_ representations; (b) it allows for compositional reasoning and, in particular, in contrast to representations in terms of density or mass functions, the effective computation of (high-order) moments; (c) tail bounds, concentration bounds, and other properties of interest can be extracted with relative ease from a PGF; and (d) expressions containing parameters, both for probabilities and for assigning new values to program variables, are naturally supported. Some successfully implemented ideas based on PGFs, e.g., for deciding probabilistic equivalence and for proving
non-almost-sure termination, are presented in [5, 13], which address especially the aforementioned challenges (1) and (2) for exact probabilistic inference _without_ conditioning.
## 2 Taming Conditioning Using PGFs
The creation of generative models is a challenging task, as these models oftentimes need expert domain knowledge. Therefore, the concept of _conditioning as a first-class language element_ is crucial as it allows for a natural and intuitive approach to the creation of models. Our current research aims to _extend the PGF approach towards exact inference for probabilistic programs with conditioning - thus addressing challenges_ (1), (2), _and_ (3) - _and to push the limits of automation as far as possible._ To this end, we are in the process of developing an exact, symbolic inference engine based on the open-source, PGF-based tool Prodigy[5]. We illustrate below its current capability to cater for conditioning via two examples.
\[\{w\coloneqq 0\}[\nicefrac{{5}}{{\pi}}\{w\coloneqq 1\}\$, x\coloneqq 1\$\] \[\mathtt{if}(w=0)\{\,c\coloneqq\mathtt{poisson}(6)\,\} \mathtt{while}(x=1)\{\,c\coloneqq c+1\}[\nicefrac{{1}}{{2}}\{x \coloneqq 0\}\$\,\}\] \[\mathtt{observe}(c=5) \mathtt{observe}(c\equiv 1\pmod{2})\]
Conditioning in Loop-Free Programs.Prog.1 is a loop-free probabilistic program encoding an infinite-support distribution. It describes a telephone operator who is unaware of whether today is a weekday or weekend. The operator's initial belief is that with probability \(\nicefrac{{5}}{{\pi}}\) it is a weekday (\(w=0\)) and thus with probability \(\nicefrac{{2}}{{\pi}}\) weekend (\(w=1\)). Usually, on weekdays there are 6 incoming calls per hour on average; on weekends this rate decreases to 2 calls - both rates are subject to a Poisson distribution. The operator observes 5 calls in the last hour. The inference task is to compute the distribution in which the initial belief is updated based on the posterior observation. Prodigy can automatically infer that \(\Pr(w=0)=\frac{1215}{1215+2\cdot e^{4}}\approx 0.9178\).
Conditioning Outside of Loops.Prog.2 describes an iterative algorithm that repeatedly flips a fair coin - while counting the number of trials - until seeing heads, and observes that this number is odd. Whereas Prog.1 can be handled by (\(\lambda\))PSI [7, 8], Prog.2 cannot, as (\(\lambda\))PSI do not support programs with unbounded looping behaviors. However, given a suitable invariant as described in [5], Prodigy is able to reason about the posterior distribution of Prog.2 in an automated fashion using the _second-order PGF_ (SOP) technique [5]: the resulting posterior distribution for any input with \(c=0\) is \(\frac{3\cdot c}{(4-c^{2})}\) which encodes precisely a closed-form solution for the generating function \(\sum_{n=0}^{\infty}-3\cdot c^{n}\cdot\big{(}2^{-2-n}\cdot(-1+(-1)^{n})\big{)}\).
## 3 Future Directions
A natural question is whether we can tackle exact inference when conditioning occurs _inside_ of a loop. As argued in [17], more advanced inference techniques are required to answer this question. In fact, to the best of our knowledge, there is no (semi-)automated exact inference technique that allows for the presence of observe statements inside a (possibly unbounded) loop (an exception could be the potentially automatable conditional weakest preexpectation calculus [17]). This is precisely our current research focus. One promising idea is to develop a non-trivial syntactic restriction of the programming language, where the more advanced SOP technique [5] can be generalized to address conditioning inside loops.
The possibility to incorporate symbolic parameters in PGF representations can enable the application of well-established optimization methods, e.g., maximum-likelihood estimations and parameter fitting, to the inference for probabilistic programs. Other interesting future directions include deciding equivalence of probabilistic programs with conditioning, amending our method to continuous distributions using characteristic functions, and exploring the potential of PGFs in differentiable programming. |
2304.03394 | Deep Learning for Opinion Mining and Topic Classification of Course
Reviews | Student opinions for a course are important to educators and administrators,
regardless of the type of the course or the institution. Reading and manually
analyzing open-ended feedback becomes infeasible for massive volumes of
comments at institution level or online forums. In this paper, we collected and
pre-processed a large number of course reviews publicly available online. We
applied machine learning techniques with the goal to gain insight into student
sentiments and topics. Specifically, we utilized current Natural Language
Processing (NLP) techniques, such as word embeddings and deep neural networks,
and state-of-the-art BERT (Bidirectional Encoder Representations from
Transformers), RoBERTa (Robustly optimized BERT approach) and XLNet
(Generalized Auto-regression Pre-training). We performed extensive
experimentation to compare these techniques versus traditional approaches. This
comparative study demonstrates how to apply modern machine learning approaches
for sentiment polarity extraction and topic-based classification utilizing
course feedback. For sentiment polarity, the top model was RoBERTa with 95.5%
accuracy and 84.7% F1-macro, while for topic classification, an SVM (Support
Vector Machine) was the top classifier with 79.8% accuracy and 80.6% F1-macro.
We also provided an in-depth exploration of the effect of certain
hyperparameters on the model performance and discussed our observations. These
findings can be used by institutions and course providers as a guide for
analyzing their own course feedback using NLP models towards self-evaluation
and improvement. | Anna Koufakou | 2023-04-06T21:48:29Z | http://arxiv.org/abs/2304.03394v2 | # Deep Learning for Opinion Mining and Topic Classification of Course Reviews
###### Abstract
Student opinions for a course are important to educators and administrators, regardless of the type of the course or the institution. Reading and manually analyzing open-ended feedback becomes infeasible for massive volumes of comments at institution level or online forums. In this paper, we collected and pre-processed a large number of course reviews publicly available online. We applied machine learning techniques with the goal to gain insight into student sentiments and topics. Specifically, we utilized current Natural Language Processing (NLP) techniques, such as word embeddings and deep neural networks, and state-of-the-art BERT (Bidirectional Encoder Representations from Transformers), RoBERTa (Robustly optimized BERT approach) and XLNet (Generalized Auto-regression Pre-training). We performed extensive experimentation to compare these techniques versus traditional approaches. This comparative study demonstrates how to apply modern machine learning approaches for sentiment polarity extraction and topic-based classification utilizing course feedback. For sentiment polarity, the top model was RoBERTa with 95.5% accuracy and 84.7% F1-macro, while for topic classification, an SVM (Support Vector Machine) was the top classifier with 79.8% accuracy and 80.6% F1-macro. We also provided an in-depth exploration of the effect of certain hyperparameters on the model performance and discussed our observations. These findings can be used by institutions and course providers as a guide for analyzing their own course feedback using NLP models towards self-evaluation and improvement.
**Keywords:** Student Course Feedback, Educational Data Mining, Sentiment analysis, Opinion Mining, Topic Classification, Deep Learning
## 1 Introduction
The course feedback from students has long been utilized from individual instructors to groups and institutions for a variety of purposes. Instructors can utilize the course feedback in order to find out what is important to students, the effectiveness of teaching material and methods, etc., in order to improve the course for a future offering. Institutions can use student surveys to gauge student perceptions and opinions, and even towards evaluations of instructors. Quantitative data from surveys, such as student ratings of their course instructors, have been used for a long time in an effort to measure teaching effectiveness: "student ratings are the single most valid source of data on teaching effectiveness - in fact there is little support for the validity of any other source of data" (Spencer and Schmelkin, 2002).
Besides the quantitative data, there are also qualitative data in the form of textual responses, which are not as easy to explore, summarize, or visualize. Many issues exist with student text comments, such as misspellings, abbreviations, short or irrelevant statements, rambling, etc. However, because of the open-ended nature of the feedback, students are allowed to describe what is on their mind and what they feel is important without necessarily thinking of ratings or matching quantitative concepts such as a Likert scale. At the same time, the issues associated with understanding human language at scale have kept many from fully utilizing this resource.
In the past decade or so, there has been an increasing amount of research using text-based student course feedback for a variety of tasks and purposes. With the ubiquitous web-based platforms and social media, there is also an abundance of data to collect and analyze, for example, from twitter (Chen et al, 2014) or ratemyprofessor website (Onan, 2020). Many efforts are focusing on _sentiment analysis_, which is the field of study that analyzes people's opinions, sentiments, attitudes, and emotions in text. There has been a lot of research using sentiment analysis on educational data, for example see the surveys in (Dolianiti et al, 2018; Zhou and Ye, 2020). Just like sentiment analysis has been used by businesses to help improve marketing and customer relationships, in the educational field, sentiment analysis may be used to improve the "attractiveness of higher educational institutions" (Santos et al, 2018) or decrease drop-out rates in Massive Open Online Courses (MOOCs) (Kastrati et al, 2021).
Besides sentiment analysis, there are other tasks that have been explored in related research, for example topic modeling or topic-based classification: the goal here is not to extract the sentiment (positive or negative), but to extract or predict the topics for which the comments were written or focused on, for example (Van Nguyen et al, 2018; Srinivas and Rajendran, 2019). There is also work in aspect-based sentiment mining, which targets the sentiment for each specific entities (aspects) in the comments, for example (Sindhu et al, 2019; Ren et al, 2022).
Much of the earlier work related to course feedback analysis has concentrated on traditional Machine Learning (ML) techniques, for example
(Altrabsheh et al, 2014; Koufakou et al, 2016). In the last few years, researchers have taken advantage of the advances in ML and Natural Language Processing (NLP) and used Deep Learning (DL) models, for example utilizing word embeddings, and convolutional or other deep neural networks; as examples, see (Dessi et al, 2019; Onan, 2020; Estrada et al, 2020). There are also recent educational data mining surveys (Dutt et al, 2017; Kastrati et al, 2021).
In this paper, we first describe the process of collecting and annotating a corpus of more than ten thousand online reviews for a variety of courses with topics ranging from Web Development to Data Science to Marketing. We applied sentiment polarity extraction on our corpus, looking at reviews as positive or negative. We also explored topic-based classification, categorizing each review in one of four topics. We performed extensive comparative analysis of several DL techniques (such as CNNs and LSTMs, using word embeddings, but also state-of-the-art models, BERT, RoBERTa, and XLNet) and compared their efficacy with traditional classifiers (k-Nearest Neighbor, Naive Bayes, and Support Vector Machines SVMs). Besides our extensive experimentation with very different classifiers, we also explored further possible improvements on the accuracy and the effects on runtime efficiency of the DL models. The main contributions of our work are:
1. We utilized a brand new corpus we collected from over ten thousand course reviews online. Using the new corpus, we presented a two-fold analysis (opinion-based and topic-based) as opposed to many previous works who focused only on one task. This way, our work can demonstrate how to employ the data for different tasks and highlight similarities and differences between the two tasks. For example, in our experiments for topic-based classification, we found that an SVM model performed better than the DL models, which was not the case for the opinion mining experiments.
2. We utilized and experimented with, not only traditionally-used Deep Learning models, such as CNNs and RNNs, but also state-of-the-art NLP transformer-based models, namely BERT, RoBERTa, and XLNet. BERT (and as a consequence any similar models) has quickly become the de facto baseline in NLP experiments (Rogers et al, 2020). Our literature review in the area of course feedback analysis found only a handful of papers that used BERT and none that used RoBERTa or XLNet (see Section 2).
3. We reported the performance results and observations from extensive experiments with diverse models (traditional, deep learning, and transformer-based) we employed for classification. Our experimentation is rigorous and built on a solid framework, for example we used cross-validation, we reported several metrics, we compared confusion matrices, among other things. Additionally, we explored how to improve the accuracy of our DL models, while looking at the relation of the improved accuracy on runtime. We have not seen any similar exploration in related work for course feedback analysis.
The organization of this paper is as follows. In Section 2, we review previous work related to ours. In Section 3, we describe the corpus we developed and used in this study. In Sections 4 and 5, we provide a detailed view of our models followed by our experiments and results. Finally, in Section 6, we summarize our work and provide concluding remarks.
## 2 Related Work
There has been an ever increasing amount of research in using text mining and NLP for educational purposes due to the increased processing power, abundance of datas and the recent advances in ML and NLP models. In the following, we review traditional techniques applied to the analysis of student course reviews, followed by current work in this area, namely using DL.
We also provide a summary of representative related work in Table 1: the Table lists the ML techniques and what types of data were used in each reference, listed by the first author and year of publication for space.
### Traditional techniques
In the previous decade, there are several articles in the literature that have used traditional techniques and tools to mine student feedback. For a thorough review of earlier work, see surveys such as (Pena-Ayala, 2014). In the following, we give an overview of representative works in this area based on traditional techniques.
Sliusarenko et al (2013) used key-phrase extraction and factor analysis to identify which factors were important in student comments; they also employed regression analysis to find which factors have the most impact on student ratings. Altrabsheh et al (2014) applied several pre-processing techniques to data they collected and then machine learning algorithms for sentiment analysis, finding that the best method was the Linear SVM using unigrams. Ortigosa et al (2014) performed sentiment analysis by combining lexicon-based techniques with ML models such as Decision Tree, Naive Bayes and SVM. Tian et al (2014) recognized emotions (such as anger, anxiety, or joy) in Chinese texts from e-learners and proposed a framework for regulating the e-learner's emotion based on active listening strategies. Koufakou et al (2016) explored sentiment analysis using traditional techniques based on bag-of-words, Naive Bayes and k-Nearest Neighbor, as well as Frequent Itemset Mining to identify key frequent terms in survey comments. As more recent examples, Lalata et al (2019) employed an ensemble of traditional ML algorithms, specifically, Naive Bayes, Logistic Regression, SVMs, Decision Tree and Random Forest. Hujala et al (2020) used an LDA (Latent Dirichlet Allocation) method, and then applied qualitative and quantitative evaluation methods to validate the outcomes by connecting them to theoretical frameworks and quantitative data.
As to the type of data that has been used, researchers have collected data e.g. from social media, or used data from their own courses. As examples, researchers have collected real-time student feedback from lectures as well as
end-of-semester surveys (Altrabsheh et al, 2014); survey data from courses at their department (Koufakou et al, 2016); students' conversations on twitter to understand students' opinions about the learning process (Chen et al, 2014); facebook data, for example messages on the user wall and basic user data such as gender and birthday (Ortigosa et al, 2014). More recent examples: Hujala et al (2020) used over six thousand student survey results carried out at a Finnish University; Abbas et al (2022) used student evaluations of more than five thousand teachers from a University in Mexico.
Researchers have also shown how to use the extracted sentiment to predict student performance. For example, Sorour et al (2015) applied probabilistic latent semantic analysis (PLSA) on student comments collected after specific lessons in introductory programming courses. Then, they predicted student final grades using SVM and artificial neural networks (ANN), where the SVM had the highest accuracy.
Besides applying ML techniques to student feedback, there has also been work on developing tools and frameworks for analysis of student feedback. For example, a conceptual framework for student feedback analysis by Gottipati et al (2017) included a sentiment extraction stage and logistic regression. Gronberg et al (2021) proposed an open-source online text mining tool for analyzing and visualizing student feedback entered in course surveys at a university. Estrada et al (2020) proposed an emotion recognition and opinion capture as part of an integrated learning environment for Java. Srinivas and Rajendran (2019) used LDA for topic modeling and the Vader tool (proposed by Hutto and Gilbert (2014)) for sentiment classification, as part of a larger systematic view on strengths, weaknesses, etc. to be used by universities to analyze online feedback.
### Deep Learning (DL)
DL is a more recent advancement in the larger field of machine learning and it has already been used for educational data mining successfully (Doleck et al, 2020). DL models are usually Artificial Neural Networks (ANNs) with more layers than traditional ANNs. They include networks such as Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), etc. Among the more recent neural architectures, BERT (Devlin et al, 2019) is considered state-of-the-art in several NLP tasks. We review the basics of the models we use in our work in Section 4.
For a thorough view on the recent work related to the analysis of student feedback, the reader is referred to recent related surveys, such as (Dutt et al, 2017; Dolianiti et al, 2018; Kastrati et al, 2021). The most recent survey we found (Kastrati et al, 2021) shows that there are only seven papers related to this topic that utilized DL methods, even though we found additional works dated after the survey, using DL models. In the following paragraphs, we review representative work that is related to our work.
Yu et al (2018) used sentiment information extracted from student self-evaluations to improve the accuracy of early prediction of which students are
likely to fail in a Chinese course. They used a Chinese affective lexicon, and structured data such as attendance, in conjunction with unstructured text comments. They found that CNNs using both structured and unstructured data had the best performance overall. Another study by Tseng et al (2018) also focused on course surveys with the task to use student comments for evaluating and hiring teaching faculty. They compared deep networks such as Recurrent Neural Networks (RNNs) using a Chinese text sentiment analysis kit, named SnowNLP, and they found that the best accuracy was achieved by an attention LSTM classifier.
There is a branch of related work based on Vietnamese data. Van Nguyen et al (2018) developed a Vietnamese Students' Feedback Corpus named UIT-VSFC, human-annotated for classification based on sentiment and on topics.
\begin{table}
\begin{tabular}{l l} \hline
**Reference** & **Models; Data (language if known)** \\ \hline Altrabsheh, 2014 & NBs, ME, SVM, with various pre-processing; surveys from \\ & various universities in UK (English) \\ \hline Chen, 2014 & NB multi-label classification; twitter related to engineering \\ & students (English) \\ \hline Ortigosa, 2014 & Lexicon-based techniques combined with DT, NB, SVM; \\ & various data extracted from facebook \\ \hline Koufakou, 2016 & kNN, NB, frequent itemset mining; surveys from their \\ & University (English) \\ \hline Nguyen, 2018 & NB, SVM, variants of LSTMs; Vietnamese Student Feedback \\ & Corpus UIT-VSFC (Vietnamese) \\ \hline Tseng, 2018 & NB, NN, RNN, LSTM, attention; surveys from their \\ & University (Chinese) \\ \hline Yu, 2018 & SVM, CNN with text and quantitative data; surveys from \\ & their University (Chinese) \\ \hline Dessi, 2019 & SVM, RF, MLP with context-trained embeddings; data \\ & extracted earlier from Udemy (English) \\ \hline Lalata, 2019 & Ensemble with NB, LR, SVM, DT, RF; surveys from their \\ & University (English, Filipino) \\ \hline Srinivas, 2019 & LDA for topic modeling then VADER for sentiment \\ & analysis; niche.com \\ \hline Estrada, 2020 & NB, SVM, kNN, DT, RF, LSTM, CNN, BERT, Evolutionaly \\ & model; various websites such as twitter and youtube \\ & and feedback from programming related courses in their \\ & University (Spanish) \\ \hline Onan, 2020 & Various embeddings (e.g. word2vec, GloVe) with NB, SVM, \\ & AdaBoost, CNN, LSTM, RNN, GRU; ratemyprofessor.com \\ \hline Rybinski, 2020 & Various BERT models; various websites such as \\ & ratemyprofessor, and their own University feedback (English) \\ \hline Truong, 2020 & PhoBERT pre-trained BERT model for Vietnamese; UIT-VSFC \\ & (Vietnamese) \\ \hline Ren, 2022 & Aspect-based SA: topic/sentiment dictionaries with bi-LSTM \\ & attention; open-ended questions to junior school (Chinese) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of Representative Related Works. References listed as _first author, year_, in chronological order.
Nguyen et al (2018) explored variants of LSTMs for sentiment analysis on that corpus. Truong et al (2020) utilized PhoBERT, a pre-trained BERT model for Vietnamese, and fine-tuned it to achieve state-of-the-art results on UIT-VSFC.
Dessi et al (2019) experimented with several word embedding representations and DL as well as traditional models, for regression based on a sentiment score rating. They found that the best performance is achieved by their Bidirectional LSTM with an attention layer, based on word2vec. They also explored training word embeddings on a relevant corpus (not available as far as we see).
Estrada et al (2020) presented sentiment analysis and emotion detection of online data from various sources, such as youtube or twitter, as well as data collected from their own courses. They utilized different models such as CNN and LSTM, as well as BERT and an evolutionary model; the latter performed the best in their experiments. Their DL models used as input one-hot encodings of the student comment text, and not word embeddings. Onan (2020) also focused on sentiment analysis and experimented with various embeddings (word2vec, GloVe, fastText, LDA2Vec) and models such as CNN, RNN, LSTM, as well as ensemble techniques. Their experimentation is very thorough and they used more than a 150 thousand reviews collected from ratemyprofessor website.
There is also work that has focused on _aspect-based_ sentiment analysis, which targets sentiment related with a specific aspect, for example, instructor or course. Sindhu et al (2019) applied a two-layer LSTM with the goal of aspect-based sentiment analysis on their own University data as well as SemEval-2014 data. The first layer predicted aspects from the feedback, while the second predicted the sentiment polarity. Kastrati et al (2020) used more than a 100 thousand reviews from coursera as well as classroom feedback. They applied LSTMs and CNNs using various word embeddings. Ren et al (2022) used Chinese open-ended comments written by junior school students. They constructed dictionaries for topics and sentiments which were used for their deep learning model to predict sentiments.
From our review, the research in analysis of student feedback has not fully embraced the state-of-the-art models, namely BERT and its extensions such as RoBERTa or XLNet. To start with, we already cited (Estrada et al, 2020; Truong et al, 2020) earlier in this section. Rybinski and Kopciuszewska (2020) compared BERT models on 1.6 million student evaluations from the US and the UK, extracted from different sources. Wang et al (2020) used subtitles (captions) in videos of more than thousand courses to predict the performance of the instructor in online education, using a hierarchical BERT model based on teacher's verbal cues and on course structure.
In summary, even though there has been a considerable amount of research work in student feedback analysis, there is still a gap of utilizing the recent state-of-the-art models such as BERT, which our paper aims to fill. In our review, we noted that most of the work focuses on sentiment analysis, while our work also examines topic classification. We also noted that several works did not report metrics other than accuracy (for example, (Estrada et al, 2020)) or they did not describe their DL models or (hyper)parameters (for
example, (Tseng et al, 2018)). We presented extensive experimentation for two different classification tasks, with various DL models, exploring the effect of hyperparameters, and discussed runtime efficiency as well as classification accuracy.
## 3 Dataset Description
For this work, we collected publicly available course reviews posted online for bootcamp-type courses at website [https://www.coursereport.com](https://www.coursereport.com). The reviewed courses were for various topics, from assembly language, to web development, to marketing, and the courses were offered online or in different cities globally. The vast majority of the reviews were in English. The data contained a course title, a course review (text comments), a review rating (1 through 5), and other fields we did not use, for example, username or instructor rating or helpfulness of the review. We used a web crawler we developed from scratch to collect data for the reviews. We pre-processed the text reviews to clean up invalid text: removed remaining HTML tags and any reviews that are shorter than 2 words. The resulting dataset had 10,610 reviews (text comments). The review text length ranges from a minimum of 2 words to a maximum of 4,219 words, with an average of 245 words and a standard deviation of 251.
First, we organized the data for the sentiment polarity extraction task, as follows. As mentioned above, each review had a star rating, ranging from 1 to 5. The dataset was divided into positive reviews and negative reviews as follows: the positive comments were considered the ones which had rating score 4 or 5,
\begin{table}
\begin{tabular}{l l} \hline _Label_ & _Sample from a course review_ \\ \hline Positive & “I would not hesitate to recommend \textless{}program\textgreater{}” \\ & “Great curriculum and instructors... they make sure you are prepared” \\ & “\(<\)name\textgreater{} is an awesome teacher and all of the TAs werevery helpful” \\ & “The \textless{}course\textgreater{} was worth the money and gave me a good insight into \\ & what the \textless{}program\textgreater{} would be like” \\ \hline Negative & “This is a terrible program. It was very badly organized.” \\ & “This is one of the worst, most expensive programs in [...]” \\ & “None of my submissions so far have been reviewed or have any feedback \\ & [...] with 2 weeks left in the program” \\ & “I feel \textless{}program\textgreater{} at \textless{}school\textgreater{} was a waste of time and money” \\ \hline \end{tabular}
\end{table}
Table 2: Example comments from course reviews in our data and their label
Figure 1: The overall pre-processing and labeling tasks for the sentiment analysis task
while the negative comments had score ratings 1-3. The entire dataset ended up very imbalanced: it contained 91.5% positive and 8.5% negative reviews. Fig. 1 shows the pre-processing and labeling steps. Table 2 shows examples of comments taken from positive and negative reviews.
Finally, we examined the top ranking trigrams for the positive versus the negative reviews, shown in Table 3 (we also looked at unigrams and bigrams but they were not as descriptive of the polarity between the reviews as the trigrams). In Table 3, one can see that "web development course" is a top ranking term for both types of reviews. We observed that this course is the most frequent course topic overall, therefore it appears very frequently in both negative and positive reviews. Other terms such as "waste time money", "dont waste time", "free online resources" rank high in negative reviews, while positive reviews have terms such as "highly recommend course" and "life changing experience".
For the topic-based classification, first we looked at the course titles and their reviews, and we conducted different visualizations, for example, word clouds. Additionally, we utilized Latent Dirichlet Allocation (LDA)(Blei et al, 2003) and we identified the major course topics were, by far, Web development, Programming, and Data Science. We also manually filtered reviews based on similar course titles and then grouped together these courses in one topic or category. Finally, we dropped the rest of the reviews that did not have any course name or identifiable topic, and ended up with 7,503 reviews. The topics and their distribution in the resulting data is shown in Table 4. As shown in Table 4, the large majority of the courses are related to programming or web development.
## 4 Methodology
The general process for classification using an ML algorithm in our work is shown in Fig. 2. The data is split into training and test set, each including the text and the labels (for example, positive or negative). The training data is used to extract the _vocabulary_: the set of unique tokens or words found in the training data. Based on the vocabulary, we then extract the features that train the model (more details on the features will be given in the following sections). In the prediction phase, the model generated from training is then used to predict the labels for the test inputs. For the split of the dataset into training and test sets, we used cross-validation (see Section 5.1). In \(k\)-fold cross-validation, the data is split into a total of \(k\) subsets, then the experiment is executed \(k\) times, ensuring the test set is varied in each execution.
In the rest of this section, we first present an overview of a traditional approach for classification of text, based on Bag-of-Words (BoW). We also briefly present the BoW classifiers we use in our experiments. Then, we give an overview of DL models for text classification in general, as well as the specific models we use in our work.
### Traditional Bag-Of-Words (BoW) Approach
First, we extracted the text from our collection of course reviews (corpus) and then tokenized the text into words (see Section 3 for more details on preprocessing). The resulting dataset was represented as a BoW matrix. In these methods, the features extracted from the data in Fig. 2 is the BoW matrix.
BoW methods do not preserve the order of the words in the text or the context of a word in a phrase, nor do they preserve or extract grammar-related or other relations: they only store frequency information for each unique word in the corpus. The dimensionality of the resulting BoW matrix is the number of documents (or unique course reviews) \(\times\) the unique words or tokens in our corpus (the vocabulary). A small detail is that the vocabulary is extracted from the training set only.
Figure 2: The training and prediction process for classification using a machine learning algorithm
For this part of our work, we used TF-IDF (Term Frequency-Inverse Document Frequency) values. TF-IDF is a statistical measure used to evaluate the importance of a word in a document in a corpus. Using TF-IDF, the importance increases proportionally to the frequency of the word in the document, but it is offset by the frequency of the word in the corpus. We used the TF-IDF features as input to three classification models with the goal either to predict if a course review comment is negative or positive (sentiment analysis) or to detect one of the four topics (topic-based classification): Naive Bayes, k-Nearest Neighbor (k-NN), and Support Vector Machines (SVM). These algorithms are briefly described below - for more details see any related text, such as (Tan et al, 2005).
Naive Bayes offers a probabilistic framework for solving classification problems. Naive Bayes first uses the training data (corpus) to find the probability of each unique word as it occurs in the corpus for each class. For a test document, Naive Bayes multiplies the pre-calculated probabilities of every word in the document and then chooses the class with the highest probability to classify the test record.
In the \(k\)-Nearest Neighbor algorithm, given a course review \(x\) and a user parameter \(k\), the algorithm finds the \(k\) reviews that are the most similar to \(x\). These are called its \(k\)-nearest neighbors. Then, based on the majority of the labels of \(x\)'s \(k\)-nearest neighbors, the algorithm predicts the label of \(x\).
SVMs have been very successfully applied to a number of applications since their inception, including analysis of course feedback (see Section 2). The SVM algorithm finds a hyperplane (decision boundary) that separates the classes by their features over a space (Cortes and Vapnik, 1995). The goal is to maximize the margin, or create the largest possible distance between the separating hyperplanes, in order to reduce the upper bound on the expected generalization error. For non-linearly separable data, the solution is to map the inputs into a high-dimensional feature space.
### Deep Learning (DL) Approach
#### 4.2.1 Word Embeddings
As already mentioned in the previous section, traditional methods for representing words in matrix form, such as BoW and TF-IDF, do not take into account the position or the context of the word in the document. Recent approaches proposed word embeddings that represent the semantic meanings of words (Mikolov et al, 2013). Words that are similar in meaning or in context are closer to each other in the vector space, while words that are different are farther apart. In this work, we experimented with Word2Vec (Mikolov et al, 2013), which uses a feed-forward neural network to predict the neighboring words for a given word in order to create the word embeddings. The embedding for each word is essentially a one-dimensional vector of \(d\) values, where \(d\) is a user-entered parameter.
#### 4.2.2 Deep Neural Networks
In contrast to traditional techniques in Section 4.1, more recent approaches use DL, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), to learn text representations. In the following, we give a brief background review on the models we used in our work; the reader is referred to (Minaee et al, 2021) for a comprehensive review on DL-based Text Classification. We also list the models we used for this work, and provide a detailed step-by-step example for a convolutional model we employed for our work.
Originally invented for computer vision, CNN models have subsequently been shown to be effective for many NLP tasks (Kim, 2014). CNNs utilize layers with convolving filters. In text related tasks, the filters are trained to identify word combinations that are most pertinent to the classification task at hand. In most of the recent literature, the word embeddings from the document are fed into the NN as features. In addition, as character-based CNNs have been shown to work for text related tasks (Zhang et al, 2015), we briefly experimented with a character-based CNN.
RNNs were also shown to be effective in NLP tasks due to their architecture specifically designed to address time-series data. In NLP tasks, RNNs are aiming to learn linguistic patterns based on different sequences of words. Basic RNNs are unable to retain information and find relationships over a large sequence of words so we used LSTMs: Long Short-Term Memory units (LSTM) (Hochreiter and Schmidhuber, 1997) use gating functions to selectively store or "forget" input information according to how relevant it is to the classification task. Finally, we also experimented with a Bidirectional LSTM model, which ensures that the network can account for the _preceding_ as well as the _following_ context when processing the sequence of words.
An example of a convolutional model we employed based on word embeddings is shown in Fig. 3. As before, the collection of documents or course
Figure 3: A depiction of a CNN-based model we used for classification
reviews was tokenized into words, but now we padded or shortened each resulting review to a set length (or number of words), based on a user-entered parameter called _maxlen_. We extracted the vocabulary from the resulting text sentences, and then created word embeddings of length \(d\). The result from the embedding layer was a three-dimensional matrix, of dimensionality \(n\times\textit{maxlen}\times d\), where \(n\) is the vocabulary size and \(d\) is the embedding dimension.
The embeddings were fed into the convolutional layer. The output of the convolutional layer was fed into a dropout and a max-pooling layer. There might be more than one convolutional layer employed in this model; if so, the outputs were concatenated before the next layer. Finally, we used a fully-connected dense layer to output the prediction of the model (the predicted label). This layer used a _sigmoid_ or a _softmax_ activation, depending on the label being binary (sentiment) or categorical (topic), respectively. Our LSTM or bi-LSTM models follow a similar general idea.
#### 4.2.3 Transformer-based models
While word embeddings take into consideration the semantic similarities of words in a corpus, they do not explore different meanings of words based on context. Therefore, in word embeddings such as word2vec (Mikolov et al, 2013), each word in the vocabulary will have one single embedding. More recent techniques introduced _contextualized_ embeddings: they encode a word and its context from the words before it, and after it, so it "will generate a different embedding vector for the word 'bank' in 'bank account' to that for 'river bank' (Rybinski and Kopciuszewska, 2020).
BERT (_Bidirectional Encoder Representations from Transformers_) (Devlin et al, 2019) is considered state-of-the-art in several NLP tasks. For example, in a recent SemEval Task for detecting offensive language (Zampieri et al, 2020), the vast majority of the top entries in the task used BERT-like systems. A recent survey found that "in a little over a year, BERT has become a ubiquitous baseline in NLP experiments" (Rogers et al, 2020). A transformer combines a multi-head self-attention mechanism with an encoder-decoder. BERT utilized Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). In MLM, the model masks some of the words, and uses the rest of the words to predict the masked words. In NSP, given two sentences, BERT was trained to predict if the second sentence is likely to follow the first sentence. For more information on the _internal_ architecture of BERT, the reader is referred to (Devlin et al, 2019).
Utilizing BERT is somewhat similar to the models such as CNN we discussed in the previous section, with some significant differences. One of these differences is the BERT _tokenizer_. The BERT model needs inputs in the form of: token identifiers, masks, and segments. BERT marks the end of each sentence with a special [SEP] token. Also, BERT inserts a [CLS] token (which stands for "classification") to the start of each sentence.
Besides these, the BERT-based model we employed is overall similar to the previous CNN model in Fig. 3. The BERT-based model also uses a _maxlen_
for the input comments (see the definition of _maxlen_ in the previous section for the CNN model, and Fig. 3). The results from the BERT tokenizer were passed onto the BERT layer, whose [CLS] output was fed into a dense layer that outputs the prediction of the model. Just as in the DL models from the previous section, this layer used a _sigmoid_ or a _softmax_ activation, for binary (sentiment) or categorical (topic) classification, respectively.
An important benefit of using a BERT-based model, over a CNN-based model such as the one in Fig. 3 is the BERT model has already been pre-trained on big data: in fact, many pre-trained models exist available for direct use or that can be fine-tuned for a specific classification task. Fine-tuning means to further train the pre-trained BERT model using our data. Section 5.1 includes the details of our BERT model and its hyperparameters.
There have been numerous models extending BERT. In our experiments, we used RoBERTa and XLNet. As main differences from BERT, RoBERTa (Robustly optimized BERT approach) (Liu et al, 2019) removed the NSP and replaced the static masking (in MLM) of BERT, with dynamic masking. In summary, RoBERTa has been shown to be more robust than BERT, it modified some of BERT, and it was trained using more data. XLNet (Yang et al, 2019) was based on an auto-regressive model, which predicts future behavior based on past, and used a Transformer-XL. XLNet also introduced permutation language modeling, where _all_ tokens (not _only_ masked tokens) are predicted in random order, rather than sequential.
## 5 Experiments and Results
### Experimental Setup
As discussed in Section 3, the dataset we collected for the sentiment analysis is very imbalanced: it contains 91.5% positive and 8.5% negative reviews. Therefore, for our sentiment-based classification, we used stratified 10-fold cross validation (CV). For the topic-based classification, we used stratified 5-fold CV to better suit the four topics and their distribution (see Table 4).
In order to implement the diverse suite of classification models we utilized, we wrote our code using different tools and platforms, which resulted in quite different implementations1. We conducted all experiments using google colaboratory2. We used scikit-learn3 for all our BOW experiments and keras4 for our implementations of the DL models. All approaches based on TF-IDF were run with the scikit-learn defaults and unigrams. For the NN experiments we used 5 epochs, 0.01 learning rate, 32 batch size, Adam optimizer, and 0.5 dropout. For the convolutional layer, we used rectified linear units (ReLU), and filter windows of 3, 4, or 5. For LSTMs, we used 64 units. For character embeddings, we used 16 as the embedding dimension.
For our experiments with word embeddings, we first used the publicly available word2vec vectors that were trained on 100 billion words from Google News5. The vectors had dimensionality of 300 and were trained using the continuous bag-of-words (CBOW) architecture (Mikolov et al, 2013). Words not present in the set of pre-trained words were initialized randomly. In our results, the models that used these pre-trained vectors are denoted as "Pre-trained". We also experimented with non pre-trained word vectors, i.e. vectors that were randomly initialized.
Footnote 5: [https://code.google.com/archive/p/word2vec](https://code.google.com/archive/p/word2vec)
Finally, for the experiments with the transformer models, we used Pytorch and the corresponding models provided by HuggingFace6. Specifically, we used the bert-base-uncased, the roberta-base and the xlnet-base-cased models, all with lower case option. Based on performance in early experiments and following the recommendations by the original developers of BERT (Devlin et al, 2019), our transformer models used learning rate of \(2e^{-5}\), 3 epochs, batch size 32, and _marlen_ of 50.
Footnote 6: [https://huggingface.co/transformers](https://huggingface.co/transformers)
We reported our results based on the classification metrics defined below:
\[Precision=\frac{TP}{TP+FP} \tag{1}\] \[Recall=\frac{TP}{TP+FN}\] (2) \[Accuracy=\frac{TP+TN}{N} \tag{3}\]
\[F1\text{-}score=\frac{2\times Precision\times Recall}{Precision+Recall} \tag{4}\]
where \(TP\) is True Positives, \(FP\) is False Positives, \(FN\) is False Negatives, and \(N\) is the total number of records. Besides Accuracy in Eq. (3), we chose to also report the F1-macro which averages the F1-score in Eq. (4) over the classes: the macro-averaged F1 is better suited for showing algorithm effectiveness on smaller categories (Altrabsheh et al, 2014; Kastrati et al, 2021), which is important as we are working with imbalanced datasets.
### Results and Discussion
#### 5.2.1 Sentiment Analysis
The results for the sentiment analysis task are shown in Table 5. As shown in Table 5, the transformer-based models performed the best overall: RoBERTa is the top performing model at 95.5% accuracy and 84.7% F1-macro, while BERT and XLNet follow with F1-macro of about 83%. From the rest of the DL models, CNNs performed the best with the CNN using the pre-trained word embeddings performing the best at 92.1% accuracy and 82.4% F1-macro. From the TF-IDF models, accuracies were high but F1-macro results are low, around 50% for all the models we utilized.
We also performed a comparison based on _maxlen_ input values for BERT, RoBERTa, and XLNet: see Table 6 for the sentiment analysis task. Higher _maxlen_ values mean using more words from each comment (or a larger part of the comment) as input to the model. As shown in Table 6, all transformer models took about 3-4 minutes per epoch for _maxlen_ equal to 100 versus under 10 seconds for the word CNN. Among the three models, RoBERTa was slightly faster than BERT at under 3 minutes per epoch and XLNet was the slowest at almost 4 minutes per epoch. This means that each of the transformer models needed about 1.5-2 hours total runtime given 3 epochs and 10-fold stratified CV. We were not able to run transformer model experiments with _maxlen_ larger than 100 due to these long runtimes. XLNet and RoBERTa did the best in these experiments (about 97% accuracy and 89% F1-macro for _maxlen_=100).
Table 6 shows that the CNN model also gained from an increase in _maxlen_ while its execution is under 10 seconds per epoch. Given this observation, we also ran experiments with CNN and _maxlen_ higher than 100. The resulting plot is shown in Fig. 5a (using regular word embeddings). As the figure shows, there was no gain for this model and the sentiment analysis task by increasing the _maxlen_ to more than 150, and its F1-macro stayed at 80%.
Overall, for the sentiment analysis task, we see the superiority of the transformer-based models, BERT, RoBERTa, and XLNet, especially how well these models perform with imbalanced data. We also see that using more words as input to the models increases accuracy at the expense of further increasing their execution time. Another avenue we leave for future research is to use a model such as DistilBERT (Sanh et al, 2019), a distilled and smaller version of BERT that has been shown to be much faster.
Finally, we examined records on which two of the top performing models, BERT and RoBERTa, disagreed on their predictions (given the same train/test split of the records). In the following, we provided a couple of reviews as examples (we edited the reviews for length and content, but made sure to
\begin{table}
\begin{tabular}{l l c c c} \hline & _Classifier_ & _Accuracy_ & _F1-macro_ \\ \hline Traditional & k-Nearest Neighbor & 0.902 & \(\pm\)0.01 & 0.554 & \(\pm\)0.06 \\ (TF-IDF) & Naïve Bayes & 0.907 & \(\pm\)0.00 & 0.476 & \(\pm\)0.00 \\ Models & Linear SVM & 0.908 & \(\pm\)0.01 & 0.537 & \(\pm\)0.05 \\ & RBF SVM & 0.907 & \(\pm\)0.01 & 0.476 & \(\pm\)0.00 \\ & Polynomial SVM & 0.907 & \(\pm\)0.01 & 0.476 & \(\pm\)0.01 \\ \hline Deep Learning & Character-based CNN & 0.750 & \(\pm\)0.03 & 0.591 & \(\pm\)0.02 \\ Models & Word CNN & 0.900 & \(\pm\)0.02 & 0.708 & \(\pm\)0.05 \\ & Pre-trained Word CNN & 0.921 & \(\pm\)0.02 & 0.824 & \(\pm\)0.02 \\ & Word LSTM & 0.788 & \(\pm\)0.03 & 0.660 & \(\pm\)0.03 \\ & Pre-trained Word LSTM & 0.771 & \(\pm\)0.03 & 0.637 & \(\pm\)0.05 \\ & Word bi-LSTM & 0.726 & \(\pm\)0.05 & 0.597 & \(\pm\)0.04 \\ & Pre-trained Word bi-LSTM & 0.801 & \(\pm\)0.07 & 0.674 & \(\pm\)0.05 \\ \hline Transformers & BERT & 0.953 & \(\pm\)0.01 & 0.832 & \(\pm\)0.04 \\ & RoBERTa & **0.955** & \(\pm\)0.01 & **0.847** & \(\pm\)0.03 \\ & XLNet & 0.952 & \(\pm\)0.02 & 0.834 & \(\pm\)0.06 \\ \hline \end{tabular}
\end{table}
Table 5: Results for Sentiment Analysis
preserve the spirit of each review). The following was correctly classified as _positive_ by BERT but _negative_ by RoBERTa: _"I went in not knowing much more than how to write a simple program, and I got a good job [...] I had to spend a lot of time outside the classes to learn [...] curriculum is alright, a bit scattered [...] overall they're competent and do a good job [...]"_. Even though this review had a high star rating, as a whole it contained somewhat mixed opinions and negative wording. As a second example, RoBERTa labeled this review correctly as _positive_, while BERT labeled it as _negative_: _"I'd like to respond to the one review trashing X. It's totally wrong. X is passionate and smart, but if he sees you doing something you shouldn't, he is not afraid to call you out. [...] I can honestly affirm that \(<\)course\(>\) was life changing [...]"_. The review had an argumentative tone trying to defend an instructor, and then it was very positive of the instructor and the course. Both examples show the complexity of capturing sentiment at the paragraph (or essay) level, as researchers have previously noted (Ren et al, 2022). Future research could focus on the sentence level as a way of improving the sentiment classification of such reviews.
#### Topic-Based Classification
The results for the topic-based classification task are shown in Table 7. For this task, a linear SVM was the top performing model at 79.8% accuracy and 80.6% F1-macro. The transformer-based models, BERT, RoBERTa, and XLNet, were closely behind at low to high 70's for accuracy and F1-macro. The rest of the DL models were in the low to mid 60's for accuracy. The highest performing among them was the CNN model using the pre-trained word embeddings, with 65.9% accuracy and 65% F1-macro.
Example confusion matrices for four of the models on the topic classification task are shown in Fig. 4. The models shown are Linear SVM (Fig. 4a), CNN model with regular word embeddings (Fig. 4b), CNN model with pre-trained word embeddings (Fig. 4c), and BERT (Fig. 4d). Any (hyper)parameters are the same as the ones for the results in the Table 7. For reference, the topics and their distribution in the data are shown in Table 4.
From Fig. 4a, it seems that SVM did very well (91%) on the larger topic ("Web Development", about 51% of the records). BERT on the other hand
\begin{table}
\begin{tabular}{l c c c c c} \hline _Model_ & _maxlen_ & _Runtime_ & _Accuracy_ & _F1-macro_ \\ \hline BERT & 50 & 115 & 0.953 & \(\pm\)0.01 & 0.832 & \(\pm\)0.04 \\ BERT & 100 & 188 & 0.958 & \(\pm\)0.01 & 0.852 & \(\pm\)0.04 \\ \hline RoBERTa & 50 & 96 & 0.955 & \(\pm\)0.01 & 0.847 & \(\pm\)0.03 \\ RoBERTa & 100 & 175 & 0.966 & \(\pm\)0.01 & 0.889 & \(\pm\)0.04 \\ \hline XLNet & 50 & 123 & 0.952 & \(\pm\)0.02 & 0.834 & \(\pm\)0.06 \\ XLNet & 100 & 237 & 0.969 & \(\pm\)0.01 & 0.889 & \(\pm\)0.05 \\ \hline Word-Based CNN & 50 & 5 & 0.900 & \(\pm\)0.02 & 0.708 & \(\pm\)0.05 \\ Word-Based CNN & 100 & 8 & 0.924 & \(\pm\)0.04 & 0.770 & \(\pm\)0.06 \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of Deep Learning models given different _maxlen_ values for Sentiment Analysis - Runtime is Seconds per Epoch
seems to have done the best on the smaller topic ("Non-Programming") and relatively well (mid 70's) on the rest of the topics (see Fig. 3(d)). CNN models did worse overall (see Figures 3(b) and 3(c)), except for the CNN with pre-trained word embeddings matched the performance of the SVM for the large topic ("Web Development") (see Figures 3(a) and 3(c)).
As we did for the sentiment analysis task (see section 5.2.1), we also experimented with higher _maxlen_ values for the topic-based classification task. Results are shown in Table 8. For _maxlen_ set to 100, BERT surpassed the Linear SVM result: BERT's F1-macro was 82.5% versus the 80.6% of the SVM (shown in Table 7). However, BERT achieved this at around 2 minutes per epoch, which then made the 5-fold stratified CV and 3 epoch execution at about half an hour.
As the CNN models are the two best of the regular DL models, we also experimented with increasing _maxlen_ for the CNN. Fig. 4(b) shows the effect of different _maxlen_ values on the classification performance of the CNN model with regular word embeddings for the topic-based classification task. As can be seen in Fig. 5, as _maxlen_ increased (which means that more of the text from the comment is used as input to the model), the accuracy and F1-macro overall tended to increase, but the increase was much more pronounced for
\begin{table}
\begin{tabular}{l l c c} \hline _Model_ & _maxlen_ & _Runtime_ & _Accuracy_ & _F1-macro_ \\ \hline BERT & 50 & 76 & 0.772 \(\pm\)0.03 & 0.774 \(\pm\)0.05 \\ BERT & 100 & 114 & 0.799 \(\pm\)0.03 & 0.825 \(\pm\)0.05 \\ \hline RoBERTa & 50 & 70 & 0.749 \(\pm\)0.07 & 0.747 \(\pm\)0.09 \\ RoBERTa & 100 & 125 & 0.794 \(\pm\)0.04 & 0.812 \(\pm\)0.05 \\ \hline XLNet & 50 & 83 & 0.724 \(\pm\)0.08 & 0.722 \(\pm\)0.09 \\ XLNet & 100 & 164 & 0.757 \(\pm\)0.05 & 0.776 \(\pm\)0.05 \\ \hline Word-Based CNN & 50 & 8 & 0.645 \(\pm\)0.04 & 0.623 \(\pm\)0.07 \\ Word-Based CNN & 100 & 10 & 0.727 \(\pm\)0.04 & 0.733 \(\pm\)0.06 \\ \hline \end{tabular}
\end{table}
Table 8: Comparison of Deep Learning models given different _maxlen_ values for Topic Classification - Runtime is Seconds per Epoch
\begin{table}
\begin{tabular}{l l c c} \hline & _Classifier_ & _Accuracy_ & _F1-macro_ \\ \hline Traditional & k-Nearest Neighbor & 0.735 \(\pm\)0.07 & 0.725 \(\pm\)0.07 \\ (TF-IDF) & Naive Bayes & 0.562 \(\pm\)0.04 & 0.257 \(\pm\)0.05 \\ Models & Linear SVM & **0.798**\(\pm\)0.04 & **0.806**\(\pm\)0.05 \\ & RBF SVM & 0.797 \(\pm\)0.05 & 0.789 \(\pm\)0.06 \\ & Polynomial SVM & 0.732 \(\pm\)0.04 & 0.668 \(\pm\)0.06 \\ \hline Deep Learning & Word CNN & 0.645 \(\pm\)0.04 & 0.623 \(\pm\)0.07 \\ Models & Pre-trained Word CNN & 0.659 \(\pm\)0.06 & 0.650 \(\pm\)0.06 \\ & Word LSTM & 0.612 \(\pm\)0.05 & 0.583 \(\pm\)0.06 \\ & Pre-trained Word LSTM & 0.620 \(\pm\)0.03 & 0.581 \(\pm\)0.05 \\ & Word bi-LSTM & 0.619 \(\pm\)0.06 & 0.583 \(\pm\)0.07 \\ & Pre-trained Word bi-LSTM & 0.620 \(\pm\)0.04 & 0.588 \(\pm\)0.04 \\ \hline Transformers & BERT & 0.772 \(\pm\)0.03 & 0.774 \(\pm\)0.05 \\ & RoBERTa & 0.749 \(\pm\)0.07 & 0.747 \(\pm\)0.09 \\ & XLNet & 0.724 \(\pm\)0.08 & 0.722 \(\pm\)0.09 \\ \hline \end{tabular}
\end{table}
Table 7: Results for Topic-Based Classification
the topic classification task (Fig. 4(b)) than the sentiment analysis task (Fig. 4(a)). For _maxlen_ set at 300, the CNN model performed as well as the Linear SVM to classify the course topics (see Table 7). It is noteworthy that we did not observe a similar effect on the accuracy of the linear SVM by limiting number of tokens or features as the input parameter, and that the SVM has a smaller vocabulary than the CNN model. We conducted similar experiments for the LSTM and we did not see an increase in accuracy. As a note, in these results, the CV experiments led to a range of \(\pm 0.05\) to \(\pm 0.07\) deviation from the numbers shown in the plots.
Finally, we also investigated the effect of the number of epochs for the CNN model. Fig. 6 shows an example of a typical run using the pre-trained word-based CNN on the topic classification task. As can be seen from both
Figure 4: Confusion matrix for four models on Topic-Based Classification
the accuracy and loss figures, the best values for the number of epochs were 4-6, after which the network started overfitting on the training data.
Overall, we see that the topic classification task is more challenging for the models than the sentiment analysis. This was expected as the binary sentiment analysis task has been shown to be relatively more straightforward for DL models in recent years. Our empirical results also indicate that we should use a larger part of each review (in the form of the _maxlen_ parameter) to feed as input into the DL models to improve performance. After manually exploring several reviews chosen at random, we observed that sometimes the topic-related wording did not appear until later in the comment. This verified that using a larger part of the course review would indeed result in better model performance. Finally, some of the reviews might lack topic-related terms or language, or they might have wording that is shared by more than one topic, and therefore they are more difficult to classify even by a human. For example, given the following snippets of a 'Web Development' course review: _"This class is a good jump start into a technical career! The class has a limited amount of time and it's really hard to go over as much content as there is, but it's all necessary to get down the fundamentals in that timeframe. [...] is willing to help people with coding problems after it is over [...]"_: this review could easily be classified into 'Programming' instead of 'Web Development' (note that none of the parts we omitted from this example included any terms or language specific to web development).
A limitation of this work is the way we selected and assigned the topics: for example, the 'Non-Programming' topic included reviews for courses in Digital marketing or UX Design, which made the topic not as well-focused as the rest of the topics. At the same time, the 'Programming' topic included reviews for courses in iOS, Android, or Full Stack Development, which might also have very different terminology. Finally, as we just discussed, the terms or expressions used in a review for a 'Programming' course could very well apply to a review for a 'Web development' course. As our dataset are available to others, future research could look into different topics that are more detailed or assigned differently.
## 6 Conclusions
In this study, we describe how we collected and pre-processed more than ten thousand course review comments publicly available online. We present extensive experimentation with several ML techniques to extract sentiment from the text in the reviews as well as to detect the topic of the course for which the review was written. The techniques with which we experiment included a traditional bag-of-words representation of the text as well as word embeddings and character embeddings. Our employed classification models range from traditional machine learning, such as Naive Bayes and SVMs, to current DL techniques, based on CNNs and LSTMs. Finally, we fill a gap in the current research by exploring state-of-the-art transformer-based models (BERT
(Devlin et al, 2019), RoBERTa (Liu et al, 2019), and XLNet (Yang et al, 2019)) which have not been used extensively yet in this course review analysis field.
Our extensive experimentation with these algorithms shows how the different models behave for the two tasks. For the sentiment analysis task, the state-of-the-art transformer-based NLP models perform the best. For the topic classification task, the traditional models, such as SVMs, perform the best, though the DL models become top-performing when we increased the fraction of the course review that is fed as input to the model (using a hyperparameter called _maxlen_). At the same time, we provide a complete picture by showing how the state-of-the-art models require much longer execution times to achieve their results.
Figure 5: Effect of _maxlen_ on Accuracy and F1-macro for the CNN Model (regular word embeddings) for both tasks
Figure 6: Accuracy and Loss per epoch for the CNN Model (pre-trained word embeddings) on Topic-Based Classification
Sentiment analysis and topic classification can be used by educators and administrators as part of their assessment process in order to continuously improve instruction delivery and address issues. Our empirical results, exploration, and discussion can serve to guide others in the analysis of their own course feedback data. Our data and models could be used by others in their own course feedback analysis. Future research goals are to further explore our data towards aspect-based sentiment analysis. We also plan to explore other features in the data, such as helpfulness of the reviews, and explore the use of additional pre-trained models such as EduBERT (Clavie and Gal, 2019). Finally, we would like to explore the applicability of our pre-trained DL models on other student feedback data.
### Declarations
**Availability of data and materials:** The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
**Conflict of interest:** The authors declare that there is no conflict of interest regarding the publication of this paper and that the work presented in this article is not supported by any funding agency.
|
2310.10511 | A linear parameters study of ion cyclotron emission using drift ring
beam distribution | Ion cyclotron emission (ICE) holds great potential as a diagnostic tool for
fast ions in fusion devices. The theory of magnetoacoustic cyclotron
instability (MCI), as an emission mechanism for ICE, states that MCI is driven
by a velocity distribution of fast ions that approximates a drift ring beam.
The influence of key parameters on the linear MCI is systematically
investigated using the linear kinetic dispersion relation solver BO (Xie H.
2019 Comput. Phys. Comm. 244 343). The computational spectra region considered
extends up to 40 times the ion cyclotron frequency. By examining the influence
of these key parameters on MCI, several novel results have been obtained. In
the case of MCI excited by super-Alfv\'enic fast ions, the parallel velocity
spread significantly affects the bandwidth of harmonics and the continuous
spectrum, while the perpendicular velocity spread has a decisive effect on the
MCI growth rate. As the velocity spread increases, the linear relationship
between the MCI growth rate and the square root of the number density ratio
transitions to a linear relationship between the MCI growth rate and the number
density ratio. This finding provides a linear perspective explanation for the
observed linear relation between fast ion number density and ICE intensity in
JET. Furthermore, high harmonics are more sensitive to changes in propagation
angle than low harmonics because a decrease in the propagation angle alters the
dispersion relation of the fast Alfv\'en wave. In the case of MCI excited by
sub-Alfv\'enic fast ions, a significant growth rate increase occurs at high
harmonics due to the transition of sub-Alfv\'enic fast ions to super-Alfv\'enic
fast ions. | Haozhe Kong, Huasheng Xie, Jizhong Sun | 2023-10-16T15:32:28Z | http://arxiv.org/abs/2310.10511v1 | # A linear parameters study of ion cyclotron emission using drift ring beam distribution
###### Abstract
Ion cyclotron emission (ICE) holds great potential as a diagnostic tool for fast ions in fusion devices. The theory of magnetoacoustic cyclotron instability (MCI), as an emission mechanism for ICE, states that MCI is driven by a velocity distribution of fast ions that approximates to a drift ring beam. In this study, the influence of key parameters (velocity spread of the fast ions, number density ratio, and instability propagation angle) on the linear MCI is systematically investigated using the linear kinetic dispersion relation solver BO (Xie H. 2019 _Comput. Phys. Comm._**244** 343). The computational spectra region considered extends up to 40 times the ion cyclotron frequency. By examining the influence of these key parameters on MCI, several novel results have been obtained. In the case of MCI excited by super-Alfv\(\,\)nic fast ions (where the unique perpendicular speed of fast ion is greater than the perpendicular phase velocity of the fast Alfv\(\,\)an waves), the parallel velocity spread significantly affects the bandwidth of harmonics and the continuous spectrum, while the perpendicular velocity spread has a decisive effect on the MCI growth rate. As the velocity spread increases, the linear relationship between the MCI growth rate and the square root of the number density ratio transitions to a linear relationship between the MCI growth rate and the number density ratio. This finding provides a linear perspective explanation for the observed linear relation between fast ion number density and ICE intensity in JET. Furthermore, high harmonics are more sensitive to changes in propagation angle than low harmonics because a decrease in the propagation angle alters the dispersion relation of the fast Alfv\(\,\)an wave. In the case of MCI excited by sub-Alfv\(\,\)nic fast ions (where the unique perpendicular speed of fast ion is less than the perpendicular phase velocity of the fast Alfv\(\,\)an waves), a significant growth rate increase occurs at high harmonics due to the transition of sub-Alfv\(\,\)nic fast ions to super-Alfv\(\,\)nic fast ions. Similarly, for MCI excited by greatly sub-Alfv\(\,\)nic fast ions (where the unique perpendicular speed of fast ion is far less than the perpendicular phase velocity of the fast Alfv\(\,\)an waves), the growth rate at high harmonics also experiences a drastic increase compared to the low harmonic, thereby expanding the parameter range of the velocity spread.
Keywords: ion cyclotron emission (ICE), magnetoacoustic cyclotron instability (MCI), fast ion
## 1 Introduction
ICE, which was the first collective radiative instability driven by confined fusion-born ions in both JET [1, 2] and TFTR [3] deuterium-tritium plasmas, has gained significant attention as a potential passive and non-invasive diagnostic technique for studying the population of fast ions in fusion devices [4-6]. Almost all tokamaks, including TFR [7, 8], PDX [9], JT-60U [10-15], ASDEX-U [16-19], KSTAR [20, 21], DIII-D [22-27], EAST [28-30], NSTX-U [31, 32], TUMAN-3M [33, 34], JET [1, 2,35-38], TFTR [3, 39], and HL-2A [40], as well as some stellarators such as LHD [41-45], W7-AS [46], and W7-X [47], have detected ICE. Experimental results have shown that the ICE intensity is
nearly proportional to the neutron flux over a range of six orders of magnitude [1], and is closely related to various magnetohydrodynamic (MHD) activities, including edge localized mode (ELM) [1, 4, 20, 21, 29, 30, 48], fishbone mode [23], sawtooth oscillations [49], toroidal Alfv \(\mathfrak{e}\)n eigenmode (TAE) [42], and plasma disruption [30]. These findings indicates that the excitation of ICE is inseparable from the presence of fast ions, which are generated by fusion reactions, ion cyclotron resonance heating (ICRH), and neutral beam injection (NBI) at the plasma center or boundary. Fast ions, which can be fusion products or minority species accelerated by ICRH, undergo radial drift excursions towards the outer edge plasma, resulting in a population inversion in velocity space that drives ICE. Similarly, fast ions injected by NBI also exhibit a population inversion near their injection point, contributing to the excitation of ICE. As a description of the population inversion of the above fast ions, drift ring beam distribution has been widely used in theories and simulations [1, 2, 14, 38, 44, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68].
A successful theoretical explanation for the excitation mechanism of ICE is magnetoacoustic cyclotron instability (MCI) in the locally uniform approximation [53, 54, 55, 56, 69, 70, 71, 72, 73]. The MCI theory was first developed by Belikov and Kolesnichenko [71], in which the fast Alfv \(\mathfrak{e}\)n and ion Bernstein branches at frequencies close to the cyclotron harmonics \(\,\lambda\Omega_{\rm F}\,\) (where \(\,\lambda\,\) is the harmonic number) of a fast ion species F, are excited and propagate strictly perpendicular to the magnetic field in frequencies \(\,\omega\gg\Omega_{\rm F}\,\) (where \(\Omega_{\rm F}\,\) is the ion cyclotron frequency). Dendy et al. later extended this analysis to the frequency range \(\,\omega\!\!\sim\!\!\Omega_{\rm F}\,\), providing an explanation for ICE excitation [69]. The inclusion of finite parallel wave number \(\,k_{\rm I}\,\) allowed MCI to being further successful in explaining the ICE excited by super-Alfv \(\mathfrak{e}\)nic fast ions in JET [53]. Subsequently, MCI was used to account for ICE excited by sub-Alfv \(\mathfrak{e}\)nic fast ions generated by fusion reactions and NBI [55]. Additionally, a variant of MCI successfully accounted for ICE excited by greatly sub-Alfv \(\mathfrak{e}\)nic fast ions from NBI [54]. Analysis further considering magnetic field gradient and curvature drift effects predicted higher MCI growth rates [56, 57].
However, the aforementioned MCI theory is valid when the instability growth rate \(\gamma\) is larger than the inverse bounce/transit period of the fast ions \(\,\tau_{b}^{-1}\,\) (\(\gamma>\tau_{b}^{-1}\)). When the instability growth rate \(\gamma\) is lower than \(\tau_{b}^{-1}(\gamma<\tau_{b}^{-1})\), the analysis must consider toroidal effects and eigenmode structures [73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84]. In this work, our focus mainly lies on the locally uniform approximation MCI theory, where \(\,\gamma>\tau_{b}^{-1}\,\). A large number of linear and nonlinear simulations have confirmed the validity of MCI in the locally uniform approximation [14, 38, 44, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68], and these simulations capture most of the key observed features of the ICE measurements in JET and TFIR experiments, including the simultaneous excitation of all cyclotron harmonics, the splitting of spectral peaks, ICE intensity scaling approximately linearly with the fast ion number density, the strong growth rate for nearly perpendicular wave propagation, and the congruence between the linear theory and the observed signal intensities. The experimental phenomenon of the un-captured continuous spectrum for \(\,l>8\,\) in JET has also gained further understanding in this work. What needs to be emphasized is the congruence between linear theory and the observed signal intensities, including the linear simulation results agreeing surprisingly well with both the peaks in ICE intensity at ion cyclotron harmonics and the trend of increasing intensity with harmonic number [58, 59], and a striking correlation between the time evolution of the maximum linear growth rate and the observed time evolution of the ICE amplitude [55, 66]. Nonlinear simulations [58, 59] suggest that MCI is intrinsically self-limiting on very fast timescales, providing an explanation for the observed correlation between linear theory and ICE intensity. Therefore, the self-limitation of MCI suggests that linear simulation is still a very important way to study ICE.
The key observed features of the ICE measurements captured by the simulations are closely related to the velocity spread of the fast ions, the number density ratio \(\,\xi_{\rm F}\,\) (the ratio of the fast ion number density to the background ion number density), and the instability propagation angle \(\,\theta\,\) (the angle between the wave propagation and the ambient magnetic field), and these three parameters have been the focus of MCI research. However, previous linear and nonlinear simulations have not systematically studied the influence of these key parameters on MCI for the drift ring beam distribution in three cases: super-Alfv \(\mathfrak{e}\)nic, sub-Alfv \(\mathfrak{e}\)nic, and greatly sub-Alfv \(\mathfrak{e}\)nic fast ions. This is a limitation for using ICE diagnosis to obtain fast ion information in future experiments. Therefore, this work provides a systematic investigation of the influence of these key parameters on linear MCI. The computational spectra region considered in this work is up to 40 times of the ion cyclotron frequency, which is rarely explored in other simulations. In addition, a more realistic experimental condition is considered in the simulation of ICE excited by super-Alfv \(\mathfrak{e}\)nic fast ions. The present work is carried out by using the fully kinetic dispersion relation program BO (further details provided in Appendix A) [86, 87, 88]. The BO program has the advantage of providing all solutions of a linear kinetic plasma system without requiring an initial guess for root finding, which distinguishes it from previous simulation programs. This allows for parameter scanning across a wide range. Furthermore, the BO program model includes the wave electric field parallel to the ambient magnetic field. Therefore, MCI simulation carried out by
using the BO program can be extended to arbitrary angle.
In section 2, the simulation results of BO program are successfully compared with the previous simulation results. In sections 3, 4 and 5, detailed simulation results are presented for MCI excited by super-Alfv \(\ddot{\mathrm{a}}\)nic, sub-Alfv \(\ddot{\mathrm{a}}\)nic, and greatly sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions, respectively. Conclusions are presented in section 6.
## 2 Benchmark
In this section, the simulation results of the BO program are successfully compared with the linear theory for super-Alfv \(\ddot{\mathrm{a}}\)nic and greatly sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions, respectively. In all our simulations, the velocity space distribution of the fast ions follows a drift ring beam distribution [86]:
\[f\propto\exp\left(-\frac{(\nu_{\mathrm{I}}-\nu_{\mathrm{d}})^{2}}{v_{\mathrm{I }}^{2}}\right)\exp\left(-\frac{(\nu_{\mathrm{I}}-u_{\perp})^{2}}{u_{\mathrm{I} }^{2}}\right).\]
Here \(\nu_{\mathrm{I}}\) and \(\nu_{\perp}\) denote velocity components parallel and perpendicular to the magnetic field, and \(\nu_{\mathrm{d}}\) and \(u_{\perp}\) are constants that define the average parallel drift speed and the unique perpendicular speed, respectively. \(\nu_{r}\) and \(u_{r}\) are parameters that define the parallel and perpendicular velocity spread of the fast ions, respectively.
In figure 1, we compare the output of BO program with both linear analytical theory and the linear stages of first principles fully kinetic nonlinear simulations for MCI excited by super-Alfv \(\ddot{\mathrm{a}}\)nic fast ions (alpha-particle). The simulation parameters of the BO program are the same as those in [59], where bulk plasma parameters approximate the outer midplane edge conditions of the JET Preliminary Tritium Experiment (PTE) pulse 26148. The magnetic field is 2.1T, electrons and bulk deuterons are thermalized at 1keV. The bulk deuteron number density \(n_{\mathrm{D}}\) is \(10^{19}\mathrm{m}^{-3}\), and the number density ratio \(\xi_{\mathrm{F}}=n_{\mathrm{F}}/n_{\mathrm{D}}=10^{-3}\), where \(n_{\mathrm{F}}\) represents fast ion number density. The rest of the simulation parameters are that \(u_{\perp}=1.294\times 10^{7}\) m/s, \(\nu_{\mathrm{d}}=0\) m/s, \(\nu_{\mathrm{r}}=u_{r}=0\) m/s, the Alfv \(\ddot{\mathrm{v}}\)nic velocity \(\nu_{\mathrm{A}}=1.02\times 10^{7}\) m/s, and the propagation angle \(\theta=88^{\circ}\). From figure 1, it can be seen that the simulation results obtained using the BO program are basically consistent with those from the nonlinear simulations and linear analytical theory.
Figure 2 shows the comparison between the BO program and the variant of the linear MCI theory regarding the growth rate of instability excited by greatly sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions (deuteron). The simulation parameters used in the BO program are, following Ref. [54], magnetic field \(B_{0}=57\), bulk deuteron temperature \(T_{\mathrm{D}}=3\)keV, electron temperature \(T_{\mathrm{e}}=1.5\)keV, bulk deuteron number density \(n_{\mathrm{D}}=1.5\times 10^{19}\)m\({}^{-3}\), the number density ratio \(\xi_{\mathrm{F}}=n_{\mathrm{F}}/n_{\mathrm{D}}=10^{-2}\), \(u_{\perp}=2.4\times 10^{6}\) m/s, \(\nu_{\mathrm{d}}=2.4\times 10^{6}\) m/s, \(\nu_{r}=2.4\times 10^{4}\) m/s, \(u_{r}=0\) m/s, \(\nu_{\mathrm{A}}=1.991\times 10^{7}\) m/s, and \(\theta=88.5^{\circ}\). From figure 2, the BO simulation results are in good agreement with the linear theory.
Through the successful verification of MCI excited by super-Alfv \(\ddot{\mathrm{a}}\)nic and greatly sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions, the maturity and reliability of the BO program for MCI simulations have been demonstrated. Next, we will conduct detailed simulation on the velocity spread of the fast ions, the number density ratio, and the instability propagation angle for the cases of the MCI excited by super-Alfv \(\ddot{\mathrm{a}}\)nic, sub-Alfv \(\ddot{\mathrm{a}}\)nic, and greatly sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions, respectively. It is important to note the classification of fast ions into these categories [4]. On the one hand, the physical mechanism of ICE excited by greatly sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions differs from that of ICE excited by super-Alfv \(\ddot{\mathrm{a}}\)nic and sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions. On the other hand, the excitation conditions of MCI are distinctly different for the above three fast ions. In general, the super-Alfv \(\ddot{\mathrm{a}}\)nic fast ions can drive the MCI even if they are isotropic or have a relatively broad distribution of speeds[4]. The sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions that are isotropic or have undergone a certain degree of thermalization cannot drive the MCI[4]. For the greatly sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions with a very narrow spread of velocities in the parallel direction the instability can occur[4]. In addition, the above three ions show more differences in this work.
Figure 1: Growth rate of analytical linear theory from figure 1(b) of [59], nonlinear simulation from figure 1(b) of [59], and BO simulation for MCI excited by super-Alfv \(\ddot{\mathrm{a}}\)nic fast ions in cyclotron harmonics up to 12. \(\Omega_{\mathrm{F}}\) is the cyclotron frequency of fast ion.
Figure 2: Analytical linear growth rate from figure 3 of [54] along with corresponding results from BO simulations for instability excited by greatly sub-Alfv \(\ddot{\mathrm{a}}\)nic fast ions.
## 3 Super-Alfv\(\,\)nic fast ions
In the JET Preliminary Tritium Experiment, ICE excited by super-Alfv\(\,\)eric fast ions (alpha-particle) exhibits numerous features, and linear and nonlinear simulations have captured and explained most of the essential features [4, 53, 56, 58-60]. However, the high cyclotron harmonic range has been seldom considered in the previous simulations. Based on this, the present work studies that the computational spectra region is up to 40 times of the ion cyclotron frequency while a more comprehensive MCI simulation results on the velocity spread of the fast ions, the number density ratio, and the instability propagation angle are presented. Additionally, we consider a more realistic experimental condition by including a certain percentage of tritium in the background plasma for deuterium-tritium plasma simulations. The simulated parameters are consistent with those used in section 2 regarding MCI excited by super-Alfv\(\,\)eric fast ions.
### Velocity spread
The velocity spread of fast ions generated by D-T fusion is influenced by the rise time of neutron emission and the fast ion slowing-down time. The research indicates that if the rise time of neutron emission exceeds the fast ion slowing-down time, collisions will cause the energy distribution of fast ions to broaden before new fast ions are added[89-91]. Therefore, it is necessary to consider the impact of energy distribution on ICE. Studies on the effects of velocity spread on ICE have explored both the impact of considering only parallel velocity spread [50, 57] and the combined effects of parallel and perpendicular velocity spread [2, 55, 70], which corresponds to the case of isotropic temperature. These studies have all shown a suppressive effect on ICE. In our simulation, we consider isotropic temperature, i.e., \(v_{r}=u_{r}\), with the simulation range from 0 to 0.4\(u_{\perp}\). Figures 3 and 4 show that the growth rates of cyclotron harmonics up to \(l=40\) are plotted as a function of \(\omega_{r}\) for velocity spread ranging from 0 to 0.4\(u_{\perp}\). The results show that the growth rates of MCI gradually decrease with increasing \(v_{r}\) and \(u_{r}\) when the cyclotron harmonics are relatively low, i.e., less than 17, which is consistent with the previous simulation results (c.f., [55]). However, when considering higher harmonics, the simulations on MCI reveal more unique phenomena. Firstly, when \(v_{r}=u_{r}<0.1u_{\perp}\), the harmonics greater than 18 are divided into four intervals, with the centers of these intervals located at \(l=25,\ 32,\ 36,\ 39,\ 39\), respectively. Secondly, the harmonics, specifically \(l=19,\ 20,\ 21,\) and 29, are obviously suppressed in \(v_{r}=u_{r}=0\). Thirdly, in terms of the high harmonics, there is a significant suppression as \(v_{r}\) and \(u_{r}\) increase when \(v_{r}=u_{r}<0.1u_{\perp}\). Overall, the growth rate of most harmonics decreases with increasing \(v_{r}\) and \(u_{r}\). However, for boundary harmonics such as \(l=22\), in the four intervals, the growth rate decreases sharply first, then increases slightly, and finally decreases gradually with increasing \(v_{r}\) and \(u_{r}\). Hence, the previous rule does not apply to such harmonics.
A particularly interesting phenomenon arises when \(v_{r}\) and \(u_{r}\) exceed 0.22\(u_{\perp}\): a continuous spectrum forms at the high harmonics, representing the first instance in which a simulation captures a continuous spectrum resembling experimental results. To understand the factors contributing to this phenomenon, we conduct a separate investigation into the influence of \(v_{r}\) and \(u_{r}\) on MCI. Figure 5 shows that the growth rates of cyclotron harmonics are plotted as a function of \(\omega_{r}\) for (a)\(v_{r}=0.4u_{\perp}\), \(u_{r}=0\), and (b)\(v_{r}=0\), \(u_{r}=0.4u_{\perp}\). By comparing Figure 5 with Figure 3(a), we can clearly see that \(v_{r}\) exerts a minor suppressive effect on the growth rate of MCI but plays a decisive role in determining the bandwidth of harmonics and the presence of the continuous spectrum. Conversely, \(u_{r}\) significantly influences the growth rate of MCI. It should be noted that it is understandable that the previous simulations did not capture this key feature. In previous linear simulations, \(v_{r}\) was consistently small, and only a single low harmonic was considered. In previous nonlinear simulations, where the excitation energy of MCI stems from perpendicular speed, \(v_{r}\) remained small during the nonlinear evolution. Thus, in future nonlinear simulations containing a greater \(v_{r}\), a continuum spectrum is foreseen.
### Number density ratio
In JET [1], a linear relation was found between ICE intensity and neutron rate over six orders of magnitude. This observation not only suggests that ICE is excited by fast ions but also indicates a strong connection between fast ion number density and MCI. The simulations have successfully reproduced this experimental phenomenon and revealed a linear relationship between \(\sqrt{\xi_{\rm F}}\) and the MCI growth rate, with \(\xi_{\rm F}\) spanning 2 to 3 orders of magnitude in the linear phase [58, 60]. In our simulations, our study explores a wider range of \(\xi_{\rm F}\) spanning 5 orders of magnitude: \(\xi_{\rm F}=5\times 10^{-7},\ 10^{-6},\ 10^{-5},\ 10^{-4},\) and \(10^{-3}\), while keeping other simulation parameters constant. Figure 6 shows the ratio of the MCI growth rate \(\gamma\) to \(\sqrt{\xi_{\rm F}}\) versus \(\xi_{\rm F}\), where a straight horizontal trend implies a relationship \(\gamma\)-\(\sqrt{\xi_{\rm F}}\). It should be noted that the properties of the four high harmonic intervals regarding the relation between \(\gamma\) and \(\sqrt{\xi_{\rm F}}\) is similar. Therefore, figure 6 only displays the relationship between \(\gamma\) and \(\sqrt{\xi_{\rm F}}\) for each harmonic within one harmonic range (\(22\leq l\leq 28\)). Comparing figure 6 and figure 7(a), we observe that these high harmonics conforming to the linear relation appear only in the centers of the four intervals. In addition, when \(l<22\), the growth rate of all harmonics except \(l=17\), \(l=18\), and \(l<6\) varies linearly with \(\sqrt{\xi_{\rm F}}\), which is consistent with the previous simulation results [60]. Figure 7 shows the growth rates of cyclotron harmonics up to \(l=40\) are plotted as a function of \(\omega_{r}\) for \(\xi_{\rm F}\) ranging from \(10^{-6}\) to \(10^{-3}\). It is evident that as \(\xi_{\rm F}\) decreases, the growth rate of each harmonic gradually decreases. Similar to \(v_{r}\), \(\xi_{\rm F}\) also exerts a significant influence on the bandwidth of harmonics.
Lastly, considering the self-limitation of MCI, it is reasonable to infer that the linear relationship between ICE intensity and neutron rate during the nonlinear stage is not unrelated to the linear stage. Therefore, it is natural to speculate that the velocity spread plays a crucial role in the relationship between \(\gamma\) and \(\xi_{\rm F}\). Figure 8 shows \(\gamma/\Omega_{\rm F}\) versus \(\xi_{\rm F}\) at different velocity spreads. The figure illustrates that the relationship \(\gamma/\Omega_{\rm F}\)\(\sim\)\(\sqrt{\xi_{\rm F}}\) (figure 8 (a)) transitions to \(\gamma/\Omega_{\rm F}\)\(\sim\)\(\xi_{\rm F}\) (figure 8 (d)) with increasing the velocity spread. Hence, this work confirms that the linear relationship between ICE intensity and neutron rate may be determined by a linear mechanism.
### Propagation angle
In the development of linear MCI theory, the fast Alfv\(\,\)@n and ion Bernstein branches is extended from perpendicular to oblique propagation. While Landau damping and ion cyclotron damping have been introduced into MCI, it has been observed through both linear and nonlinear simulations [53, 56, 57, 62] that all ion cyclotron harmonics are excited simultaneously, exhibiting a strong growth rate for nearly perpendicular wave propagation. However, previous linear simulations neglected the wave electric field parallel to the ambient magnetic field, limiting the strict self-consistency of the results to large instability propagation angles. In nonlinear simulations, the instability propagation angle is also limited to a small range close to perpendicular direction. Therefore, we use the BO program, which includes the wave electric field parallel to the ambient magnetic field, to simulate the effect of a large angle deviating from the perpendicular direction on MCI excitation. We consider a large angle of deviation (\(15^{\circ}\leq\theta\leq 90^{\circ}\)) and account for the parallel drift speed \(\nu_{\mathrm{d}}\), which cannot be neglected in such cases. Based on [1, 53], we set \(\nu_{r}=u_{r}=0.04u_{\perp}\) and \(\nu_{\mathrm{d}}=0.25u_{\perp}\). Figure 9 shows that the growth rates of cyclotron harmonics up to \(l=40\) are plotted as a function of \(\omega_{r}\) for \(\theta\) ranging from \(50^{\circ}\) to \(88^{\circ}\). We do not present simulations with propagation angles greater than \(90^{\circ}\) since the forward and reverse propagating waves of angles greater than \(90^{\circ}\) are equivalent to the reverse and forward propagating waves of angles less than \(90^{\circ}\), respectively. Overall, as the propagation angle decreases, almost all harmonics are basically suppressed at propagation angles \(\theta<15^{\circ}\). At about \(85^{\circ}\), the harmonics \(|l|\geq 15\) are almost suppressed, indicating that the high harmonics are more sensitive to the propagation angle than low harmonics.
The rapid suppression of MCI with increasing the angle of deviation from the perpendicular direction is related to the dispersion relation of the fast Alfv\(\,\)@n wave. Figure 10 shows the dispersion relation of the fast Alfv\(\,\)@n wave at different angles, where the straight lines and curves correspond to ion Bernstein and fast Alfv\(\,\)@n waves, respectively. The dispersion relation changes when the propagation angle changes from \(88^{\circ}\) to \(80^{\circ}\), which causes \(u_{\perp}\) to become smaller than the perpendicular phase velocity of the fast Alfv\(\,\)@n wave \(\nu_{\mathrm{AL}}\) at harmonics \(|l|>10\), invalidating the super-Alfv\(\,\)@nic condition and making the sub-Alfv\(\,\)@nic condition valid. This eventually leads to the suppression of the harmonics. It needs to be noted that previous references focused on the case of low harmonic numbers and large propagation angles, where the dispersion relationship of linear Alfv\(\,\)@n waves \(\omega=k_{\perp}v_{\mathrm{A}}\) can describe the dispersion relationship of Alfv\(\,\)@n waves well. Therefore, comparing \(u_{\perp}\) with the Alfv\(\,\)@n speed \(v_{\mathrm{A}}\) is meaningful. However, when the harmonic number is large, the dispersion relationship of Alfv\(\,\)@n waves changes, and it makes sense to compare the \(u_{\perp}\) with the perpendicular phase velocity of fast Alfv\(\,\)@n wave \(\nu_{\mathrm{AL}}\), as defined in this work.
One important observation from figure 9 is that each harmonic consists of both forward and reverse propagating waves, with the forward wave having a higher frequency than the reverse wave. The line splitting, which is evident in figure 9 due to \(\theta\) and \(\nu_{\mathrm{d}}\), provides a simple explanation [4, 53] for the spectral peak splitting observed in JET experiments. Furthermore, recent computational results
reported in Ref. [85] indicate that the origin of spectral peak splitting is Doppler-shifted resonances and the intricate landscape of the MCI growth rate on the dispersion surface in \((k_{\perp},k_{\mathrm{I}})\) space.
### Containing tritium
In previous simulations of the JET Preliminary Tritium Experiment [50, 53, 55, 56, 57, 58, 59, 60], the background plasma was typically assumed to be deuterium plasma, which is a reasonable approximation for low tritium number density. However, for the higher tritium-to-deuterium number density ratios expected in the future ITER, it is necessary to contain a corresponding percentage of tritium in the background plasma. Specifically, we set the maximum ratio of tritium to total ion number density to 0.3, while keeping other parameters unchanged. Figure 11 shows that the growth rates of cyclotron harmonics up to \(l=40\) are plotted as a function of \(\omega_{r}\) for tritium number density ratio ranging from 0 to 0.3. It is clear from the figure that, as the tritium number density ratio increases, the four high harmonic intervals move toward the low harmonics. The harmonics that are significantly suppressed not only exhibit the same trend, but also experience a slight increase in their number.
## 4 Sub-Alfv\(\,\)nic fast ions
ICEs excited by sub-Alfv\(\,\)nic fast ions are commonly observed and have been widely studied through simulations. Here, as with the simulation of the MCI excited by super-Alfv\(\,\)nic fast ions, we conduct a comprehensive simulation of the key parameters for MCI excited the sub-Alfv\(\,\)nic fast ions, with consideration for cyclotron harmonics up to 40. The simulation parameters obtained from LHD are, following Ref. [62], magnetic field \(B_{0}=0.46\,\)T, bulk proton temperature \(T_{\mathrm{H}}=150\,\)eV, electron temperature \(T_{\mathrm{e}}=150\,\)eV, bulk proton number density \(n_{\mathrm{H}}=10^{19}\mathrm{m}^{-3}\), the ratio of the fast ion (proton) number density to the background proton number density \(\xi_{\mathrm{F}}=n_{\mathrm{F}}/n_{\mathrm{H}}=5\times 10^{-4}\), \(u_{\perp}=2.77\times 10^{6}\,\)m/s, \(v_{\mathrm{d}}=0\,\)m/s, \(v_{r}=u_{r}=0\,\)m/s, \(v_{\mathrm{A}}=3.17\times 10^{6}\,\)m/s, and \(\theta=89^{\circ}\).
### Velocity spread
As mentioned in section 2, ICEs excited by sub-Alfv\(\,\)nic fast ions depend more on the velocity spread of the fast ions than those excited by super-Alfv\(\,\)nic fast ions. However, this conclusion is obtained based on low harmonics, and new results have emerged with the inclusion of high harmonics in the present study. Figure 12, where the blue lines represent \(l\leq 28\) and the red lines represent \(l\geq 29\), show that the growth rates of cyclotron harmonics up to \(l=40\) are plotted as a function of \(\omega_{r}\) for velocity spread ranging from 0 to \(0.35u_{\perp}\). The figure shows that the growth rate of harmonics less than 29 is one to two orders of magnitude smaller than those of the harmonics greater
than 29, indicating that high harmonics are more likely to be excited. As the velocity spread increases, the harmonics less than 29 are rapidly suppressed, consistent with the previous simulation results that only consider the velocity spread in the perpendicular direction [64] or the combined effects of velocity spread in parallel and perpendicular [55], i.e., the isotropic temperature case, which all show a suppressive effect on ICE. The growth rate of harmonics greater than 28 gradually decreases, while continuous spectrum features appear, similar to the MCI excited by super-Alfv \(\ddot{\mathrm{a}}\)ic fast ions. Further analysis reveals that the characteristics observed in high harmonics are related to the dispersion relation of the fast Alfv \(\ddot{\mathrm{u}}\) wave. Figure 13 shows that \(\nu_{\mathrm{A}\perp}\) decreases gradually with the increase of harmonics, eventually resulting in \(u_{\perp}>\nu_{\mathrm{A}\perp}\). Consequently, the sub-Alfv \(\ddot{\mathrm{a}}\)ic condition becomes invalid at harmonics above 28 (the black circle mark in the figure), while the super-Alfv \(\ddot{\mathrm{a}}\)ic condition becomes valid, resulting in a drastic increase in the growth rate at harmonics greater than 28. Finally, it should be noted that from the figure 12(a) the simulation results of the BO differ significantly from those in [62] at \(l=11\). This is because the wave-wave coupling, which leads to an increase in the growth rate [59], occurs in the late linear phase of the simulations in [62] while the results of the BO here regarding the MCI excited by sub-Alfv \(\ddot{\mathrm{a}}\)ic fast ions are in the early linear phase.
### Number density ratio
Similar to the MCI excited by super-Alfv \(\ddot{\mathrm{a}}\)ic fast ions, detailed simulations about the number density ratio \(\xi_{\mathrm{F}}\) is presented for the MCI excited by sub-Alfv \(\ddot{\mathrm{a}}\)ic fast ions. The simulation sets \(\xi_{\mathrm{F}}=10^{-6}\), \(10^{-5}\), \(10^{-4}\), and \(10^{-3}\), while keeping other parameters unchanged. Figure 14 illustrates that as the number density ratio \(\xi_{\mathrm{F}}\) decreases, the growth rate of each harmonic also decreases. Moreover, for harmonics above 29, they are divided into three intervals centered around \(l=33\), 37, and 39, respectively. We find that these high harmonics, which exhibit a linear relationship between \(\sqrt{\xi_{\mathrm{F}}}\) and \(\gamma\), appear only at the centers of the three intervals, as shown in figure 15(b), where the interval encompassing \(l=36\), 37 and 38 is depicted. Therefore, the high harmonics with the typical characteristics resembling MCI excited by super-Alfv \(\ddot{\mathrm{a}}\)ic fast ions further supports the conclusions presented in section 4.1. Lastly, for harmonics less than 29, such as \(l=15\) in figure 15(a), the growth rate \(\gamma\) is close to linear relation with \(\sqrt{\xi_{\mathrm{F}}}\), which is consistent with the previous results [62].
Figure 12: Growth rate of MCI excited by sub-Alfv \(\ddot{\mathrm{a}}\)ic fast ions as a function of \(\omega_{r}\) for (a)\(v_{r}=u_{r}=0.0u_{\perp}\), (b)\(v_{r}=u_{r}=0.02u_{\perp}\), (c)\(v_{r}=u_{r}=0.06u_{\perp}\), (d)\(v_{r}=u_{r}=0.35u_{\perp}\). The blue lines represent \(l\leq 28\), and its ordinate is on the left side of the figure. The red lines represent \(l\geq 29\), and its ordinate is on the right side of the figure.
Figure 13: Dispersion relation of fast Alfv \(\dot{\mathrm{u}}\) wave. The blue straight lines and blue curves correspond to ion Bernstein and fast Alfvén \(\ddot{\mathrm{u}}\) wave, respectively. The cold plasma dispersion relation for the fast Alfvén \(\ddot{\mathrm{u}}\) wave, is calculated by the fluid module of the BO program, is shown by the red dashed line.
### Propagation angle
In our research on the relationship between the MCI excited by super-Alfv\(\,\)onic fast ions and the propagation angle, simulation results indicate that the larger the angle of deviation from the perpendicular direction, the stronger the suppression of high harmonics. We have explored the relationship between MCI excited by sub-Alfv\(\,\)onic fast ions and the propagation angle, and the conclusions at the high harmonics are consistent with those of the MCI excited super-Alfv\(\,\)onic fast ions. From figure 16, which shows the relation between MCI growth rate and propagation angle, we can see that the high harmonics are first strongly suppressed as the angle of deviation from the perpendicular direction increases. Near \(\theta=85^{\circ}\), the growth rate of high harmonics is in the same order of magnitude with that of low harmonics, indicating that the super-Alfv\(\,\)onic condition becomes invalid at harmonics above 28, while the sub-Alfv\(\,\)onic condition becomes valid. As the propagation angle decreases further, the growth rate of high harmonics first gradually decreases. However, the growth rate of low harmonics does not decrease monotonically with decreasing propagation angle, but increase first and then decrease, which is consistent with the previous results [66]. When the propagation angle is less than \(10^{\circ}\), all harmonics are basically suppressed.
## 5 Greatly sub-Alfv\(\,\)onic fast ions
Different from the electromagnetic instability excited by super-Alfv\(\,\)onic and sub-Alfv\(\,\)onic fast ions, the instability excited by greatly sub-Alfv\(\,\)onic fast ions, which is a variant of MCI [54], is mainly electrostatic. There have been relatively few simulation studies on the ICE excited by greatly sub-Alfv\(\,\)onic fast ions. Here we conducted a comprehensive simulation study on MCI, taking into account the velocity spread of the fast ions (deuteron), the number density ratio, and the instability propagation angle. The simulation parameters are, following Ref. [54, 92], bulk deuteron temperature \(T_{\rm D}=4\)keV, and electron temperature \(T_{\rm e}=3\)keV. Other parameters remain the same as those used for greatly sub-Alfv\(\,\)onic fast ions discussed in section 2. In our subsequent simulations, we focus solely on the forward propagating waves, as the forward and reverse propagating waves exhibit similar properties, and this choice enhances the clarity of the presented results. In addition, we compare the electromagnetic and electrostatic results carried out by
using the program BO. Figure 17 illustrates that the results are largely consistent, supporting the analytical conclusion [54].
### Velocity spread
As mentioned earlier, the excitation of ICE by greatly sub-Alfv\(\,\)onic fast ions requires a very narrow spread of velocities in the parallel direction [4, 54], on the order of \(10^{-2}u_{\perp}\). The simulation results of the BO are consistent with the previous simulation results at low harmonics. However, when considering high harmonics, the excitation condition for MCI by greatly sub-Alfv\(\,\)onic fast ions becomes relatively relaxed. Figure 18 shows that the growth rates of cyclotron harmonics up to \(\,l=40\,\) are plotted as a function of \(\omega_{r}\) for velocity spread ranging from \(\,0\,\) to \(\,0.1u_{\perp}\). The figure shows that MCI excited by greatly sub-Alfv\(\,\)onic fast ions exhibits a similar characteristic to that excited by sub-Alfv\(\,\)onic fast ions at high harmonics, where the growth rate of high harmonics is one to two orders of magnitude larger than that of low harmonics. Different from the MCI excited by super-Alfv\(\,\)onic and sub-Alfv\(\,\)onic fast ions, the MCI excited by the greatly sub-Alfv\(\,\)onic fast ions shows more line splitting (the line splitting of high harmonics in figure 18(a) is similar to that of low harmonics, but the details are not shown due to the relatively small values of the finite structure compared to its maximum value), which can be well understood through equation (11) in Ref. [54]. Another important characteristic is that with increasing velocity spread, low harmonics are rapidly suppressed, while a few high harmonics still persist when \(\,v_{r}=u_{r}=0.1u_{\perp}\). This expands the parameter range for studying ICE excited by greatly sub-Alfv\(\,\)onic fast ions and is significant for experimental investigations of ICE.
### Number density ratio
In the TFTR experiment, the number density ratio \(\,\xi_{\rm F}\,\) is on the order of \(\,10^{-2}\), and relevant simulations have shown that exciting MCI for low harmonics is difficult at lower number density ratios [54]. Here we further study the influence of number density ratio on each harmonic, especially the high harmonics. Figure 19 plots the growth rates of cyclotron harmonics up to \(\,l=40\,\) as a function of \(\,\omega_{r}\,\) for the number density ratio \(\,\xi_{\rm F}\,\) ranging from \(\,10^{-4}\,\) to \(\,10^{-1}\). From the figure, we can see that the low harmonics are suppressed at a lower number density ratio. However, even when \(\,\xi_{\rm F}\,\) is reduced to \(\,10^{-4}\), a high harmonic persists. In addition, at \(\,\xi_{\rm F}=10^{-1}\), a continuous spectrum is formed at the high harmonics. Overall, as \(\,\xi_{\rm F}\,\) decreases, the growth rates of both low and high harmonics decrease rapidly, accompanied by narrower bandwidths. This behavior is similar to that of super-Alfv\(\,\)onic and sub-Alfv\(\,\)onic fast ions.
Figure 17: (a)electrostatic results along with corresponding (b)electromagnetic results for instability excited by greatly sub-Alfv\(\,\)onic fast ions. The blue lines represent \(\,l\leq 26\,\), and its ordinate is on the left side of the figure. The red lines represent \(\,l\geq 27\), and its ordinate is on the right side of the figure.
### Propagation angle
Here we study in detail the relationship between the instability excited by greatly sub-Alfv\(\,\)nic fast ions and the propagation angle. Figures 20 and 21 show that the growth rates of cyclotron harmonics up to \(\,l\,=\,40\,\) are plotted as a function of \(\,\omega_{r}\,\) for propagation angle ranging from \(\,87^{\,\ast}\,\) to \(\,93^{\,\ast}\,\). Notably, the instability excited by greatly sub-Alfv\(\,\)nic fast ions is more sensitive to the propagation angle than the instability excited by super-Alfv\(\,\)nic and sub-Alfv\(\,\)nic fast ions, and is basically suppressed when the propagation angle deviates from the perpendicular direction by about \(\,4^{\,\ast}\,\). Overall, the growth rate of each harmonic is strong for nearly perpendicular propagation. As the angle of deviation from the perpendicular direction increases, the most unstable harmonic remains in the high harmonic range, while the harmonics at the middle harmonic range are the first to be suppressed. This is different from the instability excited by super-Alfv\(\,\)nic and sub-Alfv\(\,\)nic fast ions, where the high harmonics are typically suppressed first.
excited by super-Alfv\(\,\)onic fast ions, harmonics greater than 18 are divided into four intervals. For MCI excited by sub-Alfv\(\,\)onic and greatly sub-Alfv\(\,\)onic fast ions, the growth rate of high harmonics is one to two orders of magnitude larger than that of low harmonics. Additionally, our simulations on velocity spread, number density ratio, and instability propagation angle yield interesting results, summarized in table 2.
The first one is about the simulations of velocity spread. For MCI excited by super-Alfv\(\,\)onic fast ions, \(v_{r}\) has a small suppressive effect on the growth rate but has a decisive effect on the bandwidth of harmonics and the continuous spectrum, while \(u_{r}\) has a decisive effect on the growth rate. For MCI excited by sub-Alfv\(\,\)onic fast ions, the high harmonics form a continuous spectrum at a greater velocity spread. This indicates that the sub-Alfv\(\,\)onic condition becomes invalid above harmonics 28, while the super-Alfv\(\,\)onic condition becomes valid. For MCI excited by greatly sub-Alfv\(\,\)onic fast ions, the parameter range of the velocity spread, where fast ions can excite MCI, is expanded.
The second one is about the simulations of number density ratio. For MCI excited by the super-Alfv\(\,\)onic fast ions, an important conclusion is that the relationship \(\gamma\)\(\sim\)\(\sqrt{\xi_{F}}\) transitions to \(\gamma\)\(\sim\)\(\xi_{F}\) with increasing the velocity spread. This promotes the understanding of the linear relation of the fast ion number density with ICE intensity in the JET. For MCI excited by sub-Alfv\(\,\)onic fast ions, high harmonics conforming to the linear relation between \(\sqrt{\xi_{F}}\) and \(\gamma\) appear only in the centers of the three intervals, which shows the typical characteristics of MCI excited by super-Alfv\(\,\)onic fast ions. For MCI excited by greatly sub-Alfv\(\,\)onic fast ions, the parameter range of the number density ratio allowing fast ions to excite MCI is expanded.
The last one is about the simulations of propagation angle. For MCI excited by the super-Alfv\(\,\)onic and sub-Alfv\(\,\)onic fast ions, high harmonics are highly sensitive to the propagation angle compared with low harmonics. Table 1. Summary of the previous linear simulation results on MCI excited by super-Alfv\(\,\)onic fast ions, sub-Alfv\(\,\)onic fast ions, and greatly sub-Alfv\(\,\)onic fast ions, respectively, in low harmonic range.
This is because the change in the dispersion relation of the fast Alfv\(\,\)onic wave with propagation angle results in the transition of super-Alfv\(\,\)onic fast ions to sub-Alfv\(\,\)onic fast ions. In addition, low harmonics still persist at large angles of deviation from the perpendicular direction. For MCI excited by greatly sub-Alfv\(\,\)onic fast ions, the most unstable harmonic is still at the high harmonic range when the angle of deviation increases. The instability excited by the greatly sub-Alfv\(\,\)onic fast ions is highly sensitive to the propagation angle and is basically suppressed when the propagation angle deviates from the perpendicular direction by about 4\({}^{\circ}\).
Lastly, we consider a more realistic experimental condition for MCI excited by super-Alfv\(\,\)onic fast ions, that is, the background plasma contains a certain percentage of tritium. The simulation results show that as the tritium number density ratio increases, the four high harmonic intervals move toward the low harmonics, and the harmonics that are significantly suppressed not only exhibit the same moving trend but also have a slight increase in their numbers.
In our current work, we have simulated the key parameters in detail on different devices such as JET (super-Alfv\(\,\)onic fast ions), LHD (sub-Alfv\(\,\)onic fast ions), TFTR (greatly sub-Alfv\(\,\)onic fast ions), and roughly summarized the rules. However, for different parameters of some different devices such as LHD (super-Alfv\(\,\)onic fast ions) [62], JET (sub-Alfv\(\,\)onic fast ions) [38], and TFTR (sub-Alfv\(\,\)onic fast ions) [55], we have also made corresponding simulations. The specific results may change for super-Alfv\(\,\)onic, sub-Alfv\(\,\)onic, and greatly sub-Alfv\(\,\)onic fast ions, but the rules are similar to those of the present work. For more wide parameters and more detailed results, the MCI simulation carried out by using the BO program is still anticipated. Finally, nonlinear simulations about the ICE continuous spectrum as well as wave-wave coupling in the linear phase would be interesting future works.
\begin{table}
\begin{tabular}{l l l l} \hline
**Parameter** & **Super-Alfv\(\,\)onic** & **Sub-Alfv\(\,\)onic** & **Greally sub-Alfv\(\,\)onic** \\ & **fast ions** & **fast ions** & **fast ions** \\ \hline Velocity spread & MCI can occur if fast ions are & MCI cannot occur if fast ions are & MCI can occur under a very \\ & isotropic or have a relatively & isotropic or a certain degree of & narrow spread of velocity in the \\ & broad distribution of speeds. & thermalization. & parallel direction. \\ Number density & \(\gamma\)\(\sim\)\(\sqrt{\xi_{F}}\) & \(\gamma\)\(\sim\)\(\sqrt{\xi_{F}}\) & MCI hard to excite at lower \\ & ratio & & density ratios \\ Propagation & strong growth rate for nearly & nonmonotonically decreasing & \\ angle & perpendicular wave propagation & growth rate with decreasing & \\ & propagation angle & & \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the new simulation results on MCI excited by super-Alfv\(\,\)onic fast ions, sub-Alfv\(\,\)onic fast ions, and
## Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grant No. 12275040, the National Key R&D Program of China under Grant Nos. 2019YFE03030004, Interdisciplinary and Collaborative Teams of CAS, and the High-End Talents Program of Hebei Province, Innovative Approaches towards Development of Carbon-Free Clean Fusion Energy (No. 2021HBQZYCSB006).
## Appendix A BO model
BO [86-88] (open source at [https://github.com/hsxie/bo](https://github.com/hsxie/bo)), which means 'wave' in Chinese, is a powerful solver that consists of two components: BO-F (PDRF), a multi-fluid solver, and BO-K (PDRK), a kinetic solver. BO-K effectively solves the uniform plasma dispersion relation with an extended Maxwellian based equilibrium distribution function, determining the solutions to \(\mathrm{D}(\omega,k)=0\), given a specific wave vector \(k\), by solving the series \(\omega\).
One notable advantage of BO is its ability to solve the difficulty of root finding without requiring an initial guess, providing all important solutions simultaneously, with using of a novel matrix method. Additionally, BO supports various features, including anisotropic temperature, loss cone, drift in arbitrary direction, ring beam, collision, unmagnetized/magnetized species, electrostatic/electromagnetic/Darwin, and \(k_{\mathrm{I}}\leq 0\). The BO code has been widely used in space and astrophysics plasmas (c.f., [93-95]) to study the abundant plasma waves and instabilities phenomena, where the uniform plasma assumption can be valid easily. A major purpose of the present work is also to show that it can be useful for study the waves and instabilities in fusion and laboratory plasmas. In addition, a ray tracing solver named BORAY[96], based on BO, for non-uniform plasmas has also been developed.
|
2303.10429 | Protein Sequence Design with Batch Bayesian Optimisation | Protein sequence design is a challenging problem in protein engineering,
which aims to discover novel proteins with useful biological functions.
Directed evolution is a widely-used approach for protein sequence design, which
mimics the evolution cycle in a laboratory environment and conducts an
iterative protocol. However, the burden of laboratory experiments can be
reduced by using machine learning approaches to build a surrogate model of the
protein landscape and conducting in-silico population selection through
model-based fitness prediction. In this paper, we propose a new method based on
Batch Bayesian Optimization (Batch BO), a well-established optimization method,
for protein sequence design. By incorporating Batch BO into the directed
evolution process, our method is able to make more informed decisions about
which sequences to select for artificial evolution, leading to improved
performance and faster convergence. We evaluate our method on a suite of
in-silico protein sequence design tasks and demonstrate substantial improvement
over baseline algorithms. | Chuanjiao Zong | 2023-03-18T14:53:20Z | http://arxiv.org/abs/2303.10429v1 | # Protein Sequence Design with Batch Bayesian Optimisation
###### Abstract
Protein sequence design is a challenging problem in protein engineering, which aims to discover novel proteins with useful biological functions. Directed evolution is a widely-used approach for protein sequence design, which mimics the evolution cycle in a laboratory environment and conducts an iterative protocol. However, the burden of laboratory experiments can be reduced by using machine learning approaches to build a surrogate model of the protein landscape and conducting in-silico population selection through model-based fitness prediction. In this paper, we propose a new method based on Batch Bayesian Optimization (Batch BO), a well-established optimization method, for protein sequence design. By incorporating Batch BO into the directed evolution process, our method is able to make more informed decisions about which sequences to select for artificial evolution, leading to improved performance and faster convergence. We evaluate our method on a suite of in-silico protein sequence design tasks and demonstrate substantial improvement over baseline algorithms.
## 1 Introduction
Protein engineering is a rapidly growing field of biotechnology that aims to create or modify proteins to perform new or improved functions[1]. The goal of protein engineering is to design proteins with specific desired characteristics, such as increased stability, specificity, and efficacy, or with new functions, such as the ability to bind to a target molecule by selecting sequences with high fitness scores[2]. This can be accomplished through a variety of techniques, including rational design and directed evolution. However, the former approach necessitates a thorough understanding of the protein structure. As an alternative, directed evolution is inspired by natural selection pressure, which can be regarded as a searching task on fitness landscape[3, 4]. The advantage is that directed evolution can simulate natural evolution in a laboratory setting, through an iterative protocol[5]. During each iteration, a large number of variants are generated and evaluated through functional assays. Only the sequences with desired fitness scores are selected from the landscape and to form the next generation[6].
The protein fitness landscape, shown in figure 1, is a crucial aspect of protein engineering, as it provides a mapping between protein sequences and their functional levels, or fitness scores. It represents the relationship between protein sequences and biological functions and is often represented as a high-dimensional surface[7, 8]. The relationship between a protein sequence and its fitness can be influenced by many factors, including the protein's three-dimensional structure, interactions with other proteins and molecules, and the environment in which it operates. Therefore, understanding the protein fitness landscape is crucial in selecting protein sequences with improved fitness scores. However, exploring this landscape can be a challenge as it requires expensive and time-consuming experiments or simulations[9]. The traditional exploration mechanism used in directed evolution is a simple greedy strategy[3], starting from the wild-type sequence and accumulating mutations based on the fitness landscape. This unrestricted search can lead to sequences with high mutation counts that are far from the wild type, which can be challenging to synthesize and produce. Additionally, the training procedure of the surrogate model may have difficulty incorporating previous samples that are far from the current search region. Recent efforts have focused on using machine learning[10]
to construct a surrogate model of the protein landscape and implementing model-guided search strategies. This approach effectively reduces the workload of laboratory experiments by performing in-silico population selection through model-based fitness predictions, but uncertainty and measurement errors are often neglected. To address this challenge, we propose a novel approach that combines the strengths of Bayesian Optimization and Convolutional Neural Networks (CNNs) to model the protein fitness landscape and perform efficient sequence design. Our approach leverages the strengths of multiple CNNs to improve the scalability of Bayesian optimisation and accounts for the uncertainty in the relationship between protein sequences and their fitness, resulting a faster search for protein fitness values with improved accuracy.
## 2 Related Work
**Surrogate Model of Landscape.** The concept of the protein fitness landscape, which describes the relationship between a protein's sequence and its fitness score, has been used in protein engineering since 1932[11], which provides a graphical representation of protein sequence and function. Exploring the entire protein fitness landscape can be difficult due to the time and cost involved, especially for large proteins with complex structure. Recently, machine learning approaches have shown promise in navigating protein fitness landscape[12, 13, 14]. It enables protein engineers more efficiently and effectively navigate these complex fitness landscapes and design proteins with desired functions[15, 16]. The learned model can be used to predict how mutations or modifications to the protein sequence will affect its fitness[17]. As the relationship between the sequence and its fitness can be influenced by various factors, Gaussian process is a popular choice[18], which helps account for the uncertainty in predicting the effects of mutations on protein function to guide the directed evolution of proteins with desired properties. Furthermore, the trained deep neural network can be used to screen large numbers of designed sequences in silico, without the need for wet-lab experiments[19, 20, 21].
**Exploration Method.** Exploring sequence is another critical part in protein engineering[22], that emphasized the need for efficient and systematic methods to explore sequence space, as the protein fitness landscape is vast and complex. The directed evolution has been widely utilised to search protein functions by evolving existing proteins [23, 24, 25]. Under such paradigm, different algorithms based on machine learning are adopted to guide evolutionary search and improve the sample efficiency[13, 17]. Dauparas et al.(2022)[26] proposed a neural network that can learn to predict the effects of mutations and explore new sequences that have desired properties. Brandes et al.(2022)[27] modified the architecture of language model BERT to develop the proteinBERT, which can generate new protein sequences with desired functional properties. Khan et al.(2022)[28] proposed a combinatorial Bayesian optimisation to explore a large sequence space and optimize multiple properties simultaneously. The alternative approach of finding optimal sequences, such as model based reinforcement learning[29], which formulates biological sequence design as Markov decision problem.
**High Dimensional Optimisation.** Finding the optimal protein sequence is a difficult task due to the high dimensionality of the protein sequence space. Bayesian optimization can help search for new sequences by making informed decisions on which sequences to test based on their predicted properties[30]. However, the high-dimensional biomedical datasets contain many features that are irrelevant to its biological function[31]. The utilization of large pre-trained models can aid in the reduction of the dimensionality of high dimensional optimization problems, particularly in the field of protein sequence design[32]. This is achieved by providing a good level of predictive accuracy with a limited number of training examples. However, the use of large models in machine learning often requires significant computational resources, making the training process computationally expensive.Stanton et al.(2022)[33] uses a denoising auto-encoder with BO to project high-dimensional data in a latent lower-dimensional
Figure 1: overview of landscape
space. The auto-encoder is trained to reconstruct the original sequence from a corrupted version, resulting in a compact representation of the sequences without relying on a pre-trained corpus. Our method utilizes a novel approach in which multiple simple Convolutional Neural Networks are employed instead of a single large model, which reduce the computational resource while still guaranteeing accuracy.
## 3 Methodology
### Problem Background
The problem of protein sequence design involves searching for a sequence, \(s\), with specific properties within a high-dimensional sequence space, denoted by \(V^{L}\). Here, \(S\) represents the string of amino acids, and \(L\) represents the desired length of the sequence. Let's define the protein fitness mapping function as \(f\), which is a black box function that can be evaluated through laboratory experiments[19]. The goal is to maximise \(f:S^{L}\to R\) by modifying the starting sequence, \(s_{0}\), such sequence should occur in nature. Specifically, we aim to find a mutant sequence \(s^{*}\), such that it maximises the fitness score with the least amount of modification from the \(s_{0}\).
### Bayesian Optimisation
The optimization problem of finding a protein sequence with desired properties can be framed as a batch black-box optimization problem. Bayesian optimization has emerged as a promising solution for such expensive black-box optimization problems, as it is known for its data efficiency, allowing for the efficient exploration of the search space with limited measurements. The whole architecture of our method is shown in figure 2.
Surrogate model and acquisition function are two main components of BO. Typically, gaussian process is a popular choice for building a surrogate model as it provides information about the uncertainty of the data through its mean and covariance. However, this approach can be computationally expensive for high dimensional data. Instead, we use an ensemble of 1D Convolutional Neural Networks as the surrogate model. The mean and variance of the results obtained from the ensemble CNNs serve as a substitute for the mean and covariance provided by the Gaussian process. This provides us with the ability to leverage the powerful representation learning capabilities of deep learning models to approximate the relationship between protein sequences and their fitness scores, while still maintaining the efficiency and scalability needed to handle high-dimensional optimization problems. The fitness model, \(f_{\theta}\) is trained to predict fitness of mutant sequence, which minimises the expectation of the regression loss:
\[L(\theta)=\mathbb{E}_{D}(f(s)-f_{\theta}(s))^{2} \tag{1}\]
where \(D\) is the collection of observed sequences. The optimization process adjusts the parameters \(\theta\) to minimize this loss, resulting in a surrogate model that can accurately predict the fitness of new sequences. By using this surrogate model, we can reduce the number of wet-lab experiments required, as we can first use the surrogate model to identify promising sequences before verifying their fitness through experiments.
Figure 2: overview of landscape
Acquisition functions are used to guide the exploration of the black-box function in Bayesian optimization. They serve as a measure of the expected improvement of a candidate solution in comparison to the best observed solution thus far. The acquisition function operates by combining the mean and uncertainty estimates obtained from the surrogate model, and evaluating the potential improvement of candidate solutions. The acquisition function is then used to determine the next design point to sample, based on the criterion of maximizing the expected improvement. The choice of acquisition function can have a significant impact on the performance of Bayesian optimization, and different acquisition functions may be better suited for different types of optimization problems and objectives.The One-shot Knowledge Gradient (KSG), shown as 2,
\[\alpha_{KG}(s)=\mathbb{E}_{D}[\max_{s^{\prime}\in S}\mathbb{E}[g(f(s^{\prime}) ]]-\mu \tag{2}\]
where \(g\sim P(f(s^{\prime})|D\cup D_{s})\) is posterior at \(s^{\prime}\), a new proposed sequence, \(P\) is normally distributed in the default setting, \(D\) is a collection of observed sequence and \(D_{s}\) is a batch of sequences which are being observed and \(\mu:=\max_{s}E[g(f(s))|D]\), that is, the maximum expected value of the target function based on the current data set \(D\). It is based on the idea of actively learning the best design point in a single-shot manner, as opposed to performing a full optimization over the entire search space[34]. The KSG acquisition function evaluates the potential improvement of a design point by considering the uncertainty of the current model, as well as the expected improvement in the model's predictions after incorporating the results from that design point. This acquisition function has been demonstrated to be effective in various applications and has advantages over traditional acquisition functions such as Expected Improvement (EI) and Upper Confidence Bound (UCB). In this paper, we compare the performance of the One-shot Knowledge Gradient function with these two commonly used acquisition functions.
### Proximal Optimisation
Drawing inspiration from the natural evolutionary process, where a protein can significantly enhance its fitness through mutations of a limited number of amino acids within a longer sequence, we restrict the search space to the vicinity of the wild type and aim to discover high-fitness mutants with limited mutation counts. Inspired by the concept of directed evolution, our approach prioritizes the selection of low-order mutants, which are sequences with a minimal number of mutations. To achieve this, we formulate a regularized objective function, \(f_{\lambda}(s)\), that balances the trade-off between maximizing the fitness score of the sequence \(f(s)\) and minimizing its hamming distance to the starting sequence \(d(s,s_{0})\). This allows us to guide the sequence selection step towards finding high-fitness mutants with minimal mutations[19, 35]. Therefore, we define new regularized objective function as follow:
\[f_{\lambda}(s)=f(s)-\lambda\cdot d(s,s_{0}) \tag{3}\]
where \(\lambda\geq 0\) is the regularization coefficient. Then we need to find a sequence \(s^{*}\) such that \(s^{*}:=\arg\max f_{\lambda}(s)\). \(s^{*}\) can be seen as a function of \(\lambda\). The choice of the value of \(\lambda\) depends on the problem and the desired balance between exploring new regions and exploiting known regions of the sequence space. A higher value of \(\lambda\) means a stronger focus on the starting sequence and nearby regions, while a lower value of \(\lambda\) means less focus on the starting sequence and more exploration of the sequence space.
### Model-guided Exploration
In classical evolutionary algorithms, only the best-performing sequences are selected and mutated, which can limit exploration and overlook the properties of the natural protein fitness landscape. Instead, our algorithm extends the search beyond just optimizing the fitness score and focuses on exploring the proximal frontier to find high-fitness mutant sequences with low mutation counts. This mechanism is considered a regularization of the search space for protein sequences, and it encourages exploration of the sequence space to discover new and potentially optimal solutions.
The overall procedure of the method is stated in Algorithm 1.
```
1:Input: \(\lambda\), \(\theta\), \(\theta\), \(\theta^{*}\)
Our method differs from classical evolutionary algorithms in that it takes into account not only the predicted fitness score of the candidate sequence, but also its distance from the starting sequence (the wild-type sequence) through mutations. While classical evolutionary algorithms perform a greedy selection based solely on the predicted fitness score of the candidate sequence, our method incorporates both factors into the selection process.
### Surrogate Model of Landscape
We employed an improved method for modeling the protein fitness landscape that overcomes the limitations of traditional neural network approaches. Our method, an ensemble of Convolutional Neural Networks (CNNs), is specifically designed to capture the complex and highly non-smooth relationship between protein sequences and their fitness scores.The ensemble CNN provides not only a prediction of the fitness score of a given protein sequence, but also an estimate of its uncertainty. Furthermore, the ensemble CNN offers improved scalability compared to the Gaussian Process used in traditional Bayesian optimization, making it a more suitable choice for large-scale protein sequence design problems.
## 4 Experiments
In this section, we present experiment to evaluate the performance of our method.Our method outperforms the state-of-the-art Proximal Exploration (PEX) method, which uses a mutation factorization network (MuFacNet) as a surrogate model. By leveraging the strengths of Bayesian optimization, machine learning, and the KG acquisition function, our method offers a promising approach to protein sequence design. To simulate the ground-truth protein fitness landscape, we use **Absolut![36]** as a replacement for the wet-lab measurements. Our exploration protocol can only interact with the simulated landscape through batch queries, and the sequence design algorithm cannot obtain any extra information about the black-box oracle.
### Performance Comparison
We evaluate the effectiveness of our proposed method for designing high-scoring proteins through exploration of the fitness landscape. To assess the performance of our approach, we compare it against other two baseline algorithms.
* **Random Search** is a common baseline used for comparison with other search strategies, which selects a previously measured sequence at random and then mutates it. Unlike other data-driven approach, random search does not use the model to guide the search strategy, but only to score the sequences.
* **PEX[19]** is the algorithm that is a model-guided approach for protein sequence design that aims to reduce the need for costly in-lab experiments. The algorithm leverages the property of protein fitness landscapes that a concise set of mutations on the wild-type sequence can enhance the desired function.
The result is shown in figure 3. Our experimental results demonstrate that our method outperforms the baselines in terms of achieving high fitness scores. Specifically, our method achieved the highest fitness score compared to other methods, after performing forty rounds of black-box queries, which improved the sample efficency of model-guided evolutionary search.
### Impact of Acquisition Function
The effectiveness of an acquisition function depends on how well it balances exploration and exploitation. A good acquisition function should be able to guide the search towards promising regions of the fitness landscape while also exploring new regions. This is especially important in high-dimensional search spaces where the search can easily get stuck in local optima. In addition, the acquisition function should be computationally efficient to allow for fast evaluation of candidate solutions. To conduct an ablation study on acquisition function, I consider three types of acquisition functions: **UCB**, **EI** and **KG**.
Figure 3: Learning curves on each PDB identifier, which generated by **Absolut**!. Each identifier contains 104,537 pairs of sequence and fitness energy.Each round of batch black-box query can measure the fitness scores of 256 sequences.The evaluation metric used is the cumulative maximum fitness score among queried sequences, with all curves plotted from 40 runs using an ensemble of 1D convolutional neural network model architecture and KG as an acquisition function. The shaded region represents the standard deviation.
To enhance the comprehensiveness of our experiments, we also investigated the impact of incorporating RNN as a surrogate model along with different acquisition functions. We conducted forty rounds of searching and recorded the maximum score achieved during these rounds. The results of our experiment are presented in the Table 2. Regardless of whether CNN or RNN was used, the method that employed the knowledge gradient function consistently outperformed the other methods in terms of achieving higher scores. This is because the knowledge gradient function selects a batch of points with the highest expected improvement values, which is particularly useful when searching for multiple high-performing solutions simultaneously. Consequently, it is a natural choice for batch optimization and explains why it achieved consistently better results in terms of maximum fitness score compared to other methods.
## 5 Conclusion
In conclusion, our proposed method for protein sequence design using Batch Bayesian Optimization has shown promising results in discovering novel proteins with useful biological functions. Our approach addresses the challenges of traditional directed evolution methods by incorporating a machine learning-based surrogate model and utilizing Batch BO to guide the search process. We also explored alternative models to the traditional Gaussian process surrogate model. We found that scalable models such as ensemble CNNs can offer a competitive alternative with the added benefit of faster computation times. Furthermore, we have shown that the choice of acquisition function plays a crucial role in the performance of the model-guided search. Our results suggest that the knowledge gradient function is particularly well-suited for Batch Bayesian Optimization, and can lead to improved sample efficiency and faster convergence. Overall, our work highlights the potential of combining machine learning with directed evolution for more efficient and effective protein sequence design.
|
2302.10700 | Solution formula for the general birth-death chemical diffusion master
equation | We propose a solution formula for chemical diffusion master equations of
birth and death type. These equations, proposed and formalized in the recent
paper [5], aim at incorporating the spatial diffusion of molecules into the
description provided by the classical chemical master equation. We start from
the general approach developed in [20] and perform a more detailed analysis of
the representation found there. This leads to a solution formula for
birth-death chemical diffusion master equations which is expressed in terms of
the solution to the reaction-diffusion partial differential equation associated
with the system under investigation. Such representation also reveals a
striking analogy with the solution to the classical birth-death chemical master
equations. The solutions of our findings are also illustrated for several
examples. | Alberto Lanconelli, Berk Tan Perçin, Mauricio J. del Razo | 2023-02-21T14:38:26Z | http://arxiv.org/abs/2302.10700v1 | # Solution formula for the general birth-death chemical diffusion master equation
###### Abstract
We propose a solution formula for chemical diffusion master equations of birth and death type. These equations, proposed and formalized in the recent paper [5], aim at incorporating the spatial diffusion of molecules into the description provided by the classical chemical master equation. We start from the general approach developed in [20] and perform a more detailed analysis of the representation found there. This leads to a solution formula for birth-death chemical diffusion master equations which is expressed in terms of the solution to the reaction-diffusion partial differential equation associated with the system under investigation. Such representation also reveals a striking analogy with the solution to the classical birth-death chemical master equations. The solutions of our findings are also illustrated for several examples.
Key words and phrases: chemical diffusion master equation, Ornstein-Uhlenbeck process, Feynman-Kac formula, spectral methods.
AMS 2000 classification: 60H07; 60H30; 92E20.
## 1 Introduction and statement of the main results
The dynamics of biochemical processes in living cells are commonly understood as an interplay between the spatial transport (diffusion) of molecules and their chemical kinetics (reaction), both of which are inherently stochastic at the molecular scale. In the case of systems with small molecule numbers in spatially well-mixed settings, the diffusion is averaged out and the probabilistic dynamics are governed by the well-known chemical master equation (CME) [13, 23, 24]. The CME can be seldom solved analytically [16]. However, solving a few simple cases analytically can bring valuable insight to the solutions of more complex cases. Alternatively, one can solve it by integrating stochastic trajectories with the Gillespie or tau-leap algorithms [1, 13], by approximation methods [8, 11, 22, 25] or even by deep learning approaches [14, 17].
In the case of spatially inhomogeneous systems, where diffusion is not averaged out, one would expect to obtain a similar master equation. However, obtaining such an equation is plagued with mathematical difficulties, and although it was hinted in previous work [10] and formulated for some specific systems [26], it was not until recently that this was formalized into the so-called chemical diffusion master equation (CDME) [5, 7]. The CDME changes a few paradigms that have not yet been explored thoroughly in stochastic chemical kinetics models. It combines continuous and discrete
degrees of freedom, and it models reaction and diffusion as a joint stochastic process. It consists of an infinite sorted family of Fokker-Planck equations, where each level of the sorted family corresponds to a certain number of particles/molecules. The equations at each level describe the spatial diffusion of the corresponding set of particles, and they are coupled to each other via reaction operators, which change the number of particles in the system. The CDME is the theoretical backbone of reaction-diffusion processes, and thus, it is fundamental to model and understand biochemical processes in living cells, as well as to develop multiscale numerical methods [6, 12, 19, 27] and hybrid algorithms [3, 9, 4]. The stochastic trajectories of the CDME can be often integrated using particle-based reaction-diffusion simulations [2, 15]. However, analytic and approximate solutions have not yet been explored in detail. In this work, we work out a method to obtain an analytic solution of the CDME for a simple birth-death reaction system, with the aim to bring insight of the CDME solution of more complex systems.
We consider a system of indistinguishable molecules of a chemical species \(S\) which undergo
* _diffusion_ in the bounded open region \(\mathbb{X}\) of \(\mathbb{R}^{3}\);
* _degradation_ and _creation_ chemical reactions \[\text{(I)}\quad S\xrightarrow{\lambda_{d}(x)}\varnothing\qquad\quad\text{( II)}\quad\varnothing\xrightarrow{\lambda_{c}(x)}S,\] where \(\lambda_{d}(x)\) denotes the propensity for reaction (I) to occur for a particle located at position \(x\in\mathbb{X}\) (i.e., the probability per unit of time for this particle to disappear) while \(\lambda_{c}(x)\) is the propensity for a new particle to be created at position \(x\in\mathbb{X}\) by reaction (II).
To describe the evolution in time of such system the authors in [5, 7] proposed a set of equations for the number and position of the molecules. Namely, for \(t\geq 0\), \(n\geq 1\) and \(A\in\mathcal{B}(\mathbb{X}^{n})\) they set
\[\mathcal{N}(t) :=\text{ number of molecules at time }t,\] \[\rho_{0}(t) :=\mathbb{P}(\mathcal{N}(t)=0)\] \[\int_{A}\rho_{n}(t,x_{1},...,x_{n})dx_{1}\cdot\cdot\cdot dx_{n} :=\mathbb{P}\left(\left\{\mathcal{N}(t)=n\right\}\cap\left\{(X_{1}(t),...,X_{n}(t))\in A\right\}\right);\]
here, \(dx_{i}\) stands for the three dimensional integration volume \(dx_{i}^{(1)}dx_{i}^{(2)}dx_{i}^{(3)}\). Then, according to [5, 7] the time evolution of the reaction-diffusion process described above is governed by the following infinite system of equations:
\[\left\{\begin{aligned} \partial_{t}\rho_{n}(t,x_{1},...,x_{n})=& \sum_{i=1}^{n}\Delta_{i}\rho_{n}(t,x_{1},...,x_{n})\\ &+(n+1)\int_{\mathbb{X}}\lambda_{d}(y)\rho_{n+1}(t,x_{1},...,x_{n },y)dy\\ &-\sum_{i=1}^{n}\lambda_{d}(x_{i})\rho_{n}(t,x_{1},...,x_{n})\\ &+\frac{1}{n}\sum_{i=1}^{n}\lambda_{c}(x_{i})\rho_{n-1}(t,x_{1},...,x_{i-1},x_{i+1},...,x_{n})\\ &-\int_{\mathbb{X}}\lambda_{c}(y)dy\cdot\rho_{n}(t,x_{1},...,x_{ n}),\qquad\quad n\geq 0,t>0,(x_{1},...,x_{n})\in\mathbb{X}^{n};\end{aligned}\right. \tag{1.1}\]
where we agree on assigning value zero to the three sums above when \(n=0\). The term
\[\sum_{i=1}^{n}\Delta_{i}\rho_{n}(t,x_{1},...,x_{n})\]
in (1.1) refers to spatial diffusion of the particles: here,
\[\Delta_{i}:=\partial_{x_{i}^{(1)}}^{2}+\partial_{x_{i}^{(2)}}^{2}+\partial_{x_ {i}^{(3)}}^{2}\]
stands for the three dimensional Laplace operator. We remark that to ease the notation we choose a driftless isotropic diffusion but the extension to the divergence-form second order partial differential operator
\[\mathbb{L}_{x_{i}}v:=\sum_{l,m=1}^{3}\partial_{x_{i}^{(l)}}\left(a_{lm}(x_{i}) \partial_{x_{i}^{(m)}}v\right)-\sum_{l=1}^{3}\partial_{x_{i}^{(l)}}\left(b_{l} (x_{i})v\right),\]
which models a general anisotropic diffusion with drift on \(\mathbb{R}^{3}\), is readily obtained. The terms
\[(n+1)\int_{\mathbb{X}}\lambda_{d}(y)\rho_{n+1}(t,x_{1},...,x_{n},y)dy-\sum_{i= 1}^{n}\lambda_{d}(x_{i})\rho_{n}(t,x_{1},...,x_{n})\]
formalize gain and loss, respectively, due to reaction (I), while
\[\frac{1}{n}\sum_{i=1}^{n}\lambda_{c}(x_{i})\rho_{n-1}(t,x_{1},...,x_{i-1},x_{i +1},...,x_{n})-\int_{\mathbb{X}}\lambda_{c}(y)dy\cdot\rho_{n}(t,x_{1},...,x_{ n})\]
relate to reaction (II). System (1.1) is combined with initial and Neumann boundary conditions
\[\left\{\begin{array}{rl}\rho_{0}(0)=1;&\\ \rho_{n}(0,x_{1},...,x_{n})=0,&n\geq 1,(x_{1},...,x_{n})\in\mathbb{X}^{n}; \\ \partial_{\nu}\rho_{n}(t,x_{1},...,x_{n})=0,&n\geq 1,t\geq 0,(x_{1},...,x_{n}) \in\partial\mathbb{X}^{n}.\end{array}\right. \tag{1.2}\]
The initial condition above states that there are no molecules in the system at time zero while the Neumann condition prevents flux through the boundary of \(\mathbb{X}\), thus forcing the diffusion of the molecules inside \(\mathbb{X}\). The symbol \(\partial_{\nu}\) in (1.2) stands for the directional derivative along the outer normal vector at the boundary of \(\mathbb{X}^{n}\).
Aim of this note is to present the following solution formula for (1.1)-(1.2).
**Theorem 1.1**.: _Let \(v\) be a classical solution of the problem_
\[\begin{cases}\partial_{t}v(t,x)=\Delta v(t,x)-\lambda_{d}(x)v(t,x)+\lambda_{c} (x),&t>0,x\in\mathbb{X};\\ v(0,x)=0,&x\in\widetilde{\mathbb{X}};\\ \partial_{\nu}v(t,x)=0,&t\geq 0,x\in\partial\mathbb{X}.\end{cases} \tag{1.3}\]
_Then, the chemical diffusion master equation (1.1) with initial and boundary conditions (1.2) has a classical solution given by_
\[\rho_{0}(t)=\mathbb{P}(\mathcal{N}(t)=0)=\exp\left\{-\int_{\mathbb{X}}v(t,x) dx\right\},\quad t\geq 0, \tag{1.4}\]
_and for \(n\geq 1\)_
\[\rho_{n}(t,x_{1},...,x_{n})=\exp\left\{-\int_{\mathbb{X}}v(t,x)dx\right\} \frac{1}{n!}v(t,x_{1})\cdot\cdot\cdot v(t,x_{n}),\quad t\geq 0,(x_{1},...,x_{n}) \in\mathbb{X}^{n}. \tag{1.5}\]
To prove the validity of the representations (1.4)-(1.5) one can trivially differentiate the right hand sides with respect to \(t\) and verify using (1.3) that they indeed solve (1.1)-(1.2). We will however provide in the next section a constructive derivation of the expressions (1.4)-(1.5) which is based on the general approach proposed in [20]; here, an infinite dimensional version of the moment generating function method, which is commonly utilized to solve analytically some chemical master equations (see for details [21]), is developed. These techniques are also employed in an ongoing work which consider chemical diffusion master equations with higher order reactions.
**Remark 1.2**.: _It is important to highlight the striking similarities between the representation formulas (1.4)-(1.5) for the solution of the CDME (1.1)-(1.2) and the solution_
\[\varphi_{n}(t)=\frac{\left(\frac{\mathfrak{d}}{\mathfrak{d}}(1-e^{-\mathfrak{ d}t})\right)^{n}}{n!}e^{-\frac{\mathfrak{d}}{\mathfrak{d}}(1-e^{-\mathfrak{d}t})}, \quad t\geq 0,n\geq 0, \tag{1.6}\]
_of the corresponding (diffusion-free) birth-death chemical master equation_
\[\dot{\varphi}_{n}(t)=\mathsf{d}(n+1)\varphi_{n+1}(t)+\mathsf{c} \varphi_{n-1}(t)-\mathsf{d}n\varphi_{n}(t)-\mathsf{c}\varphi_{n}(t), \tag{1.7}\]
_with initial condition_
\[\varphi_{n}(0)=\delta_{0n},\quad\text{for all }n\geq 0. \tag{1.8}\]
_Equation (1.7)-(1.8) describes the evolution in time of the probability_
\[\varphi_{n}(t):=\mathbb{P}(\text{number of molecules at time }t=n)\]
_for the reactions_
\[\text{(I)}\quad S\xrightarrow{\mathsf{d}}\varnothing\qquad\quad\text{(II)} \quad\varnothing\xrightarrow{\mathsf{c}}S,\]
_with no molecules at time zero. Here, \(\mathsf{d}\) and \(\mathsf{c}\) are the stochastic rate constants for degradation and creation reactions, respectively. (To see how (1.6) is derived from (1.7)-(1.8) one can for instance use the moment generating function method: see [21] for details). We note that the function_
\[t\mapsto\frac{\mathsf{c}}{\mathsf{d}}(1-e^{-\mathfrak{d}t}),\]
_appearing in (1.6) solves the deterministic rate equation_
\[\begin{cases}\frac{d}{dt}v(t)=-\mathsf{d}v(t)+\mathsf{c},&t>0\\ v(0)=0&.\end{cases} \tag{1.9}\]
_This establishes a perfect agreement between (1.3),(1.4),(1.5), i.e. representation of the solution for (1.1)-(1.2) and reaction-diffusion PDE, on one side and (1.6),(1.9), i.e. representation of the solution for (1.7)-(1.8) and rate equation, on the other side._
**Corollary 1.3**.: _In the reaction-diffusion model described by the CDME (1.1)-(1.2), conditioned on the event \(\{\mathcal{N}(t)=n\}\) the positions of the molecules at time \(t\) are independent and identically distributed with probability density function_
\[p(t,x):=\frac{v(t,x)}{\int_{\mathbb{X}}v(t,x)dx},\quad x\in \mathbb{X}.\]
_Moreover,_
\[\mathbb{P}(\mathcal{N}(t)=n)=\frac{\left(\int_{\mathbb{X}}v(t,x) dx\right)^{n}}{n!}\exp\left\{-\int_{\mathbb{X}}v(t,x)dx\right\}.\]
Proof.: Let \(A\in\mathcal{B}(\mathbb{X}^{n})\); then,
\[\mathbb{P}((X_{1}(t),...,X_{n}(t))\in A|\mathcal{N}(t)=n) =\frac{\mathbb{P}(\{(X_{1}(t),...,X_{n}(t))\in A\}\cap\{ \mathcal{N}(t)=n\})}{\mathbb{P}(\mathcal{N}(t)=n)}\] \[=\frac{\int_{A}\rho_{n}(t,x_{1},...,x_{n})dx_{1}\cdots dx_{n}}{ \int_{\mathbb{X}^{n}}\rho_{n}(t,x_{1},...,x_{n})dx_{1}\cdots dx_{n}}\]
\[=\frac{\int_{A}\exp\left\{-\int_{\mathbb{X}}v(t,x)dx\right\}}{\int_{ \mathbb{X}^{n}}\exp\left\{-\int_{\mathbb{X}}v(t,x)dx\right\}}\frac{1}{n!}v(t,x_{ 1})\cdots v(t,x_{n})dx_{1}\cdots dx_{n}\] \[=\int_{A}\frac{v(t,x_{1})}{\int_{\mathbb{X}}v(t,x)dx}\cdots\frac{v (t,x_{n})}{\int_{\mathbb{X}}v(t,x)dx}dx_{1}\cdots dx_{n}.\]
The second part of the statement is proved as follows:
\[\mathbb{P}(\mathcal{N}(t)=n) =\int_{\mathbb{X}^{n}}\rho_{n}(t,x_{1},...,x_{n})dx_{1}\cdots dx_ {n}\] \[=\int_{\mathbb{X}^{n}}\exp\left\{-\int_{\mathbb{X}}v(t,x)dx \right\}\frac{1}{n!}v(t,x_{1})\cdots v(t,x_{n})dx_{1}\cdots dx_{n}\] \[=\frac{\left(\int_{\mathbb{X}}v(t,x)dx\right)^{n}}{n!}\exp\left\{ -\int_{\mathbb{X}}v(t,x)dx\right\}.\]
The paper is organized as follows: in Section 2 we propose a constructive proof of Theorem 1.1 which is based on the approach described in [20] while in Section 3 we show graphical illustrations of our findings for some particular cases of physical interest that allow for explicit computations in the reaction diffusion PDE (1.3).
## 2 Constructive proof of Theorem 1.1
In this section we propose a constructive method to derive the representation formulas (1.4)-(1.5) of Theorem 1.1. The method we propose steams from a further development of the ideas and results presented in [20] which are reported here for easiness of reference.
For notational purposes we assume \(\mathbb{X}=]0,1[\). Consider the birth-death CDME
\[\left\{\begin{aligned} \partial_{t}\rho_{n}(t,x_{1},...,x_{n})=& \sum_{i=1}^{n}\partial_{x_{i}}^{2}\rho_{n}(t,x_{1},...,x_{n})\\ &+(n+1)\int_{0}^{1}\lambda_{d}(y)\rho_{n+1}(t,x_{1},...,x_{n},y) dy\\ &-\sum_{i=1}^{n}\lambda_{d}(x_{i})\rho_{n}(t,x_{1},...,x_{n})\\ &+\frac{1}{n}\sum_{i=1}^{n}\lambda_{c}(x_{i})\rho_{n-1}(t,x_{1},...,x_{i-1},x_{i+1},...,x_{n})\\ &-\int_{0}^{1}\!\lambda_{c}(y)dy\cdot\rho_{n}(t,x_{1},...,x_{n} ),\qquad\quad n\geq 0,t>0,(x_{1},...,x_{n})\in]0,1[^{n},\end{aligned}\right. \tag{2.1}\]
with the usual agreement of assigning value zero to the three sums above when \(n=0\), together with initial and Neumann boundary conditions
\[\left\{\begin{aligned} \rho_{0}(0)&=1;\\ \rho_{n}(0,x_{1},...,x_{n})&=0,\quad n\geq 1,(x_{1},...,x_{n})\in[0,1]^{n};\\ \partial_{\nu}\rho_{n}(t,x_{1},...,x_{n})&=0,\quad n \geq 1,t\geq 0,(x_{1},...,x_{n})\in\partial[0,1]^{n}.\end{aligned}\right. \tag{2.2}\]
We set
\[\mathcal{A}:=-\partial_{x}^{2}+\lambda_{d}(x),\quad x\in[0,1], \tag{2.3}\]
with homogenous Neumann boundary conditions and write \(\{\xi_{k}\}_{k\geq 1}\) for the orthonormal basis of \(L^{2}([0,1])\) that diagonalizes the operator \(\mathcal{A}\); this means that for all \(j,k\geq 1\) we have
\[\int_{0}^{1}\xi_{k}(y)\xi_{j}(y)dy=\delta_{kj},\quad\xi_{k}^{\prime}(0)=\xi_{k} ^{\prime}(1)=0,\]
and there exists a sequence of non negative real numbers \(\{\alpha_{k}\}_{k\geq 1}\) such that
\[\mathcal{A}\xi_{k}=\alpha_{k}\xi_{k},\quad\text{ for all }k\geq 1.\]
We observe that \(\mathcal{A}\) is an unbounded, non negative self-adjoint operator.
**Assumption 2.1**.: _The sequence of eigenvalues \(\{\alpha_{k}\}_{k\geq 1}\) is strictly positive._
We now denote by \(\Pi_{N}:L^{2}([0,1])\to L^{2}([0,1])\) the orthogonal projection onto the finite dimensional space spanned by \(\{\xi_{1},...,\xi_{N}\}\), i.e.
\[\Pi_{N}f(x):=\sum_{k=1}^{N}\langle f,\xi_{k}\rangle_{L^{2}([0,1])}\xi_{k}(x), \quad x\in[0,1];\]
we also set
\[d_{k}:=\langle\lambda_{d},\xi_{k}\rangle_{L^{2}([0,1])},\quad c_{k}:=\langle \lambda_{c},\xi_{k}\rangle_{L^{2}([0,1])},\quad\gamma:=\int_{0}^{1}\lambda_{c} (y)dy. \tag{2.4}\]
**Assumption 2.2**.: _There exists \(N_{0}\geq 1\) such that \(\Pi_{N_{0}}\lambda_{d}=\lambda_{d}\); this is equivalent to say \(\Pi_{N}\lambda_{d}=\lambda_{d}\) for all \(N\geq N_{0}\)._
In the sequel we set \(\Pi_{N}^{\otimes n}\) to be the orthogonal projection from \(L^{2}([0,1]^{n})\) to the linear space generated by the functions \(\{\xi_{i_{1}}\otimes\cdots\otimes\xi_{i_{n}},1\leq i_{1},...,i_{n}\leq N\}\). The next theorem was proved in [20].
**Theorem 2.3**.: _Let Assumptions 2.1-2.2 be in force and denote by \(\{\rho_{n}\}_{n\geq 0}\) a classical solution of equation (1.1)-(1.2). Then, for any \(N\geq N_{0}\) and \(t\geq 0\) we have the representation_
\[\rho_{0}^{(N)}(t)=\mathbb{E}[u_{N}(t,Z)], \tag{2.5}\]
_and for any \(n\geq 1\) and \((x_{1},...,x_{n})\in[0,1]^{n}\),_
\[\Pi_{N}^{\otimes n}\rho_{n}(t,x_{1},...,x_{n})=\frac{1}{n!}\sum_{j_{1},...j_{ n}=1}^{N}\mathbb{E}\left[\left(\partial_{z_{j_{1}}}\cdots\partial_{z_{j_{n}}}u_{N} \right)(t,Z)\right]\xi_{j_{1}}(x_{1})\cdots\xi_{j_{n}}(x_{n}). \tag{2.6}\]
_Here,_
\[\mathbb{E}\left[\left(\partial_{z_{j_{1}}}\cdots\partial_{z_{j_{n}}}u_{N} \right)(t,Z)\right]=\int_{\mathbb{R}^{N}}\left(\partial_{z_{j_{1}}}\cdots \partial_{z_{j_{n}}}u_{N}\right)(t,z)(2\pi)^{-N/2}e^{-\frac{|z|^{2}}{2}}dz, \tag{2.7}\]
_while \(u_{N}:[0,+\infty[\times\mathbb{R}^{N}\to\mathbb{R}\) is a classical solution of the partial differential equation_
\[\left\{\begin{array}{l}\partial_{t}u_{N}(t,z)=\sum_{k=1}^{N} \alpha_{k}\partial_{z_{k}}^{2}u_{N}(t,z)+\sum_{k=1}^{N}\left(d_{k}-c_{k}- \alpha_{k}z_{k}\right)\partial_{z_{k}}u_{N}(t,z)+\left(\sum_{k=1}^{N}c_{k}z_{ k}-\gamma\right)u_{N}(t,z)\\ u_{N}(0,z)=1,\quad t\geq 0,z\in\mathbb{R}^{N}.\end{array}\right. \tag{2.8}\]
We now start working out the details of formulas (2.5)-(2.6).
**Lemma 2.4**.: _The solution to the Cauchy problem (2.8) can be represented as_
\[u_{N}(t,z)=\exp\left\{-\gamma t+\sum_{k=1}^{N}\left(c_{k}z_{k}g_{k}(t)+c_{k}(d_{k} -c_{k})\int_{0}^{t}g_{k}(s)ds+c_{k}^{2}\alpha_{k}\int_{0}^{t}g_{k}(s)^{2}ds \right)\right\}, \tag{2.9}\]
_where_
\[g_{k}(t):=\frac{1-e^{-\alpha_{k}t}}{\alpha_{k}},\quad t\geq 0,k=1,...,N. \tag{2.10}\]
Proof.: The solution to the Cauchy problem (2.8) admits the following Feynman-Kac representation (see for instance [18])
\[u_{N}(t,z)=\mathbb{E}\left[\exp\left\{\int_{0}^{t}\left(\sum_{k=1}^{N}c_{k} \mathcal{Z}_{k}^{z_{k}}(s)-\gamma\right)ds\right\}\right],\quad t\geq 0,z=(z_{1},...,z_{N})\in\mathbb{R}^{N}. \tag{2.11}\]
Here, for \(k\in\{1,...,N\}\), the stochastic process \(\{\mathcal{Z}_{k}^{z_{k}}(t)\}_{t\geq 0}\) is the unique strong solution of the mean-reverting Ornstein-Uhlenbeck stochastic differential equation
\[d\mathcal{Z}_{k}^{z_{k}}(t)=\left(d_{k}-c_{k}-\alpha_{k}\mathcal{Z}_{k}^{z_{k} }(t)\right)dt+\sqrt{2\alpha_{k}}dW_{k}(t),\quad\mathcal{Z}_{k}^{z_{k}}(0)=z_{k}, \tag{2.12}\]
with \(\{W_{1}(t)\}_{t\geq 0,}\)...,\(\{W_{N}(t)\}_{t\geq 0}\) being independent one dimensional Brownian motions. Using the independence of the processes \(\mathcal{Z}_{1}^{z_{1}}\),...., \(\mathcal{Z}_{N}^{z_{N}}\) we can rewrite (2.11) as
\[u_{N}(t,z) =e^{-\gamma t}\mathbb{E}\left[\exp\left\{\sum_{k=1}^{N}c_{k}\int _{0}^{t}\mathcal{Z}_{k}^{z_{k}}(s)ds\right\}\right]=e^{-\gamma t}\mathbb{E} \left[\prod_{k=1}^{N}\exp\left\{c_{k}\int_{0}^{t}\mathcal{Z}_{k}^{z_{k}}(s)ds \right\}\right]\] \[=e^{-\gamma t}\prod_{k=1}^{N}\mathbb{E}\left[\exp\left\{c_{k} \int_{0}^{t}\mathcal{Z}_{k}^{z_{k}}(s)ds\right\}\right]. \tag{2.13}\]
We now want to compute the last expectation explicitly: first of all, we observe that equation (2.12) admits the unique strong solution
\[\mathcal{Z}_{k}^{z_{k}}(t)=z_{k}e^{-\alpha_{k}t}+\frac{d_{k}-c_{k}}{\alpha_{k }}\left(1-e^{-\alpha_{k}t}\right)+\int_{0}^{t}e^{-\alpha_{k}(t-s)}\sqrt{2 \alpha_{k}}dW_{k}(s),\]
(recall Assumption 2.1). Therefore,
\[\int_{0}^{t}\mathcal{Z}_{k}^{z_{k}}(s)ds =z_{k}\frac{1-e^{-\alpha_{k}t}}{\alpha_{k}}+(d_{k}-c_{k})\int_{0} ^{t}\frac{1-e^{-\alpha_{k}s}}{\alpha_{k}}ds+\int_{0}^{t}\int_{0}^{s}e^{-\alpha _{k}(s-u)}\sqrt{2\alpha_{k}}dW_{k}(u)ds\] \[=z_{k}\frac{1-e^{-\alpha_{k}t}}{\alpha_{k}}+(d_{k}-c_{k})\int_{0} ^{t}\frac{1-e^{-\alpha_{k}s}}{\alpha_{k}}ds+\sqrt{2\alpha_{k}}\int_{0}^{t} \frac{1-e^{-\alpha_{k}(t-s)}}{\alpha_{k}}dW_{k}(s);\]
in the last equality we employed Fubini theorem for Lebesgue-Wiener integrals. The identity above yields
\[\mathbb{E}\left[\exp\left\{c_{k}\int_{0}^{t}\mathcal{Z}_{k}^{z_{ k}}(s)ds\right\}\right]= \exp\left\{c_{k}\left(z_{k}\frac{1-e^{-\alpha_{k}t}}{\alpha_{k}} +(d_{k}-c_{k})\int_{0}^{t}\frac{1-e^{-\alpha_{k}s}}{\alpha_{k}}ds\right)\right\}\] \[\times\mathbb{E}\left[\exp\left\{c_{k}\sqrt{2\alpha_{k}}\int_{0}^{ t}\frac{1-e^{-\alpha_{k}(t-s)}}{\alpha_{k}}dW_{k}(s)\right\}\right]\] \[= \exp\left\{c_{k}\left(z_{k}\frac{1-e^{-\alpha_{k}t}}{\alpha_{k}} +(d_{k}-c_{k})\int_{0}^{t}\frac{1-e^{-\alpha_{k}s}}{\alpha_{k}}ds\right)\right\}\]
\[\times\exp\left\{c_{k}^{2}\alpha_{k}\int_{0}^{t}\left(\frac{1-e^{- \alpha_{k}(t-s)}}{\alpha_{k}}\right)^{2}ds\right\},\]
where in last equality we used the fact that \(\int_{0}^{t}\frac{1-e^{-\alpha_{k}(t-s)}}{\alpha_{k}}dW_{k}(s)\) is a Gaussian random variable with mean zero and variance \(\int_{0}^{t}\left(\frac{1-e^{-\alpha_{k}(t-s)}}{\alpha_{k}}\right)^{2}ds\). This, together with (2.13), gives
\[u_{N}(t,z)= e^{-\gamma t}\prod_{k=1}^{N}\exp\left\{c_{k}\left(z_{k}\frac{1-e^{ -\alpha_{k}t}}{\alpha_{k}}+(d_{k}-c_{k})\int_{0}^{t}\frac{1-e^{-\alpha_{k}s}}{ \alpha_{k}}ds\right)\right\}\] \[\times\prod_{k=1}^{N}\exp\left\{c_{k}^{2}\alpha_{k}\int_{0}^{t} \left(\frac{1-e^{-\alpha_{k}(t-s)}}{\alpha_{k}}\right)^{2}ds\right\}\] \[= \exp\left\{-\gamma t+\sum_{k=1}^{N}\left(c_{k}z_{k}g_{k}(t)+c_{k} (d_{k}-c_{k})\int_{0}^{t}g_{k}(s)ds+c_{k}^{2}\alpha_{k}\int_{0}^{t}g_{k}(s)^{2 }ds\right)\right\},\]
(recall definition (2.10)). The proof is complete.
**Lemma 2.5**.: _Expectation (2.5) can be written as_
\[\rho_{0}^{(N)}(t)= \exp\left\{t\left(\sum_{k=1}^{N}\frac{c_{k}d_{k}}{\alpha_{k}}- \gamma\right)+\sum_{k=1}^{N}c_{k}d_{k}\frac{e^{-\alpha_{k}t}-1}{\alpha_{k}^{2 }}\right\}.\]
_In particular,_
\[\rho_{0}(t)=\lim_{N\rightarrow+\infty}\rho_{0}^{(N)}(t)=\exp\left\{\sum_{k \geq 1}c_{k}d_{k}\frac{e^{-\alpha_{k}t}-1}{\alpha_{k}^{2}}\right\}. \tag{2.14}\]
Proof.: Let \(Z=(Z_{1},...,Z_{N})\) be an \(N\)-dimensional vector of i.i.d. standard Gaussian random variables; then,
\[\rho_{0}^{(N)}(t)= \mathbb{E}[u_{N}(t,Z)]\] \[= \mathbb{E}\left[\exp\left\{-\gamma t+\sum_{k=1}^{N}\left(c_{k}Z_{ i}g_{k}(t)+c_{k}(d_{k}-c_{k})\int_{0}^{t}g_{k}(s)ds+c_{k}^{2}\alpha_{k}\int_{0}^{t} g_{k}(s)^{2}ds\right)\right\}\right]\] \[= \exp\left\{-\gamma t+\sum_{k=1}^{N}\left(c_{k}(d_{k}-c_{k})\int_ {0}^{t}g_{k}(s)ds+c_{k}^{2}\alpha_{k}\int_{0}^{t}g_{k}(s)^{2}ds\right)\right\} \mathbb{E}\left[\exp\left\{\sum_{k=1}^{N}c_{k}Z_{i}g_{k}(t)\right\}\right]\] \[= \exp\left\{-\gamma t+\sum_{k=1}^{N}\left(\frac{c_{k}^{2}g_{k}(t) ^{2}}{2}+c_{k}(d_{k}-c_{k})\int_{0}^{t}g_{k}(s)ds+c_{k}^{2}\alpha_{k}\int_{0}^ {t}g_{k}(s)^{2}ds\right)\right\}\] \[= \exp\left\{-\gamma t+\sum_{k=1}^{N}\left[c_{k}^{2}\left(\frac{g_{ k}(t)^{2}}{2}-\int_{0}^{t}g_{k}(s)ds+\alpha_{k}\int_{0}^{t}g_{k}(s)^{2}ds\right)+c_{k} d_{k}\int_{0}^{t}g_{k}(s)ds\right]\right\}\] \[= \exp\left\{-\gamma t+\sum_{k=1}^{N}c_{k}d_{k}\int_{0}^{t}g_{k}(s )ds\right\}.\]
The fourth equality follows from the expression of the exponential generating function of a Gaussian vector while the last equality is due to identity
\[\frac{g_{k}(t)^{2}}{2}-\int_{0}^{t}g_{k}(s)ds+\alpha_{k}\int_{0}^{t}g_{k}(s)^{2 }ds=0,\quad t\geq 0\]
which follows from a direct verification (recall definition (2.10)). On the other hand, we have
\[\int_{0}^{t}g_{k}(s)ds=\frac{t}{\alpha_{k}}+\frac{e^{-\alpha_{k}t}-1}{\alpha_{k}^ {2}},\]
and hence
\[\rho_{0}^{(N)}(t)= \exp\left\{t\left(\sum_{k=1}^{N}\frac{c_{k}d_{k}}{\alpha_{k}}- \gamma\right)+\sum_{k=1}^{N}c_{k}d_{k}\frac{e^{-\alpha_{k}t}-1}{\alpha_{k}^{2} }\right\}.\]
Moreover, letting \(N\) to infinity we get
\[\rho_{0}(t)= \lim_{N\rightarrow+\infty}\rho_{0}^{(N)}(t)\] \[= \lim_{N\rightarrow+\infty}\exp\left\{t\left(\sum_{k=1}^{N}\frac{ c_{k}d_{k}}{\alpha_{k}}-\gamma\right)+\sum_{k=1}^{N}c_{k}d_{k}\frac{e^{-\alpha_{k}t} -1}{\alpha_{k}^{2}}\right\}\] \[= \exp\left\{t\left(\sum_{k\geq 1}\frac{c_{k}d_{k}}{\alpha_{k}}- \gamma\right)+\sum_{k\geq 1}c_{k}d_{k}\frac{e^{-\alpha_{k}t}-1}{\alpha_{k}^{2} }\right\}\] \[= \exp\left\{\sum_{k\geq 1}c_{k}d_{k}\frac{e^{-\alpha_{k}t}-1}{ \alpha_{k}^{2}}\right\}.\]
Here, we employed the identity
\[\sum_{k\geq 1}\frac{c_{k}d_{k}}{\alpha_{k}}=\gamma,\]
which follows from
\[\sum_{k\geq 1}\frac{c_{k}d_{k}}{\alpha_{k}}= \langle\mathcal{A}^{-1}\lambda_{c},\lambda_{d}\rangle_{L^{2}([0,1 ])}=\langle\mathcal{A}^{-1}\lambda_{c},\mathcal{A}1\rangle_{L^{2}([0,1])}= \langle\lambda_{c},\mathcal{A}^{-1}\mathcal{A}1\rangle_{L^{2}([0,1])}\] \[= \langle\lambda_{c},1\rangle_{L^{2}([0,1])}=\int_{0}^{1}\lambda_{ c}(x)\mathtt{1}(x)dx=\gamma.\]
We also denoted \(\mathtt{1}(x)=1\), \(x\in[0,1]\) and exploited the identity \(\mathcal{A}\mathtt{1}=\lambda_{d}\).
**Lemma 2.6**.: _Expectation (2.6) can be written as_
\[\Pi_{N}^{\otimes n}\rho_{n}(t,x_{1},...,x_{n})=\rho_{0}^{(N)}(t)\frac{1}{n!} \left(\sum_{j=1}^{N}c_{j}g_{j}(t)\xi_{j}(x_{1})\right)\cdots\left(\sum_{j=1}^{ N}c_{j}g_{j}(t)\xi_{j}(x_{n})\right).\]
_In particular,_
\[\rho_{n}(t,x_{1},...,x_{n}) =\lim_{N\rightarrow+\infty}\Pi_{N}^{\otimes n}\rho_{n}(t,x_{1},...,x_{n})\] \[=\exp\left\{-\sum_{k\geq 1}c_{k}d_{k}\frac{1-e^{-\alpha_{k}t}}{ \alpha_{k}^{2}}\right\}\frac{1}{n!}\left(\sum_{j\geq 1}c_{j}\frac{1-e^{-\alpha_{j} t}}{\alpha_{j}}\xi_{j}\right)^{\otimes n}(x_{1},...,x_{n}). \tag{2.15}\]
Proof.: We note that according to (2.9) we have
\[\left(\partial_{z_{j_{1}}}\cdots\partial_{z_{j_{n}}}u_{N}\right)(t,z)=u_{N}(t,z)c_ {j_{1}}g_{j_{1}}(t)\cdots c_{j_{n}}g_{j_{n}}(t),\]
and hence
\[\mathbb{E}\left[\left(\partial_{z_{j_{1}}}\cdots\partial_{z_{j_{n}}}u_{N} \right)(t,Z)\right]=\rho_{0}^{(N)}(t)c_{j_{1}}g_{j_{1}}(t)\cdots c_{j_{n}}g_{j _{n}}(t).\]
Therefore,
\[\Pi_{N}^{\otimes n}\rho_{n}(t,x_{1},...,x_{n}) =\frac{1}{n!}\sum_{j_{1},...,j_{n}=1}^{N}\mathbb{E}\left[\left( \partial_{z_{j_{1}}}\cdots\partial_{z_{j_{n}}}u_{N}\right)(t,Z)\right]\xi_{j_ {1}}(x_{1})\cdots\xi_{j_{n}}(x_{n})\] \[=\frac{1}{n!}\sum_{j_{1},...,j_{n}=1}^{N}\rho_{0}^{(N)}(t)c_{j_{ 1}}g_{j_{1}}(t)\cdots c_{j_{n}}g_{j_{n}}(t)\xi_{j_{1}}(x_{1})\cdots\xi_{j_{n}} (x_{n})\] \[=\rho_{0}^{(N)}(t)\frac{1}{n!}\left(\sum_{j=1}^{N}c_{j}g_{j}(t) \xi_{j}(x_{1})\right)\cdots\left(\sum_{j=1}^{N}c_{j}g_{j}(t)\xi_{j}(x_{n}) \right).\]
Moreover, letting \(N\) to infinity we obtain
\[\rho_{n}(t,x_{1},...,x_{n}) =\exp\left\{-\sum_{k\geq 1}c_{k}d_{k}\frac{1-e^{-\alpha_{k}t}}{ \alpha_{k}^{2}}\right\}\frac{1}{n!}\left(\sum_{j\geq 1}c_{j}g_{j}(t)\xi_{j}(x_{1}) \right)\cdots\left(\sum_{j\geq 1}c_{j}g_{j}(t)\xi_{j}(x_{n})\right)\] \[=\exp\left\{-\sum_{k\geq 1}c_{k}d_{k}\frac{1-e^{-\alpha_{k}t}}{ \alpha_{k}^{2}}\right\}\frac{1}{n!}\left(\sum_{j\geq 1}c_{j}\frac{1-e^{- \alpha_{j}t}}{\alpha_{j}}\xi_{j}\right)^{\otimes n}(x_{1},...,x_{n}).\]
We are now in a position to show the equivalence between (2.14)-(2.15) and (1.4)-(1.5). We start observing that
\[d_{k}:=\langle\lambda_{d},\xi_{k}\rangle=\langle\mathcal{A}1,\xi_{k}\rangle= \langle 1,\mathcal{A}\xi_{k}\rangle=\alpha_{k}\langle 1,\xi_{k}\rangle=\alpha_{k} \int_{0}^{1}\xi_{k}(x)dx.\]
Therefore, from formula (2.14) we can write
\[\rho_{0}(t)=\mathbb{P}(\mathcal{N}(t)=0) =\exp\left\{-\sum_{k\geq 1}c_{k}d_{k}\frac{1-e^{-\alpha_{k}t}}{ \alpha_{k}^{2}}\right\}\] \[=\exp\left\{-\sum_{k\geq 1}c_{k}\int_{0}^{1}\xi_{k}(x)dx\frac{1-e^{- \alpha_{k}t}}{\alpha_{k}}\right\}\] \[=\exp\left\{-\int_{0}^{1}\left(\sum_{k\geq 1}c_{k}\frac{1-e^{- \alpha_{k}t}}{\alpha_{k}}\xi_{k}(x)\right)dx\right\}\] \[=\exp\left\{-\int_{0}^{1}v(t,x)dx\right\},\]
where we set
\[v(t,x):=\sum_{k\geq 1}c_{k}\frac{1-e^{-\alpha_{k}t}}{\alpha_{k}}\xi_{k}(x). \tag{2.16}\]
Note that with this notation we can also write according to (2.15) that
\[\rho_{n}(t,x_{1},...,x_{n})=\exp\left\{-\int_{\mathbb{X}}v(t,x)dx \right\}\frac{1}{n!}v(t,x_{1})\cdots v(t,x_{n}),\quad t\geq 0,(x_{1},...,x_{n}) \in\mathbb{X}^{n}.\]
If we now prove that the function \(v\) defined in (2.16) solves (1.3), then the equivalence between (2.14)-(2.15) and (1.4)-(1.5) will be established. Since
\[\partial_{t}v(t,x)=\partial_{t}\left(\sum_{k\geq 1}c_{k}\frac{1-e^{- \alpha_{k}t}}{\alpha_{k}}\xi_{k}(x)\right)=\sum_{k\geq 1}c_{k}e^{-\alpha_{k}t} \xi_{k}(x),\]
we can conclude that
\[\partial_{x}^{2}v(t,x)-\lambda_{d}(x)v(t,x) =-\mathcal{A}v(t,x)=-\mathcal{A}\left(\sum_{k\geq 1}c_{k}\frac{1-e ^{-\alpha_{k}t}}{\alpha_{k}}\xi_{k}(x)\right)=\sum_{k\geq 1}c_{k}(e^{- \alpha_{k}t}-1)\xi_{k}(x)\] \[=\sum_{k\geq 1}c_{k}e^{-\alpha_{k}t}\xi_{k}(x)-\lambda_{c}(x)= \partial_{t}v(t,x)-\lambda_{c}(x),\]
proving the desired property (the initial and boundary conditions in (1.3) are readily satisfied).
## 3 Case study: one dimensional motion with constant degradation function
In this section we illustrate through several plots our theoretical findings for some concrete models. According to formulas (1.4)-(1.5) the solution to the chemical diffusion master equation (1.1)-(1.2) is completely determined by the solution of equation (1.3). To solve this problem explicitly we decided to focus on the one dimensional case \(\mathbb{X}=]0,1[\) with driftless isotropic diffusion (i.e. the framework of Section 2) and constant degradation function \(\lambda_{d}\). This last restriction yields the advantage of knowing the explicit form of the eigenfunctions and eigenvalues of the operator \(\mathcal{A}\) in (2.3) and hence the possibility of working with (2.14)-(2.15), which we recall to be equivalent to (1.4)-(1.5).
When \(\lambda_{d}(x)=\lambda_{d}\), \(x\in[0,1]\) for some positive constant \(\lambda_{d}\), we get
\[\mathcal{A}f(x)=-f^{\prime\prime}(x)+\lambda_{d}f(x),\qquad\xi_{k }(x)=\cos((k-1)\pi x),\quad k\geq 1,\]
and
\[\alpha_{k}=(k-1)^{2}\pi^{2}+\lambda_{d},\quad k\geq 1. \tag{3.1}\]
Therefore, the degradation function \(\lambda_{d}(x)=\lambda_{d}\) is proportional to the first eigenfunction \(\xi_{1}(x)=1(x)\) and hence orthogonal to all the other eigenfunctions \(\xi_{k}(x)\) for \(k\geq 2\); this gives
\[d_{1}=\langle\lambda_{d},\xi_{1}\rangle=\lambda_{d}\quad\text{ and }\quad d_{k}= \langle\lambda_{d},\xi_{k}\rangle=0\text{ for all }k\geq 2;\]
note also that (3.1) implies \(\alpha_{1}=\lambda_{d}\). Combining these facts in (2.14) and (2.15) we obtain
\[\rho_{0}(t)=\mathbb{P}(\mathcal{N}(t)=0)=\exp\left\{-c_{1}\frac{1 -e^{-\lambda_{d}t}}{\lambda_{d}}\right\},\quad t\geq 0, \tag{3.2}\]
and, for \(n\geq 1\), \(t\geq 0\) and \((x_{1},...,x_{n})\in[0,1]^{n}\),
\[\rho_{n}(t,x_{1},...,x_{n})=\exp\left\{-c_{1}\frac{1-e^{-\lambda_ {d}t}}{\lambda_{d}}\right\}\frac{1}{n!}\left(\sum_{j\geq 1}c_{j}\frac{1-e^{- \alpha_{j}t}}{\alpha_{j}}\xi_{j}\right)^{\otimes n}(x_{1},...,x_{n}). \tag{3.3}\]
We now specify some interesting choices of the creation function \(\lambda_{c}\).
### Constant creation function
In the case \(\lambda_{c}(x)=\lambda_{c}\), \(x\in[0,1]\) for some positive constant \(\lambda_{c}\), in other words the creation is uniform in the whole interval just like the degradation, we get from (3.2) and (3.3)
\[\rho_{0}(t)=\mathbb{P}(\mathcal{N}(t)=0)=\exp\left\{-\lambda_{c} \frac{1-e^{-\lambda_{d}t}}{\lambda_{d}}\right\},\quad t\geq 0,\]
and for \(n\geq 1\)
\[\rho_{n}(t,x_{1},...,x_{n}) =\exp\left\{-\lambda_{c}\frac{1-e^{-\lambda_{d}t}}{\lambda_{d}} \right\}\frac{1}{n!}\left(\lambda_{c}\frac{1-e^{-\lambda_{d}t}}{\lambda_{d}} \xi_{1}\right)^{\otimes n}(x_{1},...,x_{n})\] \[=\exp\left\{-\lambda_{c}\frac{1-e^{-\lambda_{d}t}}{\lambda_{d}} \right\}\frac{1}{n!}\left(\lambda_{c}\frac{1-e^{-\lambda_{d}t}}{\lambda_{d}} \right)^{n}1(x_{1})\cdots 1(x_{n}).\]
This shows that for any \(n\geq 1\) the function \(\rho_{n}\) is constant in \(x_{1},...,x_{n}\) with height given by the \(n\)-th component of the solution to the birth-death chemical master equation with stochastic rate constants \(\lambda_{d}\) and \(\lambda_{c}\) (compare with (1.6)).
Figure 1 shows the solutions for \(\rho_{0}\) and \(\rho_{1}\) as a function of time. Figure 0(a) shows the the exponential decay of the probability of having 0 particles due to the constant creation of particles, and 0(b) shows the probability distribution of having 1 particle is uniform in space for all times, as well as its convergence to the stationary distribution.
### Dirac delta creation function at \(x=0\)
In this case we take \(\lambda_{c}(x)=\lambda_{c}\delta_{0}(x)\), \(x\in[0,1]\) for some positive constant \(\lambda_{c}\), so the creation takes place only in the leftmost point of the interval while degradation happens uniformly. This way one yields
\[c_{j}=\int_{0}^{1}\lambda_{c}(x)\xi_{j}(x)dx=\int_{0}^{1}\lambda _{c}\delta_{0}(x)\xi_{j}(x)dx=\lambda_{c}\xi_{j}(0)=\lambda_{c},\quad\text{ for all }j\geq 1.\]
and formulas (3.2) and (3.3) now read
\[\rho_{0}(t)=\mathbb{P}(\mathcal{N}(t)=0)=\exp\left\{-\lambda_{c} \frac{1-e^{-\lambda_{d}t}}{\lambda_{d}}\right\},\quad t\geq 0,\]
Figure 1: Solutions plots of the bdCDME generated with constant creation and degradation rates with \(\lambda_{c}=\lambda_{d}=0.5\). a. The solution of \(\rho_{0}(t)\) as a function of time. b. The solution of \(\rho_{1}(t,x_{1})\) as a function of position and time.
Figure 2: Solutions plots of the bdCDME generated with creation of particles at \(x=0\) and constant degradation in the whole domain, namely with \(\lambda_{c}(x)=0.5\delta_{0}(x)\) and \(\lambda_{d}(x)=0.5\). The first 1000 terms of the sum in eq. (3.4) are considered. a. The solution of the 0 particle density (\(\rho_{0}(t)\)) as a function of time. b. The solution of the 1 particle density (\(\rho_{1}(t,x_{1})\)) for given position and time. c. The solution of the 2 particle density (\(\rho_{2}(t,x_{1},x_{2})\)) with respect to \(x_{1}\) and \(t\) for three values of \(x_{2}\). Time points as indicated in the color bar. d. The solution of the two particle density for fixed time, \(\rho_{2}(t=0.25,x_{1},x_{2})\), as a function of \(x_{1}\) and \(x_{2}\).
and for \(n\geq 1\)
\[\rho_{n}(t,x_{1},...,x_{n}) =\exp\left\{-\lambda_{c}\frac{1-e^{-\lambda_{d}t}}{\lambda_{d}} \right\}\frac{1}{n!}\left(\sum_{j\geq 1}\lambda_{c}\frac{1-e^{-\alpha_{j}t}}{ \alpha_{j}}\xi_{j}\right)^{\otimes n}(x_{1},...,x_{n})\] \[=\exp\left\{-\lambda_{c}\frac{1-e^{-\lambda_{d}t}}{\lambda_{d}} \right\}\frac{\lambda_{c}^{n}}{n!}\left(\sum_{j\geq 1}\frac{1-e^{-\alpha_{j}t}}{ \alpha_{j}}\xi_{j}\right)^{\otimes n}(x_{1},...,x_{n}).\]
We note that even though \(\lambda_{c}(x)\) is a generalized function the series
\[\sum_{j\geq 1}c_{j}\frac{1-e^{-\alpha_{j}t}}{\alpha_{j}}\xi_{j}=\lambda_{c} \sum_{j\geq 1}\frac{1-e^{-\alpha_{j}t}}{\alpha_{j}}\xi_{j} \tag{3.4}\]
appearing above converges in \(L^{2}([0,1])\).
In Figure 2 we plot solution of the bdCDME for this example. Figure 2a shows the exponential decay of the probability of having \(0\) particles due to the constant creation of particles. In contrast with Figure 1, in Figures 2b and 2c, one can see the effect of the creation happening only at \(x=0\) due to the peaks at the origin, while the highest peak is when \(x_{2}=x_{1}=0\). With increasing time the peaks at the origin smooth out due to diffusion and probability being distributed through the different particle number densities. Similar to before, the curves converge to their stationary distribution as time increases. Lastly, Figure 2d shows the solution of the bdCDME for \(2\) particles as a surface on \(x_{1},x_{2}\) axes, when time is fixed at \(t=0.25\).
### Dirac delta creation function at \(x=1/2\)
We now choose \(\lambda_{c}(x)=\lambda_{c}\delta_{1/2}(x)\), \(x\in[0,1]\) for some positive constant \(\lambda_{c}\), so the creation takes place only on the middle of the interval and degradation happens uniformly. This way one obtains
\[c_{j}=\int_{0}^{1}\lambda_{c}(x)\xi_{j}(x)dx=\int_{0}^{1}\lambda_{c}\delta_{1/ 2}(x)\xi_{j}(x)dx=\lambda_{c}\xi_{j}(1/2)=\lambda_{c}\cos((j-1)\pi/2),\quad \text{ for all }j\geq 1.\]
Therefore, equations (3.2) and (3.3) take now the form
\[\rho_{0}(t)=\mathbb{P}(\mathcal{N}(t)=0)=\exp\left\{-\lambda_{c}\frac{1-e^{- \lambda_{d}t}}{\lambda_{d}}\right\},\quad t\geq 0,\]
and for \(n\geq 1\)
\[\rho_{n}(t,x_{1},...,x_{n}) =\exp\left\{-\lambda_{c}\frac{1-e^{-\lambda_{d}t}}{\lambda_{d}} \right\}\frac{1}{n!}\left(\sum_{j\geq 1}\lambda_{c}\cos((j-1)\pi/2)\frac{1-e^{- \alpha_{j}t}}{\alpha_{j}}\xi_{j}\right)^{\otimes n}(x_{1},...,x_{n})\] \[=\exp\left\{-\lambda_{c}\frac{1-e^{-\lambda_{d}t}}{\lambda_{d}} \right\}\frac{\lambda_{c}^{n}}{n!}\left(\sum_{k\geq 1}(-1)^{k-1}\frac{1-e^{- \alpha_{2k-1}t}}{\alpha_{2k-1}}\xi_{2k-1}\right)^{\otimes n}(x_{1},...,x_{n}).\]
Figure 3 shows plots of the solution of the bdCDME for this example. Figure 3a shows the exponential decay of the probability of having \(0\) particles due to the constant creation of particles. However, in conrast with figures 1 and 2, in this case, the effect of creation in the middle of the interval can be seen in the peaks in the Figures 3b and 3c, while the highest peak is at \(x_{1}=x_{2}=0.5\), as
expected. Similar to the previous example the effect of the location of the creation of particles on the distribution becomes less important with increasing time due to diffusion. Once again, the curves converge to their stationary distribution. Lastly, Figure 2(d) plots the solution of bdCDME as a surface for 2 particle case at fixed time \(t=0.25\), as a function of \(x_{1}\) and \(x_{2}\).
AcknowledgmentsM.J.R acknowledges the support from Deutsche Forschungsgemeinschaft (DFG) (Grant No. RA 3601/1-1) and from the Dutch Institute for Emergent Phenomena (DIEP) cluster at the University of Amsterdam.
Figure 3: Solutions plots of the bdCDME generated with creation of particles at \(x=0.5\) and constant degradation in the whole domain, namely \(\lambda_{c}=0.5\delta_{1}/2(x)\) and \(\lambda_{d}=0.5\). The first 1000 terms of the sum in eq. (3.4) are considered. a. The solution of the 0 particle density (\(\rho_{0}(t)\)) as a function of time. b. The solution of the 1 particle density (\(\rho_{1}(t,x_{1})\)) for given position and time. c. The solution of the 2 particle density (\(\rho_{2}(t,x_{1},x_{2})\)) with respect to \(x_{1}\) and \(t\) for three values of \(x_{2}\). Time points as indicated in the color bar. d. The solution of the two particle density for fixed time, \(\rho_{2}(t=0.25,x_{1},x_{2})\), as a function of \(x_{1}\) and \(x_{2}\). |
2308.05430 | Ensemble Modeling for Multimodal Visual Action Recognition | In this work, we propose an ensemble modeling approach for multimodal action
recognition. We independently train individual modality models using a variant
of focal loss tailored to handle the long-tailed distribution of the MECCANO
[21] dataset. Based on the underlying principle of focal loss, which captures
the relationship between tail (scarce) classes and their prediction
difficulties, we propose an exponentially decaying variant of focal loss for
our current task. It initially emphasizes learning from the hard misclassified
examples and gradually adapts to the entire range of examples in the dataset.
This annealing process encourages the model to strike a balance between
focusing on the sparse set of hard samples, while still leveraging the
information provided by the easier ones. Additionally, we opt for the late
fusion strategy to combine the resultant probability distributions from RGB and
Depth modalities for final action prediction. Experimental evaluations on the
MECCANO dataset demonstrate the effectiveness of our approach. | Jyoti Kini, Sarah Fleischer, Ishan Dave, Mubarak Shah | 2023-08-10T08:43:20Z | http://arxiv.org/abs/2308.05430v2 | # Ensemble Modeling for Multimodal Visual Action Recognition
###### Abstract
In this work, we propose an ensemble modeling approach for multimodal action recognition. We independently train individual modality models using a variant of focal loss tailored to handle the long-tailed distribution of the MECCANO [21] dataset. Based on the underlying principle of focal loss, which captures the relationship between tail (scarce) classes and their prediction difficulties, we propose an exponentially decaying variant of focal loss for our current task. It initially emphasizes learning from the hard misclassified examples and gradually adapts to the entire range of examples in the dataset. This annealing process encourages the model to strike a balance between focusing on the sparse set of hard samples, while still leveraging the information provided by the easier ones. Additionally, we opt for the _late fusion_ strategy to combine the resultant probability distributions from RGB and Depth modalities for final action prediction. Experimental evaluations on the MECCANO dataset demonstrate the effectiveness of our approach.
## 1 Introduction
Amidst the surge of data in recent times, multimodal learning has emerged as a transformative approach, leveraging heterogeneous cues from multiple sensors to enhance the learning process. Both early and late multimodal fusion mechanisms have demonstrated the ability to effectively harness complementary information from diverse sources. However, real-world multimodal data associated with action occurrences suffer from an inherent skewness, giving rise to the long-tailed action recognition scenario. In such cases, some action classes are prevalent and well-represented in the training data, while others are scarce, leading to significant data imbalance. The inherent complexity of multimodal data, combined with such data imbalance, presents a formidable challenge for learning approaches.
In order to fuse information from different data streams, researchers in the vision community have proposed a variety of approaches, spanning from consolidating feature representations at an early stage (early fusion) [19, 10] to aggregating prediction scores at the final stage (late fusion) [3, 9]. Furthermore, to combat the long-tailed visual recognition issue, use of data augmentation [26, 25, 31, 29], re-sampling [18, 7, 1], cost-sensitive loss [13, 4, 2, 27], and transfer learning [24, 16, 33, 32, 12, 5] strategies is highly recommended. Methods that rely on data augmentation techniques like M2M [8], ImbalanceCycleGAN[23], and
MetaS-Aug[11] attempt to augment the minority classes with diverse samples. Other data augmentation-based works [14; 30; 28] generate pseudo labels to reduce scarcity in tail classes. However, these methods are often limited by their ability to generate realistic and diverse minority class samples. Some of the resampling approaches [20; 22; 6; 1] focus on assigning larger sampling probabilities to the tail classes. Although beneficial, re-sampling models are at risk of overfitting to the tail classes.
In this paper, we introduce an ensemble training strategy that leverages multimodal RGB and Depth signals for visual action recognition, using the late fusion mechanism. We, also, allow the model to focus on _h_ard-to-classify examples using an exponentially decaying variant of the focal loss objective function. This function not only reduces the KL divergence between the predicted distribution and the ground-truth distribution but also simultaneously increases the entropy of the predicted distribution, thereby preventing model overconfidence towards majority classes.
## 2 Approach
### Cross-Modal Fusion
Figure 1 provides comprehensive details of our proposed approach. Given a set of spatiotemporally aligned RBG and Depth sequences that extend between \([t_{s},t_{e}]\), where \(t_{s}\) and \(t_{e}\) are the start and the end duration of the sequence, our goal is to predict the action class \(\mathcal{O}=\{o_{1}\), \(o_{2}\),.., \(o_{K}\}\) associated with the sequence. In order to achieve this, we adopt an ensemble architecture comprising two dedicated Video Swin Transformer [15] backbones to process the RGB clip
Figure 1: **Architecture**: The RGB frames {\(A_{i}\), \(A_{i-1}\),..,\(A_{T}\)} and Depth frames {\(B_{i}\), \(B_{i-1}\),..,\(B_{T}\)} are passed through two independently trained Swin3D-B [15] encoders \(f_{\theta_{1}}\) and \(f_{\theta_{2}}\) respectively to generate feature tokens. The resultant class probabilities, obtained from each pathway, are averaged to subsequently yield action classes. Exponentially decaying focal loss \(L_{Focal}\) is leveraged to deal with the long-tailed distribution exhibited by the data.
\(\mathcal{A}=\{A_{i}\), \(A_{i-1}\),..,\(A_{T}\}\) and Depth clip \(\mathcal{B}=\{B_{i}\), \(B_{i-1}\),..,\(B_{T}\}\) independently. Here, \(i\) corresponds to a random index spanning between \(t_{s}\) and \(t_{e}\). The input video for each modality defined by size \(T\times H\times W\times 3\) results in token embeddings of dimension \(\frac{T}{2}\times H_{d}\times W_{d}\times C\). We pass this representation retrieved from _stage-4_ of the base feature network to our newly added fully connected layer and fine-tune the overall network. The final prediction is derived by averaging the two probability distributions obtained as output from the RGB and Depth pathways.
### Exponentially Decaying Focal Loss
Focal loss [13] is a variant of cross-entropy loss with a modulating factor that down-weighs the impact of easy examples and focuses on the hard ones. It, therefore, tends to prevent bias towards data-rich classes and improves the performance on scarce categories.
Multi-classification cross-entropy (CE) loss is given by:
\[L_{CE}=-\sum_{j=1}^{K}y_{j}\log(p_{j}) \tag{1}\]
where, say we have \(K\) action classes, and \(y_{j}\) and \(p_{j}\) correspond to the ground-truth label and predicted probability respectively for the \(j^{th}\) class.
On the other hand, the key objective of focal loss [13] is defined as:
\[L_{Focal}=-\sum_{j=1}^{K}{(1-p_{j})^{\gamma}\log p_{j}} \tag{2}\]
In our work, we use focal loss \(L_{Focal}\) and exponentially decay \(\gamma\) from 2 to 0.1. When \(\gamma\)=0, the objective function is equivalent to cross-entropy loss. Our proposed annealing process for \(\gamma\) allows for the model to focus on the sparse set of hard examples in the early stage of training, and gradually shift its focus towards easy examples. This configuration is essential to ensure that the model learns meaningful representations and generalized decision boundaries.
## 3 Experimental Setup
### Data-preprocessing
For our experiments, we resize the frames to a width of 256, without disturbing the aspect ratio of the original image, followed by a random crop of \(224\times 224\). In addition, we use 16 consecutive frames to generate a single clip for the forward pass. In the case of shorter sequences, we pad the sequence with the last frame.
### Training
We use the Swin3D-B [15] backbone, which is pre-trained on the Something-Something v2 [17] dataset. We adopt focal loss [13] with exponentially decaying \(\gamma\) for training the classification model. For optimization, AdamW optimizer with a learning rate of \(3\times 10^{-4}\) and a weight decay of 0.05 has been employed. Our model converges in about 20 epochs on the MECCANO dataset. We report the _Top-1_ and _Top-5_ classification accuracy as our evaluation metrics. Additionally, to demonstrate the effectiveness of employing the focal loss for this task, we present the average class _Precision_, _Recall_ and _F1-score_.
## 4 Discussion
Table 1 presents our results on the MECCANO test set. Applying cross-entropy loss to fine-tune our model, pre-trained on Something-Something v2, gives us an initial baseline accuracy of 50.94% on our multimodal setup. Introducing focal loss with exponential decay in \(\gamma\) boosts the overall accuracy by \(\approx\) 2%. Figure 2 demonstrates the effectiveness of our approach in dealing with the long-tailed distribution of the MECCANO dataset. Furthermore, combining the train and validation data gives the best _Top-1_ accuracy of 55.37%.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Modality & Loss & \multicolumn{2}{c}{Accuracy} & \multicolumn{2}{c}{AVG Class} & \multicolumn{2}{c}{AVG _F1-score_} \\ \cline{3-6} & & _Top-1_ & _Top-5_ & _Precision_ & _Recall_ & \\ \hline RGB & _CE_ & 48.35 & 80.91 & 45.52 & 48.35 & 46.22 \\ Depth & _CE_ & 43.32 & 75.38 & 41.79 & 43.32 & 41.88 \\ RGB+Depth & _CE_ & 50.94 & 81.79 & 47.28 & 50.94 & 48.08 \\ \hline RGB & Focal & 50.80 & 82.36 & 47.17 & 50.80 & 47.95 \\ Depth & Focal & 45.52 & 78.07 & 43.74 & 45.52 & 43.41 \\ RGB+Depth & Focal & 52.82 & 83.85 & 49.97 & 52.82 & 49.41 \\ \hline RGB\({}^{*}\) & Focal & 53.03 & 85.37 & 50.46 & 53.03 & 50.39 \\ Depth\({}^{*}\) & Focal & 48.39 & 80.55 & 46.43 & 48.39 & 46.35 \\ RGB+Depth\({}^{*}\) & Focal & **55.37** & 85.58 & 52.41 & 55.37 & 52.28 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results demonstrating the effectiveness of our ensemble modeling approach for the action recognition task on the MECCANO test dataset. _CE_ implies Cross-Entropy loss. * refers to model trained using both train+validation set.
Figure 2: The resultant confusion matrix obtained from the MECCANO test set highlights our model’s proficiency in handling the long-tailed distribution. |
2301.12447 | Homotopy types of diffeomorphisms groups of simplest Morse-Bott
foliations on lens spaces, 2 | Let $\mathcal{F}$ be a Morse-Bott foliation on the solid torus $T=S^1\times
D^2$ into $2$-tori parallel to the boundary and one singular central circle.
Gluing two copies of $T$ by some diffeomorphism between their boundaries, one
gets a lens space $L_{p,q}$ with a Morse-Bott foliation $\mathcal{F}_{p,q}$
obtained from $\mathcal{F}$ on each copy of $T$ and thus consisting of two
singluar circles and parallel $2$-tori. In the previous paper [O. Khokliuk, S.
Maksymenko, Journ. Homot. Rel. Struct., 2024, 18, 313-356] there were computed
weak homotopy types of the groups $\mathcal{D}^{lp}(\mathcal{F}_{p,q})$ of leaf
preserving (i.e. leaving invariant each leaf) diffeomorphisms of such
foliations. In the present paper it is shown that the inclusion of these groups
into the corresponding group $\mathcal{D}_{+}^{fol}(\mathcal{F}_{p,q})$ of
foliated (i.e. sending leaves to leaves) diffeomorphisms which do not
interchange singular circles are homotopy equivalences. | Sergiy Maksymenko | 2023-01-29T14:10:18Z | http://arxiv.org/abs/2301.12447v3 | # Foliated and leaf preserving diffeomorphisms of simplest Morse-Bott foliations on lens spaces
###### Abstract.
Let \(\mathcal{F}\) be a Morse-Bott foliation on the solid torus \(T=S^{1}\times D^{2}\) into \(2\)-tori parallel to the boundary and one singular circle \(S^{1}\times 0\). A diffeomorphism \(h:T\to T\) is called foliated (resp. leaf preserving) if for each leaf \(\omega\in\mathcal{F}\) its image \(h(\omega)\) is also leaf of \(\mathcal{F}\) (resp. \(h(\omega)=\omega\)). Gluing two copies of \(T\) by some diffeomorphism between their boundaries, one gets a lens space \(L_{p,q}\) with a Morse-Bott foliation \(\mathcal{F}_{p,q}\) obtained from \(\mathcal{F}\) on each copy of \(T\). Denote by \(\mathcal{D}_{fol}(T,\partial T)\) and \(\mathcal{D}_{lp}(T,\partial T)\) respectively the groups of foliated and leaf preserving diffeomorphisms of \(T\) fixed on \(\partial T\). Similarly, let \(\mathcal{D}_{fol}(L_{p,q})\) and \(\mathcal{D}_{lp}(L_{p,q})\) be respectively the groups of foliated and leaf preserving diffeomorphisms of \(\mathcal{F}_{p,q}\). In a recent joint paper of the author it is shown that \(\mathcal{D}_{lp}(T,\partial T)\) is weakly contractible (all homotopy groups vanish), which allowed also to compute the weak homotopy type of \(\mathcal{D}_{lp}(L_{p,q})\). In the present paper it is proved that \(\mathcal{D}_{lp}(T,\partial T)\) is a strong deformation retract of \(\mathcal{D}_{fol}(T,\partial T)\). As a consequence the weak homotopy type of \(\mathcal{D}_{fol}(L_{p,q})\) for all possible pairs \((p,q)\) is computed.
Key words and phrases:Foliation, diffeomorphism, homotopy type, lens space, solid torus 2020 Mathematics Subject Classification: 57R30, 57T20
## 1. Introduction
The paper is devoted to computations of homotopy types of diffeomorphisms groups of a solid torus and lens spaces preserving foliations by level sets of <<simplest>> Morse-Bott functions whose sets of critical points consist of extreme circles only.
Note that the homotopy types of diffeomorphisms groups of compact manifolds in dimensions \(1,2\) are described completely, [37, 7, 8, 13]; in dimension \(3\) there is a lot of information, e.g. [16, 11, 18]; while in dimensions \(\geq 4\) only very specific cases are computed, e.g. [32, 36, 14, 6, 25, 2]. Computations for the groups of leaf preserving diffeomorphisms of foliations are usually related with perfectness properties of such groups proved independently by T. Rybicki [34] and T. Tsuboi [39], which extended the results by M.-R. Herman [17], W. Thurston [38], J. Mather [29, 30], D. B. A. Epstein [9] on simplicity of groups of compactly supported isotopic to the identity diffeomorphisms. For foliations with singularities there is much less information, e.g. [10, 35, 28, 26].
In a recent series of joint papers by the author with O. Khokhliuk [23, 24, 22, 21] there were developed several techniques for computations of homotopy types of diffeomorphisms groups of Morse-Bott and more general classes of <<singular>> foliations in higher dimensions. In particular, [21] computes the homotopy types of diffeomorphisms groups of leaf preserving diffeomorphisms for mentioned above foliations of the solid torus and lens spaces. In this
paper we find the relationship between those groups and the groups of diffeomorphisms sending leaves to leaves, see Theorems 2.1, 2.2, 3.1, 3.4 below.
**Some definitions and notations.** Let \(\mathcal{F}\) be a partition of a set \(M\). Then a map \(h:M\to M\) is \(\mathcal{F}\)_-foliated_ if for each leaf \(\omega\in\mathcal{F}\) its image \(h(\omega)\) is a (possibly distinct from \(\omega\)) leaf of \(\mathcal{F}\) as well. Also \(h\) is \(\mathcal{F}\)_-leaf preserving_, if \(h(\omega)=\omega\) for each leaf \(\omega\in\mathcal{F}\). More generally, if \(\mathcal{G}\) is a partition of a set \(N\), then a map \(h:M\to N\) is \((\mathcal{F},\mathcal{G})\)_-foliated_, if for each leaf \(\omega\in\mathcal{F}\) its image \(h(\omega)\) is a leaf of \(\mathcal{G}\).
All manifolds and their diffeomorphisms are assumed to be of class \(\mathcal{C}^{\infty}\). If \(M\) is a manifold, and \(X\subset M\) is a subset, then we will denote by \(\mathcal{D}^{fol}(\mathcal{F},X)\), (resp. \(\mathcal{D}^{lp}(\mathcal{F},X)\)), the groups of \(\mathcal{F}\)-foliated, (resp., \(\mathcal{F}\)-leaf preserving), \(\mathcal{C}^{\infty}\) diffeomorphisms of \(M\) endowed with the corresponding strong \(\mathcal{C}^{\infty}\) Whitney topologies. If \(X=\varnothing\), then we will omit it from notation and denote the above groups by \(\mathcal{D}^{fol}(\mathcal{F})\) and \(\mathcal{D}^{lp}(\mathcal{F})\) respectively.
Throughout the paper \(D^{2}=\{|z|\leq 1\}\) is the unit disk in the complex plane and \(S^{1}=\partial D^{2}\) the unit circle. For an abelian group \(A\) we will denote by \(\mathrm{Dih}(A)\) the _dihedral extension of \(A\)_, i.e. the semidirect product \(A\rtimes\mathbb{Z}_{2}\) corresponding to the natural action of \(\mathbb{Z}_{2}\) on \(A\) generated by the isomorphism \(o:A\to A\), \(o(a)=-a\). For example, \(\mathrm{Dih}(\mathbb{Z}_{n})\) is the dihedral group \(\mathbb{D}_{n}\), and \(\mathrm{Dih}(\mathrm{SO}(2))=\mathrm{O}(2)\).
## 2. Solid torus
Let \(\mathbf{T}=S^{1}\times D^{2}\) be the solid torus. Consider the following Morse-Bott function \(f:\mathbf{T}\to\mathbb{R}\), \(f(w,z)=|z|^{2}\), and let \(\mathcal{F}=\{f^{-1}(t)\mid t\in[0;1]\}\) be the partition of \(\mathbf{T}\) into the level sets of \(f\). Then \(f^{-1}(0)=S^{1}\times 0\) is the \(*\)central circle\({}_{*}\) of \(\mathbf{T}\) consisting of all critical points of \(f\), and thus being a non-degenerate critical submanifold. For all other values \(t\in(0;1]\), \(f^{-1}(t)=S^{1}\times\{|z|^{2}=t\}\) is a 2-torus (product of two circles) \(*\)parallel\({}_{*}\) to \(\partial\mathbf{T}\). In particular, \(f^{-1}(1)=\partial\mathbf{T}\).
N. Ivanov [19] proved that the group \(\mathcal{D}(\mathbf{T},\partial\mathbf{T})\) of all diffeomorphism of \(\mathbf{T}\) fixed on the boundary is contractible. Also in a recent joint paper [21] of the author with O. Khokhliuk it is computed the weak homotopy type of \(\mathcal{D}^{lp}(\mathcal{F})\) and shown that all homotopy groups of \(\mathcal{D}^{lp}(\mathcal{F},\partial\mathbf{T})\) vanish. One of the main results of this paper accomplishes the results of [21] with the following theorem:
**Theorem 2.1**.: _The pair of groups \(\left(\mathcal{D}^{lp}(\mathcal{F}),\ \mathcal{D}^{lp}(\mathcal{F},\partial\mathbf{T})\right)\) is a strong deformation retract of the pair \(\left(\mathcal{D}^{fol}(\mathcal{F}),\ \mathcal{D}^{fol}(\mathcal{F},\partial\mathbf{T})\right)\)._
The proof identifies the groups \(\mathcal{D}^{fol}(\mathcal{F})\) and \(\mathcal{D}^{lp}(\mathcal{F})\) with the stabilizers of \(f\) with respect to the natural actions \(*\)left-right\(*\) and \(*\)right\(*\) actions of diffeomorphisms groups of \(\mathbf{T}\) and \([0;1]\) on \(\mathcal{C}^{\infty}(\mathbf{T},[0;1])\), see SS5, and then uses the author's result from [27] claiming that those stabilizers are homotopy equivalent. The above identification exploits the well-known Whithey theorem on even smooth functions, see Lemma 4.3. In fact, it will be established a more general statement concerning smooth fiber-wise definite homogeneous functions of degree 2 on total spaces of vector bundles, see Theorem 6.7.
As a consequence of [19, 21] we will get a description of homotopy types of the mentioned above groups, see Theorem 2.2 below. To formulate the corresponding statement we need to define certain subgroups of \(\mathcal{D}^{lp}(\mathcal{F})\) which will also be used later in the paper.
Consider the following subgroup of \(\mathrm{GL}(2,\mathbb{Z})\) consisting of matrices for which the vector \(\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right)\) is eigen with eigen value \(\pm 1\):
\[\mathcal{G}:=\left\{\begin{pmatrix}\varepsilon&0\\ m&\delta\end{pmatrix}\mid m\in\mathbb{Z},\,\varepsilon,\delta\in\{\pm 1\}\right\}. \tag{2.1}\]
Then \(\mathcal{G}\) is generated by the matrices:
\[\mathsf{D}=\begin{pmatrix}1&0\\ 1&1\end{pmatrix},\qquad\quad\mathsf{\Lambda}=\begin{pmatrix}-1&0\\ 0&1\end{pmatrix},\qquad\quad\mathsf{M}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}=-\lambda,\qquad\quad\mathsf{T}=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}, \tag{2.2}\]
satisfying the following identities:
\[\mathsf{\Lambda}^{2}=\mathsf{M}^{2}=E,\qquad\quad\mathsf{\Lambda}\mathsf{D} \mathsf{\Lambda}=\mathsf{M}\mathsf{D}\mathsf{M}=\mathsf{D}^{-1},\qquad\mathsf{ T}=\mathsf{M}\mathsf{M}=\mathsf{M}\mathsf{\Lambda},\qquad\mathsf{T}\mathsf{D}= \mathsf{D}\mathsf{T}. \tag{2.3}\]
Hence, \(\mathcal{G}=\langle\mathsf{D},\mathsf{\Lambda}\rangle\times\langle\mathsf{T} \rangle\cong\mathrm{Dih}(\mathbb{Z})\times\mathbb{Z}_{2}\). It is easy to check that every \(\mathsf{\Lambda}=\begin{pmatrix}\varepsilon&0\\ m&\delta\end{pmatrix}\in\mathcal{G}\) yields the following diffeomorphism \(g_{\mathsf{\Lambda}}\in\mathcal{D}^{lp}(\mathcal{F})\),
\[g_{\mathsf{\Lambda}}(w,z)=\begin{cases}\big{(}w^{\varepsilon},w^{m}z\big{)}, &\text{if $\delta=1$},\\ \big{(}w^{\varepsilon},w^{m}\bar{z}\big{)},&\text{if $\delta=-1$},\end{cases} \tag{2.4}\]
so that the correspondence \(\mathsf{\Lambda}\mapsto g_{\mathsf{\Lambda}}\) is a monomorphism \(\mathcal{G}\subset\mathcal{D}^{lp}(\mathcal{F})\), and we will identify \(\mathcal{G}\) with its image in \(\mathcal{D}^{lp}(\mathcal{F})\). It will be convenient to denote \(d:=g_{\mathsf{D}}\), \(\lambda:=g_{\mathsf{\Lambda}}\), \(\mu:=g_{\mathsf{M}}\), \(\tau:=g_{\mathsf{T}}\), so
\[d(w,z) =(w,wz), \lambda(w,z)=(\bar{w},z), \tag{2.5}\] \[\mu(w,z) =(w,\bar{z}), \tau(w,z)=(\bar{w},\bar{z}).\]
Further note that \(\mathcal{D}^{lp}(\mathcal{F})\) contains another subgroup \(\mathcal{R}\), called the _rotation subgroup_, isomorphic to the 2-torus \(S^{1}\times S^{1}\) and consisting of diffeomorphisms \(\rho_{\alpha,\beta}\), \((\alpha,\beta)\in S^{1}\times S^{1}\), given by
\[\rho_{\alpha,\beta}(w,z)=(\alpha w,\beta z),\qquad(w,z)\in\mathbf{T}=S^{1} \times D^{2}. \tag{2.6}\]
Let \(\mathcal{T}=\langle\mathcal{R},\mathcal{G}\rangle\) be the subgroup of \(\mathcal{D}^{lp}(\mathcal{F})\) generated by \(\mathcal{R}\) and \(\mathcal{G}\). Evidently, \(\mathcal{R}\) is the identity path component of \(\mathcal{T}\), whence \(\pi_{0}\mathcal{T}\cong\mathcal{G}\).
**Theorem 2.2**.: _In the following diagrams of inclusions the arrows denoted by (w.)h.e. are (weak) homotopy equivalences (the new statements are written in bold):_
Proof.: The proof that the inclusion \(\{\mathrm{id}_{\mathbf{T}}\}\subset\mathcal{D}(\mathbf{T},\partial\mathbf{T})\) is a homotopy equivalence, i.e. contractibility of \(\mathcal{D}(\mathbf{T},\partial\mathbf{T})\), is given in [19, Theorem 2]. The fact, that the inclusion \(\mathcal{T}\subset\mathcal{D}(\mathbf{T})\) is a homotopy equivalence, is classical and follows from contractibility of \(\mathcal{D}(\mathbf{T},\partial\mathbf{T})\), see [21] for exposition of those results. Also for the proof that the corresponding induced map \(\pi_{0}\mathcal{T}=\mathcal{G}\to\pi_{0}\mathcal{D}(\mathbf{T})\) is an isomorphism see e.g. [40, Theorem 14]. The upper horizontal weak homotopy equivalences are established in [21], and the right vertical homotopy equivalence are contained in Theorem 2.1. This implies that the lower arrows are weak homotopy equivalences as well.
## 3. Lens spaces
Recall that any 3-manifold \(L_{\xi}\) obtained by gluing two copies of the solid torus \(\mathbf{T}\) by some diffeomorphism of their boundaries \(\xi:\partial\mathbf{T}\to\partial\mathbf{T}\) is called a _lens space_. We assume that the reader is familiar with lens spaces and will briefly recall their basic properties, see e.g. [33, 4, 3, 12, 18].
More precisely, let \(L:={\bf T}\times\{0,1\}\) be two copies of \({\bf T}\). Identify each point \((x,0)\) with \((\xi(x),1)\) and denote the obtained quotient space by \(L_{\xi}\). Let also \(p:L\to L_{\xi}\) be the corresponding quotient map, \({\bf T}_{i}:=p({\bf T}\times\{i\})\), \(i=0,1\), and \(C_{i}:=S^{1}\times 0\times i\) be the \(*\)central circle\({}_{\flat}\) of the torus \({\bf T}_{i}\). It is well known that \(L_{\xi}\) admits a smooth structure, and if \(\xi^{\prime}:\partial{\bf T}\to\partial{\bf T}\) is another diffeomorphism satisfying either of the following conditions:
* \(\xi\) is isotopic to \(\xi^{\prime}\);
* \(\xi^{\prime}=g|_{\partial{\bf T}}\circ\xi\circ h|_{\partial{\bf T}}\), where \(g,h\) are certain diffeomorphisms of \({\bf T}\);
* \(\xi^{\prime}=\xi^{-1}\)
then \(L_{\xi}\) and \(L_{\xi^{\prime}}\) are diffeomorphic. This finally implies that one can always assume that \(\xi\) is defined by the formula:
\[\xi(w,z)=(w^{r}z^{p},w^{s}z^{q}),\quad(w,z)\in\partial{\bf T}, \tag{3.1}\]
for some matrix \({\sf X}=\binom{r}{s}\;\;\;q\in{\rm GL}(2,\mathbb{Z})\) with \(|{\sf X}|=rq-ps=-1\), and in this case \(L_{\xi}\) is denoted by \(L_{p,q}\). Moreover, if \(\xi^{\prime}:\partial{\bf T}\to\partial{\bf T}\) is given by the matrix \({\sf X}^{\prime}=\binom{r^{\prime}}{s^{\prime}}\;\;\;q^{\prime}\) with \(r=r^{\prime}({\rm mod}\;p)\) and \(q=q^{\prime}({\rm mod}\;p)\), then \(L_{\xi}\) and \(L_{\xi^{\prime}}\) are still diffeomorphic. This implies that for \(p=0,1,2\) there exists a unique (up to a diffeomorphism) lens space so that
* \(L_{0,1}\cong S^{1}\times S^{2}\) with \({\sf X}=\mathsf{A}=\binom{-1}{0}\;\;\;0\choose 0\;\;1\) and \(\xi(w,z)=\lambda(w,z)=(\bar{w},z)\);
* \(L_{1,0}\cong S^{3}\) with \({\sf X}=\binom{0}{1}\;\;\;0\choose 1\;\;0\) and \(\xi(w,z)=(z,w)\);
* \(L_{2,1}\cong\mathbb{R}P^{3}\) with \({\sf X}=\binom{1}{1}\;\;\;2\choose 1\;\;1\) and \(\xi(w,z)=(wz^{2},wz)\) or with \({\sf X}^{\prime}={\sf X}{\sf D}^{-1}=\binom{-1}{0}\;\;\;2\choose 0\;\;1\) and \(\xi^{\prime}(w,z)=(\bar{w}z^{2},z)\).
For \(p>2\) one can assume that \(1\leq q<p\), \(\gcd(p,q)=1\), and \(q\) can be replaced with \(q^{\prime}\) such that \(qq^{\prime}=1({\rm mod}\;p)\).
**Subgroups \({\mathcal{L}}_{p,q}\) and \(\widehat{{\mathcal{L}}}_{p,q}\) of \({\mathcal{D}}(L_{p,q})\).** We will now recall the definition of a certain subgroup \({\mathcal{L}}_{p,q}\subset{\mathcal{D}}(L_{p,q})\) considered in [21], and also define some other group \(\widehat{{\mathcal{L}}}_{p,q}\) containing \({\mathcal{L}}_{p,q}\). They will play the principal role in our main results.
1) First note that every diffeomorphism \(\phi\in{\mathcal{D}}(L_{\xi})\) which preserves the common boundary \(\partial{\bf T}_{0}=\partial{\bf T}_{1}\) induces a diffeomorphism \(\widehat{\phi}:L\to L\) such that \(p\circ\widehat{\phi}=\phi\circ p:L\to L_{\xi}\).
a) Moreover, if \(\phi\) preserves each torus \({\bf T}_{i}\), \(i=0,1\), then \(\widehat{\phi}|_{\partial{\bf T}\times\{i\}}(x,i)=(\phi_{i}(x),i)\) for unique diffeomorphisms \(\phi_{0},\phi_{1}:{\bf T}\to{\bf T}\) satisfying the identity \(\xi\circ\phi_{0}|_{\partial{\bf T}}=\phi_{1}\circ\xi:\partial{\bf T}\to\partial {\bf T}\), i.e. making commutative the diagram (a) in (3.2):
(3.2)
It will be convenient to write down \(\phi\) as follows: \(\;\;\phi:\frac{0}{1}\,\frac{\phi_{0}}{\phi_{1}}\;\;\;0\;\). Evidently, \(\;\phi^{-1}:\frac{0}{1}\,\frac{\phi_{0}^{-1}}{\phi_{1}^{-1}}\;\;0\;\), and if \(\phi\in{\mathcal{D}}^{lp}({\mathcal{F}}_{p,q})\), then \(\phi_{0},\phi_{1}\in{\mathcal{D}}^{lp}({\mathcal{F}})\).
b) Similarly, if \(\phi\) exchanges \({\bf T}_{0}\) and \({\bf T}_{1}\), then \(\widehat{\phi}|_{\partial{\bf T}\times\{i\}}(x,i)=(\phi_{i}(x),1-i)\) for unique diffeomorphisms \(\phi_{0},\phi_{1}:{\bf T}\to{\bf T}\) satisfying the identity \(\phi_{0}|_{\partial{\bf T}}=\xi\circ\phi_{1}\circ\xi:\partial{\bf T}\to\partial {\bf T}\), i.e. making commutative the diagram (b) in (3.2). In that case we will write down \(\phi\) as follows:
\[\phi:\frac{0}{1}\,\frac{\phi_{0}}{\phi_{1}}\;\;\frac{1}{0}\;\;\mbox{Notice that then}\;\;\phi^{-1}:\frac{0}{1}\,\frac{\phi_{1}^{-1}}{\phi_{1}^{-1}}\;\;0\;.\]
2) For \((\alpha,\beta)\in S^{1}\times S^{1}\) let \(\rho_{\alpha,\beta}:{\bf T}\to{\bf T}\), \(\rho_{\alpha,\beta}(w,z)=(\alpha w,\beta z)\), be the diffeomorphism given by (2.6). Evidently,
\[\xi\circ\rho_{\alpha,\beta}(w,z)=(\alpha^{r}\beta^{p}w^{r}z^{p},\alpha^{s} \beta^{q}w^{s}z^{q})=\rho_{\xi(\alpha,\beta)}\circ\xi(w,z),\]
and we have the following diffeomorphism \(\widehat{\rho_{\alpha,\beta}}:\frac{0}{1}\frac{\rho_{\alpha,\beta}}{\rho_{ \xi(\alpha,\beta)}}\frac{0}{1}\) of \(L_{p,q}\), called a _rotation_. Then the group \({\mathcal{R}}=\{\rho_{\alpha,\beta}\mid\alpha,\beta\in S^{1}\}\subset{\mathcal{ D}}(L_{p,q})\) of all rotations is isomorphic to \(S^{1}\times S^{1}=\partial{\bf T}\).
3) We will recall now the definition of two diffeomorphisms \(\sigma_{+}\) and \(\sigma_{-}\) exchanging \({\bf T}_{0}\) and \({\bf T}_{1}\). F. Bonahon [3, Theorem 1] proved that every diffeomorphism \(h\) of \(L_{p,q}\) is isotopic to a diffeomorphism which leaves the common boundary \({\bf T}_{0}\) and \({\bf T}_{1}\) invariant, and thus preserving or exchanging those tori. Moreover, for \(p>2\) the group \(\pi_{0}{\mathcal{D}}(L_{p,q})\) is generated by the isotopy classes of the diffeomorphism \(\widehat{\tau}:\frac{0}{1}\frac{\pi\tau}{\widehat{\tau}}\frac{0}{1}\), and \(\sigma_{+}\), \(\sigma_{-}\) (more precisely by that \(\sigma_{-}\) or \(\sigma_{+}\) which is defined for the given \((p,q)\)).
a) Suppose \(-q^{2}-ps=-1\) for some \(s\in{\mathbb{Z}}\), so \(\xi\) is given by the matrix \({\sf X}=\binom{-q}{s}\). This property holds when \(L_{p,q}\) is either \(L_{0,1}=S^{1}\times S^{2}\), or \(L_{1,0}=S^{3}\), or \(L_{2,1}={\mathbb{R}}{\mathbb{P}}^{3}\), or \(q^{2}=1(\mbox{mod }p)\) for \(p>2\). Notice that \({\sf X}^{2}=E\), so \(\xi^{2}=\mbox{id}_{\partial{\bf T}}\), i.e. \(\mbox{id}_{\partial{\bf T}}=\xi\circ\mbox{id}_{\partial{\bf T}}\circ\xi\), and we have a well defined diffeomorphism \(\sigma_{+}:\frac{0}{1}\frac{\mbox{id}_{\cal T}}{\mbox{id}_{\cal T}}\frac{1}{0}\) of \(L_{p,q}\) exchanging \({\bf T}_{0}\) and \({\bf T}_{1}\). One easily checks that \(\sigma_{+}\) preserves orientation and \(\sigma_{+}^{2}=\mbox{id}_{L_{p,q}}\).
b) Suppose \(q^{2}-ps=-1\) for some \(s\in{\mathbb{Z}}\), so \(\xi\) is given by the matrix \({\sf X}=\binom{q}{s}\ \frac{p}{q}\). This holds if \(L_{p,q}\) is either \(L_{1,0}=S^{3}\), or \(L_{2,1}={\mathbb{R}}{\mathbb{P}}^{3}\) or if \(q^{2}=-1(\mbox{mod }p)\) for \(p>2\). One easily checks that \({\sf\Lambda}={\sf XMX}\), so \(\lambda=\xi\circ\mu\circ\xi\), and we have another diffeomorphism \(\sigma_{-}:\frac{0}{1}\frac{\lambda}{\mu}\frac{1}{0}\) of \(L_{p,q}\) exchanging \({\bf T}_{0}\) and \({\bf T}_{1}\). One easily checks that \(\sigma_{-}\) reverses orientation, and is periodic of period \(4\).
4) We will now define the group \({\mathcal{L}}_{p,q}\) and put \(\widehat{{\mathcal{L}}}_{p,q}:=\langle{\mathcal{L}}_{p,q},\sigma_{-},\sigma_{+}\rangle\) to be the group generated by \({\mathcal{L}}_{p,q}\) and those of \(\sigma_{-}\) and \(\sigma_{+}\) which are defined for the given values \((p,q)\).
Let \(d,\lambda,\mu,\tau=\lambda\circ\mu\in{\mathcal{D}}^{lp}({\mathcal{F}})\) be diffeomorphisms of \({\bf T}\) given by (2.5) and corresponding to matrices (2.2).
a) Let \(L_{p,q}=L_{0,1}=S^{1}\times S^{2}\), so \({\sf X}={\sf\Lambda}=\binom{-1}{0}\). Then it follows from (2.3) that \({\mathcal{D}}(L_{0,1})\) contains the following diffeomorphisms:
\[\widehat{d}:\frac{0}{1}\frac{d}{d^{-1}}\frac{0}{1}\;,\qquad\qquad \widehat{\lambda}:\frac{0}{1}\frac{\lambda}{\lambda}\frac{0}{1}\;,\qquad\qquad \widehat{\mu}:\frac{0}{1}\frac{\mu}{\mu^{\prime}}\frac{0}{1}\;,\qquad\qquad \widehat{\tau}=\widehat{\lambda}\circ\widehat{\mu}:\frac{0}{1}\frac{\tau}{\tau ^{\prime}}\frac{0}{1}\;.\]
Notice that in this case only \(\sigma_{-}:\frac{0}{1}\frac{\lambda}{\mu^{\prime}}\frac{1}{0}\) is defined. Define the following groups
\[{\mathcal{L}}_{0,1}:=\langle{\mathcal{R}},\widehat{d},\widehat{ \lambda},\widehat{\mu}\rangle\cong\langle{\mathcal{R}},{\mathcal{G}}\rangle={ \mathcal{T}},\qquad\qquad\qquad\widehat{{\mathcal{L}}}_{0,1}:=\langle{\mathcal{ L}}_{p,q},\sigma_{-}\rangle.\]
Then \({\mathcal{R}}\) is the identity path component of each of them. Moreover, one also easily checks that \(\sigma_{-}\circ\widehat{d}=\widehat{d}^{-1}\circ\sigma_{-}:\frac{0}{1}\frac{ \lambda d}{\mu d^{-1}}\frac{1}{0}\) and \(\sigma_{-}\) commutes with \(\widehat{\lambda}\) and \(\widehat{\mu}\). Then together with the first and second identities in (2.3) we get that
\[\pi_{0}{\mathcal{L}}_{0,1}={\mathcal{G}},\qquad\qquad\pi_{0}\widehat{{\mathcal{ L}}}_{0,1}=\langle{\mathcal{G}},\sigma_{-}\rangle\cong\langle\widehat{d} \rangle\rtimes\langle\widehat{\lambda},\widehat{\mu},\sigma_{-}\rangle\cong{ \mathbb{Z}}\rtimes({\mathbb{Z}}_{2})^{3},\]
where the semidirect product corresponds to the homomorphism \(({\mathbb{Z}}_{2})^{3}\to\mbox{Aut}({\mathbb{Z}})={\mathbb{Z}}_{2}\), \((a,b,c)\cdot n=(-1)^{a+b+c}n\), for \(a,b,c\in{\mathbb{Z}}_{2}\) and \(n\in{\mathbb{Z}}\).
b) Let \(L_{p,q}=L_{1,0}=S^{3}\), so \(\mathsf{X}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\). Then \(\mathsf{X}^{2}=E\), \(\mathsf{X}\mathsf{\Lambda}=\mathsf{MX}\), \(\mathsf{X}\mathsf{M}=\mathsf{AX}\), which implies that \(\mathcal{D}^{lp}(\mathcal{F}_{1,0})\) contains the following diffeomorphisms of order two \(\widehat{\lambda}:\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\frac{\lambda}{\mu^{\prime}}\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\frac{\mu}{\widehat{\lambda}}\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\). In this case both \(\sigma_{-}:\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\frac{\lambda}{\mu^{\prime}}\!\!\begin{array}{c}1\\ 0\end{array}\!\!\!\) and \(\sigma_{+}:\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\frac{\mathrm{id}_{\mathsf{T}}}{\mathrm{id}_{\mathsf{T}}}\! \!\begin{array}{c}1\\ 0\end{array}\!\!\!\) are defined, and one easily checks that
\[\sigma_{+}^{2}=\mathrm{id}_{L_{1,0}}, \sigma_{-}^{4}=\mathrm{id}_{L_{1,0}}, \sigma_{-}^{2}=\widehat{\lambda}\widehat{\mu}, \sigma_{+}\sigma_{-}+\sigma_{-}^{-1},\] \[\sigma_{+}\sigma_{-}=\widehat{\lambda}, \sigma_{+}\sigma_{-}=\widehat{\mu}, \widehat{\lambda}\partial_{\alpha,\beta}=\rho_{\bar{\alpha}, \beta}\widehat{\lambda}, \widehat{\mu}\rho_{\alpha,\beta}=\rho_{\alpha,\bar{\beta}} \widehat{\mu},\]
Define the following groups
\[\mathcal{L}_{0,1}:=\langle\mathcal{R},\widehat{\lambda},\widehat{\mu}\rangle, \widehat{\mathcal{L}}_{0,1}:=\langle\mathcal{L}_{0,1},\sigma_{-},\sigma_{+}\rangle.\]
Then \(\mathcal{R}\) is the identity path component of each of them, and it follows from the above identities that
\[\mathcal{L}_{0,1}=\langle\widehat{\rho_{\alpha,1}},\widehat{ \lambda}\mid\alpha\in S^{1}\rangle\times\langle\widehat{\rho_{1,\beta}}, \widehat{\mu}\mid\beta\in S^{1}\rangle=\mathrm{O}(2)\times\mathrm{O}(2),\] \[\pi_{0}\mathcal{L}_{1,0}=\langle\widehat{\lambda},\widehat{\mu} \rangle=\mathbb{Z}_{2}\times\mathbb{Z}_{2},\] \[\widehat{\mathcal{L}}_{0,1}:=\langle\mathcal{R},\widehat{\lambda },\widehat{\mu},\sigma_{-},\sigma_{+}\rangle=\langle\mathcal{R},\sigma_{-}, \sigma_{+}\rangle,\] \[\pi_{0}\widehat{\mathcal{L}}_{0,1}=\langle\sigma_{-},\sigma_{+} \rangle\cong\mathrm{Dih}(\mathbb{Z}_{4})=\mathbb{D}_{4}.\]
c) Let \(L_{p,q}=L_{2,1}=\mathbb{RP}^{3}\), so \(\mathsf{X}=\begin{pmatrix}1&2\\ 1&1\end{pmatrix}\). Then \(\mathsf{X}\mathsf{A}\mathsf{D}=\mathsf{D}\mathsf{M}\mathsf{X}=-\mathsf{X}\), \(\mathsf{X}\mathsf{T}=\mathsf{TX}\), and \(\mathcal{D}^{lp}(\mathcal{F}_{2,1})\) contains the following diffeomorphisms of order two: \(\widehat{\theta}:\!\!\begin{array}{c}0\\ 1\end{array}\!\!\frac{\lambda d}{d\mu}\!\!\begin{array}{c}0\\ 1\end{array}\!\!\) and \(\widehat{\tau}:\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\frac{\tau}{\tau}\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\frac{\tau}{\tau}\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\). Since \((p,q)=(2,1)\), we have that \(q^{2}=-1(\bmod\ 2)\) and thus the diffeomorphism \(\sigma_{-}:\!\!\begin{array}{c}0\\ 1\end{array}\!\!\frac{\lambda}{\mu}\!\!\begin{array}{c}1\\ 0\end{array}\!\!\!\frac{\lambda}{0}\) is defined.
On the other hand, \(L_{2,1}\) is also diffeomorphic to the lens space \(L_{2,-1}\) given by the matrix \(\mathsf{X}^{\prime}=\mathsf{XD}^{-1}=\begin{pmatrix}-1&2\\ 0&1\end{pmatrix}\) for which \(\sigma_{+}^{\prime}:\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\frac{\mathrm{id}}{\mathrm{id}}\!\!\begin{array}{c}1\\ 0\end{array}\!\!\) is also defined. To transfer \(\sigma_{+}^{\prime}\) to a diffeomorphism of \(L_{2,1}\) we need to <<conjugate>> it by \(d\).
More precisely, note that \(E=\mathsf{X}^{\prime}\mathsf{X}^{\prime}=\mathsf{XD}^{-1}\mathsf{XD}^{-1}\), whence \(\mathsf{D}=\mathsf{XD}^{-1}\mathsf{X}\), and we get the following diffeomorphism \(\sigma_{+}:\!\!\begin{array}{c}0\\ 1\end{array}\!\!\frac{d}{d-1}\!\!\begin{array}{c}1\\ 0\end{array}\!\!\) of \(L_{2,1}\). Define the following groups
\[\mathcal{L}_{2,1}:=\langle\mathcal{R},\widehat{\theta},\widehat{\tau}\rangle, \widehat{\mathcal{L}}_{2,1}:=\langle\mathcal{L}_{2,1},\sigma_{-},\sigma_{+}\rangle.\]
Then again \(\mathcal{R}\) is the identity path component of each of them. Moreover,
\[\sigma_{+}^{2}=\sigma_{-}^{4}=\mathrm{id}, \sigma_{+}\sigma_{-}\sigma_{+}=\sigma_{-}^{-1}, \sigma_{-}^{2}=\widehat{\tau}, \sigma_{+}\sigma_{-}=\widehat{\theta}, \widehat{\theta}\widehat{\tau}=\widehat{\tau}\widehat{\theta},\]
which imply that \(\widehat{\mathcal{L}}_{2,1}:=\langle\mathcal{R},\widehat{\theta},\widehat{\tau},\sigma_{-},\sigma_{+}\rangle=\langle\mathcal{R},\sigma_{-},\sigma_{+}\rangle\), and
\[\pi_{0}\mathcal{L}_{2,1}=\langle\widehat{\theta},\widehat{\tau}\rangle= \mathbb{Z}_{2}\times\mathbb{Z}_{2}, \pi_{0}\widehat{\mathcal{L}}_{2,1}=\langle\sigma_{-},\sigma_{+}\rangle \cong\mathrm{Dih}(\mathbb{Z}_{4})=\mathbb{D}_{4}.\]
d) In all other cases \(p>2\) and \(\mathcal{D}(L_{p,q})\) still contains the diffeomorphism \(\widehat{\tau}:\!\!\begin{array}{c}0\\ 1\end{array}\!\!\frac{\tau}{\tau}\!\!\begin{array}{c}0\\ 1\end{array}\!\!\!\). Then we put
\[\mathcal{L}_{p,q}:=\langle\mathcal{R},\widehat{\tau}\rangle\cong\mathrm{Dih}( \mathcal{R}), \widehat{\mathcal{L}}_{p,q}:=\langle\mathcal{L}_{p,q},\sigma_{-},\sigma_{+}\rangle.\]
Note that \(\mathcal{R}\) is the identity path component of each of these groups. Let us specify them more precisely.
* Suppose \(q^{2}=1(\text{mod }p)\), so \(\sigma_{+}:\frac{0}{1}\frac{\text{id}}{\text{id}}\frac{1}{0}\) is defined. Then \(\sigma_{+}\circ\widehat{\tau}=\widehat{\tau}\circ\sigma_{+}\), whence \[\widehat{\mathcal{L}}_{p,q}=\langle\mathcal{R},\widehat{\tau}, \sigma_{+}\rangle, \pi_{0}\widehat{\mathcal{L}}_{p,q}=\mathbb{Z}_{2}\times\mathbb{Z}_ {2}.\]
* Suppose \(q^{2}=-1(\text{mod }p)\), so \(\sigma_{-}:\frac{0}{1}\frac{\lambda_{\lambda}}{\mu}\frac{1}{0}\) is defined. Then \(\sigma_{-}^{2}=\widehat{\tau}\), whence \[\widehat{\mathcal{L}}_{p,q}=\langle\mathcal{R},\sigma_{-}\rangle, \pi_{0}\widehat{\mathcal{L}}_{p,q}=\mathbb{Z}_{4}.\]
* Otherwise, \(q^{2}\neq\pm 1(\text{mod }p)\), so neither \(\sigma_{-}\) nor \(\sigma_{+}\) are defined, and \[\widehat{\mathcal{L}}_{p,q}=\mathcal{L}_{p,q}=\langle\mathcal{R},\widehat{ \tau}\rangle\cong\text{Dih}(\mathcal{R}), \pi_{0}\widehat{\mathcal{L}}_{p,q}=\pi_{0}\mathcal{L}_{p,q}= \mathbb{Z}_{2}.\]
**Smale conjecture.** Recall that _Smale conjecture_ for a manifold \(M\) with a \(*\)good\(*\) Riemannian metric is a statement that the inclusion \(\text{Isom}(M)\subset\mathcal{D}(M)\) of the group of isometries of \(M\) into the group of all its diffeomorphisms is a homotopy equivalence.
It is also well known that each lens space \(L_{p,q}\) (except for \(L_{0,1}=S^{1}\times S^{2}\)) is also the quotient of the unit 3-sphere \(S^{3}\) in \(\mathbb{C}^{2}\) by a free action of \(\mathbb{Z}_{p}\) generated by the diffeomorphism \(\delta_{p,q}:S^{3}\to S^{3}\), \(\delta_{p,q}(z_{1},z_{2})=(e^{2\pi i/p}\cdot z_{1},e^{2\pi iq/p}\cdot z_{2})\). In particular, \(L_{p,q}\) it has a natural metric, called _elliptic_, induced from the standard metric on \(S^{3}\) via the corresponding covering map \(S^{3}\to L_{p,q}\). The group of isometries \(\text{Isom}(L_{p,q})\) of this metric is described in [20, Theorem 2.3], which, in particular, implies that \(\mathcal{L}_{p,q}\subset\widehat{\mathcal{L}}_{p,q}\subset\text{Isom}(L_{p,q})\), so the above groups consist of isometries, and these three groups coincide exactly when neither \(\sigma_{-}\) nor \(\sigma_{+}\) is defined. Thus,
\[\mathcal{L}_{p,q}=\widehat{\mathcal{L}}_{p,q}=\text{Isom}(L_{p,q})\qquad \Leftrightarrow\qquad p>2\text{ and }q^{2}\neq\pm 1(\text{mod }p). \tag{3.3}\]
Note further that the Smale conjecture is proved
* for \(L_{1,0}=S^{3}\), so the inclusion \(\text{O}(4)\subset\mathcal{D}(S^{3})\) is a homotopy equivalence, A. Hatcher [16];
* for all lens spaces \(L_{p,q}\) with \(p>2\) in the book [18];
* for all lens spaces except for \(S^{1}\times S^{2}\) by another methods used Ricci flow in the preprint by R. Bamler and B. Kleiner [1].
Also in [15] A. Hatcher also proved that \(\mathcal{D}(S^{1}\times S^{2})=\mathcal{D}(L_{0,1})\) has the homotopy type of \(\Omega(\text{O}(3))\times\text{O}(3)\times\text{O}(2)\).
**Simplest Morse-Bott foliation on \(L_{p,q}\).** Recall that in SS2 we defined a Morse-Bott foliation \(\mathcal{F}\) on the solid torus \(\mathbf{T}\) by the central circle and 2-tori \(*\)parallel\(*\) to the boundary, see SS2. In particular, \(\partial\mathbf{T}\) is a leaf of \(\mathcal{F}\). Since \(\xi\) identifies \(\partial\mathbf{T}\times\{0\}\) with \(\partial\mathbf{T}\times\{1\}\), it induces the foliation \(\mathcal{F}_{p,q}\) on \(L_{p,q}\) whose leaves are the images of the corresponding leaves of foliations on those tori. In particular, \(\mathcal{F}_{p,q}\) has two singular leaves (the central circles) \(C_{0}\) and \(C_{1}\) and all other leaves are 2-tori parallel each other.
One can also define a Morse-Bott function \(g\colon L_{p,q}\to\mathbb{R}\) such that \(\mathcal{F}_{p,q}\) coincides with the partition of \(L_{p,q}\) into the level sets of \(g\). For example, define the function \(\widehat{g}:L\to[0;2]\) by
\[\widehat{g}(w,z)=\begin{cases}|z|^{2},&(w,z)\in\mathbf{T}\times\{0\},\\ 2-|z|^{2},&(w,z)\in\mathbf{T}\times\{1\}.\end{cases}\]
Then \(\widehat{g}\) induces a Morse-Bott \(\mathcal{C}^{\infty}\) function \(g:L_{p,q}\to[0;2]\) such that \(\widehat{g}=g\circ p\) and the set of critical points of \(g\) is a union of two circles \(g^{-1}(0)=C_{0}\) and \(g^{-1}(1)=C_{1}\). Evidently, \(\mathbf{T}_{0}=g^{-1}\big{(}[0;1]\big{)}\), \(\mathbf{T}_{1}=g^{-1}\big{(}[1;2]\big{)}\), so \(\partial\mathbf{T}_{0}=\partial\mathbf{T}_{1}=g^{-1}(1)\).
Let \(\mathcal{D}^{fol}(\mathcal{F}_{p,q})\) be the group of \(\mathcal{F}_{p,q}\)-foliated diffeomorphisms of \(L_{p,q}\), \(\mathcal{D}^{lp}(\mathcal{F}_{p,q})\) be its (normal) subgroup consisting of \(\mathcal{F}_{p,q}\)-leaf preserving ones, and
\[\mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})=\{h\in\mathcal{D}^{lp}(\mathcal{F}_{p, q})\mid h(C_{i})=C_{i},i=0,1\}\]
be the another normal subgroup of \(\mathcal{D}^{fol}(\mathcal{F}_{p,q})\) leaving invariant each central circle \(C_{i}\). If \(\mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})\) does not coincide with all the group \(\mathcal{D}^{fol}(\mathcal{F}_{p,q})\), then \(\mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})\) obviously has index \(2\) in \(\mathcal{D}^{fol}(\mathcal{F}_{p,q})\).
Since the above diffeomorphisms \(\rho_{\alpha,\beta}\), \(d\), \(\lambda\), \(\mu\), \(\tau\) of \(\mathbf{T}\) belong to \(\mathcal{D}^{lp}(\mathcal{F})\), it follows that \(\mathcal{L}_{p,q}\subset\mathcal{D}^{lp}(\mathcal{F}_{p,q})\), while \(\sigma_{+},\sigma_{-}\in\mathcal{D}^{fol}(\mathcal{F}_{p,q})\setminus\mathcal{ D}^{fol}_{+}(\mathcal{F}_{p,q})\), and thus we get the following inclusions of subgroups:
(3.4)
Our second results of this paper is the following:
**Theorem 3.1**.: _The group \(\mathcal{D}^{lp}(\mathcal{F}_{p,q})\) is a strong deformation retract of \(\mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})\)._
In fact, we will prove it for a more general class of foliations by level sets of Morse-Bott functions on a closed manifold having only extreme critical submanifolds, see Theorem 7.7. Now, using previous results we can deduce homotopy types of all the groups in (3.4), see Theorem 3.4 below.
**Theorem 3.2** ([21]).: _The inclusion \(\mathcal{L}_{p,q}\subset\mathcal{D}^{lp}(\mathcal{F}_{p,q})\) is a weak homotopy equivalence._
**Lemma 3.3**.: _The following conditions are equivalent:_
1. _there are no diffeomorphisms_ \(h\in\mathcal{D}(L_{p,q})\) _exchanging_ \(\mathbf{T}_{0}\) _and_ \(\mathbf{T}_{1}\)_;_
2. \(p>2\) _and_ \(q^{2}\neq\pm 1(\mathrm{mod}\ p)\)_, i.e. exactly when neither_ \(\sigma_{-}\) _nor_ \(\sigma_{+}\) _are defined;_
3. \(\widehat{\mathcal{L}}_{p,q}=\mathcal{L}_{p,q}\)_;_
4. \(\mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})=\mathcal{D}^{fol}(\mathcal{F}_{p,q})\)_, i.e. every_ \(h\in\mathcal{D}^{fol}(\mathcal{F}_{p,q})\) _leaves invariant each_ \(C_{0}\) _and_ \(C_{1}\)_._
Proof.: In fact, the equivalence (a)\(\Leftrightarrow\)(b) is well known, (b)\(\Leftrightarrow\)(c) follows from the definition of \(\widehat{\mathcal{L}}_{p,q}\), and the implication (d)\(\Rightarrow\)(b) is evident.
(a)\(\Rightarrow\)(d). Let \(M=L_{p,q}\setminus(C_{0}\cup C_{1})=g^{-1}\big{(}(0;2)\big{)}\). Then \(g\) has no critical points in \(M\) and each level set \(g^{-1}(t)\), \(t\in(0;2)\), is diffeomorphic to a \(2\)-torus. Moreover, using standard technique with some gradient flow of \(g\), one can construct a diffeomorphism \(\psi:M\to T^{2}\times(0;2)\) such that \(f\circ\psi^{-1}(y,t)=t\), \(t\in(0;2)\).
Now suppose that (d) fails, so there exists \(h\in\mathcal{D}^{fol}(\mathcal{F}_{p,q})\setminus\mathcal{D}^{fol}_{+}(\mathcal{ F}_{p,q})\). If \(h(g^{-1}(1))=1\), then \(h\) exchanges \(\mathbf{T}_{0}\) and \(\mathbf{T}_{1}\), so (a) also fails. Suppose \(h(g^{-1}(1))=g^{-1}(\tau)\) for some \(\tau\in(0;2)\) and let \(\mu:(0;2)\to(0;2)\) be a diffeomorphism fixed near \(0\) and \(2\) and such that \(\mu(\tau)=1\), so it gives the diffeomorphism \(k^{\prime}:T^{2}\times(0;2)\to T^{2}\times(0;2)\), \(k^{\prime}(y,t)=k(y,\mu(t))\). Notice also that \(h(M)=M\), whence we have a diffeomorphism \(k=\psi\circ h\circ\psi^{-1}:T^{2}\times(0;2)\to T^{2}\times(0;2)\) such that \(k(T^{2}\times 1)=T^{2}\times\tau\). Define the following diffeomorphism \(h^{\prime}\in\mathcal{D}(L_{p,q})\) by
\[h^{\prime}(x)=\begin{cases}\psi^{-1}\circ k^{\prime}\circ k\circ\psi(x),&x\in M,\\ h(x),&x\in C_{0}\cup C_{1}.\end{cases}\]
Then \(h^{\prime}\in\mathcal{D}^{fol}(\mathcal{F}_{p,q})\setminus\mathcal{D}^{fol}_{+}( \mathcal{F}_{p,q})\) coincides with \(h\) near \(C_{0}\cup C_{1}\) and leaves invariant \(g^{-1}(1)\).
**Theorem 3.4**.: 1) _Suppose \(p>2\) and \(q^{2}\neq\pm 1(\bmod\ p)\). Then we have the following commutative diagram of inclusions in which the arrows denoted by (w.)h.e. are (weak) homotopy equivalences (and new statements of this paper are written in bold):_
_In particular, each of those groups has the homotopy type of the disjoint union of two \(2\)-tori \(\mathcal{R}\sqcup\mathcal{R}\)._
2) _For all other values of \((p,q)\), i.e. when either \(\sigma_{-}\) or \(\sigma_{+}\) is defined, the inclusion of pairs_
\[\big{(}\widehat{\mathcal{L}}_{p,q},\mathcal{L}_{p,q}\big{)}\ \subset\ \big{(} \mathcal{D}^{fol}(\mathcal{F}_{p,q}),\ \mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})\big{)} \tag{3.5}\]
_is a weak homotopy equivalence. In particular, each path components of those groups is also homotopy equivalent to a \(2\)-torus \(\mathcal{R}\)._
Proof.: 1) The left vertical and all horizontal arrows explained at the diagram. They imply that the right vertical arrow is also a weak homotopy equivalence.
2) By Lemma 3.3, \(\widehat{\mathcal{L}}_{p,q}\neq\mathcal{L}_{p,q}\) and \(\mathcal{D}^{fol}(\mathcal{F}_{p,q})\neq\mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})\). Hence \(\widehat{\mathcal{L}}_{p,q}\), resp. \(\mathcal{D}^{fol}(\mathcal{F}_{p,q})\), is a union of two adjacent classes by the group \(\mathcal{L}_{p,q}\), resp. \(\mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})\). By 1) the inclusion \(j:\mathcal{L}_{p,q}\subset\mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})\) is a weak homotopy equivalence, i.e. \(j:\pi_{0}\mathcal{L}_{p,q}\to\pi_{0}\mathcal{D}^{fol}_{+}(\mathcal{F}_{p,q})\) is a bijection, and \(j\) yields a weak homotopy equivalence between the corresponding identity path components. Hence so must be the inclusion of pairs (3.5).
**Structure of the paper.** SS4 collects preliminary results about left-right actions of groups of diffeomorphisms. SS6 is devoted to properties of fiberwise homogeneous functions on the total spaces of vector bundles. In particular, in SS6 we prove Theorem 6.7 including Theorem 2.1 as a particular case. The proof uses a famous Whitney lemma on \(\mathcal{C}^{\infty}\) even functions, see Lemma 4.3.
Further in SS7 we consider high-dimensional analogues of lens spaces: obtained by gluing unit disk bundles over some manifolds by some diffeomorphism of their boundaries (assuming that such a diffeomorphism exists). We prove there Theorem 7.7 including Theorem 3.1 as a particular case.
## 4. Preliminaries
### Morphisms of topological groups and monoids
Let \(j:A\to X\) be a topological inclusion, i.e. a homeomorphism of \(A\) onto some subset of \(X\). It will be convenient to say that \(A\) is a _(strong) deformation retract of \(X\) with respect to \(j\)_, if so is the image \(j(A)\). We will need the following simple lemmas.
**Lemma 4.1**.: _Let \(1\to A\stackrel{{\alpha}}{{\longrightarrow}}B\stackrel{{ p}}{{\rightsquigarrow}}C\to 1\) be a short exact sequence of continuous homomorphisms of topological groups in which \(\alpha\) is a topological embedding. Suppose there exists a continuous homomorphism \(\theta:C\to B\) being a section of \(p\), i.e. \(p\circ\theta=\mathrm{id}_{C}\)._
1. _Then the map_ \(\zeta:B\to A\times C\)_,_ \(\zeta(b)=\big{(}\theta(p(b^{-1}))\cdot b,\ p(b)\big{)}\)_, is a homeomorphism._
2. _Suppose that there exists a strong deformation retraction of_ \(C\) _into the unit_ \(e_{C}\) _of_ \(C\)_. Then_ \(A\) _is a strong deformation retract of_ \(B\) _with respect to the inclusion_ \(\alpha\)_. If, in addition,_
\(\theta(C)\) is contained in some subgroup \(B^{\prime}\subset B\), and denote \(A^{\prime}=\alpha^{-1}(A\cap B^{\prime})\). Then the pair \((A,A\cap B^{\prime})\) is a strong deformation retract of \((B,B^{\prime})\) via the inclusion \(\alpha\)._
Proof.: The first statement is trivial. Suppose \(H:C\times[0;1]\to C\) is a strong deformation retraction of \(C\) into \(e_{C}\), i.e. \(H_{0}=\operatorname{id}_{C}\), \(H_{t}(e_{C})=e_{C}\) for \(t\in[0;1]\), and \(H_{1}(C)=\{e_{gC}\}\). Then the map
\[G:B\times[0;1]\to B,\qquad G(b,t)=\theta\big{(}H(p(b^{-1}),1-t))\big{)}\cdot b, \tag{4.1}\]
is a strong deformation retraction of \(B\) onto \(A\), so \(G_{0}=\operatorname{id}_{B}\), \(H_{t}\) is fixed on \(A\) for \(t\in[0;1]\), and \(H_{1}(B)\subset A\).
Moreover, if \(\theta(C)\subset B^{\prime}\), then (4.1) shows that if \(b\in B^{\prime}\), then \(G(b,t)\in B^{\prime}\) as well. In other words, \(B^{\prime}\) is invariant under \(G\), and thus \(G\) induces a strong deformation retraction of \(B^{\prime}\) onto \(j(A\cap B^{\prime})\).
**Lemma 4.2**.: _Let \(\sigma:A\to B\) be a homomorphism of monoids, so \(\sigma(e_{A})=e_{B}\) and \(\sigma(aa^{\prime})=\sigma(a)\sigma(a^{\prime})\) for all \(a,a^{\prime}\in A\). Then for every invertible \(a\in A\), its image \(\sigma(a)\) is invertible in \(B\)._
Proof.: We have that \(e_{B}=\sigma(e_{A})=\sigma(aa^{-1})=\sigma(a)\sigma(a^{-1})\). Therefore, \((\sigma(a))^{-1}=\sigma(a^{-1})\).
**Smooth even functions.** We will need the following statement.
**Lemma 4.3** (H. Whitney, [41]).: _Let \(a>0\), \(I_{a}=[-a;a]\) and \(\gamma\in\mathcal{C}^{\infty}(I_{a},\mathbb{R})\) be an even function, that is \(\gamma(-t)=\gamma(t)\) for all \(t\in I_{a}\). Then there exists a unique \(\phi\in\mathcal{C}^{\infty}([0;a],\mathbb{R})\) such that \(\gamma(t)=\phi(t^{2})\) for all \(t\in\mathbb{R}\)._
_Moreover, let \(\mathcal{C}^{\infty}_{\operatorname{ev}}(I_{a},\mathbb{R})\) be the space of even \(\mathcal{C}^{\infty}\) functions on \(I_{a}\). Then the correspondence \(\gamma\to\phi\) is an \(\mathbb{R}\)-linear map \(\delta:\mathcal{C}^{\infty}_{\operatorname{ev}}(I_{a},\mathbb{R})\to\mathcal{ C}^{\infty}([0;a],\mathbb{R})\) being continuous between the \(\mathcal{C}^{\infty}\) topologies._
Sketch of proof.: Notice that \(\phi:[0;a]\to\mathbb{R}\) is uniquely defined by \(\phi(t)=\gamma(\sqrt{|t|})\), \(t\in[0;a]\). Such formula implies that \(\phi\) is \(\mathcal{C}^{\infty}\) only for \(t\neq 0\) and the main difficulty was to show that \(\phi\) is in fact \(\mathcal{C}^{\infty}\) near \(0\) as well. Smoothness of \(\phi\) is proved by Whitney, and we need to derive the second statement about continuity of \(\delta\).
Uniqueness of \(\phi\) easily implies that \(\delta\) is an \(\mathbb{R}\)-linear map. Hence it suffices to verify continuity of \(\delta\) at the zero function \(0\) only. One easily checks by induction that the identity \(\gamma(t)=\phi(t^{2})\) implies that for every for \(r\geq 1\) there exists some constant \(A_{r}>0\) depending only on \(r\) such that:
\[\sup_{t\in[0;a]}\bigl{|}\tfrac{d^{r}\phi}{dt^{i}}(t)\bigr{|}\leq A_{r}\sum_{i=0 }^{r+1}\sup_{t\in I_{a}}\bigl{|}\tfrac{d^{i}\gamma}{dt^{i}}(t)\bigr{|},\]
This implies, that for each \(r\geq 0\) the map \(\delta\) is continuous from \(\mathcal{C}^{r+1}\) topology of \(\mathcal{C}^{\infty}_{\operatorname{ev}}(I_{a},\mathbb{R})\) into \(\mathcal{C}^{r}\) topology of \(\mathcal{C}^{\infty}([0;a],\mathbb{R})\). Hence it is continuous between \(\mathcal{C}^{\infty}\) topologies.
For example, notice that \(\gamma^{\prime}(t)=2t\phi^{\prime}(t^{2})\). Hence \(\gamma^{\prime}(0)=0\), and thus, by Hadamard lemma, \(\gamma^{\prime}(t)=t\delta(t)\), where \(\delta(t)=\int\limits_{0}^{1}\gamma^{\prime\prime}(st)ds\). Therefore \(\phi^{\prime}(t^{2})=\frac{1}{2}\delta(t)=\frac{1}{2}\int\limits_{0}^{1} \gamma^{\prime\prime}(st)ds\), and
\[\sup_{t\in[0;a]}|\phi^{\prime}(t)|\leq\tfrac{1}{2}\sup_{t\in I_{a}}|\gamma^{ \prime\prime}(t)|.\]
We leave the other cases \(r\geq 2\) for the reader.
## 5. Stabilizers of functions under actions of diffeomorphisms groups
### Left-right actions of diffeomorphisms groups
Let \(M\) be a smooth compact manifold. Then the product \(\mathcal{D}(\mathbb{R})\times\mathcal{D}(M)\) of groups of diffeomorphisms naturally acts from the _left_ on the space of smooth functions \(\mathcal{C}^{\infty}(M,\mathbb{R})\) by the following action map, see e.g. [31, SS3] for detailed discussions and references:
\[\mu:\mathcal{D}(\mathbb{R})\times\mathcal{D}(M)\times\mathcal{C}^{\infty}(M, \mathbb{R})\to\mathcal{C}^{\infty}(M,\mathbb{R}),\qquad\qquad\mu(\phi,h,f)=\phi \circ f\circ h^{-1}.\]
It is usually referred as _left-right_. Notice also that \(\mathcal{D}(M)=\mathrm{id}_{\mathbb{R}}\times\mathcal{D}(M)\) is a subgroup of \(\mathcal{D}(\mathbb{R})\times\mathcal{D}(M)\), and thus we have an induced (still _left_) action
\[\mu:\mathcal{D}(M)\times\mathcal{C}^{\infty}(M,\mathbb{R})\to\mathcal{C}^{ \infty}(M,\mathbb{R}),\qquad\qquad\qquad\mu(h,f)=f\circ h^{-1},\]
which will be referred below as _right_1. In terms of \(*\)arrows1 these actions \(*\)move down2 the horizontal arrow \(f\):
Footnote 1: In fact, it will become a right action if we define it by \(\mu(h,f)=f\circ h\). However, for certain reason it is convenient to mean the terms \(\bullet\)left-right\(*\) and \(\bullet\)right\(*\) to refer the sides at which we append the corresponding diffeomorphisms to \(f\).
Let \(f\in\mathcal{C}^{\infty}(M,\mathbb{R})\) and \(X\subset M\) be a subset. Denote by \(\mathcal{D}(M,X)\) the subgroup of \(\mathcal{D}(M)\) consisting of diffeomorphisms fixed on \(X\). Then one can define its stabilizers with respect to the above left-right and right actions:
\[\mathcal{S}_{\mathsf{LR}}(f,X) :=\{(\phi,h)\in\mathcal{D}(\mathbb{R})\times\mathcal{D}(M,X)\mid \phi\circ f\circ h^{-1}=f\},\] \[\mathcal{S}_{\mathsf{R}}(f,X) :=\{h\in\mathcal{D}(M,X)\mid f\circ h^{-1}=f\}.\]
If \(X\) is empty, then we will omit it from the notation.
Notice also that we have a canonical inclusion \(j:\mathcal{S}_{\mathsf{R}}(f,X)\subset\mathcal{S}_{\mathsf{LR}}(f,X)\), \(j(h)=(\mathrm{id}_{\mathbb{R}},h)\), and therefore sometimes we will identify \(\mathcal{S}_{\mathsf{R}}(f,X)\) with its image \(\{\mathrm{id}_{\mathbb{R}}\}\times\mathcal{S}_{\mathsf{R}}(f,X)\).
**Example 5.1**.: Define \(\phi,f,h,q:\mathbb{R}\to\mathbb{R}\) by \(\phi(t)=4t\), \(f(x)=x^{2}\), \(h(x)=2x\), \(q(x)=-x\). Then the identities \(4x^{2}=(2x)^{2}\) and \((-x)^{2}=x^{2}\), mean respectively that \(\phi\circ f=f\circ h\) and \(f\circ q=f\), so \((\phi,h)\in\mathcal{S}_{\mathsf{LR}}(f)\) and \(q\in\mathcal{S}_{\mathsf{R}}(f)\).
We will describe now the geometrical meaning of the above right- and left-right stabilizers in terms of level sets of \(f\). Let \(\mathcal{F}=\{f^{-1}(t)\}_{t\in\mathbb{R}}\) be the partition of \(M\) into the level sets of \(f\), \(\phi\in\mathcal{D}(\mathbb{R})\), and \(h\in\mathcal{D}(M)\). Then it is easy to see that the following conditions are equivalent:
* \(f\circ h^{-1}=f\), so \(h\in\mathcal{S}_{\mathsf{R}}(f)\);
* \(f=f\circ h\);
* \(h(f^{-1}(t))=f^{-1}(t)\) for every \(t\in\mathbb{R}\), i.e. \(h\) is an \(\mathcal{F}\)-leaf preserving diffeomorphism.
Similarly, the following conditions are also equivalent:
* \(\phi\circ f\circ h^{-1}=f\), so \(h\in\mathcal{S}_{\mathsf{LR}}(f)\);
* \(\phi\circ f=f\circ h\);
* \(h(f^{-1}(t))=f^{-1}(\phi(t))\) for all \(t\in\mathbb{R}\).
In particular, condition (LR3) implies that \(h\) is an \(\mathcal{F}\)-foliated diffeomorphism. However, if \(h^{\prime}\) is an \(\mathcal{F}\)-foliated diffeomorphism, then a priori we can not claim that there exists \(\phi\in\mathcal{D}(\mathbb{R})\) such that \((\phi,h^{\prime})\in\mathcal{S}_{\mathsf{LR}}(f)\).
Let us clarify relations between stabilizers and foliated diffeomorphisms.
**Lemma 5.2**.: _Let \(f\in\mathcal{C}^{\infty}(M,\mathbb{R})\) and \(A:=f(M)\subset\mathbb{R}\) be its image. Then for every \((\phi,h)\in\mathcal{S}_{\mathsf{LR}}(f)\), we have that \(\phi(A)=A\). Moreover, if \(\psi\in\mathcal{D}(\mathbb{R})\) is another diffeomorphism such that \(\psi=\phi\) on \(A\), then \((\psi^{\prime},h)\in\mathcal{S}_{\mathsf{LR}}(f)\)._
Proof.: Let \(a\in A\), so \(a=f(x)\) for some \(x\in M\). Then \(\phi(a)=\phi\circ f(x)=f\circ h(x)\in A\), so \(\phi(A)\subset A\). Applying the same arguments for the inverse \((\phi^{-1},h^{-1})=(\phi,h)^{-1}\in\mathcal{S}_{\mathsf{LR}}(f)\), we obtain that \(\phi^{-1}(A)\subset A\), whence \(\phi(A)=A\). Moreover, if \(\psi=\phi\) on \(A\), then for each \(x\in M\) we have that \(\psi\circ f(x)=\phi\circ f(x)=f\circ h(x)\). Thus \((\psi,h)\in\mathcal{S}_{\mathsf{LR}}(f)\) as well.
Let \(f\in\mathcal{C}^{\infty}(M,\mathbb{R})\) and \(X\subset M\) a subset. Then its image \(A:=f(M)\subset\mathbb{R}\) is a finite union of closed intervals, and therefore a submanifold of \(\mathbb{R}\). Let \(\mathcal{D}(A)\) be the group of diffeomorphisms of \(A\). In view of Lemma 5.2, it is more natural to regard \(f\) as a surjective function \(f\in\mathcal{C}^{\infty}(M,A)\), and study the stabilizer of \(f\) with respect to the corresponding <<left-right>>\) action of \(\mathcal{D}(A)\times\mathcal{D}(M)\). Therefore it is more convenient to consider instead of \(\mathcal{S}_{\mathsf{LR}}(f)\) the following groups:
\[\mathcal{S}_{\mathsf{LR}}^{img+}(f,X) :=\{(\phi,h)\in\mathcal{D}^{+}(A)\times\mathcal{D}(M,X)\mid\phi \circ f=f\circ h\},\] \[\mathcal{S}_{\mathsf{LR}}^{img}(f,X) :=\{(\phi,h)\in\mathcal{D}(A)\times\mathcal{D}(M,X)\mid\phi\circ f =f\circ h\}.\]
Then \(\mathcal{S}_{\mathsf{R}}(f)\equiv\mathrm{id}_{A}\times\mathcal{S}_{\mathsf{R} }(f)\) is a subgroup of \(\mathcal{S}_{\mathsf{LR}}^{img+}(f)\subset\mathcal{S}_{\mathsf{LR}}^{img}(f)\), and the properties (LR1)-(LR3) still hold for \(\mathcal{S}_{\mathsf{LR}}^{img}(f)\) instead of \(\mathcal{S}_{\mathsf{LR}}(f)\). Moreover, we have the following lemma.
**Lemma 5.3**.: _Let \(p_{1}:\mathcal{D}(A)\times\mathcal{D}(M)\to\mathcal{D}(A)\) and \(p_{2}:\mathcal{D}(A)\times\mathcal{D}(M)\to\mathcal{D}(M)\) be natural projections, so \(p_{1}(\phi,h)=\phi\) and \(p_{2}(\phi,h)=h\). Then the following statements hold._
1. \(p_{1}\) _and_ \(p_{2}\) _are homomorphisms;_
2. \(\mathcal{S}_{\mathsf{R}}(f)=\mathcal{D}^{lp}(\mathcal{F})\)_;_
3. \(\ker(p_{1}|_{\mathcal{S}_{\mathsf{LR}}^{img}(f)})=\mathrm{id}_{A}\times \mathcal{S}_{\mathsf{R}}(f)\)_;_
4. \(p_{2}(\mathcal{S}_{\mathsf{LR}}^{img}(f))\subset\mathcal{D}^{fol}(\mathcal{F})\)_;_
5. _if_ \((\phi,h),(\phi^{\prime},h)\in\mathcal{S}_{\mathsf{LR}}(f)\)_, then_ \(\phi=\phi^{\prime}\)_, i.e. the map_ \(p_{2}|_{\mathcal{S}_{\mathsf{LR}}^{img}(f)}:\mathcal{S}_{\mathsf{LR}}^{img}(f )\to\mathcal{D}(M)\) _is injective._
Proof.: Statements (1) and (3) are trivial, (2) coincides with (R3), while (4) coincides with (LR3).
(5) Notice that the kernel of \(p_{2}|_{\mathcal{S}_{\mathsf{LR}}^{img}(f)}\) consists of pairs of the form \((\phi,\mathrm{id}_{M})\in\mathcal{S}_{\mathsf{LR}}^{img}(f)\) satisfying \(\phi\circ f=f\), whence \(\phi\in\mathcal{D}(A)\) is fixed on the image \(A\) of \(f\), so \(\phi=\mathrm{id}_{A}\).
In particular, we get the following commutative diagram in which the upper row is exact at the two middle items (though \(p_{2}\) is not necessarily surjective):
(5.1)
**Condition** (J).: In [27] the author gave wide conditions on \(f\) under which the inclusion \(j:\mathcal{S}_{\mathsf{R}}(f)\subset\mathcal{S}^{img+}_{\mathsf{LR}}(f)\) is a homotopy equivalence, see Theorem 5.6 below. We will briefly recall that result. Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a \(\mathcal{C}^{\infty}\) function. Say that \(f\)_has property_ (J) _at \(u\in\mathbb{R}^{n}\)_ if there exists a neighborhood \(U\) of \(u\) and \(\mathcal{C}^{\infty}\) functions \(\alpha_{1},\ldots,\alpha_{n}:U\to\mathbb{R}\) such that \(f(x)-f(u)=\sum\limits_{i=1}^{n}f^{\prime}_{x_{i}}(x)\,\alpha_{i}(x),\,x\in U\).
**Example 5.4**.: Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a \(\mathcal{C}^{\infty}\) homogeneous function of order \(k\), that is \(f(tx)=t^{k}f(x)\) for all \(t\geq 0\) and \(x\in\mathbb{R}^{n}\). Then, by the well known Euler identity
\[f=f^{\prime}_{x_{1}}\,\tfrac{x_{1}}{n}+\cdots+f^{\prime}_{x_{n}}\,\tfrac{x_{n }}{n},\]
so \(f\) has property (J) at the origin \(0\in\mathbb{R}^{n}\).
Equivalently, denote by \(\mathcal{C}^{\infty}_{u}(\mathbb{R}^{n})\) the algebra of germs at \(u\) of \(\mathcal{C}^{\infty}\) functions \(\mathbb{R}^{n}\to\mathbb{R}\), and for each \(f\in\mathcal{C}^{\infty}_{u}(\mathbb{R}^{n})\) let \(J_{u}(f)\) be the ideal in \(\mathcal{C}^{\infty}_{u}(\mathbb{R}^{n})\) generated by partial derivatives of \(f\). Then property (J) means that the germ at \(u\) of the function \(g(x)=f(x)-f(u)\) belongs to \(J_{u}(f)\). The following simple lemma shows that the property (J) does not depend on local coordinates at \(u\), and so it is well defined for functions on manifolds.
**Lemma 5.5**.: _Let \(h=(h_{1},\ldots,h_{n}):(\mathbb{R}^{m},v)\to(\mathbb{R}^{n},u)\) be a germ of a \(\mathcal{C}^{\infty}\) map, and \(h^{*}:\mathcal{C}^{\infty}_{u}(\mathbb{R}^{n})\to\mathcal{C}^{\infty}_{v}( \mathbb{R}^{m})\), \(h^{*}(f)=f\circ h\), be the induced algebra homomorphism. Then_
\[J_{v}(h^{*}(f))\subset h^{*}\big{(}J_{u}(f)\big{)}.\]
_In particular, if \(h\) is a diffeomorphism, then \(f\in J_{u}(f)\) iff \(h^{*}(f)\in J_{u}(h^{*}(f))\)._
Proof.: Let \(y=(y_{1},\ldots,y_{m})\in\mathbb{R}^{m}\). Then for each \(k=1,\ldots,m\) we have that
\[\tfrac{\partial(h^{*}(f))}{\partial y_{k}}(y)=\tfrac{\partial(f\circ h)}{ \partial y_{k}}(y)=\sum\limits_{i=1}^{n}\tfrac{\partial f}{\partial x_{i}}(h(y ))\ \tfrac{\partial h_{i}}{\partial y}(y)=\sum\limits_{i=1}^{n}h^{*}(f^{\prime}_{x_ {i}})(y)\ \tfrac{\partial h_{i}}{\partial y}(y),\]
i.e. partial derivatives of \(h^{*}(f)\) are linear combinations with smooth coefficients of the images of partial derivatives \(h^{*}(f^{\prime}_{x_{i}})\) of \(f\). Hence \(J_{v}(h^{*}(f))\subset h^{*}\big{(}J_{u}(f)\big{)}\).
**Theorem 5.6** ([27, Theorem 1.3]).: _Let \(M\) be a smooth connected compact manifold, \(f\in\mathcal{C}^{\infty}(M,\mathbb{R})\), and \(A=f(A)\) be its image. Suppose that_
* \(f\) _takes a constant value at each connected component of_ \(\partial M\) _and has only finitely many critical values;_
* \(f\) _has property_ (J) _at every critical point_ \(x\in M\)_._
_Then \(\operatorname{id}_{A}\times\mathcal{S}_{\mathsf{R}}(f)\) is a strong deformation retract of \(\mathcal{S}^{img+}_{\mathsf{LR}}(f)\)._
Remarks to the proof.: Let \(Q=\{a_{0},a_{1},\cdots,a_{n}\}\subset A\) be all the values of \(f\) at critical points and boundary components of \(M\). Condition (a) implies that \(Q\) is finite. Let also \(\mathcal{D}(A,Q)\) be the subgroup of \(\mathcal{D}(A)\) fixed on \(Q\). One easily checks that if \((\phi,h)\in\mathcal{S}^{img+}_{\mathsf{LR}}(f)\), then \(\phi\in\mathcal{D}(A,Q)\), i.e. \(p_{1}(\mathcal{S}^{img+}_{\mathsf{LR}}(f))\subset\mathcal{D}(A,Q)\). Moreover, each \(\phi\in\mathcal{D}(A,Q)\) preserves orientation of \(A\), and \(\mathcal{D}(A,Q)\) is convex in \(\mathcal{C}^{\infty}(A,A)\), and therefore contractible.
The main technical result of [27, Theorem 1.3] is based on condition (b) and claims that there exists a continuous homomorphism \(\theta:\mathcal{D}(A,Q)\to\mathcal{S}^{img+}_{\mathsf{LR}}(f)\) such that \(\phi\circ f=f\circ\theta(\phi)\). In other words, \(\theta\) is a section of \(p_{1}:\mathcal{S}^{img+}_{\mathsf{LR}}(f)\to\mathcal{D}(A,Q)\), so the statement of this theorem follows from Lemma 4.1.
## 6. Fiberwise homogeneous functions on vector bundles
In this section we prove Theorem 6.7 including Theorem 2.1 as a particular case.
**Smooth homogeneous functions.** Recall that a continuous function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is _homogeneous_ of some degree \(k\geq 0\), whenever \(f(tv)=t^{k}f(v)\) for all \(t>0\) and \(v\in\mathbb{R}^{n}\). The following simple lemma seems to be a classical result. However, the author did not find precise references, though there are discussions on this question in internet resources, e.g. [5].
**Lemma 6.1**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a homogeneous \(\mathcal{C}^{k}\) function of integer degree \(k\geq 0\). Then \(f\) is a homogeneous polynomial of degree \(k\)._
Proof.: If \(k=0\), then the assumption that \(f\) is continuous and homogeneous of degree \(0\), means that \(f(tv)=t^{0}f(v)=f(v)\) for all \(t\geq 0\) and \(v\in\mathbb{R}^{n}\). In particular, for \(t=0\) we get that \(f(v)=f(0)\) for all \(v\in\mathbb{R}^{n}\), so \(f\) is constant.
For \(k>0\), applying \(\frac{\partial}{\partial v_{i}}\), \(i=1,\ldots,n\), to both sides of the identity \(f(tv)=t^{k}f(v)\), we get \(tf^{\prime}_{v_{i}}(tv)=t^{k}f^{\prime}_{v_{i}}(tv)\), whence \(f^{\prime}_{v_{i}}(tv)=t^{k-1}f^{\prime}_{v_{i}}(tv)\), i.e. every partial derivative \(f^{\prime}_{v_{i}}\) is a homogeneous \(C^{k-1}\) function. Then, by induction on \(k\), \(f^{\prime}_{v_{i}}\) must be a homogeneous polynomial of degree \(k-1\). Hence, by Euler's identity, \(f(v)=\frac{1}{n}\sum_{i=1}^{n}v_{i}f^{\prime}_{v_{i}}(v)\) is a homogeneous polynomial of degree \(k\).
More generally, let \(p\colon E\to B\) be a smooth vector bundle of rank \(n\) over a manifold \(B\). We will identify \(B\) with the image of the zero section of \(p\) in \(E\). A continuous function \(f\colon E\to\mathbb{R}\) is called _homogeneous_ of degree \(k\geq 0\) (or _\(k\)-homogeneous_) whenever \(f(tx)=t^{k}f(x)\) for all \(t\geq 0\) and \(x\in E\).
Assume that \(f\colon E\to\mathbb{R}\) is homogeneous. Then \(B\subset f^{-1}(0)\), however, in general, this inclusion in non-strict. We will say that \(f\) is _definite_, whenever \(f(x)>0\) for all \(x\in E\setminus B\). In particular, \(B=f^{-1}(0)\).
**Corollary 6.2**.: _Let \(p:E=B\times\mathbb{R}^{n}\to B\) be a trivial vector bundle, and \(f\colon E\to\mathbb{R}\) be a \(k\)-homogeneous \(\mathcal{C}^{r}\) function with \(k\leq r\). Then_
\[f(y,v_{1},\ldots,v_{n})=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{n}\in\{0, \ldots,k\}\\ i_{1}+\cdots+i_{n}=k\end{subarray}}a_{i_{1},\ldots,i_{n}}(y)\,v_{1}^{i_{1}} \cdots v_{n}^{i_{n}}, \tag{6.1}\]
_where each \(a_{i_{1},\ldots,i_{n}}:B\to\mathbb{R}\) is some \(\mathcal{C}^{r-k}\) function._
Proof.: Notice that \(k\)-homogeneity of \(f\) means that for every \((y,u)\in B\times\mathbb{R}^{n}\) and \(t\geq 0\) we have that \(f(y,tu)=t^{k}f(y,u)\). Now the proof follows from the Lemma 6.1.
**Corollary 6.3**.: _Let \(f\colon E\to\mathbb{R}\) be a \(\mathcal{C}^{\infty}\) homogeneous function of degree \(k\geq 2\). Then \(f\) satisfies condition_ (J) _at each \(x\in B\)._
Proof.: Due to Lemma 5.5 one can pass to a local trivialization of \(p\) at \(x\), and thus assume that \(p:E\to B\) is a trivial vector bundle. Since \(k\geq 2\), every point of \(B\) is critical. Then by (6.1) and <<Euler's identity with respect to the coordinates \(v_{1},\ldots,v_{n}\)>> we have that \(f(y,v)=\frac{1}{k}\sum_{i=1}^{n}v_{i}f^{\prime}_{v_{i}}(y,v)\). Hence \(f\) satisfies property (J) at \(x\).
**Stabilizers of homogeneous functions.** Let \(p\colon E\to B\) be a smooth vector bundle of rank \(n\) over a manifold \(B\). The following statement is a particular case of [27, Theorem 1.3]. However, the proof is essentially simpler and it additionally takes to account the behavior of diffeomorphisms on \(\partial\mathbf{T}\).
Let \(f\colon E\to[0;+\infty)\) be a \(\mathcal{C}^{\infty}\) function such that \(1\) is a regular value of \(f\), so \(\mathbf{T}=f^{-1}([0;1])\) is a submanifold of \(E\), and \(\mathcal{F}=\{f^{-1}(t)\mid t\in[0;1]\}\) be the partition of \(\mathbf{T}\) into level sets of \(f\).
**Lemma 6.4** (c.f. [27, Theorem 1.3]).: _Let \(k\geq 1\), \(f\colon E\to\mathbb{R}\) be a \(k\)-homogeneous \(\mathcal{C}^{\infty}\) function, \(\mathbf{T}=f^{-1}([0;1])\), and \(\mathcal{F}=\{f^{-1}(t)\mid t\in[0;1]\}\) be the partition of \(\mathbf{T}\) into level sets of \(f\). Then there exists a homomorphism \(\theta:\mathcal{D}^{+}([0;1])\to\mathcal{D}^{fol}(\mathcal{F},\partial\mathbf{ T})\) such that \(\phi\circ f=f\circ\theta(h)\) for all \(\phi\in\mathcal{D}^{+}([0;1])\). In particular, \((\phi,\theta(\phi))\in\mathcal{S}^{img}_{\mathsf{LR}}(f)\)._
_If \(\mathbf{T}\) is compact, then \(\theta\) is continuous, and the pair \(\big{(}\mathcal{S}_{\mathsf{R}}(f),\ \mathcal{S}_{\mathsf{R}}(f,\partial\mathbf{ T})\big{)}\) is a strong deformation retract of the pair \(\big{(}\mathcal{S}^{img}_{\mathsf{LR}}(f),\ \mathcal{S}^{img}_{\mathsf{LR}}(f,\partial\mathbf{ T})\big{)}\) with respect to the natural inclusion \(j:\mathcal{S}_{\mathsf{R}}(f)\equiv\{\mathrm{id}_{[0;1]}\}\times\mathcal{S}_{ \mathsf{R}}(f)\subset\mathcal{S}^{img}_{\mathsf{LR}}(f)\)._
Proof.: Let \(\phi\in\mathcal{D}^{+}([0;1])\), so \(\phi:[0;1]\to[0;1]\) is a \(\mathcal{C}^{\infty}\) function such that \(\phi(0)=0\), \(\phi(1)=1\), and \(\phi^{\prime}>0\). Then by the arguments of Hadamard lemma:
\[\phi(t)=\int\limits_{0}^{t}\phi^{\prime}(u)du=\big{|}_{\text{then }du=s\,dt}^{\text{replace }u=st,}\big{|}=t\underbrace{\int\limits_{0}^{1}\phi^{\prime}(st)dt}_{g_{\phi} (t)}=tg_{\phi}(t),\]
where \(g_{\phi}:[0;1]\to\mathbb{R}\) is \(\mathcal{C}^{\infty}\). Moreover, \(g_{\phi}(t)>0\) for all \(t\in[0;1]\) and \(g_{\phi}(0)=\phi^{\prime}(0)\).
Define the map
\[\theta:\mathcal{D}^{+}([0;1])\to\mathcal{C}^{\infty}(\mathbf{T},\mathbf{T}), \theta(\phi)(x)=\left[g_{\phi}(f(x))\right]^{1/k}x.\]
We claim that the image of \(\theta\) is contained in \(\mathcal{D}^{fol}(\mathcal{F})\), and the induced map \(\theta:\mathcal{D}^{+}([0;1])\to\mathcal{D}^{fol}(\mathcal{F})\) is the desired homomorphism.
1) First we show that \(\theta\) is a homomorphism of _monoids_ with respect to the natural composition of maps. Then by Lemma 4.2 it sends invertible elements to invertible, and therefore \(\theta(h)\) will be a diffeomorphism of \(\mathbf{T}\).
a) Indeed, if \(\phi=\mathrm{id}_{[0;1]}\), then \(g_{\phi}(t)\equiv 1\), whence \(\theta(\mathrm{id}_{[0;1]})=\mathrm{id}_{\mathbf{T}}\).
b) Furthermore, let \(\phi_{0},\phi_{1}\in\mathcal{D}^{+}([0;1])\) be two diffeomorphisms. Then \(\phi_{0}(t)=tg_{0}(t)\) and \(\phi_{1}(t)=tg_{1}(t)\) for unique \(\mathcal{C}^{\infty}\) functions \(g_{0},g_{1}:[0;1]\to\mathbb{R}\) such that
\[\theta(\phi_{0})(x)=\left[g_{0}(f(x))\right]^{1/k}x, \theta(\phi_{1})(x)=\left[g_{1}(f(x))\right]^{1/k}x.\]
Hence
\[\theta(\phi_{1})\circ\theta(\phi_{0})(x) =\theta(\phi_{1})\big{(}\big{[}g_{0}(f(x))\big{]}^{1/k}\ x\big{)}\] \[=\Big{[}g_{1}\Big{(}f\big{(}\big{[}g_{0}(f(x))\big{]}^{1/k}\ x \big{)}\Big{)}\Big{]}^{1/k}\cdot\left[g_{0}(f(x))\right]^{1/k}\ x=\] \[=\big{[}g_{1}\big{[}g_{0}(f(x))\cdot f(x)\big{]}\big{]}^{1/k}\cdot \left[g_{0}(f(x))\right]^{1/k}\ x.\]
On the other hand, \(\phi_{1}\circ\phi_{0}(t)=\phi_{0}(t)\cdot g_{1}(\phi_{0}(t))=t\cdot \underbrace{g_{0}(t)\cdot g_{1}(tg_{0}(t))}_{\hat{g}}\), whence
\[\theta\big{(}\phi_{1}\circ\phi_{0}\big{)}(x) =\left[\bar{g}(f(x))\right]^{1/k}\ x\] \[=\big{[}g_{0}(f(x))\cdot g_{1}\big{[}f(x)\cdot g_{0}(f(x))\big{]} \big{]}^{1/k}\ x=\theta(\phi_{1})\circ\theta(\phi_{0})(x).\]
2) Let us verify that \((\phi,\theta(\phi))\in\mathcal{S}_{\mathsf{LR}}(f)\). Indeed,
\[f\circ\theta(\phi)(x)=f\Big{(}\big{[}g_{\phi}\circ f(x)\big{]}^{1/k}\ x\Big{)}= (g_{\phi}\circ f(x))\cdot f(x)=\phi\circ f(x).\]
This implies that \((\phi,\theta(\phi))\in\mathcal{S}_{\mathsf{LR}}(f)\), and in particular, \(\theta(\phi)\in\mathcal{D}^{fol}(\mathcal{F})\).
3) Finally, note that \(g_{\phi}(1)=\phi(1)/1=1\), whence if \(x\in\partial{\bf T}=f^{-1}(1)\), then \(\theta(\phi)(x)=x\), so \(\theta(\phi)\) is fixed on \(\partial{\bf T}\). Thus, \(\theta(\phi)\in{\mathcal{D}}^{fol}({\mathcal{F}},\partial{\bf T})\).
4) Suppose \({\bf T}\) is compact. Then continuity of \(\theta\) directly follows from the formulas for \(\theta(h)\).
Consider the following short exact sequence, see (5.1):
\[1\to{\mathcal{S}}_{\sf R}(f)\xrightarrow{j}{\mathcal{S}}^{img}_{\sf LR}(f) \xrightarrow{p_{2}}{\mathcal{D}}^{+}([0;1])\to 1.\]
Then the map \(\widehat{\theta}:{\mathcal{D}}^{+}([0;1])\to{\mathcal{S}}^{img}_{\sf LR}(f)\), \(\widehat{\theta}(\phi)=(\phi,\theta(\phi))\), is a continuous section of \(p_{1}\) and its image is contained in \({\mathcal{S}}^{img}_{\sf LR}(f,\partial{\bf T})\). Moreover, \({\mathcal{D}}^{+}([0;1])\) is contractible into \(\operatorname{id}_{[0;1]}\) via the homotopy
\[H:{\mathcal{D}}^{+}([0;1])\times[0;1]\to{\mathcal{D}}^{+}([0;1]),\qquad\qquad H (\phi,t)=(1-t)\phi+t\operatorname{id}_{[0;1]}.\]
Then, by Lemma 4.1, the pair \(({\mathcal{S}}_{\sf R}(f),{\mathcal{S}}_{\sf R}(f,\partial{\bf T}))\) is a strong deformation retract of
\[({\mathcal{S}}^{img}_{\sf LR}(f),{\mathcal{S}}^{img}_{\sf LR}(f,\partial{\bf T }))\]
with respect to the inclusion map \(j\).
**Foliated maps for homogeneous functions.** Let \(p_{i}\colon E_{i}\to B_{i}\) for \(i=0,1\) be a smooth vector bundle of some rank \(n_{i}\) over a compact manifold \(B_{i}\) and \(f_{i}\colon E_{i}\to[0;+\infty)\) be a \({\mathcal{C}}^{\infty}\) function such that \(1\in{\mathbb{R}}\) is a regular value for \(f\). Then \({\bf T}_{i}:=f_{i}^{-1}([0;1])\) is a submanifold of \(E_{i}\). Let also
\[{\mathcal{F}}_{i}=\{f_{i}^{-1}(t)\mid t\in[0;1]\}\]
be the partition of \({\bf T}_{i}\) into the level sets of \(f\).
A map \(h:{\bf T}_{0}\to{\bf T}_{1}\) will be called \(({\mathcal{F}}_{0},{\mathcal{F}}_{1})\)_-foliated_ if for each leaf \(\omega\) of \({\mathcal{F}}_{0}\) its image \(h(\omega)\) is contained in some leaf of \({\mathcal{F}}_{1}\). Denote by \({\mathcal{C}}^{\infty}({\mathcal{F}}_{0},{\mathcal{F}}_{1})\) the subset of \({\mathcal{C}}^{\infty}({\bf T}_{0},{\bf T}_{1})\) consisting of \(({\mathcal{F}}_{0},{\mathcal{F}}_{1})\)-foliated maps. If \({\bf T}_{0}\) and \({\bf T}_{1}\) are diffeomorphic, then we denote by \({\mathcal{D}}({\mathcal{F}}_{0},{\mathcal{F}}_{1})\) the _set_ of all \(({\mathcal{F}}_{0},{\mathcal{F}}_{1})\)-foliated diffeomorphisms.
**Lemma 6.5**.: _Suppose the functions \(f_{0}\) and \(f_{1}\) are \(2\)-homogeneous and definite._
\((1)\) _Then for each \(({\mathcal{F}}_{0},{\mathcal{F}}_{1})\)-foliated map \(h:{\bf T}_{0}\to{\bf T}_{1}\) there exists a unique function \(\sigma(h):[0;1]\to[0;1]\) such that \(\sigma(h)\circ f_{0}=f_{1}\circ h\). Moreover, if \(h\) is \({\mathcal{C}}^{\infty}\), then \(\sigma(h)\) is also \({\mathcal{C}}^{\infty}\) and the correspondence \(h\mapsto\sigma(h)\) is a continuous map \(\sigma:{\mathcal{C}}^{\infty}({\mathcal{F}}_{0},{\mathcal{F}}_{1})\to{ \mathcal{C}}^{\infty}([0;1],[0;1])\)._
\((2)\) _If \({\bf T}_{0}\) and \({\bf T}_{1}\) are diffeomorphic, then \(\sigma(h)\in{\mathcal{D}}^{+}([0;1])\) is a preserving orientation diffeomorphism of \([0;1]\) for each \(h\in{\mathcal{D}}({\mathcal{F}}_{0},{\mathcal{F}}_{1})\)._
\((3)\) _Suppose \(p_{0}=p_{1}\) is the same vector bundle and \(f_{0}=f_{1}:E_{0}=E_{1}\to{\mathbb{R}}\), so \({\bf T}_{0}={\bf T}_{1}\), \({\mathcal{F}}_{0}={\mathcal{F}}_{1}\), and thus \({\mathcal{D}}({\mathcal{F}}_{0},{\mathcal{F}}_{1})={\mathcal{D}}^{fol}({ \mathcal{F}}_{0})\). Then \(\sigma:{\mathcal{C}}^{\infty}({\mathcal{F}}_{0},{\mathcal{F}}_{0})\to{ \mathcal{C}}^{\infty}([0;1],[0;1])\) is a (continuous) homomorphism of monoids. In particular, \(\sigma({\mathcal{D}}^{fol}({\mathcal{F}}_{0}))\subset{\mathcal{D}}^{+}([0;1])\)._
Proof.: (1) The assumption that \(h:{\bf T}_{0}\to{\bf T}_{1}\) is \(({\mathcal{F}}_{0},{\mathcal{F}}_{1})\)-foliated means that for each leaf \(A_{t}=f_{0}^{-1}(t)\), \(t\in[0;1]\) of \({\mathcal{F}}_{0}\) there exist a unique leaf \(B_{t^{\prime}}=f_{1}^{-1}(t^{\prime})\) such that \(h(A_{t})\subset B_{t^{\prime}}\). Define the function \(\phi:[0;1]\to[0;1]\) by \(\phi(t)=t^{\prime}\). Then \(\phi\circ f_{0}=f_{1}\circ h\), and then we put \(\sigma(h)=\phi\). Hence such \(\sigma(h)\) is unique.
Suppose \(h\) is \({\mathcal{C}}^{\infty}\). To prove that \(\sigma(h)\) is \({\mathcal{C}}^{\infty}\) as well we will use Whitney Lemma 4.3. Fix any point \(w\in\partial{\bf T}_{0}\), so \(f_{0}(w)=1\), and consider the path \(\eta:[-1;1]\to{\bf T}_{0}\) given by \(\eta(t)=tw\). Then
\[f_{0}\circ\eta(t)=f_{0}(tw)=t^{2}f_{0}(w)=t^{2}. \tag{6.2}\]
In particular, for each \(t\in[-1;1]\) the points \(\eta(t)\) and \(\eta(-t)\) belong to the same leaf \(f_{0}^{-1}(t^{2})\) of \({\mathcal{F}}_{0}\).
Define the following \(\mathcal{C}^{\infty}\) function \(\gamma=f_{1}\circ h\circ\eta:[-1,1]\to\mathbb{R}\). Since \(h\) sends level sets of \(f_{0}\) to level sets of \(f_{1}\), the points \(h(\eta(t))\) and \(h(\eta(-t))\) also belong to the same level set of \(f_{1}\), that is
\[\gamma(-t)=f_{1}\circ h\circ\eta(-t)=f_{1}\circ h\circ\eta(t)=\gamma(t).\]
Hence \(\gamma\) is an even function. Then, due to Lemma 4.3, there exists a unique \(\mathcal{C}^{\infty}\) function \(\phi:[0;1]\to[0;1]\) such that \(\gamma(t)=\phi(t^{2})\). Thus
\[f_{1}\circ h\circ\eta(t)=\gamma(t)=\phi(t^{2})\stackrel{{\eqref {eq:2}}}{{=}}\phi\circ f_{0}\circ\eta(t). \tag{6.3}\]
We claim that \(f_{1}\circ h\equiv\phi\circ f_{0}\) on all of \(\mathbf{T}_{0}\). Indeed, let \(y\in\mathbf{T}_{0}\) and \(f_{0}(y)=s\) for some \(s\in[0;1]\). Then \(f_{0}(\eta(\sqrt{s}))=s\) as well, so the points \(y\) and \(\eta(\sqrt{s})\) belong to the same level set \(f_{0}^{-1}(s)\) of \(f_{0}\). Hence \(h(y)\) and \(h(\eta(\sqrt{s}))\) also belong to the same level set of \(f_{1}\). Therefore
\[f_{1}\circ h(y)=f_{1}\circ h\circ\eta(\sqrt{s})\stackrel{{\eqref {eq:2}}}{{=}}\phi\circ f_{0}\circ\eta(\sqrt{s})=\phi\circ f_{0}(y). \tag{6.4}\]
Continuity of the correspondence \(h\mapsto\phi\) also follows from Lemma 4.3.
(2) Suppose \(h\) is a diffeomorphism. We should show that then \(\phi\) is a preserving orientation diffeomorphism of \([0;1]\). It suffices to check the following three properties of \(\gamma\).
* \(\gamma^{\prime}(t)>0\) for \(t>0\);
* \(\gamma(t)=t^{2}\delta(t)\) for some \(\mathcal{C}^{\infty}\) function \(\delta:[0;1]\to\mathbb{R}\) such that \(\delta(0)>0\).
Assuming they are proved, let us show that \(\phi\in\mathcal{D}^{+}([0;1])\). Indeed, it is evident that \(\phi(0)=0\) and \(\phi(1)=1\), so we need to verify that \(\phi^{\prime}>0\). Since \(\phi(s)=\gamma(\sqrt{s})\) for \(s\in[0;1]\), we get from (a) that \(\phi^{\prime}(s)>0\) for \(s\in(0;1]\). Furthermore, as \(\phi(0)=0\), we have by the Hadamard lemma that \(\phi(s)=s\psi(s)\) for some \(\mathcal{C}^{\infty}\) function \(\psi:[0;1]\to\mathbb{R}\) such that \(\psi(0)=\phi^{\prime}(0)\). Then, due to (b), \(t^{2}\delta(t)=\gamma(t)=\phi(t^{2})=t^{2}\psi(t^{2})\), whence \(\delta(t)=\psi(t^{2})\), and therefore \(\phi^{\prime}(0)=\psi(0)=\delta(0)>0\).
**Proof of (a).** Note that if \(\zeta:[a,b]\to\mathbf{T}_{0}\) is a smooth path not passing through \(B\), then for each \(t\in[a;b]\) the following conditions are equivalent:
* \(t\) is a regular point of the function \(f_{0}\circ\zeta:[a;b]\to\mathbb{R}\);
* \(\zeta\) is transversal at \(t\) to the level set \(f_{0}^{-1}(f_{0}\circ\zeta(t))\).
Since \(f_{0}\circ\eta(t)=t^{2}\), it follows that the function \(f_{0}\circ\eta:[-1,1]\to\mathbb{R}\) has a unique critical point \(t=0\), and thus \(\eta\) is transversal to all level sets \(f_{0}^{-1}(s)\) for all \(s\in(0;1]\).
On the other hand, as \(h\) sends level sets of \(f_{0}\) to level sets of \(f_{1}\), the path \(h\circ\eta\) must also be transversal to the level sets of \(f_{1}\) for all \(t\neq 0\). More precisely, if \(\eta\) is transversal at some \(t\neq 0\) to the leaf \(L=f_{0}^{-1}(t^{2})\), then \(h\circ\eta\) is transversal at \(t\) to the leaf
\[h(L)=f_{1}^{-1}(f_{1}\circ h\circ\eta(t))=f_{1}^{-1}(\gamma(t)).\]
This implies that _the function \(\gamma=f_{1}\circ h\circ\eta\) has a unique critical point \(t=0\)_.
Moreover, since \(\gamma(-1)=\gamma(1)=1\) and \(\gamma(0)=0\), we see that \(\gamma\) decreases on \([-1,0)\) and increases on \((0;1]\). In particular, \(\gamma^{\prime}(t)<0\) for \(t<0\) and \(\gamma^{\prime}(t)>0\) for \(t>0\).
**Proof of (b).** Let \(q_{0}:U\times\mathbb{R}^{n}\to U\) and \(q_{1}:V\times\mathbb{R}^{n}\to V\) be vector bundle trivializations of \(p\) over open neighborhood \(U\) of \(\eta(0)\) and \(V\) of \(h(\eta(0))\) respectively. Then \(h\) has local representation as an embedding \(h=(h_{0},h_{1},\ldots,h_{n}):U\times\mathbb{R}^{n}\supset W\to V\times \mathbb{R}^{n}\) of some open neighborhood \(W\) of \(\eta(0)\) in \(U\times\mathbb{R}^{n}\), where \(h_{0}:W\to V\) is a \(\mathcal{C}^{\infty}\) map, and each \(h_{i}:W\to\mathbb{R}\) is a \(\mathcal{C}^{\infty}\) function.
One can assume that the image of \(\eta\) is contained in \(W\), and \(w=(x,\bar{u})\in U\times\mathbb{R}^{n}\) are the coordinates of \(w\), so \(\eta(t)=(x,t\bar{u})\).
Then by Corollary 6.2 the restriction of \(f_{1}\) to each fiber of \(p\) is a \(2\)-homogeneous polynomial, i.e. \(f_{1}(y,v)=\sum\limits_{1\leq i,j\leq n}a_{ij}(y)v_{i}v_{j}\) for all \((y,v)\in V\times\mathbb{R}^{n}\). One can assume that \(a_{ij}=a_{ji}\), so we get a symmetric matrix \(A(y)=(a_{ij})\), such that \(f_{1}(y,v)=vA(y)v^{t}\), where \(v=(v_{1},\ldots,v_{n})\), and \(v^{t}\) is the transposed vector column. Hence
\[\gamma(t)=f_{1}\circ h\circ\eta(t)=f_{1}\big{(}h(x,t\bar{u})\big{)}=\bar{h}(x, t\bar{u}))\cdot A(h_{0}(x,t\bar{u}))\cdot\bar{h}^{t}(x,t\bar{u})),\]
where \(\widehat{h}=(h_{1},\ldots,h_{n})\). By the Hadamard lemma \(h_{i}(x,u)=\sum\limits_{j=1}^{n}h_{ij}(x,u)u_{j}\) for some \(\mathcal{C}^{\infty}\) functions \(h_{ij}:W\to\mathbb{R}\) such that \(h_{ij}(x,0)=\frac{\partial}{\partial u_{j}}h_{i}\). Let \(J(x,u)=(h_{ij}(x,u))\) the matrix whose \(i\)th row consists of the functions \(h_{i1},\ldots,h_{in}\). Notice that \(J(x,0)\) is the Jacobi matrix of the composition
\[(x\times\mathbb{R}^{n})\cap W\stackrel{{ h}}{{\longrightarrow}}V \times\mathbb{R}^{n}\stackrel{{ p_{2}}}{{\longrightarrow}} \mathbb{R}^{n}.\]
Since \(h\) is a diffeomorphism leaving invariant zero section \(B\), it follows that \(J(x,0)\) is non-degenerate.
One can also write \(\widehat{h}(x,u)=J(x,u)u^{t}\), whence
\[\gamma(t)=t\bar{u}\cdot J(x,t\bar{u})^{t}\cdot A(h_{0}(x,t\bar{u}))\cdot J(x, t\bar{u})\cdot t\bar{u}^{t},\]
which implies that
\[\delta(t)=\gamma(t)/t^{2}=uJ(x,tu)^{t}\cdot A(h_{0}(x,t\bar{u}))\cdot J(x,tu)u^ {t}.\]
Therefore
\[\delta(0)=u\underbrace{J(x,0)^{t}\cdot A(h_{0}(x,0))\cdot J(x,0)}_{B(x)}u^{t }=uB(x)u^{t}.\]
The assumption that \(f_{1}\) is definite means that \(A(h_{0}(x,0))\) is non-degenerate, whence \(B\) is symmetric and non-degenerate as well, and therefore \(\delta(0)=uB(x)u^{t}>0\).
(3) Suppose \(p_{0}=p_{1}\) is the same vector bundle and \(f_{0}=f_{1}:E_{0}=E_{1}\to\mathbb{R}\). Since \(\mathrm{id}_{[0;1]}\circ f_{0}=f_{0}\circ\mathrm{id}_{\mathbf{T}}\), it follows from uniqueness of \(\sigma\), that \(\sigma(\mathrm{id}_{\mathbf{T}})=\mathrm{id}_{[0;1]}\). Moreover, if \(h,h^{\prime}\in\mathcal{D}^{fol}(\mathcal{F}_{0})\), then
\[\sigma(h^{\prime})\circ\sigma(h)\circ f_{0}=\sigma(h^{\prime})\circ f_{0} \circ h=f_{0}\circ h^{\prime}\circ h=\sigma(h^{\prime}\circ h)\circ f_{0}. \tag{6.5}\]
Now uniqueness of \(\sigma\) implies that \(\sigma(h^{\prime})\circ\sigma(h)=\sigma(h^{\prime}\circ h)\). Thus \(\sigma\) is a homomorphism of monoids.
Example 6.6.: Notice that the assumption that \(f_{0}\) and \(f_{1}\) are of the same homogeneity order is essential for \(\phi\) to be a diffeomorphism. Indeed, let \(f_{0},f_{1}:B\times\mathbb{R}^{n}\to[0;1]\) be given by \(f_{0}(w,x)=\|x\|^{2}\) and \(f_{1}(x)=\|x\|^{4}\). Then \(f_{0}\) is \(2\)-homogeneous, while \(f_{1}\) is \(4\)-homogeneous, \(\mathcal{F}_{0}=\mathcal{F}_{1}\), and \(\mathbf{T}_{0}=f_{0}^{-1}([0;1])=f_{1}^{-1}([0;1])=\mathbf{T}_{1}\). Let also \(h=\mathrm{id}_{B\times\mathbb{R}^{n}}\). Then \(h\) is a \((\mathcal{F}_{0},\mathcal{F}_{1})\)-foliated diffeomorphism, while \(\phi:[0;1]\to[0;1]\), \(\phi(t)=t^{2}\), is a unique \(\mathcal{C}^{\infty}\) map satisfying \(\phi\circ f_{0}=f_{1}\circ h\). However \(\phi\) is not a diffeomorphism.
It seems plausible that Lemma 6.5 holds for definite homogeneous functions of the same degree \(>2\), but one needs an analogue of Whitney Lemma 4.3 for <<evenness of higher order>>.
**Fiberwise definite \(2\)-homogeneous functions.** Let \(p\colon E\to B\) be a smooth vector bundle of rank \(n\) over a compact manifold \(B\), \(f\colon E\to\mathbb{R}\) be a definite \(2\)-homogeneous \(\mathcal{C}^{\infty}\) function, \(\mathbf{T}=f^{-1}([0;1])\), and \(\mathcal{F}=\{f^{-1}(t)\mid t\in[0;1]\}\) the partition of \(\mathbf{T}\) into level sets of \(f\). Then we have two homomorphisms \(\theta:\mathcal{D}^{+}([0;1])\to\mathcal{D}^{fol}(\mathcal{F})\) and \(\sigma:\mathcal{D}^{fol}(\mathcal{F})\to\mathcal{D}^{+}([0;1])\), defined in Lemmas 6.4 and 6.5 respectively, such that \(\sigma(h)\circ f=f\circ h\) and \(\phi\circ f=f\circ\theta(\phi)\) for all \(h\in\mathcal{D}^{fol}(\mathcal{F})\) and \(\phi\in\mathcal{D}^{+}([0;1])\), so we have well-defined homomorphisms
\[\begin{split}\widehat{\sigma}:\mathcal{D}^{fol}(\mathcal{F})\to \mathcal{S}^{img}_{\mathsf{LR}}(f),&\widehat{\sigma}(h)=( \sigma(h),h),\\ \widehat{\theta}:\mathcal{D}^{+}([0;1])\to\mathcal{S}^{img}_{ \mathsf{LR}}(f),&\widehat{\theta}(\phi)=(\phi,\theta(\phi)). \end{split} \tag{6.6}\]
Notice that the foliation described in Theorem 2.1 corresponds to the definite homogeneous function \(f:S^{1}\times\mathbb{R}^{2}\to\mathbb{R}\), \(f(w,x,y)=x^{2}+y^{2}\), on the trivial vector bundle \(p:S^{1}\times\mathbb{R}^{2}\to S^{1}\) of rank \(2\) over the circle. Therefore Theorem 2.1 is a particular case of the following:
**Theorem 6.7**.: _The homomorphism_
\[p_{2}:\mathcal{S}_{\mathsf{LR}}(f)\to\mathcal{D}^{fol}(\mathcal{F}), p_{2}(\phi,h)=h,\]
_induces an isomorphism of the following short exact sequences:_
(6.7)
_and its inverse is \(\widehat{\sigma}\). Moreover, the pair \(\big{(}\mathcal{D}^{lp}(\mathcal{F}),\mathcal{D}^{lp}(\mathcal{F},\partial \mathbf{T})\big{)}\) is a strong deformation retract of \(\big{(}\mathcal{D}^{fol}(\mathcal{F}),\mathcal{D}^{fol}(\mathcal{F},\partial \mathbf{T})\big{)}\)._
Proof.: Evidently, \(p_{2}\circ\widehat{\sigma}(h)=h\) for all \(h\in\mathcal{D}^{fol}(\mathcal{F})\), in particular, \(p_{2}\) is surjective. Since it is also injective, it follows that \(p_{2}\) and \(\widehat{\sigma}\) are mutually inverse isomorphisms of topological groups. Moreover, by Lemma 5.3, \(\mathcal{S}_{\mathsf{R}}(f)=\mathcal{D}^{lp}(\mathcal{F})\), and \(p_{2}\circ j=\mathrm{id}_{\mathcal{S}_{\mathsf{R}}(f)}\). It is also evident that
\[p_{2}(\mathcal{S}^{img}_{\mathsf{LR}}(f,\partial\mathbf{T}))=\mathcal{D}( \mathcal{F},\partial\mathbf{T}).\]
Now, by Lemma 6.4, the pair \(\big{(}\mathcal{S}_{\mathsf{R}}(f),\mathcal{S}_{\mathsf{R}}(f,\partial \mathbf{T})\big{)}\) is a strong deformation retract of the pair \(\big{(}\mathcal{S}^{img}_{\mathsf{LR}}(f),\mathcal{S}^{img}_{\mathsf{LR}}(f, \partial\mathbf{T})\big{)}\) with respect to the inclusion \(j:\mathcal{S}_{\mathsf{R}}(f)\subset\mathcal{S}^{img}_{\mathsf{LR}}(f)\). Hence the pair \(\big{(}\mathcal{D}^{lp}(\mathcal{F}),\mathcal{D}^{lp}(\mathcal{F},\partial \mathbf{T})\big{)}\) is a strong deformation retract of \(\big{(}\mathcal{D}^{fol}(\mathcal{F}),\mathcal{D}^{fol}(\mathcal{F},\partial \mathbf{T})\big{)}\).
## 7 High-dimensional analogues of lens spaces
In this section we prove Theorem 7.7 including Theorem 3.1 as a particular case.
For \(i=0,1\) let \(p_{i}\colon E_{i}\to B_{i}\) be a smooth vector bundle of some rank \(n_{i}\) over a compact manifold \(B_{i}\) and \(f_{i}\colon E_{i}\to\mathbb{R}\) be a definite \(k_{i}\)-homogeneous \(\mathcal{C}^{\infty}\) function for some \(k_{i}>0\). Put
\[\mathbf{T}_{i}:=f_{i}^{-1}([0;1]),\qquad\quad\mathbf{U}_{i}:=f_{i}^{-1}\big{(} [0;1)\big{)}=\mathbf{T}_{i}\setminus\partial\mathbf{T}_{i},\qquad\quad\mathbf{ L}:=\mathbf{U}_{0}\sqcup\mathbf{U}_{1}.\]
Denote by \(\mathcal{F}_{i}=\{f_{i}^{-1}(t)\mid t\in[0;1]\}\) the partition of \(\mathbf{T}_{i}\) into the level sets of \(f\).
**Lemma 7.1**.: _Suppose there is a diffeomorphism \(\psi:\partial\mathbf{T}_{0}\to\partial\mathbf{T}_{1}\). Then the map_
\[\xi:\mathbf{U}_{0}\setminus B_{0}\to\mathbf{U}_{1}\setminus B_{1},\qquad\qquad \xi(x)=\sideset{{}_{\widehat{\sigma}}}{{}^{\widehat{\sigma}}}{\sum}\sqrt{1-f_{0 }(x)}\cdot\psi\big{(}x/\sideset{{}_{\widehat{\sigma}}}{{}^{\widehat{\sigma}}} \big{)}, \tag{7.1}\]
_is a diffeomorphism such that_
\[f_{0}(x)+f_{1}(\xi(x))=1,\ x\in\mathbf{U}_{0}\setminus B_{0}, \tag{7.2}\]
_which is the same as \(\xi(f_{0}^{-1}(t))=f_{1}^{-1}(1-t)\) for all \(t\in(0;1)\), so \(\xi\) is a \((\mathcal{F}_{0},\mathcal{F}_{1})\)-foliated diffeomorphism._
Proof.: Let \(x\in\mathbf{U}_{0}\setminus B_{0}\), and \(y=x/\sqrt[k]{f_{0}(x)}\). Then
\[f_{0}(y)=f_{0}\big{(}x/\sqrt[k]{f_{0}(x)}\big{)}=f_{0}(x)/\sqrt[k]{f_{0}(x)}=1,\]
i.e. \(y\in\partial\mathbf{T}_{0}\), so \(\xi\) is a well defined map. Moreover, since \(\psi(y)\in\partial\mathbf{T}_{1}\), \(f_{1}\big{(}\psi(y)\big{)}=1\), and therefore
\[f_{1}(\xi(x))=f_{1}\big{(}\sqrt[k]{1-f_{0}(x)}\cdot\psi(y)\big{)}=\big{(}1-f_{ 0}(x)\big{)}\cdot f_{1}\big{(}\psi(y)\big{)}=1-f_{0}(x).\qed\]
**Example 7.2**.: Notice that even if \(\partial\mathbf{T}_{0}\) and \(\partial\mathbf{T}_{1}\) are diffeomorphic, the bases \(B_{0}\) and \(B_{1}\) may have distinct topological type. The following example is inspired by surgery theory of manifolds. Fix any \(a,b\geq 0\) and let \(p_{0}:S^{a}\times\mathbb{R}^{b+1}\to S^{a}\) and \(p_{1}:S^{b}\times\mathbb{R}^{a+1}\to S^{b}\) be trivial vector bundles over spheres \(S^{a}\) and \(S^{b}\). Consider the following \(2\)-homogeneous functions \(f_{0}:S^{a}\times\mathbb{R}^{b+1}\to\mathbb{R}\) and \(f_{1}:S^{b}\times\mathbb{R}^{a+1}\to\mathbb{R}\) given by
\[f_{0}(x,u_{1},\ldots,u_{b+1})=\sum_{i=1}^{b+1}u_{i}^{2}, f_{1}(v,v_{1},\ldots,v_{a+1})=\sum_{i=1}^{a+1}v_{i}^{2}.\]
Then both \(f_{0}^{-1}(1)\) and \(f_{1}^{-1}(1)\) are diffeomorphic with \(S^{a}\times S^{b}\), though the bases \(S^{a}\) and \(S^{b}\) are not homeomorphic for \(a\neq b\).
**Remark 7.3**.: Recall that each lens space \(L_{\xi}\) is glued from two solid tori by some diffeomorphism \(\xi\) between their boundaries. Though \(L_{\xi}\) admits a smooth structure, such a definition has the following disadvantage: suppose we have a \(h:L_{\xi}\to M\) into some other manifold. Therefore, even if we know that the restriction of \(h\) on each tori \(\mathbf{T}_{i}\) is \(\mathcal{C}^{\infty}\), the full map \(h\) is not necessary even differentiable, and checking of its smoothness might be rather complicated. For that reason, in what follow we will glue our analogues of lens spaces by diffeomorphism between open sets, as in Lemma 7.1.
Suppose that there exists a diffeomorphism \(\psi:\partial\mathbf{T}_{0}\to\partial\mathbf{T}_{1}\). Let \(\mathbf{L}_{\xi}=\mathbf{U}_{0}\cup_{\xi}\mathbf{U}_{1}\) be the space obtained by gluing \(\mathbf{U}_{0}\) with \(\mathbf{U}_{1}\) via the diffeomorphism \(\xi:\mathbf{U}_{0}\setminus B_{0}\to\mathbf{U}_{1}\setminus B_{1}\) from Lemma 7.1. Let also \(p:\mathbf{L}\to\mathbf{L}_{\xi}\) be the corresponding quotient map and \(p_{i}:=p|_{\mathbf{U}_{i}}:\mathbf{U}_{i}\to\mathbf{L}_{\xi}\) be the restriction maps. Then it is evident that \(\mathbf{L}_{\xi}\) is a manifold, and each \(p_{i}\) is an open embedding. One can regard the pair \(\{p_{0},p_{1}\}\) as a \(\mathcal{C}^{\infty}\) atlas2 for \(\mathbf{L}_{\xi}\) with \(\xi\) being a transition map. Define the following function
Footnote 2: Usually an atlas of a manifold \(M\) is a collection of open embeddings \(\mathbb{R}^{n}\supset U_{i}\xrightarrow{\psi_{i}}M\), \(i\in\Lambda\), from open subsets of \(\mathbb{R}^{n}\) such that \(M=\cup_{i\in\Lambda}\psi_{i}(U_{i})\). However, all the theory of manifolds will not be changed if one extend a notion of an atlas allowing each \(U_{i}\) to be an open subset of some \(n\)-manifold such that the corresponding transition functions are smooth maps.
\[\hat{f}:\mathbf{L}\to[0;1],\qquad\hat{f}(x)=\begin{cases}f_{0}(x),&x\in \mathbf{U}_{0},\\ 1-f_{1}(x),&x\in\mathbf{U}_{1}.\end{cases}\]
It follows from (7.2) that \(\hat{f}(\xi(x))=\hat{f}(x)\) for all \(x\in\mathbf{U}_{0}\setminus B_{0}\), whence \(\hat{f}\) yields a well-defined \(\mathcal{C}^{\infty}\) function \(f:\mathbf{L}_{\xi}\to[0;1]\) such that \(\hat{f}=f\circ p\). It will be convenient to define the following
diffeomorphism \(q:[0;1]\to[0;1]\), \(q(t)=1-t\). Then we get the following commutative diagram:
**Example 7.4**.: If \(E_{i}=S^{1}\times\mathbb{R}^{2}\to S^{1}\), \(i=0,1\), are trivial vector bundles of rank \(2\) over the circle, and the functions \(f_{0}=f_{1}:S^{1}\times\mathbb{R}^{2}\to\mathbb{R}\) coincide and are given by the formula \(f_{0}(w,x,y)=x^{2}+y^{2}\), then the space \(\mathbf{L}_{\xi}\) is the same as lens space \(L_{\xi}\), and one can assume that \(\mathbf{T}_{0}=f^{-1}([0;\frac{1}{2}])\) and \(\mathbf{T}_{1}=f^{-1}([\frac{1}{2};1])\). In particular, Theorem 3.4 is a particular case of Theorem 7.7 below.
**Lemma 7.5**.: _There exists a continuous homomorphism_
\[\theta:\mathcal{D}^{+}([0;1])\to\mathcal{D}^{fol}_{+}(\mathcal{F})\]
_such that \(\phi\circ f=f\circ\theta(h)\) for all \(\phi\in\mathcal{D}^{+}([0;1])\). Therefore \(\mathcal{S}_{\mathsf{R}}(f)\) is a strong deformation retract of \(\mathcal{S}^{img+}_{\mathsf{LR}}(f)\) with respect to the natural inclusion \(j:\mathcal{S}_{\mathsf{R}}(f)\subset\mathcal{S}^{img}_{\mathsf{LR}}(f)\)._
Proof.: By definition, \(f_{i}:E\to\mathbb{R}\), \(i=0,1\), is a \(k_{i}\)-homogeneous \(\mathcal{C}^{\infty}\) function, so it satisfies the property (J) at each \(x\in B_{i}\). Moreover, \(f_{0}\) and \(f_{1}\) are local representations of \(f\) in the <<local chart>>\(\mathbf{U}_{0}\) and \(\mathbf{U}_{1}\), it follows that \(f\)_has property_ (J) _at each \(x\in L_{0}\cup L_{1}\). Now the result follows from Theorem 5.6. However, we will give an explicit proof similar to the proof of Lemma 6.4.
1) Let \(\phi\in\mathcal{D}^{+}([0;1])\). It will be convenient to put \(\phi_{0}=\phi\). As \(\phi_{0}\big{(}[0;1)\big{)}=[0;1)\), it follows from Lemma 6.4 that the map
\[h_{0}:\mathbf{U}_{0}\to\mathbf{U}_{0}, h_{0}(x)=\sqrt[k0]{g_{0}(f_{0}(x))}\,x,\]
is a diffeomorphism satisfying \(\phi_{0}\circ f_{0}=f_{0}\circ h_{0}\), where \(g_{0}:[0;1]\to(0;+\infty)\) is a unique \(\mathcal{C}^{\infty}\) function such that \(\phi_{0}(t)=g_{0}(t)t\).
Similarly, we have that \(\phi\big{(}(0;1]\big{)}=(0;1]\). Consider another diffeomorphism \(\phi_{1}\in\mathcal{D}^{+}([0;1])\), \(\phi_{1}(t)=q\circ\phi\circ q(t)=1-\phi(1-t)\). Then \(\phi_{1}\big{(}[0;1)\big{)}=[0;1)\), whence, again by Lemma 6.4, the map
\[h_{1}:\mathbf{U}_{1}\to\mathbf{U}_{1}, h_{1}(x)=\sqrt[k0]{g_{1}(f_{1}(x))}\,x,\]
is a diffeomorphism satisfying \(\phi_{1}\circ f_{1}=f_{1}\circ h_{1}\), where \(g_{1}:[0;1]\to(0;+\infty)\) is a unique \(\mathcal{C}^{\infty}\) function such that \(\phi_{1}(t)=g_{1}(t)t\).
We claim that
\[\xi\circ h_{0}(x)=h_{1}\circ\xi(x),\quad x\in\mathbf{U}_{0}\setminus B_{0}. \tag{7.3}\]
This will imply that \(h_{0}\) and \(h_{1}\) yield a well-defined diffeomorphism
\[\theta(\phi):\mathbf{L}_{\xi}\to\mathbf{L}_{\xi} \theta(\phi)(x)=\begin{cases}h_{0}(p_{0}^{-1}(x)),&x\in p_{0}(\mathbf{U}_ {0}),\\ h_{1}(p_{1}^{-1}(x)),&x\in p_{1}(\mathbf{U}_{1}).\end{cases}\]
Moreover, we will also have that \(\phi\circ f=f\circ\theta(\phi)\).
Before proving (7.3) let us establish several simple identities:
\[\begin{split}\frac{h_{0}(x)}{\sqrt[k_{0}]{f_{0}\circ h_{0}(x)}}& =\frac{\sqrt[k_{0}]{g_{0}(f_{0}(x))}\,x}{\sqrt[k_{0}]{f_{0}\big{(} \sqrt[k_{0}]{g_{0}(f_{0}(x))}\,x\big{)}}}\\ &=\sqrt[k_{0}]{g_{0}(f_{0}(x))}\,x=\frac{x}{\sqrt[k_{0}]{f_{0}(x )}},\\ (g_{1}\circ q\circ f_{0}(x))\cdot(q\circ f_{0}(x))&= \phi_{1}\circ q\circ f_{0}(x)\\ &=q\circ\phi_{0}\circ f_{0}(x)=q\circ f_{0}\circ h_{0}(x).\end{split} \tag{7.4}\]
Then
\[\begin{split} h_{1}\circ\xi(x)&=\sqrt[k_{0}]{g_{1} \circ f_{1}\circ\xi(x)}\cdot\xi(x)\stackrel{{\eqref{eq:f_0}, \eqref{eq:f_1}}}{{\longrightarrow}}\\ &=\sqrt[k_{0}]{g_{1}\circ q\circ f_{0}(x)}\cdot\sqrt[k_{0}]{g_{0} (x)}\cdot\sqrt[k_{0}]{g_{0}(x)}\cdot\sqrt[k_{0}]{g_{0}(x)}\stackrel{{ \eqref{eq:f_0},\eqref{eq:f_1}}}{{\longrightarrow}}\\ &=\sqrt[k_{0}]{g_{0}\circ f_{0}\circ h_{0}(x)}\cdot\sqrt[k_{0}]{g_ {0}(x)}\stackrel{{\eqref{eq:f_0},\eqref{eq:f_1}}}{{\longrightarrow }}\xi\circ h_{0}(x).\end{split}\]
The correspondence \(\phi\mapsto\theta(\phi)\) is a homomorphism, since so is the correspondence \(\phi_{0}\mapsto h_{0}\). Continuity of \(\theta\) follows from the formulas for \(h_{0}\) and \(h_{1}\).
2) The proof that \(\mathcal{S}_{\mathsf{R}}(f)\) is a strong deformation retract of \(\mathcal{S}_{\mathsf{LR}}^{img+}(f)\) is literally the same as in step 4) of the proof of Lemma 6.4.
For \(t\in[0;1]\) let \(L_{t}:=f^{-1}(t)\), and \(\mathcal{F}=\{L_{t}\}_{t\in[0;1]}\) be the foliation on \(\mathbf{L}\) by level sets of \(f\). Then \(L_{t}=p(f_{0}^{-1}(t))\) for \(t\in[0;1)\) and \(L_{t}=p(f_{1}^{-1}(1-t))\) for \(t\in(0;1]\). In particular, \(L_{0}=p(B_{0})\) and \(L_{1}=p(B_{1})\) are <<singular leaves>>, and each \(L_{t}\), \(t\in(0;1)\), is diffeomorphic with \(\partial\mathbf{T}_{0}\). Let \(\mathcal{D}_{+}^{fol}(\mathcal{F})\) be the group of \(\mathcal{F}\)-foliated diffeomorphisms leaving invariant each \(L_{0}\) and \(L_{1}\).
**Lemma 7.6**.: _Suppose \(f_{0}\) and \(f_{1}\) are definite and \(2\)-homogeneous. Then there is a unique continuous homomorphism \(\sigma:\mathcal{D}^{fol}(\mathcal{F})\to\mathcal{D}([0;1])\) such that \(\sigma(h)\circ f=f\circ h\), i.e. \((\sigma(h),h)\in\mathcal{S}_{\mathsf{LR}}^{img}(f)\)._
_Also, \(h\in\mathcal{D}_{+}^{fol}(\mathcal{F})\) if and only if \(\sigma(h)\in\mathcal{D}^{+}([0;1])\), and moreover, \(\sigma(\mathcal{D}_{+}^{fol}(\mathcal{F}))=\mathcal{D}^{+}([0;1])\). Hence, if \(\mathcal{D}_{+}^{fol}(\mathcal{F})\neq\mathcal{D}^{fol}(\mathcal{F})\), then \(\sigma\) is surjective._
Proof.: 1) Let \(h\in\mathcal{D}^{fol}(\mathcal{F})\). We should construct a diffeomorphism \(\phi:[0;1]\to[0;1]\) satisfying \(\phi\circ f=f\circ h\) and then put \(\sigma(h)\). As \(h\) is \(\mathcal{F}\)-foliated, for each \(t\in[0;1]\) there exists a unique \(t^{\prime}\in[0;1]\) such that \(h(L_{t})=L_{t^{\prime}}\). Then, by condition (LR3), \(\phi\) must be defined by \(\phi(t)=t^{\prime}\). In particular, \(\phi\) is uniquely determined by \(h\), and we need to check that \(\phi\) is a diffeomorphism. Consider two cases.
a) First assume that \(h\in\mathcal{D}_{+}^{fol}(\mathcal{F})\), i.e. \(h(L_{i})=L_{i}\) for \(i=0,1\). Then \(p(\mathbf{U}_{i})=\mathbf{L}_{\xi}\setminus L_{1-i}\) is also invariant under \(h\). Hence \(h\) yields a \(\mathcal{F}_{i}\)-foliated diffeomorphism
\[h_{i}=p_{i}\circ h\circ p_{i}:\mathbf{U}_{i}\xrightarrow{p_{i}}p(\mathbf{U}_{i })\xrightarrow{h}p(\mathbf{U}_{i})\xrightarrow{p_{i}^{-1}}\mathbf{U}_{i}, \quad i=0,1,\]
such that \(h_{1}\circ\xi=\xi\circ h_{0}\). Then by Lemma 6.5, there exists a unique diffeomorphism \(\phi_{i}:[0;1)\to[0;1)\) such that \(\phi_{i}\circ f_{i}=f_{i}\circ h_{i}\). Thus we get the following commutative diagram:
It implies that \(\phi_{0}=q\circ\phi_{1}\circ q^{-1}\), i.e. \(\phi_{0}(t)=1-\phi_{1}(1-t)\) for \(t\in(0;1)\). Therefore, we get a well defined diffeomorphism \(\phi\in\mathcal{D}^{+}([0;1])\) given by either of the formulas:
\[\phi(t)=\begin{cases}\phi_{0}(t),&t\in[0;1),\\ 1-\phi_{1}(1-t),&t\in(0;1]\end{cases} \tag{7.6}\]
and satisfying \(\phi\circ f=f\circ h\).
b) Suppose that \(h(L_{i})=L_{1-i}\) for \(i=0,1\). Then \(h\) yields a \((\mathcal{F}_{i},\mathcal{F}_{1-i})\)-foliated diffeomorphism
\[h_{i}=p_{1-i}\circ h\circ p_{i}:\mathbf{U}_{i}\xrightarrow{p_{i}}p(\mathbf{U }_{i})\xrightarrow{h}p(\mathbf{U}_{1-i})\xrightarrow{p_{1-i}}\mathbf{U}_{1-i },\quad i=0,1,\]
such that \(\xi^{-1}\circ h_{0}=h_{1}\circ\xi\). Therefore, by Lemma 6.5, there exists a diffeomorphism \(\phi_{i}:[0;1)\to[0;1)\) such that \(\phi_{i}\circ f_{i}=f_{1-i}\circ h_{i}\). Thus we get the following commutative diagram:
In particular, \(q\circ\phi_{0}=\phi_{1}\circ q\), i.e. \(1-\phi_{0}(t)=\phi_{1}(1-t)\) for \(t\in(0;1)\). Therefore, we get a well defined reversing orientation diffeomorphism \(\phi\in\mathcal{D}([0;1])\) given by
\[\phi(t)=\begin{cases}q\circ\phi_{0}(t)=1-\phi_{0}(t),&t\in[0;1),\\ \phi_{1}\circ q(t)=\phi_{1}(1-t),&t\in(0;1].\end{cases} \tag{7.7}\]
and satisfying \(\phi\circ f=f\circ h\).
2) Due to Lemma 6.5 and formulas for \(\phi\), the correspondence \(h\mapsto\phi\) is a well-defined continuous map \(\sigma:\mathcal{D}^{fol}(\mathcal{F})\to\mathcal{D}([0;1])\), \(\sigma(h)=\phi\). Moreover, as in (6.5), uniqueness of \(\sigma\) implies that \(\sigma\) is a homomorphism. Also, by the construction \(\sigma(h)\in\mathcal{D}^{+}([0;1])\) if and only if \(h\in\mathcal{D}^{fol}_{+}(\mathcal{F})\).
Further notice, that due to Lemma 7.5, \(\sigma(\theta(\phi))=\phi\) for all \(\phi\in\mathcal{D}^{+}([0;1])\). Indeed, we have that \(\phi\circ f=f\circ\theta(\phi)\), and \(\sigma(\theta(\phi))\circ f=f\circ\theta(\phi)\). Then by uniqueness of \(\sigma\) we should have that \(\sigma(\theta(\phi))=\phi\). This implies that \(\sigma(\mathcal{D}^{fol}_{+}(\mathcal{F}))=\mathcal{D}^{+}([0;1])\).
Finally, suppose that \(\mathcal{D}_{+}^{fol}(\mathcal{F})\neq\mathcal{D}^{fol}(\mathcal{F})\). Then \(\sigma\) must map the unique adjacent class \(\mathcal{D}^{fol}(\mathcal{F})\setminus\mathcal{D}_{+}^{fol}(\mathcal{F})\) of \(\mathcal{D}^{fol}(\mathcal{F})\) by \(\mathcal{D}_{+}^{fol}(\mathcal{F})\), onto the unique adjacent class \(\mathcal{D}([0;1])\setminus\mathcal{D}^{+}([0;1])\) of \(\mathcal{D}([0;1])\) by \(\mathcal{D}^{+}([0;1])\). Hence \(\sigma\) is surjective.
**Theorem 7.7**.: _Suppose \(f_{0}\) and \(f_{1}\) are definite and \(2\)-homogeneous. Then the projection \(p_{2}:\mathcal{S}_{\mathsf{LR}}^{img}(f)\to\mathcal{D}^{fol}(\mathcal{F})\) induces an isomorphism of the following short exact sequences:_
(7.8)
_where \(\widehat{\sigma}(h)=(\sigma(h),h)\) is the inverse to \(p_{2}\), and \(\widehat{\theta}(\phi)=(\phi,\theta(\phi))\) is the inverse to \(p_{1}\). In particular, \(\mathcal{D}^{lp}(\mathcal{F})\) is a strong deformation retract of \(\mathcal{D}_{+}^{fol}(\mathcal{F})\)._
_Moreover, if \(\mathcal{D}_{+}^{fol}(\mathcal{F})\neq\mathcal{D}^{fol}(\mathcal{F})\), then we have isomorphism of another pair of short exact sequences:_
(7.9)
Proof.: Evidently, \(p_{2}\circ\widehat{\sigma}(h)=h\) for all \(h\in\mathcal{D}^{fol}(\mathcal{F})\), in particular, \(p_{2}\) is surjective and maps \(\mathcal{S}_{\mathsf{LR}}^{img+}(f)\) onto \(\mathcal{D}_{+}^{fol}(\mathcal{F})\). Since it is also injective (see Lemma 5.3), it follows that \(p_{2}\) and \(\widehat{\sigma}\) are mutually inverse isomorphisms of topological groups. Moreover, again by Lemma 5.3, \(\mathcal{S}_{\mathsf{R}}(f)=\mathcal{D}^{lp}(\mathcal{F})\) and \(p_{2}\circ j=\operatorname{id}_{\mathcal{S}_{\mathsf{R}}(f)}\). Hence \(p_{2}\) yields isomorphisms of the corresponding short exact sequences in (7.8) and (7.9).
By Lemma 7.5, \(\mathcal{S}_{\mathsf{R}}(f)\) is a strong deformation retract of \(\mathcal{S}_{\mathsf{LR}}^{img}(f)\) with respect to the inclusion \(j:\mathcal{S}_{\mathsf{R}}(f)\subset\mathcal{S}_{\mathsf{LR}}^{img}(f)\), whence \(\mathcal{D}^{lp}(\mathcal{F})\) is a strong deformation retract of \(\mathcal{D}_{+}^{fol}(\mathcal{F})\).
|
2310.04564 | ReLU Strikes Back: Exploiting Activation Sparsity in Large Language
Models | Large Language Models (LLMs) with billions of parameters have drastically
transformed AI applications. However, their demanding computation during
inference has raised significant challenges for deployment on
resource-constrained devices. Despite recent trends favoring alternative
activation functions such as GELU or SiLU, known for increased computation,
this study strongly advocates for reinstating ReLU activation in LLMs. We
demonstrate that using the ReLU activation function has a negligible impact on
convergence and performance while significantly reducing computation and weight
transfer. This reduction is particularly valuable during the memory-bound
inference step, where efficiency is paramount. Exploring sparsity patterns in
ReLU-based LLMs, we unveil the reutilization of activated neurons for
generating new tokens and leveraging these insights, we propose practical
strategies to substantially reduce LLM inference computation up to three times,
using ReLU activations with minimal performance trade-offs. | Iman Mirzadeh, Keivan Alizadeh, Sachin Mehta, Carlo C Del Mundo, Oncel Tuzel, Golnoosh Samei, Mohammad Rastegari, Mehrdad Farajtabar | 2023-10-06T20:01:33Z | http://arxiv.org/abs/2310.04564v1 | # ReLU Strikes Back:
###### Abstract
Large Language Models (LLMs) with billions of parameters have drastically transformed AI applications. However, their demanding computation during inference has raised significant challenges for deployment on resource-constrained devices. Despite recent trends favoring alternative activation functions such as GELU or SiLU, known for increased computation, this study strongly advocates for reinstating ReLU activation in LLMs. We demonstrate that using the ReLU activation function has a negligible impact on convergence and performance while significantly reducing computation and weight transfer. This reduction is particularly valuable during the memory-bound inference step, where efficiency is paramount. Exploring sparsity patterns in ReLU-based LLMs, we unveil the reutilization of activated neurons for generating new tokens and leveraging these insights, we propose practical strategies to substantially reduce LLM inference computation up to three times, using ReLU activations with minimal performance trade-offs.
## 1 Introduction
The widespread excitement surrounding Large Language Models (LLMs) has sparked significant interest in leveraging AI across diverse domains [5, 9, 6]. However, realizing the potential of LLMs is challenged by their significant computational and memory requirements during inference [60, 40, 3]. To enhance the inference efficiency1, various techniques have been explored, including quantization [12, 50], speculative decoding [41], pruning [53, 71], and weight sparsification [20, 15]. Among these techniques, achieving activation sparsity offers a compelling advantage by providing a favorable balance between accuracy and speedup, especially on modern hardware like GPUs [51].
Footnote 1: In this work, we use FLOPS as a proxy for inference efficiency. In Appendix B, we demonstrate that for LLMs with activation sparsity, FLOPS can serve as a good approximation of real-world efficiency due to the structure inherent in activation sparsity (e.g., skipping the entire row corresponding to zero activations).
Notably, employing the Rectified Linear Unit (ReLU) activation function [22] in neural networks is recognized for inducing sparse activations and has been adopted in various prior works [27, 44, 48, 69]. To reaffirm this property, we employ the OPT model [80], utilizing ReLU, and measure the sparsity of activations in the Feed Forward Network (FFN) between the fully connected layers. As illustrated in Fig. 0(a), all layers exhibit sparsity exceeding \(90\%\). On average, across all layers, this activation sparsity results in substantial weight transfer (I/O) savings between the GPU and CPU, impacting \(95\%\) of the rows of the down projection layer's weights (Fig. 0(b)). This reduction directly translates to computation savings, as for these rows, the result of the matrix multiplication operation will be zero. Furthermore, unlike unstructured sparsity (e.g., weight pruning), this type of sparsity is more hardware-friendly due to zeroing more extensive and structured chunks, such as rows or columns [36, 51]. For OPT models, this sparsity reduces the computation required for inference from 6.6G FLOPS (Floating Point Operations Per Second) to 4.5G FLOPS per token, resulting in a \(32\%\) computation saving (Fig. 0(c)).
However, a recent trend has emerged, favoring variations of ReLU that are smoother but more complex [28, 64]. These alternatives have gained popularity due to their slightly faster convergence and improved final accuracy [66]. For example, PaLM [9] and Llama models [73] adopt SiLU2[28, 17, 64], while MPT [56] and Falcon models [2] use GELU [28]. Nonetheless, as demonstrated in Fig. 1c, when we finetune several pretrained LLMs with different activation functions, their performance does not change significantly (within a specific model), while ReLU models require much less computation.
Footnote 2: To be more precise, the mentioned models use SwiGLU activation function, but in this work, we focus on the gating module that uses SiLU (Swish) function.
In this paper, we re-evaluate using ReLU for LLMs. We are motivated by the pragmatic consideration that, in many real-world applications and computational platforms capable of supporting sparse vector-matrix multiplications, computational efficiency during _inference_ outweighs the one-time computational cost incurred during training. We make the following contributions:
* We demonstrate that when trained from scratch, there is no significant difference in terms of performance between different activation functions. However, in terms of computational requirements during inference, ReLU activations prove significantly lighter (Sec. 3).
* Considering that many modern LLMs (e.g., Llama and Falcon) have been trained with non-ReLU activations, and it is not cost-effective to train them from scratch, we investigate fine-tuning these models with ReLU activations. We show that the models quickly regain their original performance across various reasoning and reading comprehension tasks (Sec. 4.1). Moreover, we show that by leveraging the activation sparsity of ReLU layers and inserting additional ReLU layers after normalization layers, we can further reduce inference FLOPS by up to threefold (Sec. 4.2).
* In addition to their computational benefits, we present two promising applications of activation sparsity that can inspire future work. Firstly, we demonstrate that LLMs with ReLU activations reuse a significant portion of already activated neurons during token generation, a phenomenon we term _aggregated sparsity_ (Sec. 5.1). This reusability leads to an inference speedup for speculative decoding (Sec. 5.2). Additionally, we show that studying the pre-activations of pretrained LLMs can guide the selection of unconventional activation functions (e.g., _shifted ReLU_), achieving up to 90% sparsity while maintaining performance similar to ReLU activation (Sec. 5.3).
Overall, we believe our work represents a significant step toward leveraging the potential of sparse activation functions for faster and more efficient inference in large language models.
Figure 1: **(a)** Activation Sparsity of different pretrained models: ReLU-based OPTs show significantly higher sparsity. **(b)** Zeroed out entries after ReLU save compute in large semi-structured chunks (e.g., rows). **(c)** Comparison of inference efficiency and performance of the different models with different activation functions after fine-tuning: The choice of activation function does not significantly impact the accuracy, as any of GELU, SiLU, or ReLU can be used on all three models and achieve the same level of accuracy as the original activation function. However, using ReLU can provide an additional benefit of leading to activation sparsity and faster inference.
## 2 Related Works
**Activation Functions in Transformers.** The original Transformer architecture [74] was proposed with the ReLU activation function [22], following the popularity of ReLU at the time. Later, several studies aimed to improve the ReLU activation function by increasing its smoothness [28] and/or including parameterized gating mechanisms, such as GELU, SiLU, GLU, and SwiGLU [11, 64]. Earlier studies demonstrated the benefits of these alternatives to ReLU for transformers [66, 57], but on a small scale (e.g., they trained models up to a couple of 100M parameters with at most 35B tokens, while in this work, we train 1B parameter models on more than 100B tokens). However, we believe the impact of activation functions on performance is marginal, following scaling laws [37, 31], which state that architectural changes do not significantly impact performance.
**Activation Sparsity.** Existing research shows increased sparsity reduces inference and training times [44, 25, 70, 81, 47, 51]. For instance, Jaszczur et al. [36] uses ReLU and added a controller to both promote and predict sparsity, while other works only use prediction modules to predict the activation masks [51]. We note that the mentioned works assume the pretrained model has already been using a sparse ReLU activation, and hence, only training a separate module to predict sparsity could be enough. However, we note that most LLMs pretrained these days do not use ReLU, and we aim to bridge this gap. Moreover, these works focus only on a single transformer architecture while we focus on various architectures so our findings can be practical. Finally, we show that there is no need to train a separate prediction module that complicates the computation graph, and using efficient ReLU layers can be enough.
**Speculative Decoding and Sparsity.** Speculative decoding combats latency under memory constraints using a smaller model for token prediction and a larger model for verification [46, 41]. Investigating its integration with sparsity, we find activation sparsity exhibits a temporal pattern, enhancing speculative decoding. We provide guidelines for parameter selection when incorporating sparsity.
We defer other lines of related works that are orthogonal to our work, such as model compression techniques, sparse attention methods, and Mixture of Experts (MoE) to Appendix A.
## 3 Does the Activation Function Impact Performance?
This section first overviews our experimental setup, including models, data, and evaluations. Then, by training various models from scratch with different activation functions, we demonstrate that changing activation functions minimally impacts performance. However, the impact on inference efficiency is substantial.
### Experimental setup
**Models.** We use open source pretrained models such as OPT [80], Llama (v1) [73], and Falcon [2] as they use different architectures and pretraining setup (e.g., attention/FFN structure/normalization, activation functions), allowing our study covers a wider range of models.
**Datasets.** We use the RefinedWeb dataset [59], for our pretraining in Sec. 3.2 and finetuning pretrained models in Sec. 4. We chose RefinedWeb because it is a high-quality subset of Common Crawl, which is often used in the pretraining phase of LLMs, including Llama, Falcon, and OPT. We also use the validation split of WikiText [54] for measuring the sparsity and recording preactivation distributions of various pretrained models. However, our conclusions hold for other datasets we have tested.
**Training and Finetuning.** For finetuning the pretrained models, we follow the original pretraining recipe, except we use a fixed learning rate of 1.5e-5 for Llama 7B, Falcon 7B, and OPT 6.7B models. In addition, we use the AdamW optimizer [52] for our finetuning with ZeRO stage 1 [62], where we shard the optimizer states across different GPUs. For pretraining OPT 1.3B models from scratch in Sec. 3.2, we follow the OPT training recipe.
**Evaluation.** For our _performance_ evaluation, we use the few-shot tasks from Language Model Evaluation Harness [23]. We select these tasks such that they can measure various abilities of the models (e.g., reading comprehension, reasoning, etc.), and we aim to be consistent with other works in the literature to make the comparison easier. Consistent with the other sections, we compare activation sparsity as a measure of _efficiency_.
Further details regarding the relationship between activation sparsity, FLOPS, and inference efficiency are discussed in Appendix B.
### Training from scratch: performance and sparsity
While the previous literature suggests that non-ReLU variants can improve the performance of transformers [66, 57], we argue the impact is marginal at best. To support our claim, we train the OPT 1.3B model from scratch on a hundred billion tokens of the RefinedWeb datasets with different activation functions, including ReLU, SiLU, and GELU. All these activation functions can be viewed as f(x) \(=x\cdot\sigma(\beta x)\), where \(\beta\) controls the gating part (smoothed cutoff threshold) of the activation function (see Fig. 1(a)). For \(\beta=1\), we will have SiLU(\(x\cdot\sigma(x)\)), and \(\beta=1.7\) is a good approximation of GELU. Finally, as \(\beta\rightarrow\infty\), the activation function becomes closer to ReLU. To further explore the spectrum of ReLU to SiLUwe add another one with \(\beta=8\).
As shown in the bottom row of Fig. 2, the performance of the models is very similar when using different activation functions. This is consistent with the scaling laws literature ([37, 31]), which suggests that the performance of sufficiently large models trained on sufficiently large data depends heavily on compute and data, not architectural details.
While the performance levels of the different activations are similar, their activation sparsity levels differ. Here, we define sparsity as the average sparsity level across all layers for each model. As shown in Fig. 1(c), as we transition from SiLUto ReLU (increasing \(\beta\)), the sparsity also increases. This results from the different gating thresholds, as ReLU drops significantly more values compared to GELU and SiLU (see Fig. 1(b)). In Appendix D, we illustrate the evolution of the pre-activation distribution throughout training.
Overall, the results support our initial claim: non-ReLU activations result in a negligible performance gain (if any) but a substantial loss in sparsity and efficiency. While, at times, the performance of GeLU or SiLU might be slightly higher, ReLU can match it with slightly longer training. We acknowledge that to compensate for the small gap in performance, we need to pay the one-time cost of longer training. However, in return, we get a significantly more sparsity.
## 4 Reluification
While in the previous section, we have seen that the performance does not depend on the activation function, we note that most of the available pretrained LLMs are trained with activation functions other than ReLU. Hence, to incorporate the computational benefits of ReLU activations at inference time, we perform various architectural surgeries and study the consequences of such changes.
Figure 2: **(top)** (a) Shapes of different gating functions over [-5, 5]; (b) Continuation of (a) where SiLU is comparably larger compared to others; (c) Sparsity of the FFN with different activations: increasing \(\beta\) increases sparsity. **(bottom)** when trained from scratch, OPT 1.3 B models using different activation functions achieve similar performance.
We present our findings about incorporating ReLU activations into the pretrained LLMs, a process we refer to as _replification_. More specifically, we show that replacing the activation functions of pretrained LLMs with ReLU is possible, and the performance can be recovered very rapidly during finetuning. Moreover, we show that we can exploit the sparse ReLU activations, and by inserting additional ReLU layers after normalization layers, we can improve inference efficiency, as FLOPS indicates. Finally, we show these modifications, which are easy to implement, lead to lighter models at inference time while maintaining comparable performance to the original pretrained models.
### Stage 1: replacing non-ReLU activations
The process of replification for different pretrained architectures is shown in Fig. 3. This process can be done in multiple stages, as we describe here. The first and more intuitive stage replaces non-ReLU activations with ReLU in the FFN layer. For the Falcon and Llama models, this means replacing GELU and SiLU, respectively. We note that since OPT models already use ReLU activations, we keep those unchanged. After finetuning on 30 billion tokens of the RefinedWeb, Fig. 4 shows that the modified models have significantly more sparsity in their activations.
In addition to the drastic improvement in activation sparsity, we can make several notable observations. First, while the shape of preactivation depends on the pretraining dynamics and architecture, in Fig. 5, we show that it does not change significantly during the relatively short finetuning stage. As a result, we can predict the activation sparsity before finetuning, knowing it will not change significantly. Later in Sec. 5.3 we build on this observation and propose shifting the preactivation values before applying ReLU and further increasing the activation sparsity. The stability of the preactivation distribution may suggest that the behavior of the network does not change while creating sparse representations. Indeed, we show that after replacing the activation function with ReLU, finetuned models quickly recover their performance in Fig. 6. We believe optimizing this process even further (e.g., using better finetuning data) is an exciting follow-up direction.
Figure 4: Activation sparsity of Falcon and Llama models improves significantly after _replification_.
Figure 3: Architectural surgeries for _replification_. In stage 1 we keep the existing ReLUs (in the case of OPT) or replace the activation function between up projection and down projections from GELU (Falcon) and SiLU (Llama) to ReLU. In stage 2, we insert new ReLUs after normalization layers.
### Stage 2: Pushing for more sparsity
In the previous stage, we replaced non-ReLU activations to gain more sparsity. This leads to the input of _down projection_ layer being sparse, roughly \(30\%\) of the total computation. However, there are other matrix-vector multiplications in the decoder layer of transformers besides the down projection. For instance, before the _up projection_ and _gate projections_ of FFN layer, and _QKV projections_ in the attention layer (see Fig. 3). Together, the mentioned matrix-vector multiplications consume about \(55\%\) of the total computation.
To this end, we utilize the fact that in modern transformer layers, the input to both the attention and FFN layers come from a normalization layer, e.g., LayerNorm [4] or RMSNorm [78]. These layers can be viewed as a specific form of MLP, where, instead of applying arbitrary learnable parameters, they learn to scale inputs. Therefore, we apply ReLU to obtain sparse activations after normalization layers which we call the _second stage_ of relufication in Fig. 3.
Tab. 1 shows that different stages of the relufication process do not significantly reduce zero-shot accuracy while using significantly less compute. The sparsity is broken down into three categories: up, down, and QKV projections. Notably, the input to QKV is less sparse than FFN projections, which opens an interesting avenue for future research. We note that the small gap in performance between the original vs. relufied models may be partially due to the finetuning process and not necessarily the activation function. Our finetuning is applied only for 30B and 50B tokens for stages 1 and 2, respectively. Putting into prospect and comparing it with 1T tokens of Llama, for example, this is equivalent to 3-5% of the original training duration. As discussed in Sec. 3.2, according to the scaling properties of LLMs, the gap will be further bridged by additional finetuning steps.
\begin{table}
\begin{tabular}{l|c c c|c|c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model (stage)**} & \multicolumn{3}{c|}{**Input Sparsity (\%)**} & \multicolumn{3}{c|}{**FLOPS**} & \multicolumn{6}{c}{**Zero-Shot Accuracy (\%)**} \\ \cline{2-13} & **QKV** & **DownProj** & **UpProj** & **(G)** & **Avg** & **Arc-E** & **Arc-C** & **Hellaswag** & **Bool** & **PIQA** & **LAMBADA** & **TriviaQA** & **WinoGrande** & **SciQ** \\ \hline OPT 1.3B & 0 & 96 & 0 & 1.3 & 50.7 & 57.3 & 22.9 & 41.3 & 57.0 & 71.8 & 56.0 & 6.1 & 58.9 & 84.6 \\ OPT 2.7B (s2) & 50 & 96 & 35 & 1.1 & 53.1 & 60.3 & 26.8 & 44.9 & 55.4 & 73.9 & 57.6 & 12.4 & 59.6 & 86.7 \\ OPT 2.7B (s2) & 0 & 96 & 0 & 1.8 & 54.5 & 61.3 & 29.2 & 45.8 & 57.6 & 74.2 & 61.4 & 12.3 & 60.8 & 85.9 \\ OPT 6.7B (s2) & 50 & 97 & 40 & 2.8 & 58.6 & 66.5 & 32.2 & 49.1 & 63.0 & 76.4 & 63.3 & 23.8 & 63.1 & 90.3 \\ OPT 6.7B & 0 & 97 & 0 & 4.5 & 59.8 & 68.0 & 32.4 & 50.2 & 68.4 & 75.5 & 67.2 & 20.9 & 65.3 & 90.2 \\ \hline Falcon 7B (s2) & 56 & 95 & 56 & 2.2 & 64.8 & 73.6 & 38.6 & 55.3 & 68.4 & 78.9 & 67.6 & 40.4 & 67.1 & 93.4 \\ Falcon 7B (s1) & 0 & 94 & 0 & 4.1 & 65.2 & 72.2 & 39.1 & 55.4 & 70.6 & 78.4 & 69.2 & 40.5 & 67.5 & 93.1 \\ Falcon 7B & 0 & 1 & 0 & 6.6 & 66.8 & 74.6 & 40.2 & 57.7 & 73.5 & 79.4 & 74.5 & 40.4 & 67.2 & 94.0 \\ \hline Llama 7B (s2) & 51 & 65 & 67 & 2.9 & 66.4 & 73.8 & 39.6 & 54.8 & 69.9 & 77.9 & 70.7 & 48.5 & 68.6 & 93.8 \\ Llama 7B (s1) & 0 & 62 & 0 & 4.8 & 67.1 & 75.2 & 40.1 & 55.2 & 73.4 & 77.7 & 71.5 & 49.6 & 67.1 & 94.2 \\ Llama 7B & 0 & 0 & 0 & 6.6 & 68.4 & 75.5 & 42.1 & 69.9 & 74.8 & 78.7 & 73.1 & 49.9 & 69.8 & 95.4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparing zero-shot performance across several tasks: After _relufication_, the activation sparsity of models increases significantly, hence increased efficiency measured by FLOPS. Within each group, the performance levels are comparable.
Figure 5: The preactivation distribution of pretrained models for Falcon and Llama does not change significantly during the short finetuning stage of relufication. The dashed line shows the cutoff point before which the output is almost zero.
Figure 6: Evolution of zero-shot accuracy during finetuning: The model quickly recovers most of its lost performance due to the architecture surgery.
We also assess the in-context learning ability of the relufied models with the Massive Multitask Language Understanding (MMLU) [29] benchmark in Tab. 2. Our results show that when we augment the original LLMs with different activations and finetune, the few-shot performance does not change significantly either. Moreover, Sec. E in the appendix shows that a larger but relufied model performs better than an original smaller model of the same FLOPS. Overall, the results affirm that the proposed relufication procedure can decrease the inference FLOPS at various stages and rates while maintaining on-par performance on various tasks.
## 5 Applications
In this section, we discuss promising directions motivated by our investigation in Sec. 4. First, we introduce _aggregated sparsity_, showing that ReLU networks reuse previously activated neurons when generating tokens. Hence, we can leverage this to increase the generation speed. Next, we relate aggregated sparsity with speculative decoding to further improve speculative decoding's inference time. Finally, we briefly discuss a promising direction of using the _shifted ReLU_ activation function to improve the sparsity further.
### Aggregated Sparsity: reusing previously activated neurons
A consequence of using only a small subset of neurons for each token is that if these neurons are shared to some degree, the model still does not use all of the neurons until many tokens are processed. We refer to this as _aggregated sparsity_, which we defined as the ratio of neurons that have not been used up to processing the first \(t\) token. Note that this metric will always be non-increasing. Intuitively, it measures the unused capacity of feed-forward neurons for processing a specific prompt.
Here in Fig. 6(a) we show that for the OPT-6.7B model, on average, about 50% of all the neurons will be unused across the first 150 tokens of prompts coming from the WikiText dataset. Our empirical results hold for other ReLU models and other datasets. Additionally, in Fig. 6(b), we show that this pattern is far from random activation of neurons during the token generation with a rate equal to the average rate of activation usage per token. Let \(s_{i}\) be the activation sparsity of layer \(i\) averaged over all tokens. Then, the probability of an activation not used in generating the first \(t\) tokens in uniformly random selection is \(s_{i}^{t}\). Fig. 6(b) shows this quantity for two layers \(i=8,24\) for the first \(256\) tokens in dashed line. It also shows the real (observed) number of activations being used in the solid line. The fact that the random aggregated sparsity (referred to as random sparsity) is lower than the observed aggregated sparsity (we refer to it as aggregated sparsity) shows a clear pattern of reusing activations.
We can benefit from the overlapping activations by utilizing previously loaded weights from the down projection layer for upcoming tokens. To test this, we initiate with reading 128 tokens. For the subsequent 128 tokens, we intermittently avoid loading new weights for every \(\gamma\) token. Using \(\gamma=16\) as an example, tokens 129-145 are generated conventionally. However, for tokens 146-161, we retain the existing weight without introducing any new weight. This pattern continues, with every next set of \(\gamma\) tokens alternating between conventional generation and weight reuse. In Fig. 6(c), we observe only a slight increase in perplexity when using this approximation to address the memory and I/O-intensive nature of LLM inference. This figure contrasts the perplexity obtained from reused activations and random selections. The reuse strategy aligns well with the baseline, whereas random selection notably increases perplexity, highlighting the effectiveness of reusing the already loaded activations for subsequent tokens.
### Activation sparsity and speculative decoding
As highlighted in Sec. 5.1, activation reuse happens for multiple consecutive tokens. When multiple consecutive tokens are processed together, we can save the I/O (i.e., transferring weights to GPU/CPU as discussed in Appendix B)
\begin{table}
\begin{tabular}{l c c c|c c c c} \hline \hline Model & Activation & FLOPS(\%) & Avg & Humanities & STEM & Social Sciences & Other \\ \hline \hline Falcon 7B & SiLU & 100 & 26.4 & 24.8 & 27.4 & 27.2 & 26.2 \\ Falcon 7B & GELU & 100 & 27.7 & 28.1 & 26.0 & 28.0 & 29.4 \\ Falcon 7B & ReLU & 62 & 27.9 & 26.0 & 26.5 & 31.8 & 27.9 \\ \hline \hline Llama 7B & SiLU* & 100 & 35.1 & 37.9 & 30.2 & 37 & 37.1 \\ Llama 7B & GELU & 100 & 35.9 & 38.4 & 29.4 & 37.6 & 39.5 \\ Llama 7B & ReLU & 72 & 34.7 & 34.8 & 31.2 & 36.3 & 37.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: MMLU five-shot accuracy. Models finetuned with different activation functions have similar performance.* Denotes we replace the SiLU function in Llama’s SwiGLU activation function with ReLU.
associated with activations that are not used in any of them. If the reuse was not happening, and the sparsity of all tokens was purely random, the aggregated sparsity would shrink exponentially and quickly diminish. Speculative decoding [46] is a related technique that uses a smaller model \(M_{q}\) to propose \(\gamma\) tokens and a larger model \(M_{p}\) to verify those tokens and select matching ones. It improves the runtime of the model by avoiding running \(M_{p}\) sequentially.
To improve speculative decoding, aggregated sparsity can trim down the portion of the model that needs to be run. Instead of running the full model, only the non-sparse parts need to be evaluated, which will reduce I/O and compute latency. Suppose the average aggregated sparsity of \(M_{p}\) for \(\gamma\) tokens is \(\bar{s}_{\text{agg}}(\gamma)\), and cost of running \(M_{q}\) over \(M_{p}\) is \(c\). Then the expected latency speedup when going from standard speculative decoding to sparse speculative decoding is \(\frac{c\gamma+1}{c\gamma+(1-\bar{s}_{\text{agg}}(\gamma))}\).
Fig. 7d compares sparse speculative decoding to the standard version for OPT 6.7B model. As a case study, for \(\gamma=16\), the sparse version has a 1.27x speedup over the standard speculative decoding. If the aggregated sparsity was random over different tokens, the speedup would have been only 1.20x. Note that even random sparsity will lead to speedup over standard speculative decoding. This further shows the value of relufication. However, the speedup due to random sparsity would diminish quickly in comparison to aggregated sparsity as we go for larger \(\gamma\). For example, for \(\gamma=64\) the speedup is almost negligible, while the speedup for the aggregated sparsity is around 1.14x. Further discussion and details are postponed to Appendix C, where we compare sparse speculative decoding, standard speculative decoding, and autoregressive decoding and discuss optimal \(\gamma\) in the case of sparse speculative decoding.
### The shifted ReLU activation
Our work in this section is motivated by the observation from Sec. 4, where, comparing Fig. 4d with Fig. 4b revealed that the relufied Llama has much less sparsity (65%) than the relufied Falcon model (95%). In addition, we build on two of our previous findings. First, the preactivation distribution of the relufied Llama (Fig. 5c) includes a considerable mass after the cutoff value at zero. Second, the shape of the preactivation distribution does not change before and after the relufication process (Fig. 5c and Fig. 5d).
Therefore, we may be able to shift the preactivation distribution to the left to put more volume before the cutoff at 0. To this end, for preactivation input \(x\), rather than applying \(\text{ReLU}(x)\), we use \(\text{ReLU}(x-b)\) where \(b\in\mathbb{R}\) is a constant scalar. We propose to set the value \(b\) based on the preactivation distribution. For instance, based on the distribution in Fig. 5d, setting \(b=1\) and hence using \(\text{ReLU}(x-1)\) as our activation function will result in dropping \(95\%\) of the preactivations and make it significantly sparser. Another benefit of this approach is simplicity, as this does not require changing the loss function or the training regime.
Figure 8a shows that the shifted ReLU activation function has on-par accuracy with the ReLU activation function. Moreover, similar to our observation in Sec. 4, the shifted ReLU activation quickly recovers the lost performance due to the drastic change of activation function, while it also maintains a very high-level activation sparsity during the finetuning stage. The gap between shifted ReLU and ReLU is wider in the early stages of training, and it narrows down when more tokens are seen.
A deeper investigation of ReLU-variants that can promote sparsity without sacrificing performance is an appealing future direction. Moreover, it will be interesting to study the impact of the shifted ReLU for stage-2 of our relufication process where the sparsity level is usually not very high.
Figure 7: **(a) Aggregated sparsity of different layers and their mean. (b) Aggregated sparsity during token generation and comparison with a random sparsity. (c) Perplexity, based on the number of tokens for which loaded weights from previous tokens are reused. The dashed line represents no reuse, the solid blue line shows the case with activation reuse according to aggregated sparsity, and the orange line depicts the perplexity when activations are reused according to a random sparsity. (d) The inference speedup of speculative decoding with aggregated sparsity and with random sparsity. Speedup equal to 1.0 is the standard version of speculative decoding.**
## 6 Conclusion
In this study, we conducted a large-scale investigation of the activation functions, and we have shown that the choice of activation functions during pretraining and finetuning does not have a significant impact on performance while using ReLU can provide an additional benefit of leading to activation sparsity and more efficient inference. To bridge the gap between existing pre-trained models and our work, we have _relufied_ several models to incorporate the ReLU activation function into the architecture of these already pre-trained models. We have shown that across several zero-shot and few-shot tasks, the ReLU-based LLMs perform similarly to their non-ReLU models at a significantly reduced computation. In addition, after observing sparsity patterns in ReLU LLMs, we explored a few promising directions to improve the token generation speed through _aggregated sparsity_ and achieve greater efficiency using ReLU-based activation functions like _shifted ReLU_.
We believe our work is among the few studies that investigate changes in the architectural components of LLMs on a large scale. We hope our findings motivate the community to further investigate the advantages of well-structured activation sparsity, ultimately enhancing the efficiency of these models.
## Acknowledgments
The authors would like to thank Fartash Faghri, Minsik Cho, Thomas Merth, and Mohammad Samragh for their invaluable discussions and feedback on this project.
Figure 8: The effect of shifted ReLU on Llama model. **(a)** The performance is almost the same as the original ReLU. **(b)** Shifted ReLU (i.e., ReLU\((x-1)\)) is much sparser than the original ReLU. |
2302.13209 | I-MSV 2022: Indic-Multilingual and Multi-sensor Speaker Verification
Challenge | Speaker Verification (SV) is a task to verify the claimed identity of the
claimant using his/her voice sample. Though there exists an ample amount of
research in SV technologies, the development concerning a multilingual
conversation is limited. In a country like India, almost all the speakers are
polyglot in nature. Consequently, the development of a Multilingual SV (MSV)
system on the data collected in the Indian scenario is more challenging. With
this motivation, the Indic- Multilingual Speaker Verification (I-MSV) Challenge
2022 has been designed for understanding and comparing the state-of-the-art SV
techniques. For the challenge, approximately $100$ hours of data spoken by
$100$ speakers has been collected using $5$ different sensors in $13$ Indian
languages. The data is divided into development, training, and testing sets and
has been made publicly available for further research. The goal of this
challenge is to make the SV system robust to language and sensor variations
between enrollment and testing. In the challenge, participants were asked to
develop the SV system in two scenarios, viz. constrained and unconstrained. The
best system in the constrained and unconstrained scenario achieved a
performance of $2.12\%$ and $0.26\%$ in terms of Equal Error Rate (EER),
respectively. | Jagabandhu Mishra, Mrinmoy Bhattacharjee, S. R. Mahadeva Prasanna | 2023-02-26T02:26:02Z | http://arxiv.org/abs/2302.13209v1 | # I-MSV 2022: Indic-Multilingual and Multi-sensor Speaker Verification Challenge
###### Abstract
Speaker Verification (SV) is a task to verify the claimed identity of the claimant using his/her voice sample. Though there exists an ample amount of research in SV technologies, the development concerning a multilingual conversation is limited. In a country like India, almost all the speakers are polyglot in nature. Consequently, the development of a Multilingual SV (MSV) system on the data collected in the Indian scenario is more challenging. With this motivation, the Indic- Multilingual Speaker Verification (I-MSV) Challenge 2022 has been designed for understanding and comparing the state-of-the-art SV techniques. For the challenge, approximately \(100\) hours of data spoken by \(100\) speakers has been collected using \(5\) different sensors in \(13\) Indian languages. The data is divided into development, training, and testing sets and has been made publicly available for further research. The goal of this challenge is to make the SV system robust to language and sensor variations between enrollment and testing. In the challenge, participants were asked to develop the SV system in two scenarios, viz. constrained and unconstrained. The best system in the constrained and unconstrained scenario achieved a performance of \(2.12\%\) and \(0.26\%\) in terms of Equal Error Rate (EER), respectively.
## I Introduction
Speaker Verification (SV) is the task of validating the identity of a speaker using the voice sample of the claimant. The tremendous development in SV technology in the last five decades has enabled the system to be deployed in various application areas, starting from voice-based attendance system to authentication for bank transactions [1]. However, the performance of the systems suffer when multiple languages and sensors are involved during testing [2]. Hence, the scalability of SV systems is limited considering such scenarios. The citizens of India use approximately \(122\) major and \(1599\) other languages in their day-to-day conversation. Most importantly, they are polyglot in nature. Therefore, the flexibility in language and sensors during testing may restrict the reach of SV technologies. With this motivation, the Indian Institute of Technology Guwahati Multi Variability (IITG-MV) data was collected using five different sensors from the people coming from different geographical locations of India having variations in the native language, dialect, and accent [3].
In the literature, there exist few works on the development of SV in multilingual and domain mismatch scenarios [2]. The reported works contribute to the feature, model, and score level for minimizing the impact of language and domain mismatch [2]. Most of the reported work uses either an in-house dataset or publicly available data (mostly crawled from the public domain) for performing their studies. The in-house data are limited by the number of speakers, languages, and sensors. Though the publicly available data have a huge number of speakers, languages and environmental variations, the unavailability of appropriate annotations (mostly done with automatic algorithms) poses a challenge for an in-depth analysis [2]. The current challenge was planned with aim of resolving the above-mentioned issues by inviting the community to work on the development of the language and sensor invariant speaker representation.
This work considers the conversation recordings of the IITG-MV phase-I dataset. The dataset is divided into four parts, viz. (1) Development, (2) Enrollment, (3) Public, and (4) Private test set. The development set consists of speech utterances from \(50\) speakers recorded with all \(5\) sensors and in \(13\) languages. The enrollment set consists of utterances from the remaining \(50\) speakers, spoken in English language and through a headset microphone. The public test set consists of utterances from the \(50\) enrolled speaker in both matched and mismatched sensors and languages. The private test set only consists of cross-lingual and sensor utterances. Along with releasing the dataset, the challenge was offered in the form of two sub-tasks, (1) constrained and (2) unconstrained. The constrained sub-task restricts the participants to use only the provided data. On the other hand, no such restrictions are there in the unconstrained sub-task. The aim of the constrained sub-task here was to encourage the community to develop the SV with limited training data. Conversely, the aim of the unconstrained sub-task was to observe the performance of SV technologies developed with a sufficient amount of training data. A baseline system implemented with X-vector framework for both constrained and unconstrained sub-tasks was made available to the participants during the challenge (available at [https://github.com/jagabandhumishra/I-MSV-Baseline](https://github.com/jagabandhumishra/I-MSV-Baseline)). The performance of the baseline in public test data on both the sub-tasks were \(9.32\%\) and \(8.15\%\), respectively.
The rest of the paper is organized as follows: the challenge rules are described in section II. The detailed description of the data preparation is described in section III. Section IV reports the procedure of baseline system development and the performance measure used. A brief description of the top five systems along with their performance are described
in section V. Finally, the summary and future directions are reported in section VI.
## II Challenge Rules
As mentioned in the earlier section, the challenge consisted of two sub-tasks, viz. (1) constrained SV and (2) unconstrained SV.
* **Constrained SV**: Participants were not allowed to use speech data other than the speech data released as a part of the constrained SV challenge for the development of the SV system.
* **Unconstrained SV**: Participants were free to use any publicly available speech data in addition to the audio data released as a part of unconstrained SV.
The challenge was organized as a part of the \(25^{th}\) edition of the O-COCOSDA-2022 conference along with the Asian-Multilingual Speaker Verification (A-MSV) track. The participants were asked for registration. Upon agreeing to the data usage licenses agreement, the download link of the development, enrollment, and public test sets were provided. Through a license agreement, the participant teams agreed that they could use the data only for research purposes. Moreover, the top five systems in both the sub-tasks would have to submit the source code of their systems and a detailed report.
The public test set released during the time of registration had ground truth information. The purpose here was to tune the system parameter using the public test data. The participants were asked to upload their score files in a specific format on the challenge portal. The corresponding performance was evaluated by a back-end script and the results were uploaded to a online leader board. There was no constraint on uploading and evaluating the score files on the public test set. After around one month of the public test set release, the private test set was released without ground truth information. The participant teams were asked to submit their final results on the private test set within \(24\) hours from the release of the private test set. A maximum of three successful attempts were allowed for each team for evaluating their system on the private test set.
## III Data Preparation
The IITG-MV speaker recognition dataset was recorded in four phases for dealing with various speaker recognition applications, viz. speaker identification, verification, and change detection, etc. [3]. Among the four phases, the phase-I dataset is considered for this study. The IITG-MV-Phase-I dataset consists of recordings from \(100\) speakers in reading and conversation mode. In both modes, each speaker has given their speech data in two sessions. The duration of each session is around \(5-8\) minutes. In addition, each speaker has given their data in two languages, viz. English and favorite language. Favorite language mostly meant their mother tongue/native language and varied from person to person. Furthermore, all the speech utterances were recorded through five different sensors, viz. H01, M01, M02, D01 and T01. The details of the dataset can be found at [3]. The utterances belonging to the conversation mode were only considered here. The total duration of the selected utterances is approximately \(100\) hours. The selected utterances are named as the I-MSV dataset. Further, the I-MSV dataset is segregated into four parts, viz. development, enrollment, public test, and private test.
### _Development set_
This partition consists of recordings from \(50\) speakers. The utterances from each speaker are available in two languages, with two sessions, and with five sensors. The approximate duration of the development set is \(50\) hours.
### _Enrollment set_
This partition consists of recordings from \(50\) speakers that are disjoint from the speakers used in the development set. The utterances belonging to both the sessions with the English language and the Headset (H01) sensor are used here. The first session utterances are completely used in this set. However, the utterances from the second session are segmented into two parts. Half of them are used in enrollment and the rest have been used in the public test set (to observe the performance in matched sensor and language conditions). The approximate duration of speech available for each speaker is \(8-10\) minutes.
### _Public test set_
This set consists of the utterances from the second session recordings with three sensors and cross-languages along with the matched utterances. The second session utterances in the original IITG-MV-Phase-I dataset are segregated into two parts. Half of them are reserved for the preparation of the private test set. After that, each utterance is segmented into \(10\), \(30\), and \(60\) second utterances. The segments are split into silence regions using the knowledge of Voice Activity Detection. The segmented files were made available to the participants as the public test set. The total number of utterances available in this partition is \(5907\).
### _Private test set_
This set consists of the utterances from the second session recordings with four sensors and cross-languages. This partition does not consist of matched sensors and language utterances. The selected utterances are segmented into \(10\)s, \(30\)s, and \(60\)s second utterances and made available to the participants as the private test set. The total number of utterances available in this partition is \(9521\). The partition consists of cross-language utterances from \(10\) Indian languages.
## IV Performance Measures and Baselines
This challenge employs the Equal Error Rate (EER) measure to compare the performances of the different submissions with the baseline results. This section briefly describes the method of computing the EER measure and reports the baseline results on the I-MSV dataset. Let, \(N_{P}\) and \(N_{N}\) be the number of positive and negative test samples in the data, respectively. The number of samples out of a total of \(N_{P}\) positive samples predicted as positive are termed as True Positives (TP). On the other hand, the number of samples out of a total of \(N_{N}\) negative samples correctly predicted as negative are termed as True Negatives (TN). Incorrectly predicted positive and negative samples are termed as False Positives (FP) and False Negatives (FN), respectively. The prediction of a test sample as positive or negative is based on a pre-determined threshold \(\tau\) which may be varied. The total number of TP, TN, FP, and FN for the whole test data can be used to compute two measures, viz., False Acceptance Rate (FAR) and False Rejection Rate (FRR). The FAR can be defined using eq. 1.
\[\text{FAR}=\frac{FP}{FP+TN} \tag{1}\]
Similarly, the FRR can be defined as in eq. 2.
\[\text{FRR}=\frac{FN}{TP+FN} \tag{2}\]
When \(\tau\) is varied, different values of FAR and FRR can be obtained. Among all the different \(\tau\) used, a specific threshold \(\tau_{equal}\) can be identified which provides equal (or almost equal) values of FAR and FRR. The EER measure is computed as the mean of FAR and FRR at \(\tau_{equal}\) (eq. 3).
\[\text{EER}=\frac{1}{2}\left(FAR+FRR\right) \tag{3}\]
where, \(\mid\text{FAR}-\text{FRR}\mid\to 0\).
The challenge organizers provided results on the I-MSV dataset using Kaldi based I-vector and X-vector systems as a baseline for comparison. The baseline performances are reported in Table I.
## V Systems and Results
A total of \(25\) teams registered for the I-MSV 2022 challenge. Among these, \(10\) teams submitted their results for the public test set evaluation. For the private test set evaluation, a total of \(6\) teams submitted their results and systems. The best \(5\) participating systems are summarised in the next paragraph. Table II lists a brief summary of the top \(5\) systems.
The submission of _team0_ obtained the best EER of \(0.26\) on the private test set using unconstrained training data. The best system of _team0_ used the Rawnet3 architecture [4] as their front-end system. They initially trained the model with a Triplet Margin loss [5]. Subsequently, they fine-tuned their model with a combination of Adaptive Angular Margin (AAM) K-Subcenter loss [6] and Inter-TopK loss [7]. They performed the backend scoring using the cosine-similarity measure and used adaptive score normalization.
The second best EER of \(0.36\) using unconstrained data was obtained by _team1_. They used the ResNet-34 architecture proposed in [8] with Attentive Statistics Pooling [9] for their front-end. They trained the model using a combination of vanilla Softmax loss and Angular Prototypical loss [10]. They also proposed a two-layer model scoring system composed of Fully-Connected Feed-Forward layers, Random Forests and Gradient Boosting Trees.
The EER obtained by _team2_ on the constrained data scenario was \(2.12\). They achieved an EER of \(0.63\) using unconstrained training data. They used combination of ECAPAP-TDNN [11] and ResNet-34 [8] with Squeeze-and-Excitation (SE) attention as front-end models to obtain the best results in the constrained data scenario. However, only the ResNet-34-SE network provided the best performance in the unconstrained scenario. For the unconstrained scenario, they fine-tuned the backbone model using a combination of Weight-Transfer loss [12], AAM-Softmax loss and \(L_{2}\) loss. The back-end scoring was performed using cosine similarity measure.
The _team3_ obtained an EER of \(2.77\) in the constrained scenario and and EER of \(2.70\) in the unconstrained scenario. They used a similar front-end system as that of _team2_ and trained it using the AAM loss. They also performed the backend scoring using cosine similarity.
The EER obtained by _team4_ in the unconstrained scenario was \(2.97\). They also employed a similar front-end architecture as that of _team2_ and used the Large Margin Cosine loss for training. They performed the backend scoring using Proba
Fig. 1: Illustrating the effect of (a) different duration, (b) different languages, and (c) different sensors on the performance of submitted systems.
bilistic Linear Discriminant Analysis (PLDA) [13].
## VI Summary and Discussion
The results obtained by the submitted systems can be summarised along the following broad directions. First, use of unconstrained training data is hugely beneficial in performing SV in low-resource scenario like the current challenge. Second, automatic feature learning and end-to-end models can learn highly discriminating features. Third, the choice of loss function for the front-end system has a huge impact on the obtained performance of similar architectures. Fourth, simple backend scoring like cosine similarity might be enough if the learnt speaker embedding are highly discriminating. Fifth, longer utterances (Fig. 1(a)) are more helpful in identifying the speakers. Sixth, change in language (Fig. 1(b)) degrades the SV performance. However, it might also be noted that such an observation may also be the result of imbalance in the number of utterances for the different languages in the I-MSV dataset. Seventh, the change in sensor (Fig. 1(a)) has a huge impact on the performance of SV systems. More specifically, SV systems fare poorly when presented with telephone channel recordings. In future, better SV systems may be developed by taking into consideration the observations made in this challenge.
## Acknowledgments
The authors like to acknowledge Ministry of Electronics and Information Technology (MeitY), Govt. of India, for supporting us through "Bhashini: Speech technologies in Indian languages" project. We are also grateful to K. T. Deepak, Rajib Sharma and team (IIIT Dharwal, Karnataka), S. R. Nirmala, S. S. Chikkamath and team (KLETech, Hubballi, Karnataka), Debadatta Pati, Madhusudan Singh and team (NIT Nagaland, Nagaland), Joyanta Basu, Soma Khan and team (CDAC Kolkata, WB), Akhilesh Kumar Dubey, Govind Menon and team (KLU Vijayawada, AP), Gayadhar Pradhan, Jyoti Prakash Singh and team (NIT Patna, Bihar), and S. R. M. Prasanna, Gayathri A. and team (IIT Dharwal, Karnataka) for their help and cooperation in successfully organizing this challenge.
|
2303.15551 | Electron-K-Phonon Interaction In Twisted Bilayer Graphene | We develop an analytic theory to describe the interaction between electrons
and K-phonons and study its influence on superconductivity in the bare bands of
twisted bilayer graphene (TBG). We find that, due to symmetry and the
two-center approximation, only one optical K-phonon (~ 160meV) of graphene is
responsible for inter-valley electron-phonon interaction. This phonon has
recently been found in angular-resolved photo-emission spectroscopy to be
responsible for replicas of the TBG flat bands. By projecting the interaction
to the TBG flat bands, we perform the full symmetry analysis of phonon-mediated
attractive interaction and pairing channels in the Chern basis, and show that
several channels are guaranteed to have gapless order parameters. From the
linearized gap equations, we find that the highest Tc pairing induced by this
phonon is a singlet gapped s-wave inter-Chern-band order parameter, followed
closely by a gapless nematic d-wave intra-Chern-band order parameter. We
justify these results analytically, using the topological heavy fermion mapping
of TBG which has allowed us to obtain an analytic form of phonon-mediated
attractive interaction and to analytically solve the linearized and T=0 gap
equations. For the intra-Chern-band channel, the nematic state with nodes is
shown to be stabilized in the chiral flat band limit. While the flat band
Coulomb interaction can be screened sufficiently enough - around Van-Hove
singularities - to allow for electron-phonon based superconductivity, it is
unlikely that this effect can be maintained in the lower density of states
excitation bands around the correlated insulator states. | Chao-Xing Liu, Yulin Chen, Ali Yazdani, B. Andrei Bernevig | 2023-03-27T19:06:35Z | http://arxiv.org/abs/2303.15551v1 | # Electron-\(K\)-Phonon Interaction In Twisted Bilayer Graphene
###### Abstract
We develop an analytic theory to describe the interaction between electrons and \(K\)-phonons and study its influence on superconductivity in the _bare bands_ of twisted bilayer graphene (TBG). We find that, due to symmetry and the two-center approximation, only one optical \(K\) phonon (\(\sim 160meV\)) of graphene is responsible for inter-valley electron-phonon interaction. This phonon has recently been found in angular-resolved photo-emission spectroscopy to be responsible for replicas of the TBG flat bands. By projecting the interaction to the TBG flat bands, we perform the full symmetry analysis of phonon-mediated attractive interaction and pairing channels in the Chern basis, and show that several channels are guaranteed to have gapless order parameters. From the linearized gap equations, we find that the highest \(T_{c}\) pairing induced by this phonon is a singlet gapped s-wave inter-Chern-band order parameter, followed closely by a gapless nematic d-wave intra-Chern-band order parameter. We justify these results analytically, using the topological heavy fermion mapping of TBG which has allowed us to obtain an analytic form of phonon-mediated attractive interaction and to analytically solve the linearized and \(T=0\) gap equations. For the intra-Chern-band channel, the nematic state with nodes is shown to be stabilized in the chiral flat band limit. While the flat band Coulomb interaction can be screened sufficiently enough - around Van-Hove singularities - to allow for electron-phonon based superconductivity, it is unlikely that this effect can be maintained in the lower density of states excitation bands around the correlated insulator states.
_Introduction -_ Superconductivity in twisted bilayer graphene (TBG) appears within its phase diagram around the correlated insulator states [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. Amongst the mechanisms suggested for superconductivity are the phonons, spin fluctuations, skyrmions, and others [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. Based on a recent experiment that suggest a strong coupling between the graphene \(K\)-phonon and the flat bands in TBG [45], we perform a comprehensive analysis of the electron-\(K\)-phonon (e-K-ph) interaction and the resulting phonon-mediated superconductivity on the bare flat bands of TBG. We develop an exhaustive numerical, analytical, and symmetry based description of the e-K-ph interaction in TBG and the symmetry classifications of the order parameter, and find the competing singlet gapped inter-Chern-band channel and nematic gapless intra-Chern-band channel. Armed with the heavy-fermion description of TBG[46; 47; 48; 49; 50; 51; 52; 53; 54], the form factors of the \(K\)-phonon induced attractive interaction can be analytically computed and matched well to a full numerical calculations. An analysis of the Coulomb screening shows that, due to the high density of states (DOS) of flat bands, the Coulomb interaction might be strongly renormalized down near the Van Hove singularities. However, it remains unclear if the Hartree-Fock bands of the correlated insulator, with the lower DOS, can provide a similar result.
_Model Hamiltonian for electron-phonon interaction in TBG -_ We consider the deformation potential type of theory, described by a tight-binding (TB) model for the electron Hamiltonian with the hopping parameters depending on the atom positions \(\tilde{\mathbf{R}}_{\alpha}^{l}=\mathbf{R}^{l}+\tau_{\alpha}^{l}+\mathbf{u}^ {l}(\mathbf{R}_{\alpha}^{l})\) with a displacement field \(\mathbf{u}^{l}(\mathbf{R}_{\alpha}^{l}=\mathbf{R}^{l}+\tau_{\alpha}^{l})\), where \(\mathbf{R}^{l}\) and \(\tau_{\alpha}^{l}\) label the lattice vector and the sublattice atom position (\(\alpha=A,B\)) at the layer \(l\), respectively. By treating \(\mathbf{u}\) as a perturbation, we expand the intra-layer Hamiltonian up to the linear order in \(\mathbf{u}\) (Supplementary Material (SM) Sec. II [55]). We only keep \(\mathbf{u}\)-independent terms for the inter-layer Hamiltonian for TBG, thus focusing on intra-layer electron-phonon (e-ph) interaction in this work. As only the Dirac bands appear around the Fermi energy close to \(\pm\mathbf{K}_{D}=\pm\frac{4\pi}{3a_{0}}(1,0)\) in Brillouin zone (BZ) with lattice constant \(a_{0}\) in graphene, we also expand the Hamiltonian around \(\eta\mathbf{K}_{D}\) (\(\eta=\pm\) labelling two valleys) and focus on Dirac electrons around two valleys. Our full Hamiltonian consists of three parts
\[H=H_{el}+H_{ph}+H_{eph}. \tag{1}\]
Here \(H_{el}\) describes the Dirac electrons located at valley \(\eta=\pm\) momentum \(\eta\mathbf{K}_{D}\) that are coupled through inter-layer tunnelings and is given by the Bistritzer-MacDonald (BM) model [56] (SM Sec. V [55]),
\[\hat{H}_{\rm el}=\sum_{\eta\ast}\sum_{\mathbf{k}\in{\rm MBZ}}\sum_{\alpha \alpha^{\prime}}\sum_{\mathbf{Q},\mathbf{Q}^{\prime}}h^{(\eta)}_{\mathbf{Q} \alpha,\mathbf{Q}^{\prime}\alpha^{\prime}}(\mathbf{k})c^{\dagger}_{\mathbf{ k},\mathbf{Q},\alpha,\eta,s}c_{\mathbf{k},\mathbf{Q}^{\prime},\alpha^{\prime},\eta,s} \tag{2}\]
where \(c_{\mathbf{k},\mathbf{Q},\alpha,\eta,s}\) is the fermion annihilation operator \(\mathbf{k}\) is a momentum in the Moire Brillouin zone (MBZ) (Fig. 1b), \(\alpha\) is the sublattice index, and \(s\) is spin. The vector \(\mathbf{Q}\) belongs to the lattice set \(\mathcal{Q}_{l\eta}=\{l\eta\mathbf{q}_{2}+n_{1}\mathbf{b}_{M1}+n_{2}\mathbf{b}_{M 2}\mid n_{1,2}\in\mathbb{Z}\}\), where \(l\) is the layer index, \(\mathbf{q}_{2}=k_{\theta}(\frac{\sqrt{3}}{2},\frac{1}{2})\), \(\mathbf{b}_{M1}=k_{\theta}(\frac{\sqrt{3}}{2},\frac{3}{2})\), \(\mathbf{b}_{M2}=k_{\theta}(-\frac{\sqrt{3}}{2},\frac{3}{2})\) and \(k_{\theta}=2|\mathbf{K}_{D}|\sin\frac{\theta}{2}\) with \(\theta\) the twist angle. \(h^{(\eta)}_{\mathbf{Q}\alpha,\mathbf{Q}^{\prime}\alpha^{\prime}}(\mathbf{k})\) is given in SM Sec. V.A [55]. \(\hat{H}_{\rm el}\) exhibits \(C_{6v}\) and
time-reversal symmetries, generated by valley-switching \(\pi/6\)-rotation along z axis (\(\hat{C}_{6z}\)), time reversal (\(\hat{T}\)), and \(\pi\)-rotation along y axis (\(C_{2y}\)), and valley-preserving \(\pi\)-rotation along x axis (\(\hat{C}_{2x}\)), and the composite antiunitary \(C_{2z}T\). In addition, \(\hat{H}_{\rm el}\) has a unitary particle-hole (\(\hat{P}\)) symmetry, as well as a chiral symmetry \(\hat{C}\) in the limit with vanishing \(AA\) region hopping (\(w_{0}=0\)) [19]. A full discussion of symmetry of BM model [57; 58] is found in SM Sec.V.B [55].
\(H_{ph}\) describes the intra-layer in-plane phonon modes. Out-of-plane phonon modes are decoupled from Dirac electrons for intra-layer e-ph interaction. The dynamical matrix for a single layer graphene is derived in SM Sec.III [55] based both on symmetry considerations and the microscopic model, up to the next nearest neighbor interaction. The resulting in-plane phonon dispersion in Fig. 1a reproduces that in literature [59; 60; 61; 62; 63] (SM Sec.III and IV.B [55]). The phonon modes at \(\Gamma\) and \(\eta{\bf K}_{D}\) can induce intra-valley and inter-valley e-ph interactions, respectively. In this paper we focus on the \(\eta{\bf K}_{D}\) phonons; the \(\Gamma\) phonons will be derived in [64]. At \(\eta{\bf K}_{D}\), we have one \(A_{1}\) (\(\sim 160meV\)), one \(A_{2}\) (\(\sim 140meV\)) and one 2D (\(\sim 150meV\)) of \(C_{3v}\) group. Based on the deformation potential theory, we derive the e-ph interaction \(H_{eph}\) by expanding the TB Hamiltonian treating both the momentum and phonon displacement field \({\bf u}\) as perturbations. For e-ph interaction, we only keep the dominant zeroth-order in momentum for the \(\eta{\bf K}_{D}\) phonons. We find, due to both symmetry and the two-center approximation (SM Sec.II.E [55]), that only the \(A_{1}\) phonons at \({\bf K}_{D}\) can scatter an electron from \({\bf K}_{D}\) to \(-{\bf K}_{D}\)[63]. The corresponding Hamiltonian reads
\[H_{inter-wall}^{op,A_{1}}\approx\frac{\gamma_{3}}{\sqrt{2N_{G}M\omega_{A1}}} \sum_{\tilde{\bf k},\tilde{\bf k}^{\prime},\eta,\alpha\beta}(b_{-\eta{\bf K}_ {D}+\tilde{\bf k}-\tilde{\bf k}^{\prime},A_{1}}+b_{\eta{\bf K}_{D}-\tilde{\bf k }+\tilde{\bf k}^{\prime},A_{1}}^{\dagger})c_{\tilde{\bf k}+\eta{\bf K}_{D}, \alpha}^{\dagger}(\sigma_{x})_{\alpha\beta}c_{\tilde{\bf k}^{\prime}-\eta{\bf K }_{D},\beta}, \tag{3}\]
where \(\tilde{\bf k}\) is the electron momentum away from \(\eta{\bf K}_{D}\), \(N_{G}\) is the number of atomic unit cells, \(M\) is the atomic mass, \(\omega_{A1}\) is the \(A_{1}\) phonon frequency, and \(b\) and \(c\) are phonon and electron annihilation operators. The material dependent parameter \(\gamma_{3}\) can be derived from the hopping potential as \(\gamma_{3}=2i\sum_{\tilde{\bf G}}e^{i(\tau_{A}-\tau_{B})\cdot{\bf G}}({\bf G}+{ \bf K}_{D})_{y}t({\bf G}+{\bf K}_{D},0)\approx 17\)eV/ A, where \({\bf G}\) is the reciprocal lattice vector and \(t({\bf q})\) is the Fourier transform of the \(\pi\)-bond hopping function between two carbon \(p_{z}\) orbitals in graphene (Eq. 6 in SM Sec.I [55]). Our next step is to re-write the electron momentum \(\tilde{k}\) into the MBZ by \(\tilde{\bf k}={\bf k}-{\bf Q}_{l\eta}\) with \({\bf k}\in\) MBZ, so that \(c_{{\bf k},{\bf Q}_{l\eta},\alpha,\eta,s}=c_{\eta{\bf K}_{D}^{\dagger}+\tilde{ \bf k},\alpha,l,s}\) and \(\sum_{\tilde{\bf k}}\rightarrow\sum_{{\bf k}\in{\rm MBZ}}\sum_{{\bf Q}_{l\eta}}\), where we have added the spin index \(s\) and layer index \(l\). Finally, we project the e-ph interaction \(H_{inter-wall}^{op,A_{1}}\) into the flat bands of the BM Hamiltonian as
\[H_{inter-wall}^{op,A_{1}}\approx\frac{1}{\sqrt{N_{G}}}\sum G_{{\bf k },{\bf k}^{\prime},{\bf Q}-l\eta}^{mn^{\prime}l}\gamma_{{\bf k},n,\eta,s}^{ \dagger}\gamma_{{\bf k}^{\prime},n^{\prime},-\eta,s}\] \[(b_{-\eta{\bf K}_{D}+{\bf k}-{\bf k}^{\prime}-{\bf Q}_{-l\eta},l, A_{1}}+b_{\eta{\bf K}_{D}-{\bf k}+{\bf k}^{\prime}+{\bf Q}_{-l\eta},l,A_{1}}^{\dagger}) \tag{4}\]
where the summation includes \({\bf k},{\bf k}^{\prime},n,n^{\prime},\eta,s,l,{\bf Q}_{-l\eta}\), \(\gamma_{{\bf k},n,\eta,s}^{\dagger}=\sum_{{\bf Q}_{\bf q}}u_{{\bf Q}\alpha;n \eta}\left({\bf k}\right)c_{{\bf k},{\bf Q},\eta,\alpha s}^{\dagger}\) with \(u_{{\bf k},{\bf Q}_{l\eta},\alpha,\eta}^{n}\) the eigenstates of \(h_{{\bf Q}_{\bf q},\alpha^{\prime}\alpha^{\prime}}^{(\eta)}({\bf k})\). The matrix element
\[G_{{\bf k},{\bf k}^{\prime},{\bf Q}_{-l\eta}}^{\eta mn^{\prime}l}= \frac{\gamma_{3}}{\sqrt{2M\omega_{A_{1}}}}\sum_{{\bf Q}_{l\eta}^{\prime},\alpha \beta}\] \[u_{{\bf k},{\bf Q}_{l\eta}^{\prime},\alpha,\eta}^{n\star}\sigma_{ \alpha\beta}^{x}u_{{\bf k}^{\prime},{\bf Q}_{l\eta}^{\prime\prime}-{\bf Q}_{-l \eta},\beta,-\eta}^{n^{\prime}} \tag{5}\]
characterizes the e-ph interaction strength for TBG and can be evaluated numerically (and later analytically), as shown in SM Sec. VIF [55]. We focus on two flat bands (per valley per spin) of TBG, labelled by \(n=\pm\). Instead of the eigen-state basis, we work on the so-called "Chern-band" basis, defined by
\[u_{{\bf k},{\bf Q},\alpha,\eta}^{ev}=\frac{1}{\sqrt{2}}(u_{{\bf k},{\bf Q}, \alpha,\eta}^{n=+}+ie_{Y}u_{{\bf k},{\bf Q},\alpha,\eta}^{n=-}) \tag{6}\]
with \(e_{Y}=\pm 1\). \(u_{{\bf k},{\bf Q},\alpha,\eta}^{ev}\) carries the Chern number \(\pm 1\). On the Chern-band basis, the expressions for e-ph interaction can be obtained by replacing the \(n,n^{\prime}\) indices in Eqs. (4) and (5) with \(e_{Y},e_{Y}^{\prime}\) indices and \(u_{{\bf k},{\bf Q},\alpha,\eta}^{u}\) in Eq. (5) with \(u_{{\bf k},{\bf Q},\alpha,\eta}^{ev}\). Discrete symmetries can constrain the form of the function \(G_{{\bf k},{\bf k}^{\prime},{\bf Q}_{-l\eta}}^{\eta ev^{\prime}e_{Y}^{\prime}l}\), as discussed in SM Sec.VI.D [55]. In particular, in the chiral limit \(w_{0}=0\) one can show that \(G_{{\bf k},{\bf k}^{\prime},{\bf Q}_{-l\eta}}^{\eta ev^{\prime}e_{Y}^{\prime}l}= \delta_{ev,e^{\prime}_{Y}}G_{{\bf k},{\bf k}^{\prime},{\bf Q}_{-l\eta}}^{\eta ev _{Y}ev_{Y}l}\) has diagonal form on the Chern-band basis.
_Phonon-mediated Electron-electron and Symmetry Classification of Superconducting Pairing Channels_ - We next apply the Schrieffer-Wolff transformation [65] to integrate out the phonon modes and obtain the phonon-mediated electron-electron (el-el) interaction [25; 32]. We focus on the Cooper pair channel of the attractive interaction, which takes the form
\[H_{ee}=-\frac{1}{N_{M}}\sum_{{\bf k},{\bf k}^{\prime},s,s_{1},e_{Y},e _{Y}^{\prime}}V_{{\bf k},{\bf k}^{\prime}}^{n,ev_{Y},e_{Y}^{\prime}}\] \[\gamma_{{\bf k}ev\,\eta s}^{\dagger}\gamma_{-{\bf k}e_{Y}^{\prime},- \eta s_{1}}^{\dagger}\gamma_{-{\bf k}^{\prime}e_{Y}^{\prime},\eta s_{1}} \gamma_{{\bf k}^{\prime}e_{Y},-\eta s}, \tag{7}\]
where
\[V_{{\bf k},{\bf k}^{\prime}}^{\eta,ev_{Y}^{\prime},e_{Y}^{\prime}}= \frac{1}{N_{0}\omega_{A_{1}}}\sum_{{\bf G}_{M,l}}G_{{\bf k},{\bf k}^{\prime},- \eta{\bf q}_{2}+{\bf G}_{M}}^{\eta,ev_{Y},l}G_{-{\bf k},-{\bf k}^{\prime},l\eta{ \bf q}_{2}-{\bf G}_{M}}^{-\eta,e_{Y}^{\prime},l}\]
with \({\bf G}_{M}\
parameter \(V_{\mathbf{k},\mathbf{k}^{\prime}}^{\eta,e_{Y},e_{Y}^{\prime}}\). The ones leaving the momentum \((\mathbf{k},\mathbf{k}^{\prime})\) unchanged are: (1) \(\hat{C}_{2z}\hat{P}\): \(V_{\mathbf{k},\mathbf{k}^{\prime}}^{\eta,e_{Y},e_{Y}^{\prime}}=V_{\mathbf{k}, \mathbf{k}^{\prime}}^{-\eta,e_{Y},e_{Y}^{\prime}}\); (2) \(\hat{C}_{2z}\hat{T}\): \(V_{\mathbf{k},\mathbf{k}^{\prime}}^{\eta,e_{Y},e_{Y}^{\prime}}=V_{\mathbf{k}, \mathbf{k}^{\prime}}^{\eta,-e_{Y},-e_{Y}^{\prime}\star}\); and (3) the combination of index reshuffling and \(\hat{P}\) symmetry: \(V_{\mathbf{k},\mathbf{k}^{\prime}}^{\eta,e_{Y}^{\prime}}=V_{\mathbf{k}, \mathbf{k}^{\prime}}^{-\eta,e_{Y}^{\prime}e_{Y}}\). These three symmetry operations reduce the number of the independent components of the \(V\)-function for a fixed \((\mathbf{k},\mathbf{k}^{\prime})\) from 8 complex parameters to 1 real (\(V_{\mathbf{k},\mathbf{k}^{\prime}}^{+-}\)) and 1 complex parameter (\(V_{\mathbf{k},\mathbf{k}^{\prime}}^{+,++}\)). Other discrete symmetries, including \(\hat{P}\), reshuffling, hermicity, \(\hat{C}_{3z}\) and \(\hat{C}_{2z}\), relate the \(V\)-function at different \((\mathbf{k},\mathbf{k}^{\prime})\). In particular, \(\hat{C}_{3z}\) guarantees \(V_{\mathbf{K}_{M},0}^{\eta,e_{Y},e_{Y}}=0\) for the intra-Chern-band channels. The projected Coulomb interaction into the flat bands of the BM model possesses a large \(U(4)\times U(4)\) spin-valley continuous symmetry [18; 57; 66]. the el-el interaction (7) breaks this symmetry down to the \(U(2)_{e_{Y}=+}\times U(2)_{e_{Y}=-}\) in the chiral limit and further to a total spin \(SU(2)\) together with a valley charge \(U(1)\otimes U(1)\) (SM Sec.VI.E [55]).
At the mean field level, the attractive interaction (7) is decomposed into the fermion bilinear form \(H_{\Delta}=\hat{\Delta}+\hat{\Delta}^{\dagger}\) with
\[\hat{\Delta}=\sum\gamma_{\mathbf{k},e_{Y_{1}},\eta,s_{1}}^{\dagger}\Delta_{ \mathbf{k};e_{Y_{1}}s_{1},e_{Y_{2}}s_{2}}^{\eta}\gamma_{-\mathbf{k},e_{Y_{2} },-\eta,s_{2}}^{\dagger}, \tag{8}\]
where the summation above includes the indices \(\mathbf{k},e_{Y_{1}},e_{Y_{2}},s_{1},s_{2},\eta\) and the gap function
\[\Delta_{\mathbf{k};e_{Y_{1}}s_{1},e_{Y_{2}}s_{2}}^{\eta}=-\frac{1}{N_{M}}\sum _{\mathbf{k}^{\prime}}V_{\mathbf{k}\mathbf{k}^{\prime}}^{\eta e_{Y_{1}}e_{Y_{ 2}}}\langle\gamma_{-\mathbf{k}^{\prime}e_{Y_{2}}\eta s_{2}}\gamma_{\mathbf{k} ^{\prime}e_{Y_{1}}-\eta s_{1}}\rangle. \tag{9}\]
Since the interaction \(V\)-function does not involve spin, we can decompose \(\Delta_{\mathbf{k};e_{Y_{1}}s_{1},e_{Y_{2}}s_{2}}^{\eta}=\sum_{S,M}\Delta_{ \mathbf{k};e_{Y_{1}}e_{Y_{2}}}^{\eta,SM}\mathcal{S}_{s_{1}s_{2}}^{SM}\), where \(S=0\) for spin singlet and \(S=1\) (\(M=-S,...,S\)) for spin triplet (SM Sec.VI.G.1 [55]).
The gap function can be classified according to the discrete symmetries. The \(C_{6v}\) group includes four 1D irreducible representations (irreps), e.g. \(A_{1,2}\) and \(B_{1,2}\), and two 2D irreps, \(E_{1,2}\). 1D irreps \(A_{1,2}\) and \(B_{1,2}\) channels differ by their \(\hat{C}_{2z}\) eigen-values, \(\lambda_{C_{2z}}=+1\) for \(A_{1,2}\) and \(\lambda_{C_{2z}}=-1\) for \(B_{1,2}\). Combining \(\hat{C}_{2z}\) and reshuffling symmetries leads to \(\Delta_{\mathbf{k};e_{Y_{1}},e_{Y_{2}}}^{\eta}=\lambda_{C_{2z}}\Delta_{ \mathbf{k};e_{Y_{2}},e_{Y_{1}}}^{\eta}\) for spin singlet and \(\Delta_{\mathbf{k};e_{Y_{1}},e_{Y_{2}}}^{\eta}=-\lambda_{C_{2z}}\Delta_{ \mathbf{k};e_{Y_{2}},e_{Y_{1}}}^{\eta}\) for spin triplet. Thus, for intra-Chern-band pairing (\(e_{Y_{1}}=e_{Y_{2}}\)), the \(A_{1,2}\) channel must be spin singlet while the \(B_{1,2}\) channel must be spin triplet. Furthermore, the rotation \(\hat{C}_{3z}\) ensures the existence of nodes at \(\mathbf{K}_{M}\) for the gap function of any 1D irrep intra-Chern-band channel (\(\Delta_{\mathbf{K}_{M};e_{Y},e_{Y}}^{\eta}=0\)), while the inter-Chern-band channel does not have such constraint. The 2D irreps \(E_{1}\) and \(E_{2}\) have different \(\hat{C}_{2z}\) eigen-values, \(\lambda_{C_{2z}}=+1\) for \(E_{2}\) and \(\lambda_{C_{2z}}=-1\) for \(E_{1}\), similarly to the 1D irrep case. Consequently, the \(E_{2}\) channel must be spin singlet while the \(E_{1}\) channel must be spin triplet for intra-Chern-band pairings. \(\hat{C}_{3z}\) guarantees nodes at \(\Gamma_{M}\) for both intra- and inter-Chern-band channels, and it requires additional nodes at \(\mathbf{K}_{M}\) for the inter-Chern-band channels for both 2D \(E_{1,2}\) pairings. Besides discrete symmetries, the continuous \(U(2)_{e_{Y}=1}\times U(2)_{e_{Y}=-1}\) spin symmetry in the chiral limit guarantees the singlet and triplet pairings of inter-Chern-band channel to be degenerate in the chiral flat band limit. The full symmetry analysis of the gap functions can be found in SM Sec.VI.G [55].
_Gap Equations and Self-consistent Solution of Pairing Channels_ - The linearized gap equation (LGE) for the attractive interaction (7) can be derived by evaluating in Eq. (9) and expanding it to linear order of the gap function. In the chiral flat band limit, e.g. the band width is much smaller than the critical temperature \(T_{c}\), the LGE is derived as
\[2k_{B}T\Delta_{\mathbf{k};e_{Y_{1}}e_{Y_{2}}}^{\eta,SM}=\frac{1}{N_{M}}\sum_{ \mathbf{k}^{\prime}}V_{\mathbf{k},\mathbf{k}^{\prime}}^{\eta e_{Y_{1}}e_{Y_{2}}} \Delta_{\mathbf{k}^{\prime};e_{Y_{1}}e_{Y_{2}}}^{-\eta,SM}. \tag{10}\]
This is eigen-equation problem for the matrix \(V_{\mathbf{k},\mathbf{k}^{\prime}}^{\eta e_{Y_{1}}e_{Y_{2}}}\): the \(T_{c}\) is determined by the largest eigen-value and the symmetry of the gap function is determined by that of its eigenvector. As mentioned, the only two independent components of the \(V\)-function (complex \(V_{\mathbf{k},\mathbf{k}^{\prime}}^{++++}\) and real \(V_{\mathbf{k},\mathbf{k}^{\prime}}^{++-}\)) leads to two independent LGEs for the intra- and inter-Chern-band channels, respectively. The form of the LGE suggests that all the gap functions are doubly degenerate at \(T_{c}\) in the flat-band limit. They belong either to two degenerate 1D irreps or one 2D irrep. We first
Figure 1: (a) Phonon dispersion of graphene. The irreps for phonon modes at \(\Gamma\) and \(\mathbf{K}_{D}\) are labelled. Inset: BZ of graphene. (b) MBZ of TBG. (c) and (d) shows the momentum dependence of the normalized gap function \(|\Delta_{\mathbf{k}}|\) for the inter-Chern-band \(A_{1}\) singlet (or the \(A_{2}\) triplet) channel and intra-Chern-band pairing 2D \(E_{2}\) singlet channel, respectively. The inset in (c) shows the \(|\Delta_{\mathbf{k}}|\) along the dashed line \(k_{y}=0\) for both the inter-Chern-band (red) and intra-Chern-band (blue) channels. The momenta \(\mathbf{\Gamma}_{M},\mathbf{K}_{M},\mathbf{M}_{M}\) are labelled in MBZ in (b) and (d).
numerically solve these two LGEs from Eq. (341), and find the forms of the gap functions with the largest eigenvalues, as shown in Fig. 1. Our numerical calculations show \(k_{B}T_{c}\sim 0.21meV\) for the inter-Chern-band channel, slightly larger than \(k_{B}T_{c}\sim 0.16meV\) for the intra-Chern-band channel. For the inter-Chern-band channels, the gap function is almost a constant in Fig. 1a, featuring a fully gapped s-wave pairing with even \(C_{2z}\)-parity (\(A_{1}\) or \(A_{2}\) irrep). In the chiral flat-band limit, spin singlet and triplet pairings are degenerate, as required by the continuous \(U(2)\times U(2)\) spin symmetry (SM Sec.VLE [55]). Including kinetic energy splits this degeneracy and makes the spin singlet \(A_{1}\) irrep channel to have the highest \(T_{c}\). For the intra-Chern-band channel, one can see nodes appear at the \(\Gamma_{M}\) in Fig. 1b. As our previous symmetry analysis shows that the gap function should have nodes at \({\bf K}_{M}\) for the 1D irrep (\(A_{1,2},B_{1,2}\)) and \(\Gamma_{M}\) for the 2D irrep (\(E_{1,2}\)), numerical results should correspond to a 2D irrep. Numerically analyzing the symmetry property of the gap function suggests that the intra-Chern-band channel belongs to the 2D \(E_{2}\) irrep with spin singlet. Full numerical results are discussed in SM Sec.VI.H.2 and 3 [55].
Our results for the intra-Chern-band channels reveal a d-wave character of the gap. Using the heavy fermion formalism of TBG [64; 46], we analytically obtain \(V^{\eta,ev,ev}_{{\bf k},{\bf k}^{\prime}}\):
\[V^{\eta,ev}_{{\bf k},{\bf k}^{\prime}}=U^{*}_{ev,{\bf k}}U_{ev,{\bf k}^{\prime }};\;\;U_{ev,{\bf k}}=\frac{\sqrt{V_{0}}}{k^{2}+b^{2}}k^{2}_{ev}, \tag{11}\]
with \(k_{ev}=k_{x}+ie_{V}k_{y}\) (\(e_{Y}=\pm\)). This interaction allows us to solve the LGE analytically to obtain the \(T_{c}\),
\[k_{B}T_{c}=\frac{\tilde{V}_{0}}{2},\;\;\tilde{V}_{0}=\frac{1}{N_{M}}\sum_{{\bf k }}V_{0}\frac{k^{4}}{(k^{2}+b^{2})^{2}}, \tag{12}\]
where \(V_{0}\) and \(b\) are material dependent parameters. The corresponding self-consistent gap function takes the d-wave form
\[\left(\begin{array}{c}\Delta^{+,00}_{{\bf k},ev,ev}\\ \Delta^{-,00}_{{\bf k},ev,ev}\end{array}\right)=\Delta_{ev}\frac{k^{2}_{-ev}}{ k^{2}+b^{2}}\left(\begin{array}{c}1\\ 1\end{array}\right) \tag{13}\]
with \(e_{Y}=\pm\) and \(\Delta_{ev}\) a parameter to be determined. Time reversal, if exists, requires \(\left(\Delta^{-,00}_{{\bf k};--},\Delta^{+,00}_{{\bf k};--}\right)=\left( \Delta^{+,00}_{{\bf k};++},\Delta^{-,00}_{{\bf k};++}\right)^{*}\). The d-wave nature of the gap function suggests the possibility of the nodal superconductivity. However, one should note that the single-particle Hamiltonian is _not_ diagonal in the Chern-band basis. The Bogoliubov-de Gennes (BdG) spectrum must be checked with kinetic energy added. The BdG Hamiltonian for the intra-Chern-band pairing is block diagonal and one block \({\cal H}^{+,+}_{BdG}\) on the basis (\(\gamma^{\dagger}_{{\bf k},ev}=\pm,+,s=\uparrow,\gamma-_{{\bf k},ev}=\pm,-,s=\downarrow\)) reads
\[H^{+,+}_{BdG}({\bf k})=\begin{pmatrix}h_{+}({\bf k})&\Delta^{+}_{{\bf k}}\\ (\Delta^{+}_{{\bf k}})^{\dagger}&-h^{*}_{-}(-{\bf k})\end{pmatrix} \tag{14}\]
with \(h_{\eta}({\bf k})=(d_{0,\eta}({\bf k})-\mu)\zeta^{0}+d_{x,\eta}({\bf k})\zeta^ {x}\) and \(\Delta^{+}_{{\bf k}}=Diag[\Delta^{+,00}_{{\bf k},++},\Delta^{+,00}_{{\bf k},--}]\). Here \(d_{0,\eta}({\bf k})=(\epsilon_{+,\eta}({\bf k})+\epsilon_{-,\eta}({\bf k}))/2\) and \(d_{x,\eta}({\bf k})=(\epsilon_{+,\eta}({\bf k})-\epsilon_{-,\eta}({\bf k}))/2\), where \(\epsilon_{\pm,\eta}({\bf k})\) are the eigen-energies for the two low-energy flat bands (per valley per spin) of the BM model \(\hat{H}_{\rm el}\) (2). The corresponding energy spectrum can possess nodes when the pairing amplitudes of two Chern-band channels are equal, \(|\Delta_{ev=+}|=|\Delta_{ev,v=-}|=\Delta_{0}\), which corresponds to the Euler pairing discussed in Ref.[67; 32]. Point nodes appear at the location defined by two conditions (1) \(\cos((\Phi_{{\bf k},-}-\Phi_{{\bf k},+})/2)=0\), where \(\Phi_{{\bf k},ev}=\varphi_{ev}-2e_{V}\Phi_{\bf k}\) with \(\Delta_{ev}=\Delta_{0}e^{i\varphi_{ev}}\) and \(k_{ev}=ke_{ev}^{i\varphi_{\bf k}}\); and (2) \(d^{2}_{x,{\bf k}}=(d_{0,{\bf k}}-\mu)^{2}+\Delta^{2}_{0,{\bf k}}\) with \(\Delta_{0,{\bf k}}=\Delta_{0}\frac{k^{2}}{k^{2}+b^{2}}\), as discussed in SM Sec.VI.H.5 [55]. The first condition determines the momentum angle for the nodes while the second gives the momentum amplitude, thus together fixing the location of point nodes in the 2D momentum space. We next solve the self-consistent gap equation at zero temperature for the interaction form (363). With the gap function ansatz \(\Delta_{{\bf k};ev}=\Delta_{ev}\frac{k^{2}_{-ev}}{k^{2}+b^{2}}\), we find a self-consistent gap equation
\[\Delta_{ev}=\frac{V_{0}}{N_{M}}\sum_{{\bf k}^{\prime},e_{Y}}\frac{{k^{\prime}}^ {2}_{ev}}{k^{\prime 2}+b^{2}}u_{-{\bf k}^{\prime},ev\,e_{Y_{1}}}w^{*}_{-{\bf k}^{\prime},ev \,e_{Y_{1}}}, \tag{15}\]
where \(\psi_{{\bf k},e_{Y_{1}}}=(u_{{\bf k},\pm,e_{Y_{1}}},w_{{\bf k},\pm e_{Y_{1}}})\) (\(e_{Y_{1}}=\pm\)) are the eigen-wave functions with the positive eigen-energies of the BdG Hamiltonian \(H^{+,+}_{BdG}({\bf k})\) (14). Fig. 2A shows the chemical potential dependence of the gap functions and the condensation energy. The Euler pairing \(|\Delta_{+}|=|\Delta_{-}|\) is always energetically favored for a non-flat band width \(\sim 0.3meV\), quite different from chiral d-wave pairing in doped graphene [68; 69; 26]. For the chemical potential \(\mu\) below \(0.1meV\), a nodal superconductor phase with four point nodes (Fig. 2B) located at the positions determined by two conditions discussed above [32]. With increasing \(\mu\), four nodes move towards \(\Gamma_{M}\) and eventually a gapped superconductor phase (Fig. 2C) appears for \(\mu>0.1meV\).
The energy scale of the Coulomb interaction in TBG is \(\sim 24meV\)[57], much larger than the estimated energy scale of e-K-ph mediated attractive interaction \(\sim 0.3meV\)[70]. Near the Van-Hove singularities of flat bands, the screening can significantly reduce the Coulomb interaction to a similar order as e-K-ph mediated interaction due to the large DOS (SM Sec.VI.H.7 [55]), thus making superconductivity from this mechanism possible. If, however, the DOS is that of the Hartree-Fock bands of correlated insulators, the screening might not be enough to reduce the Coulomb interaction. Hence, superconductivity from this \(K\)-phonon flat bare band mechanism could appear only when the correlated insulator states are suppressed.
_Conclusion -_ In conclusion, we develop a theory for the projected e-K-ph interaction of the flat bands and the resulting superconductor pairing channels in TBG. We find the inter-Chern-band s-wave singlet pairing and the intra-Chern-band d-wave nematic singlet pairing have highest \(T_{c}\), and the \(T_{c}\) of inter-Chern-band channel is slightly higher than the intra-Chern-band channel. The
intra-Chen-band channel can have nodes in a large parameter regime. From the estimate of the screened Coulomb interaction, we argue that this mechanism requires the correlated insulators to be suppressed.
_Acknowledgement_ - We would like to acknowledge Biao Lian, Xi Dai and Zhida Song for the helpful discussion. B.A.B.'s heavy fermion in twisted bilayer research was supported by the DOE Grant No. DE-SC0016239. BAB's sabbatical support also comes from the Simons Investigator Grant No. 404513, the Gordon and Betty Moore Foundation through Grant No.GBMF8685 towards the Princeton theory program, the Gordon and Betty Moore Foundation's EPiQS Initiative (Grant No. GBMF11070), Office of Naval Research (ONR Grant No. N00014-20-1-2303), and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement no. 101020833). CXL also acknowledges the support through the Penn State MRSEC-Center for Nanoscale Science via NSF award DMR-2011839. AY acknowledges support from the Gordon and Betty Moore Foundation's EPiQS initiative grant GBMF9469, DOE- Bess grant DE-FG02-07ER46419, NSF-DMR-1904442, ARO MURI (W911NF-21-2-0147), and ONR N00012- 21-1-2592. CXL and AY also acknowledges the support from the NSF-MERSEC (Grant No. MER- SEC DMR 2011750). YLC acknowledges the support from the Oxford-ShanghaiTech collaboration project and the Shanghai Municipal Science and Technology Major Project (grant 2018SHZDZX02).
|
2304.11790 | Adaptive-saturated RNN: Remember more with less instability | Orthogonal parameterization is a compelling solution to the vanishing
gradient problem (VGP) in recurrent neural networks (RNNs). With orthogonal
parameters and non-saturated activation functions, gradients in such models are
constrained to unit norms. On the other hand, although the traditional vanilla
RNNs are seen to have higher memory capacity, they suffer from the VGP and
perform badly in many applications. This work proposes Adaptive-Saturated RNNs
(asRNN), a variant that dynamically adjusts its saturation level between the
two mentioned approaches. Consequently, asRNN enjoys both the capacity of a
vanilla RNN and the training stability of orthogonal RNNs. Our experiments show
encouraging results of asRNN on challenging sequence learning benchmarks
compared to several strong competitors. The research code is accessible at
https://github.com/ndminhkhoi46/asRNN/. | Khoi Minh Nguyen-Duy, Quang Pham, Binh T. Nguyen | 2023-04-24T02:28:03Z | http://arxiv.org/abs/2304.11790v1 | # Adaptive-saturated RNN:
###### Abstract
Orthogonal parameterization is a compelling solution to the vanishing gradient problem (VGP) in recurrent neural networks (RNNs). With orthogonal parameters and non-saturated activation functions, gradients in such models are constrained to unit norms. On the other hand, although the traditional vanilla RNNs are seen to have higher memory capacity, they suffer from the VGP and perform badly in many applications. This work proposes Adaptive-Saturated RNNs (asRNN), a variant that dynamically adjusts its saturation level between the two mentioned approaches. Consequently, asRNN enjoys both the capacity of a vanilla RNN and the training stability of orthogonal RNNs. Our experiments show encouraging results of asRNN on challenging sequence learning benchmarks compared to several strong competitors. The research code is accessible at [https://github.com/ndminhkhoi46/asRNN/](https://github.com/ndminhkhoi46/asRNN/).
## 1 Motivation
Training vanilla RNNs (with tanh activation) is notoriously challenging due to the VGP, where the gradients' magnitudes become _exponentially_ smaller (Collins et al., 2017). Thus, extensive efforts have been devoted to develop more stable, effective models such as orthogonal RNNs (Lezcano-Casado and Martinez-Rubio, 2019), and LSTM (Hochreiter and Schmidhuber, 1997). Empirically, orthogonal RNNs show more competitive performances compared to LSTM and vanilla RNN on long-sequence tasks. However, due to non-saturated activations and unitary constrain, their memory capacity is also more limited compared to vanilla RNN (Collins et al., 2017).
This study aims to realize the memory capacity potential of vanilla RNNs by endowing them with the capability to address the VGP of orthogonal RNNs. As a result, we propose the _adaptive-saturated RNN_ (asRNN), a vanilla RNN variant that dynamically adjusts the activation's saturation level. Particularly, we observe that, let \(f(x;a)=\frac{\tanh(ax)}{a}\), then:
\[\lim_{a\to 0}f(x;a)=x,\quad\text{and}\quad\lim_{a\to 1}f(x;a)=\tanh(x). \tag{1}\]
Thus, by generalizing and using \(f(x;a)\) as an activation function, we can adjust \(a\) and update the parameters freely of a vanilla RNN to achieve high memory capacity without being affected by the VGP. In the following, we will formally introduce asRNN and outline a key result of a condition for avoiding the VGP in asRNN.
## 2 Methodology
**Formulation** Based on the observation Eq. 1, we formally define the hidden cell of asRNN as:
\[h_{t}=\mathbf{W}_{f}^{-1}\mathrm{tanh}(\mathbf{W}_{f}(\mathbf{W}_{zh}\mathbf{x}_{t}+\mathbf{W}_{hh }\mathbf{h}_{t-1}+\mathbf{b})), \tag{2}\]
where \(\mathcal{W}=\{\mathbf{W}_{f},\mathbf{W}_{zh},\mathbf{W}_{hh},\mathbf{b}\}\) is the set of trainable parameters, \(\mathbf{W}_{f}\) introduces an end-to-end composite layer to control the saturation level of asRNN. To ensure the non-singularity of \(\mathbf{W}_{f}\), we parameterize \(\mathbf{W}_{f}=\mathbf{U}_{f}\mathbf{D}_{f}\), where \(\mathbf{U}_{f}\) is orthogonal, \(\mathbf{D}_{f}\) is positively diagonal. Remarkably, we observe that: (i) by fixing \(\mathbf{W}_{f}\) to be identity, we recover a vanilla RNN; and (ii) let \(\mathbf{W}_{hh}\) be orthogonal, fix \(\mathbf{U}_{f}\) to be the identity, and let \(\mathbf{D}_{f}\to\mathbb{0}\), we recover an orthogonal RNN. From such construction, asRNN not only dynamically adjusts the saturation level but also controls the singular values of the temporal Jacobian (Thm. 2.1), which in turn alleviates the VGP.
**Key theoretical result** Let \(\mathbf{w}\in\mathbf{W}\), we define the gradient and Jacobian as (Pascanu et al., 2012): \(\frac{\partial\,\mathcal{J}}{\partial\mathbf{w}}=\sum_{1\leq t_{1}\leq T}\frac{ \partial\,\mathcal{J}}{\partial\mathbf{h}\mathbf{r}}\,\mathcal{J}(T,t_{1})\frac{ \partial\mathbf{h}_{t_{1}}}{\partial\mathbf{h}}\), where \(\mathbf{J}(t_{2},t_{1})=\prod_{t_{1}<t\leq t_{2}}\mathbf{J}(t)\) and \(\mathbf{J}(t)=\mathbf{W}_{f}^{-1}\text{diag}[1-(\mathbf{W}_{f}\mathbf{h}_{t})^{2}|\mathbf{W}_{f} \mathbf{W}_{hh}\). The vanishing gradient problem in RNNs is credited to the existence of a temporal Jacobian \(\mathbf{J}(t_{2},t_{1})\approx 0\) that bottlenecks the backpropagated signal. Under mild assumptions (Appendix A.2), we show a condition for asRNN where all singular values of temporal Jacobian matrices are lower bounded by \(1\). Thanks to this, the VGP is alleviated on vanilla RNN.
**Theorem 2.1**.: _Let \(G\) and \(H\) be respectively the \(d_{h}\)-th degree generalized permutation group and its signed permutation subgroup. Under the assumptions in Appendix A.2, if \(||\mathbf{D}_{f}||_{2}\leq\frac{\text{arctanh}(\sqrt{1-||\mathbf{W}_{hh}^{-1}||_{2}} )}{(||\mathbf{W}_{eh}||_{2}C_{x}+||\mathbf{b}||_{h})\sum_{i=0}^{T}(||\mathbf{W}_{hh}||_{max +1})}\), and \(\min_{\mathbf{E}\in G}||\mathbf{W}_{hh}-\mathbf{E}||_{2}\leq\frac{\sigma_{\min}(\mathbf{D}_{f })}{||\mathbf{D}_{f}||_{2}}\), then_
\[\forall t_{2}>t_{1}\in\mathbb{N}^{*},\exists\epsilon\geq 0:\min_{\mathbf{E}\in H}|| \mathbf{U}_{f}-\mathbf{E}||_{2}\leq\epsilon\rightarrow\sigma_{\min}(\mathbf{J}(t_{2},t_{ 1}))\geq 1.\]
Importantly, while Zhao et al. (2020) questioned and showed a scenario where vanilla RNN and LSTM have a short memory, a problem associated with the VGP. In contrast, our Thm. 2.1 suggests the existence of another in which vanilla RNN resists VGP and possesses long memory.
## 3 Experiment
To validate the model's memory capacity, we consider the Copy task(Hochreiter & Schmidhuber, 1997), sequential MNIST(LeCun et al., 1998), and permuted MNIST(Goodfellow et al., 2014). Next, we use the Temebank character-level prediction (PTB-c)(Marcus et al., 1993) task to explore the model's expressivity (Kerg et al., 2019; Bojanowski et al., 2016). We benchmark asRNN against the strong orthogonal RNNs with long memories and high trainability such as expRNN(Lezcano & Martinez-Rubio, 2019), scobNN(Helfrich et al., 2018), uRNN(Arjovsky et al., 2016), and the popular LSTM(Hochreiter & Schmidhuber, 1997) with high expressivity from the gated mechanism. We follow the setting described in Appendix A.4 and report the results in Fig. 1 and Tab. 1. On long sequence learning tasks (Fig. 1), our asRNN shows excellent performance by converging stably on the Copy task, and achieves better generalization on the sequential and permuted MNIST tasks. On the PTB task, asRNN achieved encouraging results by outperforming all orthogonal RNN baselines, only second to LSTM. Overall, our empirical results show that asRNN possesses both high memory capacity and expressivity compared to other non-gated RNNs and can alleviate the EVGP, which corroborates our motivation in Sec. 1.
## 4 Conclusion
We have investigated the potential and limitations of vanilla RNNs in learning long-sequence tasks. Then, we proposed asRNN, a novel vanilla RNN variant that enjoys strong resistance to the VGP and can possess long memory. Our experiment results show asRNN enjoys encouraging performances on tasks that demand memory span, memory capacity, or expressivity.
\begin{table}
\begin{tabular}{|l|c|c|} \hline Model & \(T=150\) & \(T=300\) \\ \hline LSTM & \(\mathbf{1.41\pm 0.005}\) & \(\mathbf{1.43\pm 0.004}\) \\ \hline asRNN & \(1.46\pm 0.006\) & \(1.49\pm 0.005\) \\ \hline nnRNN & \(1.47\pm 0.003\) & \(1.49\pm 0.002\) \\ \hline expRNN & \(1.49\pm 0.008\) & \(1.52\pm 0.001\) \\ \hline ERUN & \(1.61\pm 0.001\) & \(1.62\pm 0.001\) \\ \hline RNN-orth & \(1.62\pm 0.004\) & \(1.66\pm 0.006\) \\ \hline RNN & \(2.89\pm 0.002\) & \(2.90\pm 0.002\) \\ \hline \end{tabular}
\end{table}
Table 1: Test BPC on PTB-c at different BPTT lengths (T). Best results are in bold.
Figure 1: Training loss for the Copy task and test accuracies for the sequential and permuted MNIST.
## Acknowledgement
This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-027).
## URM Statement
* K. M. Nguyen-Duy meets the URM criteria of the ICLR 2023 Tiny Papers Track.
|
2308.08114 | OmniZoomer: Learning to Move and Zoom in on Sphere at High-Resolution | Omnidirectional images (ODIs) have become increasingly popular, as their
large field-of-view (FoV) can offer viewers the chance to freely choose the
view directions in immersive environments such as virtual reality. The M\"obius
transformation is typically employed to further provide the opportunity for
movement and zoom on ODIs, but applying it to the image level often results in
blurry effect and aliasing problem. In this paper, we propose a novel deep
learning-based approach, called \textbf{OmniZoomer}, to incorporate the
M\"obius transformation into the network for movement and zoom on ODIs. By
learning various transformed feature maps under different conditions, the
network is enhanced to handle the increasing edge curvatures, which alleviates
the blurry effect. Moreover, to address the aliasing problem, we propose two
key components. Firstly, to compensate for the lack of pixels for describing
curves, we enhance the feature maps in the high-resolution (HR) space and
calculate the transformed index map with a spatial index generation module.
Secondly, considering that ODIs are inherently represented in the spherical
space, we propose a spherical resampling module that combines the index map and
HR feature maps to transform the feature maps for better spherical correlation.
The transformed feature maps are decoded to output a zoomed ODI. Experiments
show that our method can produce HR and high-quality ODIs with the flexibility
to move and zoom in to the object of interest. Project page is available at
http://vlislab22.github.io/OmniZoomer/. | Zidong Cao, Hao Ai, Yan-Pei Cao, Ying Shan, Xiaohu Qie, Lin Wang | 2023-08-16T02:58:43Z | http://arxiv.org/abs/2308.08114v2 | # OmniZoomer: Learning to Move and Zoom in on Sphere at High-Resolution
###### Abstract
Omnidirectional images (ODIs) have become increasingly popular, as their large field-of-view (FoV) can offer viewers the chance to freely choose the view directions in immersive environments such as virtual reality. The Mobius transformation is typically employed to further provide the opportunity for movement and zoom on ODIs, but applying it to the image level often results in blurry effect and aliasing problem. In this paper, we propose a novel deep learning-based approach, called **OmniZoomer**, to incorporate the Mobius transformation into the network for movement and zoom on ODIs. By learning various transformed feature maps under different conditions, the network is enhanced to handle the increasing edge curvatures, which alleviates the blurry effect. Moreover, to address the aliasing problem, we propose two key components. Firstly, to compensate for the lack of pixels for describing curves, we enhance the feature maps in the high-resolution (HR) space and calculate the transformed index map with a spatial index generation module. Secondly, considering that ODIs are inherently represented in the spherical space, we propose a spherical resampling module that combines the index map and HR feature maps to transform the feature maps for better spherical correlation. The transformed feature maps are decoded to output a zoomed ODI. Experiments show that our method can produce HR and high-quality ODIs with the flexibility to move and zoom in to the object of interest. Project page is available at [http://vilslab22.github.io/OmniZoomer/](http://vilslab22.github.io/OmniZoomer/).
## 1 Introduction
Omnidirectional images (ODIs) have garnered significant attention as a means to maximize the amount of content and context captured within a single image, and
there is a growing demand for utilizing such visual content within devices, _e.g._, mobile apps and head-mounted displays (HMDs) for virtual reality (VR) [38]. To provide an interactive experience, these devices enable users to control the view direction. However, most \(360^{\circ}\) cameras have a fixed focal length and do not support optical zoom, which causes the apparent size of objects in ODIs fixed. This limits the immersive experience when users expect to move and zoom in to an object of interest to see more details.
Generally, there exist three solutions to zoom in on the equirectangular projection (ERP) format ODIs or their perspective patches. The first is to zoom in on ERP images uniformly. However, as ERP images have non-uniform pixel density in different latitudes [8], uniform zoom can severely distort the object shapes. The second is to zoom in on perspective patches projected from ODIs. As the perspective patches of ODI have uniform pixel density [10], distortion problem can be solved. However, due to the limited FoV, these patches only concentrate on local regions and ignore the relationship between each other during transformations. Thirdly, Mobius transformation has recently been employed to provide movement and zoom freedom on ODIs [34; 15; 27]. It is the only conformal bijective transformation on the sphere that preserves angles. However, applying Mobius transformation on the image level often leads to blurry and aliasing problems due to two reasons. Firstly, zoom-in makes a portion of the ODIs enlarged, making the enlarged region blurry and pixelated. Moreover, if \(360^{\circ}\) cameras are placed vertically, the ODIs suffer from distortion mainly in high-latitude regions and remain lots of straight lines in equator regions. After transformations, the appearance of the vertically captured ODIs varies greatly, resulting in more curves in both high-latitude and equator regions (See Fig. 2). Describing these curves with the same amount of pixels that originally represent straight lines becomes challenging.
To obtain high-quality ODIs after movement and zoom, in this paper, we propose a novel deep learning-based approach, dubbed **OmniZoomer**, to incorporate the Mobius transformation into the network for freely moving and zooming in on ODIs, as shown in Fig. 1. By learning transformed feature maps in various conditions, the network is enhanced to handle the increasing curves caused by movement and zoom, as well as the inherent spherical distortion in ODIs. In this case, the blurry effect can be solved to some extent, but the aliasing problem still exists, such as edge discontinuity and shape distortion (See Fig. 9(c)).
To further address the aliasing problem, we propose two key components. Firstly, to compensate for the lack of pixels for describing curves, we propose to enhance the extracted feature maps to high-resolution (HR) space before the transformation. The HR feature maps contain more fine-grained textural details, and are sufficient to represent the increasing curvatures and maintain the object shapes precisely. We then propose a spatial index generation module (Sec. 3.2) to calculate the transformed index map based on the HR feature maps and Mobius transformation matrix, which can be conducted on the HR feature space. Although applying Mobius transformation on HR images with existing super-resolution (SR) methods [40; 44; 24] can serve the same purpose, this solution is sub-optimal because the models might not handle the increasing curves (See Tab. 1 and Fig. 6). In addition, some image warping methods [36; 22] can learn the warping process in the network but are constrained to estimate spatial-varying grids on the 2D plane, rather than the sphere. There are also some SR models designed for ODIs [8; 43]. However, they are limited to vertically captured ODIs or predetermined data structures.
Subsequently, we propose a spherical resampling module that combines the HR feature maps and transformed index maps for feature map transformation. The spherical resampling is inspired by the inherent spherical representation of the ODIs and the spherical conformality of Mobius transformation. It resamples based on the spherical geodesic of two points on the sphere, which better relates the original HR feature maps and transformed ones. With HR feature representation and the spherical resampling module, OmniZoomer alleviates the blurry effect and aliasing problem substantially, enabling moving and zooming in to an object of interest on ODIs with preserved shapes and continuous curves. Finally, these feature maps are processed with a decoder to output a zoomed ODI. After movement and zoom, OmniZoomer can generate more precise visual results with clear textural and structural details (See Fig. 2).
As collecting real-world ODI pairs under Mobius transformation is difficult, we propose a dataset based on ODI-SR dataset [8], dubbed ODIM dataset, containing synthesized ODIs with various Mobius transformations. We evaluate the effectiveness of OmniZoomer on the ODIM dataset under various Mobius transformations and up-sampling factors. The experimental results show that OmniZoomer outperforms existing methods quantitatively and qualitatively.
Figure 2: Visual comparisons of different methods for movement and zoom. Our OmniZoomer predicts more continuous lines.
The main contributions of this paper can be summarized as follows: (**I**) We propose a novel deep learning-based approach, called _OmniZoomer_, to incorporate the Mobius transformation into the deep network. (**II**) We enhance the feature maps to HR space and calculate the HR index map with a spatial index generation module. We also propose a spherical resampling module for better spherical correlation. (**III**) We establish ODIM dataset for supervised training. Compared with existing methods, OmniZoomer achieves the state-of-the-art performance under various Mobius transformations and up-sampling factors.
## 2 Related Work
**Application of Mobius Transformation.** One of the main immersive experience in \(360^{\circ}\) devices is the control by the viewers. Although current devices can provide the opportunity to control view directions and field-of-views (FoVs) [5], the zoom quality needs to be improved to see more details [33]. Mobius transformation has been applied on ODIs, including straight line rectification [32, 15, 14],, stereo pairs rectification [16], and rotation and zoom [34]. However, these methods operate on the image level, whose performance relies heavily on the quality of the raw ODI.
Recently, in [41], Mobius transformation has been employed in deep learning for feature augmentation. Especially, [41] fuses the features, which are applied multiple transformations to predict the raw ODI and address the spherical distortion. However, [41] dose not generate the HR and high-quality transformed ODIs. Moreover, Mobius transformation has also been applied for data augmentation [45], activation function [25, 31], pose estimation [3], and convolutions [26]. Although Mobius convolution [26] shows a strong capability of spherical equivalence, it requires the spherical harmonic transform in each convolution block, resulting in low computational efficiency. _In this work, we propose a learning-based approach to improve the textural and structural details of ODIs when moving and zooming in to an object of interest._
**Image Warping.** It is widely utilized in various tasks, _e.g_., optical flow estimation [6] and video SR [23]. Generally, it is conducted by calculating transformed spatial indices, and resampling information from the input images based on the transformed indices [18]. Considering jagging and blurry effects of image warping [36], SRWarp interprets the image warping as a spatially-varying SR problem and proposes an adaptive warping layer to estimate the rotation during warping. SRWarp also shows that simply concatenating existing SR models [24, 40, 44, 17] with warping operation is sub-optimal. Furthermore, LTEW [22] estimates the varied shape and integrates the priors into an implicit representation in the Fourier space. _Differently, we focus on transforming and resampling on the sphere with a curved surface. Our proposed spherical resampling module outperforms these warping methods significantly. (See Tab. 3)._
**ODI Super-Resolution.** Traditional ODI SR methods primarily utilize a sequence of low-resolution (LR) ODIs to stitch an HR ODI [1, 2, 4, 20, 28]. Recently, [12] and [29] propose learning-based SR methods that incorporate the distortion maps to tackle spherical distortions. [30] employs adversarial learning for ODI SR, but only treats ODIs as 2D planar images. Observing that different latitudes have non-uniform pixel densities, LAU-Net [8] crops ODIs into different latitude bands and dynamically up-samples these
Figure 3: **The overall pipeline of the proposed OmniZoomer. With the spatial index generation module and spherical resampling module, OmniZoomer can provide users with a flexible way to zoom in and out to objects of interest, such as the enlarged center building.**
bands. However, after transformations, _e.g_., movement and zoom, the bands can not be simply cropped along latitudes. SphereSR [43] proposes to super-resolve an LR ODI to an HR ODI with arbitrary projection types. Nevertheless, the predetermined spherical data structure can not adapt to the transformed ODIs. _Unlike these methods, we address a new task of incorporating the Mobius transformation into the network to move and zoom in to the object of interest on ODIs with high-quality textural and structural details._
## 3 Methodology
**Overview.** As shown in Fig. 3, we propose a novel end-to-end pipeline, dubbed _OmniZoomer_, which allows for free movement of the "eyes" to objects of interest and zooming in directly on the sphere with preserved shapes and high-quality textural details. Firstly, we extract HR feature maps \(F_{\text{UP}}\in\mathbb{R}^{H\times W\times C}\) from the input ODI \(I_{\text{IN}}\in\mathbb{R}^{h\times w\times 3}\) through an encoder and an up-sampling block (Sec. 3.1). With \(F_{\text{UP}}\)'s index map \(X\in\mathbb{R}^{H\times W\times 2}\) as the input, we propose the spatial index generation module (Sec. 3.2) to apply the Mobius transformation [19] with arbitrary parameters on \(X\) for the transformed spatial index map \(Y\in\mathbb{R}^{H\times W\times 2}\). Note that the channel numbers of \(X\) and \(Y\) indicate the longitude and latitude, respectively. Subsequently, we introduce a spherical resampling module (Sec. 3.3) that generates the transformed HR feature maps \(F_{\text{M}}\in\mathbb{R}^{H\times W\times C}\) by resampling the pixels on the sphere guided by the transformed index map \(Y\). Finally, we decode the feature maps to output a zoomed-in ODI where the region of interest is a clear close-up shot. The decoder consists of three ResBlocks [24] and a convolution layer. We take the same parameters used in the spatial index generation module to transform the HR ground truth ODIs, and employ the \(L1\) loss as the supervision loss. We now provide detailed descriptions of these components.
### Feature Extraction
Given an ODI \(I_{\text{IN}}\in\mathbb{R}^{h\times w\times 3}\) with the ERP format, we first apply an encoder consisting of several convolution layers to extract the feature maps \(F_{\text{IN}}\in\mathbb{R}^{h\times w\times C}\). Accordingly, we design an upsampling block with several pixel-shuffle layers [35] to generate the HR feature maps \(F_{\text{UP}}\in\mathbb{R}^{H\times W\times C}\), where \(H=s*h\), \(W=s*w\), \(s\) is the scale factor and \(C\) is the channel number. Especially, we apply the Mobius transformation on the HR feature maps based on two considerations: 1) _The blurry effect on image level_. By learning various transformations, the extracted feature maps demonstrate an enhanced representation capability in handling increasing edge curvatures and solving the blurry effect. 2) _The aliasing problem._ For instance, in Fig. 4(a), due to the insufficient pixels to describe continuous and clear curves after transformations, the shape of the railing is distorted. Moreover, the aliasing problem is challenging to tackle even if super-resolving the transformed ODIs. In contrast, applying the Mobius transformation to the super-resolved ODI has a significant improvement, as demonstrated in Fig. 4(b).
### Spatial Index Generation
In this section, we apply the Mobius transformation on the spatial index map \(X\) of HR feature maps \(F_{\text{UP}}\) and generate the transformed spatial index map \(Y\) for the subsequent resampling operation. Mobius transformation is known as the only conformal bijective transformation between the sphere and the complex plane. To apply the Mobius transformation on the HR feature maps \(F_{\text{UP}}\), we first use spherical projection (SP) to project the spatial index map \(X\) from spherical coordinates \((\theta,\phi)\) (where \(\theta\) represents the longitude and \(\phi\) represents the latitude) to the Riemann sphere \(\mathbb{S}^{2}=\{(x,y,z)\in\mathbb{C}^{3}|x^{2}+y^{2}+z^{2}=1\}\), formulated as:
\[\text{SP}:\begin{pmatrix}x\\ y\\ z\end{pmatrix}=\begin{pmatrix}\cos(\phi)\cos(\theta)\\ \cos(\phi)\sin(\theta)\\ \sin(\phi)\end{pmatrix}. \tag{1}\]
Then, using stereographic projection (STP) [11], we can project a point \((x,y,z)\) of the Riemann sphere \(\mathbb{S}^{2}\) onto the complex plane and obtain the projected point (\(x^{\prime}\), \(y^{\prime}\)). Let point \((0,0,1)\) be the pole, STP can be formulated as:
\[\text{STP}:x^{\prime}=\frac{x}{1-z}\;,\;y^{\prime}=\frac{y}{1-z}. \tag{2}\]
Subsequently, given the projected point \(p\) (\(Z_{p}\) = \(x^{\prime}\)+\(iy^{\prime}\)) on the complex plane, we can conduct the Mobius transformation with the following formulation:
\[f(Z_{p})=\frac{aZ_{p}+b}{cZ_{p}+d}, \tag{3}\]
where \(a\), \(b\), \(c\), and \(d\) are complex numbers satisfying \(ad-bc\neq 0\). Finally, we apply the inverse stereographic projection \(\text{STP}^{-1}\) and inverse spherical projection \(\text{SP}^{-1}\) to
Figure 4: Comparisons of directly applying Möbius transformation on the ODI and on the super-resolved ODI.
re-project the complex plane into the ERP plane:
\[\begin{split}\text{STP}^{-1}:\begin{pmatrix}x\\ y\\ z\end{pmatrix}&=\left(\begin{array}{c}\frac{2x^{\prime}}{1+x^{\prime 2}+y^{\prime 2 }}\\ \frac{2y^{\prime}}{1+x^{\prime 2}+y^{\prime 2}}\\ \frac{-1+x^{\prime 2}+y^{\prime 2}}{1+x^{\prime 2}+y^{\prime 2}}\\ \end{array}\right)\ ;\\ \text{SP}^{-1}:\begin{pmatrix}\theta\\ \phi\end{pmatrix}&=\left(\begin{array}{c}\arctan(y/x)\\ \arcsin(z)\end{array}\right)\.\end{split} \tag{4}\]
In summary, as shown in Fig. 3, we first project the input index map \(X\) to the complex plane using SP (Eq. 1) and STP (Eq. 2), and then conduct the Mobius transformation with Eq. 3, and generate the transformed index map \(Y\) through the inverse STP (Eq. 4) and inverse SP (Eq. 4). After transformation, both the indices (represented with matrix and gradient color) and grid shapes (represented with black lines) in \(Y\) have a noticeable change compared with \(X\).
### Spherical Resampling
As spatial indices recorded in \(Y\) are not equidistant, it is necessary to design a resampling method to calculate the feature values for the transformed feature maps \(F_{\text{M}}\) based on \(Y\). Generally, the resampling process can be divided into three steps. The first step is to determine the neighboring pixel set \(N_{q}\) of the query pixel \(q\), _i.e._, the four corner pixels \(\{p_{i}\in N_{q},i=0,1,2,3\}\), as illustrated in Fig. 5(a). The second step is to calculate the weight \(w_{q,p_{i}}\) for each neighboring pixel \(p_{i}\in N_{q}\), _i.e._, the partial area \(S_{i}\). The third step is to calculate the weighted average of neighboring feature values: \(F(q)=\sum_{p_{i}\in N_{q}}w_{q,p_{i}}F(i)\).
Previous image warping methods, _e.g._, SRWarp [36] and LTEW [22], consider the rotation of the local varied grid with Jacobian matrix. In this case, the resampling bases are recalculated according to the grid rotation, and the resampling weight \(w_{q,p_{i}}\) is re-projected to the new bases, either explicitly [36] or implicitly [22]. Although these methods can be used directly for Mobius transformation on ODIs, they can not deal with the spherical representation of ODIs due to two key reasons: 1) As shown in Fig. 5(a), the partial area (marked with blue region) in 2D plane is stretched non-uniformly when projected to the spherical surface due to spherical distortion; 2) Although the resampling bases can be corrected with rotation, the resampling process is still limited to 2D plane, which is sub-optimal for describing the relationship between two points on the sphere.
Inspired by the inherent spherical representation of ODIs and the spherical conformality of Mobius transformation, we propose the spherical resampling module to generate the transformed feature maps \(F_{\text{M}}\). The spherical resampling module directly resamples on the curved sphere based on the spherical geodesic of two points on the sphere. Given a query pixel \(q\) with the spatial index \((\theta_{q},\phi_{q})\) from the index map \(Y\), we choose its four corner pixels \(\{p_{i},i=0,1,2,3\}\) as the neighbouring pixels, which are located on the feature maps \(F_{\text{UP}}\) (as shown in the left of Fig. 5(b)). The indices of the neighboring pixels satisfy the following conditions: \(\theta_{0}=\theta_{3}\), \(\theta_{1}=\theta_{2}\), \(\phi_{0}=\phi_{1}\), and \(\phi_{2}=\phi_{3}\). To obtain the feature value of the query pixel \(q\), we employ the spherical linear interpolation (Slerp) [13], which is a constant-speed motion along the spherical geodesic of two points on the sphere, formulated as follows:
\[\text{Slerp}(a,b)=\frac{\sin(1-t)\beta}{\sin\beta}a+\frac{\sin t\beta}{\sin \beta}b, \tag{5}\]
where \(\beta\) is the angle subtended by \(a\) and \(b\), and \(t\) is the resampling weight. Note that \(t\) is easy to determine if \(a\) and \(b\) are located on the same longitude. Therefore, we calculate the feature value of pixel \(q\) with two steps. Firstly, we resample \(p_{0},p_{1}\) and \(p_{2},p_{3}\) to \(p_{01}\) and \(p_{23}\), respectively, as shown in the right of Fig.5(b). Taking the resampling of \(p_{0,1}\) as example, the formulation can be described as:
\[F(p_{01})=\frac{\sin(1-t_{01})\alpha_{01}}{\sin\alpha_{01}}F(p_{0})+\frac{\sin t _{01}\alpha_{01}}{\sin\alpha_{01}}F(p_{1}), \tag{6}\]
where \(\alpha_{01}\) is the angle subtended by \(p_{0}\) and \(p_{1}\), and the weight \(t_{01}\) is decided by the location of \(p_{01}\) on the curve \(p_{0}\widehat{p_{1}}\). Notably, \(t_{01}\) should ensure \(p_{01}\) to have the same longitude with the query pixel \(q\). Similarly, \(\alpha_{23}\) is the angle subtended by \(p_{2}\) and \(p_{3}\), and \(p_{23}\) also has the same longitude with the query pixel \(q\) by calculating the weight \(t_{23}\). After that, we follow the Slerp (Eq. 5) to calculate the feature value \(F_{q}\) as follows:
\[F(q)=\frac{\sin(1-t_{q})\Omega}{\sin\Omega}F(p_{01})+\frac{\sin t_{q}\Omega}{ \sin\Omega}F(p_{23}), \tag{7}\]
where \(\Omega\) is the angle subtended by \(p_{01}\) and \(p_{23}\), and \(t_{q}\) is decided by the location of \(q\) on the curve \(p_{01}\widehat{p_{23}}\). _Due to
Figure 5: (a) Linear resampling is related to the the partial area \(S_{i}\) diagonally opposite to the corner pixel \(i\). (b) Spherical resampling considers the angles (_i.e._, \(\alpha_{01}\), \(\alpha_{23}\), \(\Omega\)) between points on the sphere, which are corresponding to the red solid curves.
the page limit, more formulations about the parameter \(t_{01}\), \(t_{23}\) and \(t_{q}\) can be found in the supplementary material._ Actually, our spherical resampling module calculates the angular relationship of query pixels and their corresponding corner pixels, which can better describe the resampling on the spherical surface with curvatures. Meanwhile, there is no need to estimate the transformed grid shape like [22], because Mobius transformation is conformal on the sphere that preserves the angles subtended by two curves.
## 4 Experiment
### Dataset and Implementation Details
**Datasets.** No datasets for ODIs under Mobius transformations exist and collecting real-world ODI pairs with corresponding Mobius transformation matrices is difficult. Thus, we propose ODI-Mobius (ODIM) dataset to train our OmniZoomer and compared methods in a supervised manner. Our dataset is based on the ODI-SR dataset [9] with 1191 images in the train set, 100 images in the validation set, and 100 images in the test set. During training, we applied various Mobius transformations via setting random parameters \(\{a,b,c,d\}\) of Eq. 3. Each Mobius transformation includes all horizontal rotation, vertical rotation, and zoom, as we aim to move and zoom in on ODIs. _More details can be found in the Suppl material_. During validating and testing, we assign a fixed Mobius transformation matrix for each ODI. Besides, we further test on SUN360 [42] dataset with 100 images.
**Implementation details.** The resolution of the HR ERP images is \(1024\times 2048\), and the up-sampling factors we choose are \(\times 8\) and \(\times 16\). We use L1 loss, which is optimized by Adam optimizer [21], with an initial learning rate of 1e-4. The batch size is 2 when using EDSR-baseline [24] as back
\begin{table}
\begin{tabular}{c||c|c|c|c||c|c|c|c} Scale & \multicolumn{4}{c||}{\(\times 8\)} & \multicolumn{4}{c}{\(\times 16\)} \\ \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{ODI-SR} & \multicolumn{2}{c||}{SUN 360} & \multicolumn{2}{c|}{ODI-SR} & \multicolumn{2}{c}{SUN 360} \\ \cline{2-9} & WS-PSNR & WS-SSIM & WS-PSNR & WS-SSIM & WS-PSNR & WS-SSIM & WS-PSNR & WS-SSIM \\ \hline \hline Bicubic & 26.77 & 0.7725 & 25.87 & 0.7103 & 24.79 & 0.7404 & 23.87 & 0.6802 \\ \hline EDSR-baseline(+Transform) [24] & 27.42 & 0.7930 & 26.97 & 0.7468 & 25.39 & 0.7572 & 24.66 & 0.7011 \\ \hline Ours-EDSR-baseline & 27.48 & 0.7949 & 27.15 & 0.7526 & 25.47 & **0.7600** & 24.79 & **0.7050** \\ \hline \hline RRDB(+Transform) [40] & 27.45 & 0.7946 & 27.10 & 0.7515 & 25.42 & 0.7578 & 24.72 & 0.7033 \\ \hline RCAN(+Transform) [44] & 27.46 & 0.7906 & 27.04 & 0.7443 & 25.45 & 0.7541 & 24.70 & 0.7001 \\ \hline ETDS(+Transform) [7] & 27.38 & 0.7912 & 26.84 & 0.7418 & 25.42 & 0.7572 & 24.65 & 0.7012 \\ \hline Omni-SR(+Transform) [39] & 27.45 & 0.7920 & 26.99 & 0.7463 & 25.45 & 0.7574 & 24.68 & 0.7010 \\ \hline LAU-Net(+Transform) [8] & 27.25 & 0.7813 & 26.77 & 0.7363 & 25.23 & 0.7455 & 24.49 & 0.6921 \\ \hline SRWarp [36] & 27.43 & 0.7911 & 27.12 & 0.7495 & 25.40 & 0.7570 & 24.73 & 0.7014 \\ \hline LTEW [22] & 27.32 & 0.7899 & 26.85 & 0.7420 & 25.39 & 0.7558 & 24.63 & 0.6996 \\ \hline Ours-RCAN & **27.53** & **0.7970** & **27.34** & **0.7592** & **25.50** & 0.7584 & **24.84** & 0.7034 \\ \hline \end{tabular}
\end{table}
Table 1: **Quantitative comparison of Möbius transformation results on ODIs. \((+\mathrm{Transform})\) denotes that we first employ a scale-specific SR model for image SR and then conduct image-level Möbius transformation on the SR image. We report on ODI-SR dataset and SUN360 dataset with up-sampling factors \(\times 8\) and \(\times 16\). Bold indicates the best results.**
Figure 6: Visual comparisons of Möbius transformation results with \(\times 8\) up-sampling factor on ODI-SR dataset.
bone, while the batch size is 1 when using RCAN [44] as backbone. Especially, considering the spherical imagery of ODIs, we use specific WS-PSNR [37] and WS-SSIM [46] metrics for evaluation.
### Quantitative and Qualitative Evaluation
**Move and Zoom in:** As OmniZoomer is the first learning-based method, there are no prior-arts that can be directly compared. For fair and sufficient evaluation, we design two types of comparative experiment. First, we combine the existing image SR models for 2D planar images and ODIs [24, 40, 44, 7, 39, 8] with image-level Mobius transformations, whose resampling process is achieved by nearest interpolation. This way, we compare OmniZoomer with these approaches under various Mobius transformations. The SR models designed for 2D planar images are retrained with their provided hyperparameters. Secondly, we further compare our OmniZoomer with existing image warping methods [36, 22] and incorporate Mobius transformations into their learning process.
Tab. 1 provides a quantitative comparison of different methods for various Mobius transformations with up-sampling factors \(\times 8\) and \(\times 16\). We use a lightweight backbone EDSR-baseline [24] and a deep backbone RCAN [44]. _OmniZoomer with EDSR-baseline as backbone outperforms several 2D SR models with image-level transformations_, _e.g_., EDSR-baseline [24], RRDB [40] and RCAN [44], in all metrics. It reveals the effectiveness of our OmniZoomer incorporating Mobius transformation into the feature representation. Compared with the ODI-specific SR method LAU-Net [8], our OmniZoomer also achieves better performance. Note that LAU-Net shows lower performance than SR models designed for 2D planar images, _e.g_., EDSR-baseline. We ascribe it to that LAU-Net is limited
\begin{table}
\begin{tabular}{c||c|c|c||c|c|c|c|c} \hline \multirow{2}{*}{Scale} & \multicolumn{4}{c||}{\(\times 8\)} & \multicolumn{4}{c}{\(\times 16\)} \\ \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{ODI-SR} & \multicolumn{2}{c||}{SUN 360} & \multicolumn{2}{c|}{ODI-SR} & \multicolumn{2}{c}{SUN 360} \\ \cline{2-9} & WS-PSNR & WS-SSIM & WS-PSNR & WS-SSIM & WS-PSNR & WS-SSIM & WS-PSNR & WS-SSIM \\ \hline \hline Bicubic & 19.64 & 0.5908 & 19.72 & 0.5403 & 17.12 & 0.4332 & 17.56 & 0.4638 \\ \hline EDSR [24] & 23.97 & 0.6483 & 23.79 & 0.6472 & 22.24 & 0.6090 & 21.83 & 0.5974 \\ \hline RCAN [44] & 24.26 & 0.6554 & 23.88 & 0.6542 & 22.49 & 0.6176 & 21.86 & 0.5938 \\ \hline
360-SS [30] & 24.14 & 0.6539 & 24.19 & 0.6536 & 22.35 & 0.6102 & 22.10 & 0.5947 \\ \hline SphereSR [43] & 24.37 & 0.6777 & 24.17 & 0.6820 & 22.51 & 0.6370 & 21.95 & 0.6342 \\ \hline LAU-Net [8] & 24.36 & 0.6602 & 24.24 & 0.6708 & 22.52 & 0.6284 & 22.05 & 0.6058 \\ \hline LAU-Net+ [9] & **24.63** & **0.6815** & 24.37 & 0.6710 & **22.97** & **0.6316** & **22.22** & 0.6111 \\ \hline Ours-EDSR-baseline & 24.48 & 0.6756 & 24.31 & 0.7019 & 22.65 & 0.6304 & 22.09 & 0.6449 \\ \hline Ours-RCAN & 24.53 & 0.6797 & **24.41** & **0.7106** & 22.66 & 0.6304 & 22.12 & **0.6454** \\ \hline \end{tabular}
\end{table}
Table 2: **Quantitative comparison of ODI SR task.** The numbers are excerpted from [9] except for [43], due to its reported results are obtained by utilizing 800 training images in the ODI-SR dataset. We report \(\times 8\), \(\times 16\) SR results on the ODI-SR and SUN360 datasets. Bold indicates the best results, and blue indicates the second-best results.
Figure 7: Visual comparisons of different methods for Möbius transformation with \(\times 8\) up-sampling factor on SUN360 dataset.
to only consider the ODIs captured with vertically placed \(360^{\circ}\) cameras, which have no movement and zoom. By applying deeper backbone RCAN, _OmniZoomer outperforms existing methods in all metrics, all up-sampling factors, and test sets_. For example, compared with LAU-Net [8], OmniZoomer has a 0.57dB improvement of WS-PSNR on SUN360 dataset with \(\times 8\) up-sampling factor. As shown in Fig. 6, our OmniZoomer predicts clearer wood strips with high-quality textural details with \(\times 8\) up-sampling factor, which are missing in other methods' predictions. It shows the effectiveness of our HR feature representation and spherical resampling. Similarly, in Fig. 7, OmniZoomer reconstructs more complete structures of the building and preserves the shape of buildings after transformations.
**Direct SR:** Although our work shares a different purpose with the image SR methods, it can also perform SR when the Mobius transformation matrix is applied as the identity matrix. In this case, OmniZoomer is degraded to a conventional SR model, except for the spherical resampling module. Tab. 2 shows the quantitative results of OmniZoomer with two backbones, exhibiting that OmniZoomer with RCAN as backbone obtains 3 (total 8) best metrics, while OmniZoomer with EDSR-baseline as backbone obtains 2 (total 8) second best metrics. _OmniZoomer has a strong capability to handle the increasing curves and the inherent distortions on ODIs._
### Ablation Studies
**Spherical resampling module.** Tab. 3 illustrates that our spherical resampling module achieves the best performance compared with both applying traditional resampling algorithm (_e.g._, Bicubic), and estimated base rotation in the image warping method [36]. For example, our spherical resampling module obtains 0.07dB WS-PSNR gain compared with utilizing Jacobian matrix for base rotation. Also, by adding a multi-layer perceptron (MLP) to make the 2D rotation estimation learnable, the performance gain is limited (0.01dB). It is mainly because that [36] estimates the 2D rotation, which is not applicable for 3D rotation on the sphere surface. Compared with these planar resampling methods, spherical resampling module has obvious improvement benefiting from fitting the sphere surface with curvatures.
\begin{table}
\begin{tabular}{c|c} \hline Methods & Number of parameters \\ \hline LAU-Net [8] & 9.4M \\ RCAN [44] & 15.9M \\ OmniZoomer-EDSR-baseline & 1.9M \\ OmniZoomer-RCAN & 16.0M \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of the number of parameters (million), which is conducted on ODI-SR dataset with \(\times 8\) up-sampling factor.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Method & WS-PSNR & WS-SSIM \\ \hline Bicubic & 27.42 & 0.7908 \\ Jacobian [36] & 27.39 & 0.7909 \\ Jacobian [36]+MLP & 27.40 & 0.7923 \\ Spherical (Ours) & 27.46 & 0.7930 \\ Spherical+ResBlocks (Ours) & **27.48** & **0.7949** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation results of different **resampling methods**. We evaluate with \(\times 8\) up-sampling factor on ODI-SR dataset.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Where to apply Möbius Trans. & WS-PSNR & WS-SSIM \\ \hline Input image level & 26.06 & 0.7621 \\ Input feature level & 27.03 & 0.7823 \\ HR feature level & **27.48** & **0.7949** \\ HR Image level & 27.41 & 0.7914 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation studies on different **positions for conducting Möbius transformation**. We evaluate with \(\times 8\) up-sampling factor on the ODI-SR dataset.
Figure 8: Visual comparisons of different resampling methods with \(\times 8\) up-sampling factor.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Integration & ✓ & ✗ \\ \hline WS-PSNR & 27.41 & **27.48** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation studies on **whether to integrate** the two processes of feature up-sampling and Möbius transformation.
Figure 9: Visual comparisons of different positions for conducting Möbius transformation with \(\times 8\) up-sampling factor.
This can be verified qualitatively in Fig. 8, where spherical resampling recovers more continuous edges of the wind-mill. Furthermore, by adding ResBlocks [24] into the decoder, the transformed feature maps can be further refined. It brings 0.02dB gain in WS-PSNR metric.
**Different positions for Mobius transformation.** There are totally four possible positions to conduct Mobius transformation, _i.e_., the input image level, the input feature level, the HR feature level, and the HR image level (output of the network). Tab. 4 demonstrates that conducting Mobius transformation in the input image level and input feature level is not applicable, due to the severely destroyed structures, which are difficult to reconstruct in the HR space. Also, conducting Mobius transformation on the HR image level leads to sub-optimal results as the network has no knowledge to handle increasing edge curvatures in various transformations, _e.g_., movement and zoom. From Fig. 9, we can see that conducting Mobius transformation on the HR feature level recovers the clearest pipelines on the ceiling.
**Integrating feature up-sampling and Mobius transformation.** By integrating them into a whole, we find that the performance drops by about 0.07dB WS-PSNR. The reason is about the aliasing problem in the input feature level.
**Number of parameters.** Our OmniZoomer with EDSR-baseline as the backbone is compact and performs better than LAU-Net in Mobius transformation tasks. Also, with 0.1M extra parameters, OmniZoomer-RCAN achieves a significant performance gain than RCAN [40]. _More details about computational costs and time consuming of each module can be found in the Suppl material._
## 5 Conclusion
In this paper, we proposed to incorporate the Mobius transformation into the network for freely moving and zooming in on ODIs. We found that ODIs under Mobius transformations suffer from blurry effect and aliasing problems due to zoomed-in regions and increasing edge curvatures. Based on the problems, we found that learning Mobius transformations on the HR feature level and resampling on the sphere surface enhance the network to predict clear curves and preserved shapes. We demonstrated that this deep learning-based network outperforms existing methods under various Mobius transformations. Therefore, OmniZoomer can produce HR and high-quality ODIs with the flexibility to move and zoom in to the object of interest.
**Limitation and Future Work:** This work can estimate HR and high-quality ODIs under various Mobius transformations. However, the parameters of the Mobius transformation need to be determined by users, according to the movement and zoom level. In this case, users might try for several times to determine the optimal transformation, which influences the interactive experiences. In the future work, we hope to learn to how to select an optimal transformation by only assigning the interested objects. This might include the techniques about omnidirectional object detection and scene understanding.
**Acknowledgement:** This work was supported by the CCF-Tencent Open Fund and the National Natural Science Foundation of China (NSFC) under Grant No. NSFC22FYT45.
|
2301.04225 | Inferring Gene Regulatory Neural Networks for Bacterial Decision Making
in Biofilms | Bacterial cells are sensitive to a range of external signals used to learn
the environment. These incoming external signals are then processed using a
Gene Regulatory Network (GRN), exhibiting similarities to modern computing
algorithms. An in-depth analysis of gene expression dynamics suggests an
inherited Gene Regulatory Neural Network (GRNN) behavior within the GRN that
enables the cellular decision-making based on received signals from the
environment and neighbor cells. In this study, we extract a sub-network of
\textit{Pseudomonas aeruginosa} GRN that is associated with one virulence
factor: pyocyanin production as a use case to investigate the GRNN behaviors.
Further, using Graph Neural Network (GNN) architecture, we model a single
species biofilm to reveal the role of GRNN dynamics on ecosystem-wide
decision-making. Varying environmental conditions, we prove that the extracted
GRNN computes input signals similar to natural decision-making process of the
cell. Identifying of neural network behaviors in GRNs may lead to more accurate
bacterial cell activity predictive models for many applications, including
human health-related problems and agricultural applications. Further, this
model can produce data on causal relationships throughout the network, enabling
the possibility of designing tailor-made infection-controlling mechanisms. More
interestingly, these GRNNs can perform computational tasks for bio-hybrid
computing systems. | Samitha Somathilaka, Daniel P. Martins, Xu Li, Yusong Li, Sasitharan Balasubramaniam | 2023-01-10T22:07:33Z | http://arxiv.org/abs/2301.04225v1 | # Inferring Gene Regulatory Neural Networks for Bacterial Decision Making in Biofilms
###### Abstract
Bacterial cells are sensitive to a range of external signals used to learn the environment. These incoming external signals are then processed using a Gene Regulatory Network (GRN), exhibiting similarities to modern computing algorithms. An in-depth analysis of gene expression dynamics suggests an inherited Gene Regulatory Neural Network (GRNN) behavior within the GRN that enables the cellular decision-making based on received signals from the environment and neighbor cells. In this study, we extract a sub-network of _Pseudomonas aeruginosa_ GRN that is associated with one virulence factor: pyocyanin production as a use case to investigate the GRNN behaviors. Further, using Graph Neural Network (GNN) architecture, we model a single species biofilm to reveal the role of GRNN dynamics on ecosystem-wide decision-making. Varying environmental conditions, we prove that the extracted GRNN computes input signals similar to natural decision-making process of the cell. Identifying of neural network behaviors in GRNs may lead to more accurate bacterial cell activity predictive models for many applications, including human health-related problems and agricultural applications. Further, this model can produce data on causal relationships throughout the network, enabling the possibility of designing tailor-made infection-controlling mechanisms. More interestingly, these GRNNs can perform computational tasks for bio-hybrid computing systems.
Gene Regulatory Networks, Graph Neural Network, Biofilm, Neural Network.
## I Introduction
Bacteria are well-known for their capability to sense external stimuli, for complex information computations and for a wide range of responses [1]. The microbes can sense numerous external signals, including a plethora of molecules, temperatures, pH levels, and the presence of other microorganisms [2]. The sensed signals then go through the Gene Regulatory Network (GRN), where a large number of parallel and sequential molecular signals are collectively processed. The GRN is identified as the main computational component of the cell [3], which contains about 100 to more than 11000 genes (the largest genome identified so far belongs to _Sorangium cellulosum_ strain So0157-2) [4]. Despite the absence of neural components, the computational process through GRN allows the bacteria to actuate through various mechanisms, such as molecular production, motility, physiological state changes and even sophisticated social behaviors. Understanding the natural computing mechanism of cells can lead to progression of key areas of machine learning in bioinformatics, including prediction of biological processes, prevention of diseases and personalized treatment [5].
Bacterial cells are equipped with various regulatory systems, such as single/two/multi-component systems including _Quorum sensing_ (QS), to respond to environmental stimuli. The receptors and transporters on cell membranes can react and transport extracellular molecules, which subsequently interact with respective genes. In turn, the GRN is triggered to go through a complex non-linear computational process in response to the input signals. In the literature, it has been suggested that the computational process through the GRN of a bacterial cell comprises a hidden neural network (NN)-like architecture [6, 7]. This indicates that, even though bacterial cells can be categorized as non-neural organisms, they perform neural decision-making processes through the GRN. This results in recent attention towards Molecular Machine Learning systems, where AI and ML are developed using molecular systems [8]. In these systems, several neural components can be identified in GRNs, in which genes may be regarded as
Fig. 1: Illustration of the Gene Regulatory Neural Networks (GRNN) extraction and the implementation of the GNN to model the biofilm. The diffusion of molecules from one cell to another is modeled as a vector, where \(m_{q}\) represents the concentration of the \(q^{th}\) molecular signal.
computational units or neurons, transcription regulatory factors as weights/biases and proteins/second messenger Molecular Communications (MC) as neuron-to-neuron interactions. Owing to a large number of genes and the interactions in a GRN, it is possible to infer sub-networks with NN behaviors that we term Gene Regulatory Neural Networks (GRNN). The non-linear computing of genes results from various factors that expand through multi-omics layers, including proteomic, transcriptomic and metabolomic data (further explained in Section II-A). In contrast, the GRNN is a pure NN of genes with summarized non-linearity stemmed from multi-omics layers with weights/biases.
Identification of GRNNs can be used to model the decision-making process of the cell precisely, especially considering simultaneous multiple MC inputs or outputs. However, due to the limited understanding and data availability, it is still impossible to model the complete GRN with its NN-like behaviors. Therefore, this study uses a GRNN of _Pseudomonas aeruginosa_ that is associated with _PhoR-PhoB_ and _BqsS-BqsR_ two-component systems (TCSs) and three QS systems related to pyocyanin production as a use case to explore the NN-like behaviors. Although a single bacterium can do a massive amount of computing, they prefer living in biofilms. Hence, in order to understand the biofilm decision-making mechanism, we extend this single-cell computational model to an ecosystem level by designing an _in-silico_ single species biofilm with inter-cellular MC signaling as shown in Fig. 1.
The contributions of this study are as follows:
* **Extracting a GRNN:** Due to the complexity and insufficient understanding of the gene expression dynamics of the full GRN, we only focus on a sub-network associated with pyocyanin production (shown in Fig. (a)a) to investigate the NN-like computational behavior of the GRN. Further, the genes of extracted sub-network are arranged following a NN structure that comprises input, hidden and output layers, as shown in Fig. (b)b.
* **Modeling a biofilm as a GNN:** The GRNN only represents the single-cell activities. To model the biofilm-wide decision-making process, we use a Graph Neural Network (GNN). First, we create a graph network of the bacterial cell and convert it to a GNN by embedding each node with the extracted GRNN as the update function. Second, the diffusion-based MCs between bacterial cells in the biofilm are encoded as the message-passing protocol of the GNN, as shown in Fig. (c)c.
* **Exploring the behaviors of the GRNN and intracellular MC dynamics to predict cell decisions:** The output of the GRNN is evaluated by comparing it with the transcriptomic and pyocyanin production data from the literature. Finally, an edge-level analysis of the GRNN is conducted to explore the causal relationships between gene expression and pyocyanin production.
This paper is organized as follows: Section II explains the background of bacterial decision-making in two levels: cellular-level in Section II-A and population-level in Section II-B, while the background on the _P. aeruginosa_ is introduced in Section II-C. Section III is dedicated to explaining the model design of cellular and population levels. The results related to model validation and the intergenic intra-cellular signaling pattern analysis are presented in Section IV and the study is concluded in Section V.
## II Background
As the model expands through single cellular and biofilm-wide decision-making layers, this section provides the background of how a bacterium uses the GRN to make decisions and how bacterial cells make decisions in biofilms. Moreover, we briefly discuss the cellular activities of the _Pseudomonas aeruginosa_ as it is the use case species of this study.
### _Decision-Making Process of an Individual Cell_
Prokaryotic cells are capable of sensing the environment through multiple mechanisms, including TCSs that have been widely studied and it is one of the focal points of this study. The concentrations of molecular-input signals from the extracellular environment influence the bacterial activities at the cellular and ecosystem levels [9]. Apart from the extracellular signals of nutrients, it is evident that the QS input signals have a diverse set of regulative mechanisms in biofilm-wide characteristics, including size and shape [10]. These input signals undergo a computational process through the GRN, exhibiting a complex decision-making mechanism. Past studies have explored and suggested this underpinning computational mechanism in a plethora of directions, such
Fig. 2: Extraction of a GRNN considering a specific sub-network of the GRN where a) is the two-component systems (TCSs) and QS network that is associated with the pyocyanin production, b) is the derived GRNN that is equipped with hypothetical nodes (**hns**) without affecting its computation process to form a symmetric network structure and c) is the conversion of real biofilm to the suggested _in-silico_ model.
as using differential equations [11] and probabilistic Boolean networks [12] and logic circuit [13]. All of these models mainly infer that the bacterial cells can make decisions not just based on the single input-output combinations, but they can integrate several incoming signals non-linearly to produce outputs.
The studies that focus on differences in gene expression levels suggest that a hidden weight behavior controls the impact of one gene on another [6]. This weight behavior emerges through several elements, such as the number of transcription factors that induce the expression, the affinity of the transcription factor binding site, and machinery such as thermoregulatory and enhancers/silencers [14, 15]. Fig. 3 depicts a set of factors influencing the weight between genes. The weight of an edge between two genes has a higher dynamicity as it is combinedly determined by several of these factors. Based on environmental conditions, the GRN of the bacterial cell adapts various weights to increase the survivability and repress unnecessary cellular functions to preserve energy. An example of such regulation is shown in Fig. 4 where a _P. aeruginosa_ cell uses a thermoregulatory to regulate the QS behaviors. Fig. 3(a) has a set of relative weights based on cell activities in an environment at 37\(\,{}^{\circ}\)C, while Fig. 3(b) represents weights at 30\(\,{}^{\circ}\)C. The weights between the **hn21** and _rhlR_ are different in two conditions, and these cellar activities are further explained in [14].
### _Biofilm Decision-Making_
Even though an individual cell is capable of sensing, computing, and actuating, the majority of bacterial cells live in biofilms, where the survivability is significantly increased compared to their planktonic state. Biofilm formation can cause biofouling and corrosion in water supply and industrial systems [16]. However, biofilms formation can be desirable in many situations, for example, bioreactors in wastewater treatment [17], bioremediation of contaminated groundwater [18, 19], where biofilms serve as platforms for biogeochemical reactions. A massive number of factors can influence biofilm formation, including substrant surface geometrical characteristics, diversity of species constituting the biofilm, hydrodynamic conditions, nutrient availability, and especially communication patterns [20] where the TCS and QS play significant roles. A TCS comprises a _histidine kinase_ that is the sensor for specific stimulus and a cognate response regulator that initiates expressions of a set of genes [21]. Hence, in each stage, essential functions can be traced back to their gene expression upon a response to the input signals detected by bacterial cells. For instance, in the first stage of biofilm formation, the attachment of bacteria to a surface is associated with sensing a suitable surface and altering the activities of the flagella. In the next stage, _rhramnolipids_ production is associated with ferric iron Fe\({}^{3+}\) availability in the environment, mostly sensed through _BqsS-BqsR_ TCSs. Further, Fe\({}^{3+}\) was identified as a regulator of _pqSA_, _pqsR_, and _pqsE_ gene expressions that are associated with the production of two critical components for the formation of microcolonies: eDNA and EPS [22]. Similarly, in the final stage, the dispersion process can also be traced back to a specific set of gene regulations, including _bdlA_ an _rbdA_[23, 24]. An understanding of the underlying decision-making process of bacteria may enable us to control their cellular activities.
### _Pseudomonas Aeruginosa_
The main reason for selecting _P. aeruginosa_ in this work lies in its alarming role in human health. For example, this species is the main cause of death in cystic fibrosis patients [25]. _P. aeruginosa_ is a gram-negative opportunistic pathogen with a range of virulence factors, including pyocyanin and cytotoxin secretion [26]. These secreted molecules can lead to complications such as respiratory tract ciliary dysfunction and induce proinflammatory and oxidative effects damaging the host cells [27]. The biofilms are being formed on more than 90% endotracheal tubes implanted in patients who are getting assisted ventilation, causing upper respiratory tract infections [28]. In addition, another important reason for targeting _P. aeruginosa_ is the data availability for the GRN structure [29], pathways [30], genome [31], transcriptome [32] and data from mutagenesis studies [33, 34]. Compared to the complexity of the GRN, the amount of data and information available on the
Fig. 4: Two GRNN setups with different weights associated with two environmental conditions. a) is the relative weight setup of _P. aeruginosa_ cell in 37\(\,{}^{\circ}\)C and b) is in 30\(\,{}^{\circ}\)C.
Fig. 3: Illustration of gene expression regulators that are considered the weight influencers of the edges of GRNN. Here, the \(\alpha_{(\varphi)}\), \(\alpha_{(\sim\varphi)}\), \(\alpha_{(TF)}\), \(\alpha_{(Rep)}\), \(\alpha_{(eTF)}\) and \(\alpha_{(sTF)}\) are relative concentrations of sigma factors, anti-sigma factors, transcription factors (TFs), repressors, enhancer-binding TFs and silencer-binding TFs respectively. Moreover, \(\beta_{(Prom)}\), \(\beta_{(Eph)}\), and \(\beta_{(Sil)}\) are the binding affinities of the promoter, operator, enhancer and silencers regions respectively.
gene-to-gene interactions and expression patterns is insufficient to develop an accurate full _in-silico_ model. Therefore, we chose a set of specific genes that are associated with QS, TCS, and pyocyanin production.
## III System design
This section explains the system design in two main phases, extracting a NN-like architecture from the GRN targeting the set of genes and creating a model of the biofilm ecosystem.
### _Extracting Natural Neural Network from GRN_
We first fetch the structure of the GRN graph from Reg-ulomePA [29] database that contains only the existence of interactions and their types (positive or negative). As the next step, using information from the past studies [35, 36, 37, 38], we identified the genes involved in the **Las**, **Rhl** and **PQS** QS systems, _PhoR-PhoB_ and _BqsS-BqsR_ TCSs, and pyocyanin production to derive the sub-network of GRN as shown in Fig 2a. We further explored the expression dynamics using transcriptomic data [39, 40] where we observed the non-linearity in computations that are difficult to capture with existing approaches such as logic circuits, etc. [6], making the NN approach more suitable. However, a NN model with a black box that is trained on a large amount of transcriptomic data records to do computations similar to the GRN has a number of limitations, especially in understanding the core of the computational process [41]. Our model does not use a conventional NN model; instead, we extract a NN from the interaction patterns of the GRN, which we consider a pre-trained GRNN. In this sub-network, we observed that the lengths of expression pathways are not equal. For example, the path from _PhoR-PhoB_ to the _phz2_ gene has two hops, but the path from the _BqsS-BqsR_ system to the _rhlR_ gene only has one hop. The extracted network has the structure of a random NN. Hence, we transform this GRNN to Gene Regulatory Feedforward Neural Network by introducing hypothetical nodes (**hns**) that do not affect the behaviors of the GRNN as shown in Fig 2b. In this transformation, we decide the number of hidden layers based on the maximum number of hops in gene expression pathways. In our network, the maximum number of hops is two, which determines the number of hidden layers as one, and then the number of hops of all the pathways is leveled by introducing **hns**. If a **hn** is introduced between a source and target genes, the edge weights from the source node to the **hn** and from **hn** to the target node are made "1" so that the **hn** does not have an influence on the regulation of genes. Moreover, if a gene does not induce another in the network, the weight of the edge between that pair is made "0".
Here, we summarize multiple factors of interaction into a weight that determines the transcriptional regulation of a particular gene. This regulation process occurs when the gene products get bound to the promoter region of another, influencing the transcriptional machinery. Hence, we observe this regulation process of a target gene as a multi-layered model that relies on the products of a set of source genes, the interaction between gene products, and the diffusion dynamics within the cell. Creating a framework to infer an absolute weight value using all the above factors is a highly complex task. In order to infer weight, one method is to train a NN model with the same structure as the GRN using a series of transcriptomic data. However, this approach also has numerous challenges, such as the lack of a sufficient amount of data in similar environments.
Therefore, we estimate a set of relative weights based on genomic, transcriptomic, and proteomic explanations of each interaction from the literature. The weights were further fine-tuned using the transcriptomics data. A relative weight value of an edge can be considered a summarizing of multi-layer transcriptional-translation to represent the impact of the source gene on a target gene.
In this computational process, we identify another layer of interactions that occur within the cell. The produced molecules by the considered TCs network go through a set of metabolic interactions that are crucial for the functionality of the cell. Since our primary goal is to explore the NN behaviors of GRN, we model these inter-cellular chemical reactions as a separate process, keeping the gene-to-gene interactions and metabolic interactions in two different layers. To model the complete pyocyanin production functionality of the cell, we use the inter-cellular molecular interactions shown in Fig 5. Here, RhlR is a transcriptional regulator of _P. aeruginosa_ that forms a complex by getting attached to its cognate inducer C4-HSL and then binds to the promoter regions of relevant genes [42]. Similarly, LasR transcriptional regulator protein and 3-oxo-C12-HSL (3OC), and PqsR with PQS and HHQ form complexes and get involved in the regulation of a range of genes [43, 44]. Further, C10H10O6 in the environment are converted by the _P. aeruginosa_ cells in multiple steps using the products of the GRN we consider. First, C10H10O6 is converted into phenazine-1-carboxylic using the enzymes of _pqsABCDEFG_ genes. Later, phenazine-1-carboxylic was converted into 5-Methylphenazine-1-carboxylate, and finally, 5-Methylphenazine-1-carboxylate into Pyocyanin by PhzM and PhzS, respectively [45].
Molecular accumulation within a bacterial cell can be considered its memory module where certain intra-cellular interactions occurs. Therefore, we define an internal memory matrix \(IM\) as,
\[\begin{array}{ccccc}&im_{1}&im_{2}&...&im_{J}\\ &B_{1}&\begin{pmatrix}C_{1,im_{1}}^{(t)}&C_{1,im_{2}}^{(t)}&...&C_{1,im_{J}}^{( t)}\\ C_{(2,im_{1})}^{(t)}&C_{(2,im_{2})}^{(t)}&...&C_{2,im_{J}}^{(t)}\\ \vdots&\vdots&\ddots&\vdots\\ &B_{P}&\begin{pmatrix}C_{(P,im_{1})}^{(t)}&C_{(P,im_{2})}^{(t)}&...&C_{(P,im_{J })}^{(t)}\\ \end{pmatrix},\end{array}\end{array} \tag{1}\]
where the concentration of the internal molecule \(im_{j}\) is
Fig. 5: Illustrations of intra-cellular metabolite interaction.
\(C_{(i,im_{j})}^{(t)}\).
GRNN process molecular signals from the environment and other cells. Hence, we used the approach of GNN as a scalable mechanism to model the MCs and biofilm wide decision-making process. The extreme computational power demand of modeling the diffusion-based MCs of each cell is also avoided by using this approach.
### _Graph Neural Network Modeling of Biofilm_
First, the biofilm is created as a graph network of bacterial cells where each node is a representation of a cell, and an edge between two nodes is a MC channel. We convert the graph network into a Graph Neural Network (GNN) in three steps: 1) embedding the extracted GRNN of pyocyanin production into each node as the update function, 2) encoding the diffusion-based cell-to-cell MC channels as the message passing scheme, and 3) creating an aggregation function at the reception of molecular messages by a node as shown in Fig. 6. Next, we define feature vectors of each node of the GNN to represent the gene expression profile of the individual cell at a given time. Subsequently, considering \(L\) is the number of genes in the GRNN, \(P\) is the number of bacterial cells in the biofilm and \(b_{(i,g)}^{(t)}\) is the expression of gene \(g_{l}\) by the bacteria \(B_{i}\), we derive the following matrix \(\mathbf{FV}^{(t)}\) that represents all the feature vectors of the GNN at time \(t\).
\[\mathbf{FV}^{(t)}=\begin{array}{ccccc}g_{1}&g_{2}&...&g_{L}\\ B_{1}&\begin{pmatrix}b_{(1,g_{1})}^{(t)}&b_{(1,g_{2})}^{(t)}&...&b_{(1,g_{L}) }^{(t)}\\ b_{(2,g_{1})}^{(t)}&b_{(2,g_{2})}^{(t)}&...&b_{(2,g_{L})}^{(t)}\\ \vdots&\vdots&\ddots&\vdots\\ B_{P}&\begin{pmatrix}b_{(P,g_{1})}^{(t)}&b_{(P,g_{2})}^{(t)}&...&b_{(P,g_{L}) }^{(t)}\end{pmatrix}\end{array}\end{array} \tag{2}\]
The computational output of the GRNN of each node results in the secretion of a set of molecules that are considered messages in our GNN model as illustrated in the Fig. 7.
When the number of molecular species considered in the network is \(Q\) and output \(m_{q}\) molecular message from bacterial cell \(B_{i}\) at TS \(t\) is \(msg_{(i,m_{q})}^{(t)}\), we derive the matrix
\[\mathbf{MSG}^{(t)}=\begin{array}{ccccc}m_{1}&m_{2}&...&m_{Q}\\ B_{1}&\begin{pmatrix}msg_{(1,m_{1})}^{(t)}&msg_{(1,m_{2})}^{(t)}&...&msg_{(1, m_{Q})}^{(t)}\\ msg_{(2,m_{1})}^{(t)}&msg_{(2,m_{2})}^{(t)}&...&msg_{(1,m_{Q})}^{(t)}\\ \vdots&\vdots&\ddots&\vdots\\ B_{P}&\begin{pmatrix}msg_{(P,m_{1})}^{(t)}&msg_{(P,m_{2})}^{(t)}&...&msg_{(P, m_{Q})}^{(t)}\end{pmatrix}\end{array} \tag{3}\]
Further, we use a static diffusion coefficients vector
\[\mathbf{D}=\{D_{m_{1}},D_{m_{2}},...,D_{m_{Q}}\}, \tag{4}\]
where \(D_{m_{q}}\) is diffusion coefficient of molecular species \(m_{q}\).
We define another matrix \(\mathbf{ED}\) that contains the euclidean distances between bacterial cells in the biofilm as follows
\[\mathbf{ED}=\begin{array}{ccccc}B_{1}&B_{2}&...&B_{P}\\ B_{1}&\begin{pmatrix}d_{(1,1)}&d_{(1,2)}&...&d_{(1,P)}\\ B_{2}&d_{(2,1)}&d_{(2,2)}&...&d_{(2,P)}\\ \vdots&\vdots&\ddots&\vdots\\ B_{P}&\begin{pmatrix}d_{(P,1)}&d_{(P,2)}&...&d_{(P,P)}\end{pmatrix}\end{array} \tag{5}\]
where \(d_{i,j}\) is the euclidean distance between the \(i^{th}\) and \(j^{th}\) cells.
Fig. 6: Illustration of the GNN components where a) is a snapshot of the bacterial network that has the gene expression profile as the feature vector. Further, this gene expression pattern of a cell is encoded to a message of secreted molecules where MC plays a crucial role. Moreover, b) shows the temporal behavior of the GNN, that the output of one graph snapshot influences the next.
Fig. 7: The process of one GRNN outputs reaching another GRNN as molecular messages.
The feature vector of \(i^{th}\) bacterial cell at the TS \(t+1\) is then modeled as,
\[\mathbf{FV}_{i}^{(t+1)}=GRNN_{i}(\mathbf{MSG}_{i}^{(t)}+S_{i}^{(t)}) \tag{6}\]
where \(MSG_{i}^{(t)}\) is the message generated by the same cell in the previous TS. The \(GRNN_{i}\) is the extracted GRNN that is the update function in the GNN learning process and \(\mathbf{S}_{i}^{(t)}=\mathbf{R}_{i}^{(t)}+\mathbf{K}_{i}^{(t)}\), is the aggregate function. In the aggregation component, the \(\mathbf{R}_{i}^{(t)}\) is the incoming signals from peer bacterial cells and \(K_{(i:m_{q})}^{(t+1)}\) is the external molecule input vector at the location of \(B_{i}\) and the TS \(t\) that is expressed as
\[\mathbf{K}_{i}^{(t+1)}=\big{\{}K_{i:m_{1}}^{(t+1)},K_{:i:m_{2}}^{(t+1)},...,K_{ :m_{Q}}^{(t+1)}\big{\}}. \tag{7}\]
In order to compute \(\mathbf{R}_{i}^{(t+1)}\), we use a matrix \(\mathbf{Y}_{i}\); \(\mathbf{Y}_{i}=\overset{\leftrightarrow}{1}_{[Q\times 1]}\times\)ED\({}_{i}\), where \(\overset{\leftrightarrow}{1}_{[Q\times 1]}\) is an all-ones matrix of dimension \(Q\times 1\). The \(\hat{g}\) matrix is then defined as follows,
\[\mathbf{\hat{g}}(\mathbf{D}^{\intercal},\mathbf{Y},t)= \tag{8}\] \[\begin{bmatrix}g(D_{m_{1}},d_{(i,1)},t)&g(D_{m_{1}},d_{(i,2)},t)&...&g(D_{m_{1}},d_{(i,P)},t)\\ g(D_{m_{2}},d_{(i,1)},t)&g(D_{m_{2}},d_{(i,2)},t)&...&g(D_{m_{2}},d_{(i,P)},t) \\ \vdots&\vdots&\ddots&\vdots\\ g(D_{m_{Q}},d_{(i,1)},t)&g(D_{m_{Q}},d_{(i,2)},t)&...&g(D_{m_{Q}},d_{(i,P)},t) \end{bmatrix}\]
In the above matrix, \(g(D_{m_{l}},d_{(i,j)},t)\) is the Green's function of the diffusion equation as shown below,
\[\hat{G}(D_{m_{l}},d_{(i,j)},t)=\frac{1}{\left(4\pi D_{m_{l}}t\right)^{\frac{ 3}{2}}}\exp\left(-\frac{d_{(i,j)}^{2}}{4D_{m_{l}}t}\right). \tag{9}\]
Further, the incoming signal vector \(\mathbf{R}_{i}^{(t+1)}\) is denoted as below,
\[\mathbf{R}_{i}^{(t+1)}=diag\big{(}\mathbf{\hat{g}}(\mathbf{D}^{\intercal}, \mathbf{Y},t)\times\mathbf{MSG}^{(t)}\big{)}. \tag{10}\]
Further, we equip our model with a 3-D environment to compensate for the noise element and external molecule inputs. Environment-layer is designed as a 3-D grid of voxels that can store precise information on external nutrients (similarly to our previous model in [46]). The diffusion of nutrient molecules through the medium is modeled as a random-walk process. This layer allows us to enrich the model with the dynamics of nutrient accessibility of bacterial cells due to diffusion variations between the medium and the Extracellular Polymeric Substance (EPS).
The bacterial cells in the ecosystem also perform their own computing tasks individually, resulting in a massively parallel processing framework. Hence, we use the python-cuda platform to make our model closer to the parallel processing architecture of the biofilm, where we dedicate a GPU block for each bacterial cell and the threads of each block for the matrix multiplication of the GRNN computation associated with the particular cell. Additionally, due to the massive number of iterative components in the model, the computational power demand faces significant challenges with serial programming making parallelization the best match for the model.
## IV Simulations
In this section, we first explain the simulation setup and then discuss the results of gene expression and molecular production dynamics to prove the accuracy of the extracted GRNN, emphasizing that it works similarly to the real GRN. Later, we use computing through the GRNN to explain certain activities of the biofilm.
### _Simulation Setup_
As our interest is to investigate the NN-like computational process, we do not model the formation process of the biofilm, but we only remodel a completely formed biofilm and disregarding the maturation and dispersion stages. In this model, we consider the biofilm as a static 3-D structure of bacterial cells. Hence, we first place bacterial cells randomly in the model in a paraboloid shape using the equation, \(z<\frac{x^{2}}{5}+\frac{y^{2}}{5}+20\) where \(x\), \(y\) and \(z\) are the components of 3-D Cartesian coordinates. This paraboloid shape is chosen to make the spacial arrangement of the cells close to real biofilm while keeping the cell placement process mathematically simple. Within this 3-D biofilm region, we model the diffusivity according to \(D_{B}/D_{aq}=0.4\), which is the mean relative diffusion [47] where \(D_{B}\) and \(D_{aq}\) are the average molecular diffusion coefficients of the biofilm and pure water, respectively. Further, to start the simulation at a stage where the biofilm is fully formed and the MC is already taking place, we filled the internal memory vector of each cell with the average molecular level at the initial TS. Each bacterial cell will use the initial signals from the internal memory and use its GRNN to process and update the feature vector for the next TS. Table I presents the parameter descriptions and values used for the simulation. As shown in Table I, the model runs for 150 TSs, generating data on a range of functions for the system. For instance, this model can produce data on feature vector of each cell, MC between cells,
\begin{table}
\begin{tabular}{l r r} \hline
**Parameter** & **Value** & **Description** \\ \hline No. of cells & 2000 & The number of cells is limited due to the memory availability of the server. \\ No. of genes & 13 & The network only consists of the gene that are directly associated with QS, _PhoR-PhoB_ and _BqsS-BqsR_ TCSs, and pyeocyanin production. \\ No. internal memory molecules & 16 & The set of molecules that involved in QS, _PhoR-PhoB_ and _BqsS-BqsR_ TCSs,and ppyocyanin production. \\ No. messenger molecules & 4 & The number of molecules that were exchanged between cells in the sub network. \\ Dimensions of the environment & 20x20x20\(\mu m\) & The dimensions were fixed considering the average sizes of _P. aeruginosa_ biofilms and computational demand of the model. \\ Duration & 150 TSs & The number of TSs can be modified to explore the cellular and ecosystem level activities. For this experiment we fixed a TS to represent 30mins. \\ No. iterations per setup & 10 & Considering the stochasticity ranging from the gene expression to ecosystem-wide communications, the experiments were iterated 10 times. \\ \hline \end{tabular}
\end{table} TABLE I: Parameters utilised in the system development
molecular consumption by cells, secretion to the environment, and nutrient accessibility of cells for each TS.
In order to prove that our GRNN computes similarly to the natural bacterial cell and collective behaviors of the cells are the same as the natural biofilm, we conduct a series of experiments. We explore the GRNN computation and biofilm activities under High Phosphate (HP) and Low Phosphate (LP) levels using eight experimental setups as follows, 1) wild-type bacteria (WD) in LP, 2) _lasR_ mutant (_lasR\(\Delta\)_) in LP, 3) phoB mutant (_phoB\(\Delta\)_) in LP, 4) _lasR_ & PhoB double mutant (\(\text{LasR}\Delta\text{PhoB}\Delta\)) in LP, 5) WD in HP, 6) _lasR\(\Delta\)_ in HP, 7) _PhoB\(\Delta\)_ in HP and _LasR\(\Delta\)PhoB\(\Delta\)_ in HP. While the WD uses the full GRNN, _lasR\(\Delta\)_ is created by making the weight of the link between _hn22_ and _lasR_ as "0". Further, the GRNN of _phoB\(\Delta\)_ is created by making the weights of links from _PhoB_ to _hn23_ and _PhoB_ to _pqsABCDE_ also "0".
### _Model Validation_
First, we show the nutrient accessibility variation in the biofilm through Fig 8. The cells in the biofilm core have less accessibility while the cells closer to the periphery have more access to nutrients due to variations in diffusion between the environment and the EPS. Fig 8 shows that when a low phosphate concentration is introduced to the environment, the direct access to the nutrient by the cells is limited. After the \(TS=10\), around 60% of cells have access to 20% of the nutrient concentration. Further, Fig 8 shows that the increased nutrient introduction to the environment positively reflects on the accessibility. This accessibility plays a role mainly in the deviation of gene expression patterns resulting in phenotypic differentiations that is further analyzed in Section IV-C.
Comparing the predictions of molecular production through GRNN computing with the wet-lab experimental data from the literature, we are able to prove that the components of the GRN work similarly to a NN. Fig. 9 shows the pyocyanin accumulation variations of the environment in the eight setups mentioned earlier as results of decision-making of the GRNN. Production of pyocyanin of the WD _P. aeruginosa_ biofilms is high in LP, compared to the HP environments as shown in Fig: 9. Further, the same pattern can be observed in the _lasR\(\Delta\)_ biofilms, but with a significantly increased pyocyanin production in LP as shown in Fig. 9. The _phob\(\Delta\)_ and _LasR\(\Delta\)phob\(\Delta\)_ biofilms produce a reduced level of pyocyanin compared to WD and _LasR\(\Delta\)_ that are shown in Fig. 9 and Fig. 9 respectively. We then present a comparison between GRNN prediction and wet-lab experimental data [48] as ratios of HP to LP in Fig. 10. The differences between pyocyanin production through GRNN in HP and LP condition for all the four setups in Fig. 10 are fairly close to the wet-lab data. In the WD setup, the difference between the GRNN model and wet-lab data only has around 5% difference, while deviations around 10% can be observed in _lasR\(\Delta\)_ and _phoB\(\Delta\)_. The most significant deviation around 20% of pyocyanin production difference is visible in _lasR\(\Delta\)phoB\(\Delta\)_ that is caused by the lack of interaction from other gene expression pathways, as we only extracted a sub-network portion of the GRN. Therefore, these results prove that the extracted GRNN behaves similarly to the GRN dynamics.
We further prove that the GRNN computing process performs similarly to the GRN by comparing the gene expression behaviors of the model with the wet-lab data [48] as shown in Fig. 11. First, we show the expression dynamics of genes _lasI_, _pqsA_ and _rhR_ of WD in LP in Fig. 11, Fig. 11 and Fig. 11 respectively. All the figures depict that gene expression levels are higher in LP compared to HP until around \(TS=100\). Beyond that point, relative gene expression levels are close to zero as the the environment run out of nutrients. Moreover, the differences in gene expression levels predicted by the GRNN
Fig. 8: The nutrient accessibility variations of cells is expressed in two different environment conditions: a) low phosphate and b) high phosphate concentrations.
Fig. 10: Evaluation of the model accuracy by comparing HP to LP pyocyanin production ratio with wet-lab data from [48].
Fig. 9: Relative Pyocyanin accumulation of four different biofilms of a) WD, b) lasR\(\Delta\), c) phoB\(\Delta\) and d) lasR\(\Delta\)phob\(\Delta\) in both low and high phosphate levels.
computing for LP and HP are also compared with the wet-lab data in Fig. (d)d. In this comparison, it is evident that the predicted gene expression differences of all three genes are close to the wet-lab data with only around 10% variation. The performance similarities between the GRNN and real cell activities once again prove that the GRN has underpinning NN-like behaviors.
### _Analysis of GRNN Computing_
Fig. 12 and Fig. 13 are used to show the diverse information flow of the GRNN that cause the variations in pyocyanin
Fig. 11: Expression levels of three different genes to that were used to prove the accuracy of the GRNN: a) _lasI_, b) _pqSA_, c) _rhIR_ expression levels in LP and HP and d) comparison between GRNN computing results and wet-lab data.
Fig. 12: Gene expression and associated information flow variations in GRNNs of a) WD b) _lasR\(\Delta\)_ and c) phoB\(\Delta\) in LP.
Fig. 13: Gene expression and associated information flow variations in GRNNs of a) WD b) _lasR\(\Delta\)_ and c) phoB\(\Delta\) in HP.
production in LP and HP conditions, respectively. Here, we use gene expression profiles extracted from one bacterial cell located at (7, 9, 2)\(\mu\)m in the Cartesian coordinates that is in the middle region of the biofilm with limited access to the nutrients. First, the gene expression variations of WD, _lasR\(\Delta\)_, and _phob\(\Delta\)_ bacterial cells in LP (Fig. 12) and HP (Fig. 13) are shown for \(TS<50\). Next, the information flow through the GRNN is illustrated above each expression profile at time \(TS=20\), where the variations will be discussed. In Fig. (a)a, impact of the inputs 3OC-LasR and phosphate cause higher expression levels of the nodes **hn12** and _phoB_ in the input layer that cascade the nodes _phZ1_, _phZ2_, _pqsR_, _lasR_, 3OC, _rhlR_ and _PqsH_ in the output layer at \(TS=20\). Fig. (b)b has significantly higher _pqsA_ operon expression levels compared to HP conditions (Fig. (b)b), reflecting higher yyocanin production that can be seen in Fig. (b)b. Nevertheless, the reduced gene expression levels, except _pqsA_ operon, of _lasR\(\Delta\)_ biofilm in both LP (Fig. (b)b) and HP (Fig. (b)b) conditions compared to the other setups emphasize that the inputs via inter-cellular MC significantly alter GRNN computing outputs. In contrast, only a smaller gene expression difference can be observed between the two setups of _phob\(\Delta\)_ in LP (Fig. (c)c and _phob\(\Delta\)_ in HP (Fig. (c)c) resulting in minimized myocardium production differences as shown earlier in Fig. (c)c.
The GRNN model supports the understanding of the gene expression variations due to factors such as nutrient accessibility, where in our case is a single species biofilm. Fig. 14 depicts the variability in the gene expression levels for four different locations of the biofilm at \(TS=3\). Fig. (a)a and Fig. (b)b are the gene expression profiles and the signal flow through GRNN pairs of two cells located close to the attached surface and the center of the biofilm. The phosphate accessibility for these two locations is limited. Hence, edges from _phob_ have a higher information flow compared to the other two cells near the periphery of the biofilm, which can be observed in Fig. (c)c and Fig. (d)d. The microbes in the center (Fig. (a)a) and the bottom (Fig. (b)b) mainly have access to the inter-cellular MCs, while the other two bacteria have direct access to the extracellular phosphate.
This GRNN produced data can further be used to understand the spatial and temporal dynamics of phenotypic clustering of gene expressions which is important in predicting and diagnosis of diseases [49]. Fig. 15 shows the phenotypic variation of WD biofilm in LP. Fig. (a)a shows the number of cluster variations over the first 30 \(TSs\) when the significant phenotypic changes of the biofilm is evident. At around \(TS=9\) and \(TS=10\), the bacterial cells have the most diverse expression patterns due to the highest extracellular nutrient penetration (can be seen in Fig. (a)a) to the biofilm and inter-cellular communications. Here we use four TSs (\(TS=5\) - Fig. (b)b, \(TS=15\) - Fig. (c)c, \(TS=23\) - Fig. (d)d and \(TS=30\) - Fig. (e)e) to analyze this phenotypic differentiation. Each pair of Uniform Manifold Approximation and Projection (UMAP) plot and diagram of cell locations of each cluster explain how nutrient accessibility contribute to the phenotypic clustering. Although at \(TS=5\) (Fig. 15) the average number of clusters is over four, there are only two major clusters that can be observed with higher proportions, as shown in the pie chart. Among the two major clusters (blue and green) of Fig. (b)b, the bacteria in the blue cluster can mostly be found in the
Fig. 14: Illustration of GRNN information flow variations concerning the particular positions of cells within the biofilm. We selected four cells at a) [10, 10, 0] – close to the attached surface, b) [10, 10, 5]- close to the periphery, c) [7, 10, 13] – at the center and d) [3, 15, 0] – close to the attached surface and the periphery of the biofilm.
center of the biofilm, while the green cluster cells are close to the periphery. Fig. 15c and Fig. 15d have more clusters as the nutrient accessibility among cells is high. In contrast, due to the lack of nutrients in the biofilm, a limited number of clusters can be seen in the biofilm after around \(TS=30\), which can be observed Fig. 15e.
## V Conclusion
The past literature has captured the non-linear signal computing mechanisms of Bacterial GRNs, suggesting underpinning NN behaviors. This study extracts a GRNN with summarized multi-omics gene expression regulation mechanisms as weights that can further analyze gene expression dynamics, design predictive models, or even conduct _in-vivo_ computational tasks. We used _P. aeruginosa_ single species biofilm as a use case and extracted relevant gene expression data from databases such as RegulomePA and transcriptomic data from databases including GEO. Due to the complexity of the GRN and expression dynamics, we only considered a smaller sub-network of the GRN as a GRNN that is associated with QS, iron and phosphate inputs, and pyocyanin production. Considering this GRNN, we modeled the computation process that drives cellular decision-making mechanism. As bacteria live in ecosystems in general where intra-cellular communication play a significant role in cellular activities, an _in-silico_ biofilm is modeled using GNN to further analyze the biofilm-wide decision-making. A comparison between the GRNN generated data and the transcriptomic data from the literature exhibits that the GRN behaves similarly to a NN. Hence, this model can explore the causal relationships between gene regulation and cellular activities, predict the future behaviors of the biofilm as well as conduct bio-hybrid computing tasks. Further, in the GRNN extraction phase, we were able to identify the possibility of modeling more network structures with various number of input nodes, hidden layers, and output nodes. In addition, GRN components including auto regulated genes and bidirectional intergenic interactions hints the possibility of extracting more sophisticated types of GRNNs such as Recurrent NN and Residual NN in the future. The idea of extracting sub-networks as NNs can lead to more intriguing intra-cellular distributed computing. Further, this model can be extended to multi-species ecosystems for more advanced predictive models as well as distributed computing architectures combining various NNs.
|
2302.11436 | Industrial Policy for Advanced AI: Compute Pricing and the Safety Tax | Using a model in which agents compete to develop a potentially dangerous new
technology (AI), we study how changes in the pricing of factors of production
(computational resources) affect agents' strategies, particularly their
spending on safety meant to reduce the danger from the new technology. In the
model, agents split spending between safety and performance, with safety
determining the probability of a ``disaster" outcome, and performance
determining the agents' competitiveness relative to their peers. For given
parameterizations, we determine the theoretically optimal spending strategies
by numerically computing Nash equilibria. Using this approach we find that (1)
in symmetric scenarios, compute price increases are safety-promoting if and
only if the production of performance scales faster than the production of
safety; (2) the probability of a disaster can be made arbitrarily low by
providing a sufficiently large subsidy to a single agent; (3) when agents
differ in productivity, providing a subsidy to the more productive agent is
often better for aggregate safety than providing the same subsidy to other
agent(s) (with some qualifications, which we discuss); (4) when one agent is
much more safety-conscious, in the sense of believing that safety is more
difficult to achieve, relative to his competitors, subsidizing that agent is
typically better for aggregate safety than subsidizing its competitors;
however, subsidizing an agent that is only somewhat more safety-conscious often
decreases safety. Thus, although subsidizing a much more safety-conscious, or
productive, agent often improves safety as intuition suggests, subsidizing a
somewhat more safety-conscious or productive agent can often be harmful. | Mckay Jensen, Nicholas Emery-Xu, Robert Trager | 2023-02-22T15:18:12Z | http://arxiv.org/abs/2302.11436v1 | # Industrial Policy for Advanced AI:
###### Abstract
Using a model in which agents compete to develop a potentially dangerous new technology (AI), we study how changes in the pricing of factors of production (computational resources) affect agents' strategies, particularly their spending on safety meant to reduce the danger from the new technology. In the model, agents split spending between safety and performance, with safety determining the probability of a "disaster" outcome, and performance determining the agents' competitiveness relative to their peers. For given parameterizations, we determine the theoretically optimal spending strategies by numerically computing Nash equilibria. Using this approach we find that (1) in symmetric scenarios, compute price increases are safety-promoting if and only if the production of performance scales faster than the production of safety; (2) the probability of a disaster can be made arbitrarily low by providing a sufficiently large subsidy to a single agent; (3) when agents differ in productivity, providing a subsidy to the more productive agent is often better for aggregate safety than providing the same subsidy to other agent(s) (with some qualifications, which we discuss); (4) when one agent is _much more_ safety-conscious, in the sense of believing that safety is more difficult to achieve, relative to his competitors, subsidizing that agent is typically better for aggregate safety than subsidizing its competitors; however, subsidizing an agent that is only _somewhat_ more safety-conscious often decreases safety. Thus, although subsidizing a much more safety-conscious, or productive, agent often improves safety as intuition suggests, subsidizing a somewhat more safety-conscious or productive agent can often be harmful.
Introduction
Rapid advances in artificial intelligence (AI) systems have led to concerns about the alignment of such systems with human values, especially as they come to influence decision-making over increasingly significant aspects of human society (Christiano (2019); Yudkowsky (2013)). These risks are exacerbated by the strategic environment in which AI developers find themselves. If misalignment risks are not fully internalized by developers, they may have an incentive to reduce safety investments in favor of investments in performance to increase the chance of being the first to develop new technologies. Such a **safety-performance tradeoff**(Trager et al. (2021)) is an example of what Christiano (2019) calls a **safety tax**, or the marginal cost of deploying an AI system aligned with human values over an equivalent but unaligned system.
As a result, within the AI governance field, the development of mechanisms to reduce safety taxes is an active area of research. In the present work, we study the role of input pricing in reducing the risk from such a competitive scenario. We develop a formal model in which agents are racing to develop a novel AI system. Each agent purchases an input, computation, which is allocated between investments in performance or safety. agents' relative levels of performance determine the probabilities of each agent winning the race, while safety investments reduce the risk of a disaster that negatively impacts all players. Formalizing the tradeoff between safety and performance in this way allows us to study how agents respond to changes in the prices of factors of production, which is a key contribution of this work; in the context of AI technology, our model allows us to consider how changes in compute prices are likely to affect safety. We consider the
problem of a principal who wants to increase equilibrium safety and is able to influence compute pricing (or more generally, the price of some key production input) to that end. We investigate how compute price changes for one or multiple agents affect equilibrium safety.
We solve computationally for the Nash equilibrium levels of safety and performance investments and derive four main results.
First, restricting the principal to set a single price for all agents, we show that safety is increasing in compute price if and only if the elasticity of safety with respect to spending on safety is greater than the elasticity of safety with respect to spending on performance. In this case, increasing the input cost is beneficial for safety because agents will reduce performance more than safety for a given increase in price. This result implies that safety declines as the price of compute declines if the effort required to make systems safe increases enough in the performance of the system.
Second, we allow the principal to set individual prices for agents. This might be accomplished, for example, by setting a single price for renting cloud compute and then offering differential subsidies to firms.1 We find that arbitrarily high safety can be achieved in equilibrium by providing a sufficient subsidy to a single agent. By giving one agent a large enough advantage in the race, the subsidized agent can afford to both devote sufficient resources to win the race and produce a high level of safety.
Footnote 1: This is a regular practice for providers of cloud compute. See, for example, OpenAI’s partnership with Microsoft Azure or C3.ai’s partnership with Google Cloud.
Third, if one agent is more efficient at producing performance, we find that the principals should subsidize the more productive agent when the
safety is high. When the elasticity is low - for instance, when increasing performance decreases the effort required to make a system safe - subsidizing the less productive agent reduces the equilibrium difference in capabilities of the agents. In this case, the developers face little or no safety-performance tradeoff, and thus they continue to maintain high levels of safety, even when they have similar levels of capabilities. In what is probably the more likely case, when the safety elasticity of performance is high (implying a significant safety-performance tradeoff), providing subsidies to bring agents' capabilities closer together is not beneficial in this model because they are then incentivized to race to the bottom by cutting corners on safety. In this latter case, a principal should instead subsidize the more productive agent, increasing her probability of winning and allowing her to choose a higher level of safety.
Fourth, we examine scenarios in which agents differ in their attitudes toward the risk of a disaster. In particular, we find that, given some reasonable assumptions, when agents have sufficiently different beliefs about the cost of achieving a given level of safety, providing a subsidy to an agent who believes safety to be costly to achieve is better for aggregate safety than providing the same subsidy to an agent who believes safety to be relatively easy to achieve. This matches the intuition that assisting safety-conscious agents is better for safety than assisting their competitors; however, there are some cases under which this intuition fails, so we also examine some of those cases. In particular, subsidies for safety-conscious agents are not reliably safety-promoting when the differences in safety-consciousness between them and their competitors are not large or if we use some other definition of safety-consciousness.
## 2 Risk and compute pricing
### Mechanisms for reducing risk
Following the work by Armstrong et al. (2016), a growing body of literature has sought to understand how the strategic environment in a technology race affects risk and uncover mechanisms to reduce it. Factors that have been identified as being important to risk are agents' knowledge about capabilities (Emery-Xu et al. (2022), Armstrong et al. (2016)), the capability gap between the leader and her competitors (Stafford and Trager (2022), Stafford et al. (2021)), and the influence of safety on both development speed and the probability of risk (Han et al. (2021)).
A variety of mechanisms have been proposed to reduce these risks. Han et al. (2021) consider the conditions under which a government can use taxes to punish unsafe development or subsidies to reward safe development, finding that both interventions reduce risk under certain conditions but only taxes can lead to overregulation and a suboptimal reduction in innovative output. This work assumes that AI development is proceeding without international cooperation or competion. The global nature of contemporary AI development, however, hinders the efficacy of government regulation, as countries may have an incentive to underprovide regulation in order to outcompete their rivals. Other scholars have, therefore, focused on mechanisms to which agents will voluntarily agree. Drawing from the success of the Nuclear Non-Proliferation Treaty, Stafford and Trager (2022) study the role of information sharing agreements in reducing risk, finding that if agents are not too close in capability, the leader has an incentive to share some technology with the laggard in return for the latter exiting the race. Emery-Xu et al. (2022) find that,
except when the race is highly rivalrous and cutting corners on safety can approximately guarantee that a agent wins the race, public revelation of capabilities reduces risk.
While all of these models assume that agents' capabilities are exogenously endowed by nature, AI developers must purchase research performance in competitive markets for human capital, computational capital, and other inputs (Khan, Langenkamp and Flagg). Thus, an input producer with market power has the ability to influence the safety choices of agents. Governments, through industrial policy, and other agents, through technical collaborations and subsidies, can influence equilibrium risk levels. Because our baseline model analyzes a complete information scenario, we can allow the principal to implement first-degree price discrimination and thus study the effects of the first-best pricing strategy. However, even though the principal can observe the agents' types, we still observe a moral hazard constraint stemming from the dual-use nature of compute.
### Compute scaling
Our present work simplifies the production process by focusing on a single input - computation. Why? First, we do so to simplify the analysis and make the results easier to interpret. We encourage future researchers to build on these results in considering other inputs into production functions. Second, computational capital plays a key role in driving progress in deep learning, the most prominent AI paradigm, compared to other US R&D sectors (Besiroglu et al. (2022)). Third, because physical capital is more accumulable following a change in price than is labor, it is relatively easier for agents to respond to a price change in computation than to a wage change for researchers.
How, then, does computation translate into AI progress? We assume it takes the following power-law form:
\[p:=BX_{p}^{\beta} \tag{1}\]
where \(X_{p}\) is the amount of compute used to advance the performance of the system, \(p\) is the level of performance and the other parameters are constants of the production function. We use this functional form because experimental results have shown that neural network performance tends to scale in this way with respect to computation. (Jones (2021); Henighan et al. (2020); Kaplan et al. (2020); Lepikhin et al. (2020); Hestness et al. (2017)).2 Thompson et al. (2020) shows that, across a wide variety of machine learning benchmarks, performance is highly dependent upon the level of computational inputs.3 We also assume safety research follows a similar scaling law. While there exists far less research on how safety scales with computation, there exists evidence that safety outputs scale with compute according to a power law on some AI safety benchmarks (Bai et al. (2022); Askell et al. (2021)).
Footnote 2: Forutiously, by assuming a power-law relationship, we can consider our model a special case of the canonical “ideas production function” in endogenous growth theory (Jones (1995); Romer (1990)), which takes the form \(p\equiv\frac{A}{A}=A^{\nu-1}K_{p}^{\beta}\). With \(\nu=1\), we recover our model.
Footnote 3: Hoffmann et al. (2022) show that other inputs, in particular the size of the training dataset, are also important - scaling compute without scaling data is not efficient.
## 3 The model
There are \(n\) players, and each player \(i=1,2,\ldots,n\) chooses, simultaneously, to purchase some amount \(X_{i}\) of a factor of production (compute power), at a per-unit price \(r\), and di
vides it between creating performance and creating safety. Thus, \(X_{i}=X_{s,i}+X_{p,i}\), where \(X_{s,i}\) and \(X_{p,i}\) are the amounts of the factor of production used for safety and performance, respectively. Safety (\(s\)) and performance (\(p\)) are produced according to the following production functions:
\[s_{i}:=A_{i}X_{s,i}^{\alpha_{i}}p_{i}^{-\theta_{i}} \tag{2}\]
\[p_{i}:=B_{i}X_{p,i}^{\beta_{i}} \tag{3}\]
Note that \(p_{i}\) appears in equation (2) to reflect the idea that safety may become more expensive as performance increases, corresponding to the case where \(\theta_{i}>0\). \(\alpha_{i}\) is the compute elasticity of safety, describing how well safety progress scales with additional compute dedicated to safety. \(\beta_{i}\) is the analogous parameter for performance research. Finally, \(\theta_{i}\) controls the degree of the safety-performance tradeoff. In particular, \(-\theta_{i}\) is the elasticity of safety with respect to performance (the \(p_{i}\)-elasticity of \(s_{i}\)); when we let \(\theta_{i}>0\), spending on performance has a negative impact on safety. A high value of \(\theta_{i}\) indicates that there is a large safety tax, as there is a large cost to performance in investing in safe systems, while a low or even negative value of \(\theta_{i}\) indicates that the safety tax is small or nonexistent. This might be the case when only safe systems perform well as evaluated by the market. For example, consumers are unlikely to purchase autonomous vehicles that do not exhibit a high degree of both performance and safety.4
### Payoffs and players' objectives
Players (agents) compete in a contest, where the probability that player \(i\) wins is defined as the simple contest success function
\[q_{i}:=\frac{p_{i}}{\sum_{j=1}^{n}p_{j}}. \tag{4}\]
At the same time, players' realized levels of safety aggregate to produce some probability that a disaster occurs (discussed more in section 3.2); we define \(\sigma_{i}\) as the probability of a safe outcome (no disaster), given that player \(i\) wins the contest. If player \(i\) wins the contest, and none of the players cause a disaster, player \(i\) gets a normalized payoff of 1. Players that do not win receive a payoff of 0. If there is a disaster, none of the players get a payoff for winning the contest; instead, each player pays a disaster cost \(d_{i}\).
Putting this all together, player \(i\)'s expected net payoff is
\[u_{i}:=\sigma_{i}q_{i}-\left(1-\sum_{j}\sigma_{j}q_{j}\right)d_{i}-r(X_{s,i}+X _{p,i}). \tag{5}\]
### Disaster risk aggregation
The players' realized level of safety determines the probability that a disaster occurs. Here, we focus on two different ways of aggregating player safety choices to determine that probability.
#### 3.2.1 Independent (multiplicative) disaster risks
In our base case, each player has some independent probability of causing a disaster; \(s_{i}\) represents the odds that player \(i\)_does not_ cause a disaster. Thus, \(s_{i}/(1+s_{i})\) is the probability that player \(i\) does not cause a disaster, and
\[\sigma:=\prod_{j=1}^{n}\frac{s_{j}}{1+s_{j}} \tag{6}\]
is the probability that none of the players cause a disaster. (We refer to \(\sigma\) as the "aggregate safety.") Note that this probability is the same regardless of who wins the contest; i.e., \(\sigma_{i}=\sigma\), and thus we can simplify equation (5) to
\[u_{i}=\sigma q_{i}-(1-\sigma)d_{i}-r(X_{s,i}+X_{p,i}) \tag{7}\]
in this case.
#### 3.2.2 Disaster risk only from contest winner
As an alternate case, we can assume that instead of each player carrying some independent risk of causing a disaster, only the winner of the contest can cause a disaster. That is, we make the assumption that \(s_{i}/(1+s_{i})\) is the probability of a safe outcome, conditional on player \(i\) being the contest winner:
\[\sigma_{i}=\frac{s_{i}}{1+s_{i}} \tag{8}\]
The aggregate safety in this case (unconditional probability that no player causes a disaster) is
\[\sigma=\sum_{i=1}^{n}\sigma_{i}q_{i}=\sum_{i=1}^{n}\left(\frac{s_{i}}{1+s_{i}} \right)q_{i}. \tag{9}\]
### Heterogeneous beliefs
Up to this point, we've assume that all players have the same (correct) beliefs about the model parameters. Unless stated otherwise, this will be our default assumption, but we can also consider cases where players disagree on those parameters' values. In this case, we assume that each player \(i\)'s objective is to maximize \(u_{i}\) subject to their own beliefs about the model parameters; we also assume that all players are accurately informed of each other's beliefs (i.e., higher-order beliefs are perfect).
In this paper, we will be particularly interested in heterogenous beliefs about the parameter \(A\) (the safety productivity factor). When players have different beliefs about \(A\), they have different beliefs about the cost of achieving a given level of safety: a player that believes that \(A\) is higher believes safety is cheap and therefore that less investment is required to reduce the risk of a disaster. Thus players' beliefs about \(A\) can be used as a measure of their safety-consciousness - we say that players who believe \(A\) to be low (i.e. believe safety to be expensive) are safety-conscious, and conversely for players who believe \(A\) to be high.
### Solution criterion
We look for pure-strategy Nash equilibria for this model.5 That is, we find solutions where each player \(i\) chooses \(X_{s,i}\), \(X_{p,i}\) such that \(u_{i}\) is optimal, given the other players' choices. Due to the intractability of finding a closed-form solution, we implement a computational approach to solve for equilibrium values of \(X_{s,i}\), \(X_{p,i}\). A description of our solver is presented in Appendix A.
Footnote 5: Although mixed-strategy equilibria or multiple pure-strategy equilibria may exist for a given parameterization of the model, we have found that this is rarely the case unless there is some discontinuity in payoffs based on player strategies. One situation in which this may occur is if we choose extreme parameter values that result in situations where players may sometimes prefer not to produce at all. However, studying such scenarios is not the focus of this paper, since the version of the model we use here does not allow players to enter or fully exit competition and therefore is likely to reflect these scenarios poorly. Extending our model to allow for player entry/exit may be a worthwhile way to expand on this work.
## 4 Response of safety to changes in input cost
In this section, we analyze how the principal can use the input price \(r\) to affect the probability of a safe outcome. For simplicity, we analyze the case with two players and begin by assuming the principal can only set a single price. The first two claims presented here are true for both risk aggregation assumptions presented in section 3.2, while the later claims are sensitive to that assumption.
**Claim 1:** When players are identical, the probability \(\sigma\) of a safe outcome increases with the factor price \(r\) if and only if \(\theta>\alpha/\beta\).
A typical scenario illustrating this claim is shown in Figure 1. In the figure, \(\alpha=\beta=0.5\) so that safety and performance each scale with the square root of compute. Thus, safety increases in the price of compute for \(\theta>1\) and decreases for \(\theta<1\).
Suppose that we start at some level of inputs, \(X_{s}\) and \(X_{p}\), and scale both up by a factor of \(c>1\). We have
\[s(cX_{s},cX_{p})=c^{\alpha-\theta\beta}\frac{A}{B^{\theta}}X_{s}^{\alpha}X_{p}^{ -\theta\beta}=c^{\alpha-\theta\beta}s(X_{s},X_{p}), \tag{10}\]
meaning that this scaling-up of inputs results in increased safety if and only if \(\alpha>\theta\beta\). For \(\alpha\leq\theta\beta\), if we want to increase safety, we must increase \(X_{s}\) at a greater rate than we increase \(X_{p}\).
We can also see this by combining equations (2) and (3), giving us
\[s=\frac{A}{B^{\theta}}X_{s}^{\alpha}X_{p}^{-\theta\beta}, \tag{11}\]
Figure 1: **The effect of price changes on safety depends on the scaling of the safety-performance tradeoff.** Here, \(\alpha=\beta=0.5\). Aggregate safety \(\sigma\) increases with \(r\) iff \(\theta>1=\alpha/\beta\).
meaning that the elasticity of safety with respect to \(X_{p}\) is \(-\theta\beta\). Recalling that \(\alpha\) represents the elasticity of safety with respect to \(X_{s}\), we can see that the elasticity of safety with respect to a uniform increase in \(X_{s}\) and \(X_{p}\) is \(\alpha-\theta\beta\). Safety's returns to scale are determined by the sign of this quantity.
We can thus interpret Claim 1 as saying that \(\sigma\) increases with \(r\) if and only if safety has negative returns to scale in all its inputs (i.e., if safety is decreases when all outputs are scaled up uniformly). More loosely, we can think of this as saying that price increases are safety-promoting when production of performance outpaces production of safety.
We now consider outcomes in which the principal can engage in first-degree price discrimination and charge \(r_{i}\) based on observed characteristics of the agents.
**Claim 2:** When \(d>0\), arbitrarily high probabilities of a safe outcome can be achieved by giving a single player (and not that player's competitors) a sufficiently low factor price.
A typical scenario illustrating this claim is shown in Figure 2. Player 1's price of compute is held constant while player 2's is allowed to vary. Moving from right to left, we see that decreasing the price of compute at first decreases safety when \(\theta\) is not too low. This happens for essentially the same reasons discussed in relation to Claim 1. As player 2's price gets even lower, however, the probability of a safe outcome goes to 1.
Thus, in symmetric cases where players have the same factor price \(r\), the relationship between that price and safety is as stated in Claim 1. In general, giving a single player a subsidy (lower \(r\)) is not necessarily safety-promoting, and in asymmetric cases, giving a subsidy to different players can have different effects on safety. The intuition for Claim
2 is that, although small subsidies to a single player may not increase safety, if we give a player enough of a subsidy that all other players become practically unable to compete, then the subsidized player is able to take their focus off of performance and use their inexpensive resources to achieve a high level of safety.6
Footnote 6: An important caveat is that this is dependent on the assumption that only the potential reward for winning is fixed, with players’ relative levels of performance determining who wins the contest but not the size of the reward for winning. If performance has some intrinsic benefit (e.g., if we think that players value creating advanced AI sooner, even in the absence of competition), then this claim will not be strictly true.
The next claim examines the case where one player is more effective at converting resources into performance than their competitor. In this case, whether risks come all players, or only from the winner of the technology competition, becomes significant for the findings.
**Claim 3:** Suppose that one player is more productive at producing perfor
Figure 2: **Probability of a safe outcome goes to 1 as a single player’s compute price goes to 0.** Aggregate safety \(\sigma\) is shown as player 2’s \(r\) varies and player 1’s \(r\) is held fixed at \(r_{1}=1\). (Dashed line marks \(r_{1}\).) In all cases, probability of a safe outcome (\(\sigma\)) converges to 1 as \(r_{2}\to 0\).
mance (has a higher \(B\)) relative to their competitor(s), with players otherwise identical.
**(3.a)**: In the case of multiplicative risks, if \(\theta\) is low, then giving this player a reduced factor price is worse for aggregate safety than giving their competitor(s) a reduced factor price; for sufficiently high \(\theta\), subsidizing the more productive player is better.
**(3.b)**: In the case where only the contest winner can cause a disaster, subsidizing the more productive player is better for aggregate safety if and only if \(\theta>-1\).
Figure 3 illustrates this claim for both risk assumptions. On the right of the figure, where \(\theta\) is high, subsidizing only the more productive player is better than subsidizing either the less productive player or subsidizing both players, but not giving out a subsidy is the best option of all. Note that the subsidy illustrated in the figure is not extremely large; if it were large enough, Claim 2 dynamics would apply. In the middle of the figure, where the safety-performance tradeoff parameter \(\theta\) is high, but not too high, subsidizing only the most productive player is the best option. On the left, where \(\theta\) is low, things are more complicated. The optimal policy depends on just how low \(\theta\) is and how risk is aggregated, among other factors.
Lowering the price of inputs for one player has both a direct effect in changing their optimal portfolio of performance and safety investments and an indirect effect in altering the strategic scenario. When \(\theta\) is low, both players are willing to invest in safety. However, giving a subsidy to the more productive player increases the capability gap between the
Figure 3: **Subsidies for more productive agents lead to higher safety when the safety-performance tradeoff is moderately strong.** Illustration of Claim 3, This figure shows differences in aggregate safety for various subsidy schemes, where one player has \(B\) twice as high as the other. Both the upper and lower plots use the same parameter values, with the only change being the way in which risk is aggregated. In this and all subsequent figures, the subsidized player pays _half_ the per-unit cost that their competitor pays.
players, forcing the less capable agent to cut corners on safety. Here, the strategic effect is relatively important: a subsidy that increases the gap between the agents is less beneficial than one that reduces it. On the other hand, when \(\theta\) is high, both agents are reluctant to invest highly in safety, so by increasing the capabilities gap between agents, we take some competitive pressure off of the productive player, making them more willing to spend on safety relative to performance. Subsidizing the less productive player is less beneficial because the less productive player still has a strong incentive to cut corners on safety, driven both by her low \(B_{i}\) and high \(\theta\).7
Footnote 7: It’s important to note that Claim 2 still holds here: given a large enough subsidy for either player, we can achieve high aggregate safety. This claim is relevant when we cannot provide an arbitrarily generous subsidy and want to decide whom (if anyone) to subsidize.
We now turn to an analysis of subsidizing more and less safety-conscious players. Claim 4 examines the case where there is a large difference between how difficult the players believe it is to achieve safe outcomes. Here again, whether risk derives from the race winner, or from both competitors, influences the dynamics.
**Claim 4:** Suppose that only the contest winner can cause a disaster and that players differ in their beliefs about the safety productivity factor \(A\). Regardless of all other parameter values, if the difference in players' beliefs about \(A\) is great enough (so one player believes \(A\) to be sufficiently large relative to the other), it is better for aggregate safety to subsidize the player who believes \(A\) to be lower. This is not true when disaster risk is aggregated multiplicatively.
Figure 4 shows an example of a scenario that illustrates this claim. In this scenario, we have two players who are identical except for their beliefs about the \(A\) parameter: both
Figure 4: **Subsides for safety-conscious agents increase safety if competitors are sufficiently unconcerned about safety.** Player 1 believes that \(A=10\), which is the true value, while player 2 believes (incorrectly) that \(A=A^{\prime}>10\). Here, \(\Delta\sigma\), the difference in safety for giving player 1 a subsidy rather than giving player 2 the same subsidy, is shown for a range of values of \(A^{\prime}\) and various values of \(\theta\). We assume that only the contest winner can cause a disaster. As asserted in Claim 4, we can see that for sufficiently high \(A^{\prime}\), \(\Delta\sigma>0\) in all cases.
players have the same \(A\), but they disagree about the true value of this \(A\), with player 1 believing that \(A=10\), and player 2 believing that \(A=A^{\prime}\), which we let vary along the x-axis of the figure. On the y-axis, we measure the difference in aggregate safety from giving a subsidy to player 1, relative to giving the same subsidy to player 2. We can see that, as player 2's belief about the ease of achieving a safe outcome, \(A^{\prime}\), increases, it may be safer to subsidize player 2, but for very high values of \(A^{\prime}\), it is always better to subsidize player 1. Intuitively, if we want to promote safety, we shouldn't subsidize a player who thinks that being safe is trivially easy (i.e., a player who believes \(A\) to be very high).
This claim gives us some idea of when it may be safety-promoting to assist a safety-conscious agent: if one agent thinks that being safe is trivial, it's probably best to assist that agent's competitors; however, if agents are more similar in their attitudes toward disaster risk, the question of whom (if anyone) to subsidize is more unclear.
We should note that this claim addresses only one notion of what it means to be safety-conscious; namely, we say that an agent is safety-conscious if she believes \(A\) to be low relative to her competitors. In Appendix C, we consider a different notion of safety-consciousness based on the cost \(d\) that players face in the event of a disaster. Importantly, when we measure safety-consciousness based on players' appraisals of the costs of disaster, we find that, under many parameter values, it is better to subsidize the less safety-conscious agent, as the safety-conscious agent will not compromise as much on safety in order to compete on performance.
It is also worth noting that this claim deals with cases where all agents intrinsically value performance, and value safety only insofar as it guarantees the returns to their
performance. An agent who cares purely about safety would not be subject to the same consideration.
## 5 Conclusion
In this paper, we develop and solve a simple baseline model for competitive AI development to provide policy recommendations for third parties concerned about risks from competition to develop a new technology. We demonstrate that the effect on safety of lowering the price of inputs to AI developers depends on whether performance or safety scales more rapidly with compute. From our model, we see that there are a number of potential scenarios in which lowering the price of inputs leads to increased safety. The first case is when safety scales more rapidly with increases in safety research than with reductions in performance research. For example, Bai et al. (2022) find that both helpful and harmless language models scale similarly with the number of parameters in the model.8 The second case is if the principal is able to price discriminate, offering different prices to each player: she can increase safety by giving one player a subsidy large enough to discourage risky competition from that player's competitors.
Footnote 8: Though this is not compute scaling, the two are positively correlated (see e.g. Sevilla et al. (2022)).
Next, we examine the case of heterogeneous agents. We find that, when there is a steep safety-performance tradeoff, it is better to subsidize the agent who is more productive at performance research. Finally, depending on how risk is aggregated between players, if one player is much more safety-conscious than others, it is better for safety to subsidize that player rather than to subsidize any of her competitors.
Although subsidizing a much more safety-conscious, or productive, agent often improves safety as intuition suggests, subsidizing a somewhat more safety-conscious or productive agent can often be harmful. Very large subsidies to one agent are beneficial when smaller subsidies are not. We should not allow intuitions derived from extreme cases to govern considerations of more moderate interventions and cases.
We hope that this model provides a foundation that can be built on in future work. One promising line of research would be to explore ways the principal could contract with agents to incentivize them to commit to a certain level of safety in return for a compute discount, potentially altering the range of scenarios over which agents would agree to reduce competition in exchange for resources (Stafford and Trager (2022)). A second line of research might examine the role of information in optimal compute provision, studying the cases in which sharing productivity-enhancing insights between agents is safety-promoting, and understanding better the welfare loss that results if agents are able to conceal their true preferences from the principal. Finally, one could adapt the model presented here to allow for explicit consideration of agents' strategies over multiple time periods, which could enable us to better model accrual of technology (via investment or information diffusion) and more faithfully represent general racing dynamics over time.
|
2308.04706 | Pareto Invariant Representation Learning for Multimedia Recommendation | Multimedia recommendation involves personalized ranking tasks, where
multimedia content is usually represented using a generic encoder. However,
these generic representations introduce spurious correlations that fail to
reveal users' true preferences. Existing works attempt to alleviate this
problem by learning invariant representations, but overlook the balance between
independent and identically distributed (IID) and out-of-distribution (OOD)
generalization. In this paper, we propose a framework called Pareto Invariant
Representation Learning (PaInvRL) to mitigate the impact of spurious
correlations from an IID-OOD multi-objective optimization perspective, by
learning invariant representations (intrinsic factors that attract user
attention) and variant representations (other factors) simultaneously.
Specifically, PaInvRL includes three iteratively executed modules: (i)
heterogeneous identification module, which identifies the heterogeneous
environments to reflect distributional shifts for user-item interactions; (ii)
invariant mask generation module, which learns invariant masks based on the
Pareto-optimal solutions that minimize the adaptive weighted Invariant Risk
Minimization (IRM) and Empirical Risk (ERM) losses; (iii) convert module, which
generates both variant representations and item-invariant representations for
training a multi-modal recommendation model that mitigates spurious
correlations and balances the generalization performance within and cross the
environmental distributions. We compare the proposed PaInvRL with
state-of-the-art recommendation models on three public multimedia
recommendation datasets (Movielens, Tiktok, and Kwai), and the experimental
results validate the effectiveness of PaInvRL for both within- and
cross-environmental learning. | Shanshan Huang, Haoxuan Li, Qingsong Li, Chunyuan Zheng, Li Liu | 2023-08-09T04:57:56Z | http://arxiv.org/abs/2308.04706v2 | # Pareto Invariant Representation Learning for Multimedia Recommendation
###### Abstract.
Multimedia recommendation involves personalized ranking tasks, where multimedia content is usually represented using a generic encoder. However, these generic representations introduce spurious correlations that fail to reveal users' true preferences. Existing works attempt to alleviate this problem by learning invariant representations, but overlook the balance between independent and identically distributed (IID) and out-of-distribution (OOD) generalization. In this paper, we propose a framework called Pareto Invariant Representation Learning (PalnvRL) to mitigate the impact of spurious correlations from an IID-OOD multi-objective optimization perspective, by learning invariant representations (intrinsic factors that attract user attention) and variant representations (other factors) simultaneously. Specifically, PalnvRL includes three iteratively executed modules: (i) heterogeneous identification module, which identifies the heterogeneous environments to reflect distributional shifts for user-item interactions; (ii) invariant mask generation module, which learns invariant masks based on the Pareto-optimal solutions that minimize the adaptive weighted Invariant Risk Minimization (IRM) and Empirical Risk (ERM) losses; (iii) convert module, which generates both variant representations and item-invariant representations for training a multi-modal recommendation model that mitigates spurious correlations and balances the generalization performance within and cross the environmental distributions. We compare the proposed PalnvRL with state-of-the-art recommendation models on three public multimedia recommendation datastes (Movielens, Tiktok, and Kwai), and the experimental results validate the effectiveness of PalnvRL for both within- and cross-environmental learning.
Multimedia Recommendation, Multimedia Representation Learning, Invariant Learning, Multi-objective Optimization +
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
+
Footnote †: Corresponding author.
## 1. Introduction
With the rapid development of the internet, multimedia recommendation systems have become indispensable tools to help users find their interesting items, and have been widely used in many online applications, such as e-commerce platforms, social media, and instant video platforms. For multimedia recommendation, item content includes multiple modalities, including visual, acoustic, and textual representations. These multi-modal data may reflect user preferences at the fine-grained modality level. The core of multimedia recommendation is to use the historical interactions between users and items and the auxiliary multi-modal item representations to improve recommendation performance.
Collaborative filtering (CF) serves as the foundation of personalized recommendation systems, which leverages historical user-item interactions to learn user and item representations and provides recommendations based on these representations (Song et al., 2018; Wang et al., 2019). Extending to multimedia tasks, previous studies, e.g., VBPR (Kang et al., 2019), DeepStyle (Wang et al., 2019), incorporate multi-modal contents as side information in addition to id embeddings of items to learn the user preference. However, these methods have limited expressiveness as they neglect high-order user-item semantic relations (Wang et al., 2019). Inspired by the recent advances in graph neural networks, recent studies (Kang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) take advantage of powerful graph convolution networks (GCNs) to model user-item relationships as bipartite graphs to improve the performance of CF-based recommendation systems. Further, many researchers have also attempted to apply GCNs to incorporate modality information into the message passing for inferring user and item representations, such as MMGCN (Wang et al., 2019), GRCN (Wang et al., 2019), LATTICE (Wang et al., 2019), MICRO (Wang et al., 2019) and HCGCN (Wang et al., 2019).
Despite achieving promising performance, previous approaches often use encoder architectures designed for general content understanding tasks (Krizhevsky et al., 2017) (including image classification, object recognition, image colorization, and text classification, etc.), e.g., pretrained VGG19 (Zheng et al., 2018), ResNet50 (He et al., 2017), VILBERT (Wang et al., 2018), and sentence-transformer (Wang et al., 2019), to encode multimedia content. The use of these generic encoders may introduce spurious correlations (i.e., some learned representations may affect the recommendation results, but are irrelevant to user's true preferences from a causal perspective), making it difficult for recommendation models to capture user's true preferences and provide accurate recommendations. To alleviate this issue, existing studies mainly rely on preference-aware representations (Zheng et al., 2018; Wang et al., 2019; Wang et al., 2019), which were extracted with specifically designed multimedia models for specific recommendation tasks. Therefore, the existing methods face the limitation of domain-specific analysis and design, and thus can hardly be generalized.
To address this issue, a recent research work, named InvRL (Liu et al., 2019), introduced invariant risk minimization (IBM) to multimedia recommendation, by learning invariant item representations to alleviate the impact of spurious correlations. Although experimentally promising, it is widely known that there is a conflict between independent and identical distributed (IID) tasks (where the source and target environments are similar) and out-of-distribution (OOD) tasks (where there is a significant difference between the source and target environments), which may lead to significant degradation of model performance on IID tasks. We verify empirically that the superiority of InvRL is only guaranteed in OOD tasks, whereas empirical risk minimization (ERM) typically outperforms in IID tasks, which motivates us to balance this conflict between IID and OOD. Specifically, in this paper, we formalize the IID-OOD task as a multi-objective optimization problem (Zheng et al., 2018) and adaptively weight the ERM and IRM losses via a gradient approach to obtain the Pareto optimal solution. We theoretically prove that our solution cannot be dominated by other solutions, i.e., there does not exist any solution that performs better compared to our solution on both tasks at the same time. Specifically, we divide the raw multimedia representations into two parts: variant and invariant representations, where the variant representations account for spurious correlations while the invariant representations reflect the user's true preferences.
The main contributions of this paper are summarized as follows.
* We first formalize the IID-OOD task as a multi-objective optimization problem and adaptively weight the ERM and IRM losses using a gradient-based representation learning approach to obtain the Pareto optimal solution, i.e., there does not exist any solution that outperforms compared to our solution on both IID and OOD tasks.
* We propose a new multimedia recommendation framework, called PaInvRL, that aims to obtain a Pareto solution between IID and OOD tasks via a gradient-based updating method, where the gradient is shown to be either 0 when there are no other solution in its neighborhood can have lower values in both ERM and IRM losses, or the gradient givens a descent direction that improves both IID and OOD generalization by reducing ERM and IRM losses simultaneously.
* We instantiate the framework over UltraGCN and conduct extensive experiments over three public datasets, verifying the rationality and effectiveness of PaInvRL.
## 2. Related Work
### Multimedia Recommendation
The multi-modal recommendation system aims to learn informative representations of users and items by leveraging multi-modal representations. Many efforts (Zheng et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) have been devoted to enhancing recommendation systems by incorporating multimedia content. VBPR (Wang et al., 2019) is the first model that considers introducing visual representations into the recommendation system by concatenating visual embeddings with id embeddings as the item representations. DVBPR (Wang et al., 2019) attempts to jointly train the image representations as well as the parameters in a recommendation model. In recent years, graph neural networks have been demonstrated as powerful solutions for multimedia recommendation by capturing high-order dependent structures among users and items. For example, MMGCN (Zheng et al., 2018) constructs a modal-specific graph and conducts graph convolution operations, to capture the modal-specific user preference and distills the item representations simultaneously. MGAT (Wang et al., 2019) based on the MMGCN framework utilizes the original GCN to do aggregation and the same way to combine the aggregated result. To manage the information transmission for each modality, it added a new gated attention mechanism. DualGNN (Zheng et al., 2018) also introduces a model preference learning module and draws the user's attention to various modalities. InvRL (Liu et al., 2019) introduces IRM to learn invariant item representations, which reduces the impact of spurious correlations and improves the recommendation performance of multi-modal recommendation models. DRAGON (Zheng et al., 2019) learns dual representations of users and items by constructing homogeneous graphs to enhance the relationship between the two parties, enabling multi-modal recommendations. Different from these works, for robust multi-modal user preference learning, this paper proposes a new framework for invariant representation learning, which first views the IID-OOD task in the multi-modal recommendation as a multi-objective optimization problem, and then adaptively weights the IRM and ERM losses and uses gradient-based methods to seek Pareto optimal solutions for learning invariant representations.
### Invariant Representation Learning
Invariant representation learning aims to learn the essential representations of data, and improve the generalization ability and robustness of models. Recently, some studies have been conducted, among which IRM (Beng et al., 2019) is an earlier method proposed based on invariant principle (Zheng et al., 2019), which aims to learn representations with invariance in different environments. Several works (Beng et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) further develop several variants of IRM by introducing game theory, regret minimization, variance penalization, etc., and (Zheng et al., 2018; Wang et al., 2019) try to learn invariant representations by coupled adversarial neural networks. Other approaches (Wang et al., 2019; Wang et al., 2019) attempt to learn invariant representations without providing explicit environment indicators. Liu et al. (Liu et al., 2019) proposed the HRM to achieve joint learning of latent heterogeneity and invariant relationships in the data, resulting in stable predictions despite distributional shifts. Furthermore, they extended HRM to the representation level using kernel tricks (Wang et al., 2019).
An alternative class of methods for learning invariant representations are causality-based approaches with debiased loss (Zheng et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), such as outcome regression methods (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), propensity-based weighting methods (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), doubly robust learning methods (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).
24, 29, 33, 54, 55, 69, 73], multiple robust learning method [28], and representation learning methods [7, 22, 26, 58, 60, 62, 68, 91]. However, these previous approaches failed in obtaining Pareto solutions between IID and OOD tasks [35, 59]. In this paper, we aim to learn representations corresponding to the Pareto solutions between within- and cross-environmental learning to improve the model's generalization performance in multimedia recommendations.
## 3. Methodology
### Preliminaries
Considering a multimedia recommendation system, we denote the set of users and items as \(\mathcal{U}\) and \(\mathcal{I}\), respectively. For each user-item pairs \((u,i)\in\mathcal{U}\times\mathcal{I}\), denote \(r_{u,i}=1\) if user \(u\) make a positive feedback on item \(i\), and \(r_{u,i}=0\) otherwise. In addition to user-item interactions, we also have access to multi-modal representations that provide content information about items. We represent the modality representation of item \(i\) as \(\mathbf{f}_{r,i}\in\mathbb{R}^{d_{r}}\), where \(d_{r}\) is the dimension of the modality representation, \(r\in R=\{V,T,A\}\) denotes the modality, and \(R\) is the set of all modalities. In this paper, \(R\) includes visual (V), textual (\(\mathcal{I}\)), and acoustic (A) modalities, let \(\mathbf{f}_{i}=concat(\mathbf{f}_{V,i},\mathbf{f}_{T,i},\mathbf{f}_{A,i})\in \mathbb{R}^{d}\), where \(d=d_{V}+d_{T}+d_{A}\) and \(concat(\cdot)\) indicates the concatenation operation. The multi-modal recommendation aims to learn a model \(\Gamma(u,i,\mathbf{f}_{i}|\Theta)\) parameterized by \(\Theta\) to predict users' true preferences, which can be formalized as
\[\arg\min_{\Theta}\mathcal{L}(\Gamma(u,i,\mathbf{f}_{i}|\Theta)|\mathcal{R}^{tr }), \tag{1}\]
where \(\mathcal{L}(\cdot)\) denotes the recommendation loss, and \(\mathcal{R}^{tr}\) denotes the training set, with both positive samples \(\mathcal{R}^{+}=\{(u,i):r_{u,i}=1\}\) and negative samples \(\mathcal{R}^{-}=\{(u,i):r_{u,i}=0\}\).
### Model Overview
We now present the proposed PalInvRL model, the architecture of which is illustrated in Figure 1. There are four components in the framework: (1) the generic feature extraction network that is used to extract multi-modal representations, including visual, acoustic, and visual representations; (2) the heterogeneous identification module (HIM) that is designed to partition the input historical user-item dataset interaction into multiple heterogeneous environments for invariant representation learning, each reflecting a spurious correlation in user-item interactions; (3) the invariant mask generation module and (4) the convert module work together to select representations that have stable and invariant relationships across environments. Specifically, the generic feature extraction module adopts a pre-trained model and is not the focus of this paper, and we therefore provide only a brief introduction to this module in Section 4. The HIM and the invariant mask generation module promote each other: on one hand, the invariant mask generation module uses the heterogeneous environment identified by HIM to learn the invariant mask \(\mathbf{m}\), which leads to the corresponding invariant representations \(\mathbf{\Phi}_{i}\) and variant representations \(\mathbf{\Psi}_{i}\) using the learned invariant mask; on the other hand, the variant representations are utilized to enhance the training of HIM. The convert module divides the raw multimedia representations into invariant representations and variant representations. Finally, we use the learned invariant representations to learn the final multi-modal recommendation model with both promising IID and OOD generalization.
Different from InvRL [10] that utilizes the invariant mask generation module to generate invariant masks used to generate the corresponding invariant representations with superior performance under the OOD task, we propose to generate the invariant mask corresponding to a Pareto solution between IID and OOD tasks via a gradient-based updating method. The proposed invariant mask update gradient is either \(0\) when no neighboring solution can offer lower values in both ERM and IRM losses, or it provides a descent direction enhancing IID and OOD generalization through simultaneous reduction of ERM and IRM losses.
### Heterogeneous Environment Identification
Heterogeneous identification module (HIM), which takes in the historical user-item interactions and outputs an environment set \(\mathcal{E}\) for invariant mask generation [10]. This module comprises two phases: an environment learning phase and a user-item interaction partitioning phase. Specifically, in the environment learning phase, we learn different environments \(e\in\mathcal{E}\) by training a recommendation model \(\Gamma_{(e)}(u,i,\mathbf{\Psi}_{i}|\Theta_{e})\) for each environment \(e\in\mathcal{E}\), where \(\Theta_{e}\) denotes the parameters of the recommendation model \(\Gamma_{(e)}\), and can be optimized by
\[\arg\min_{\Theta_{e}}\mathcal{L}(\Gamma_{(e)}(u,i,\mathbf{\Psi}_{i}|\Theta_{e })|\mathcal{R}_{e}^{tr}), \tag{2}\]
where the variant representations \(\mathbf{\Psi}_{i}\) are obtained by initializing the invariant mask \(\mathbf{m}\) by \(0.5\). We employ UltraGCN [47] as the recommendation model and drive the representations through a graph-based loss function to encode the user-item graph by
\[\mathcal{L}=\mathcal{L}_{O}+\eta\mathcal{L}_{U}+\kappa\mathcal{L}_{I}, \tag{3}\]
where \(\mathcal{L}_{O}\) is used as the main optimization objective of the recommendation model \(\Gamma(u,i,\mathbf{\Psi}_{i})\), and \(\mathcal{L}_{U}\) and \(\mathcal{L}_{I}\) are used as constraints to learn better user-item graphs, and item-item graphs, respectively. \(\eta\) and \(\kappa\) are used as weights of \(\mathcal{L}_{U}\) and \(\mathcal{L}_{I}\) to adjust the relative importance of user-item and item-item relationships. Following [47], we choose the binary cross entropy loss to calculate \(\mathcal{L}_{O}\) by
\[\mathcal{L}_{O}=-\sum_{(u,i)\in\mathcal{R}^{+}}\log(\sigma(\Gamma(u,i, \mathbf{\Psi}_{i})))-\sum_{(u,j)\in\mathcal{R}^{-}}\log(\sigma(-\Gamma(u,j, \mathbf{\Psi}_{j}))), \tag{4}\]
where \(\sigma\) is the sigmoid function. \(\mathcal{L}_{U}\) is derived by negative log-likelihood, as
\[\mathcal{L}_{U}= -\sum_{(u,i)\in\mathcal{R}^{+}}v_{u,i}\log(\sigma(\Gamma(u,i, \mathbf{\Psi}_{i})))\] \[-(\sum_{(u,j)\in\mathcal{R}^{-}}v_{u,i})\log(\sigma(-\Gamma(u,j, \mathbf{\Psi}_{j}))), \tag{5}\]
where \(v_{u,i}\) and \(v_{u,i}\) can be derived from the user-item graph by
\[v_{u,i}=\frac{1}{d_{u}}\sqrt{\frac{d_{u}+1}{d_{i}+1}}, \tag{6}\]
where \(d_{u}\) and \(d_{i}\) denote the degrees for the corresponding nodes. The term \(\mathcal{L}_{I}\) induced from item-item graph can be calculated by
\[\mathcal{L}_{I}=-\sum_{(u,i)\in\mathcal{R}^{+}}\sum_{j\in S(i)}s_{i,j}\log( \sigma(\Gamma(u,j,\mathbf{\Psi}_{j})), \tag{7}\]
where \(S(i)\) include \(K\) weighted positive sample pairs \((u,k)\) corresponding to each positive sample pair \((u,i)\), which are selected
from the weighted adjacency matrix of the item-item co-occurrence graph \(G\) according to the similarity score \(s_{i,j}\). We calculate \(s_{i,j}\) by
\[s_{i,j}=\frac{G_{i,j}}{g_{i}-G_{i,i}}\sqrt{\frac{g_{i}}{g_{j}}},\quad g_{i}=\sum_ {k=1}^{K}G_{i,k}, \tag{8}\]
where \(G_{i,j}\) represent the number of co-occurrences of item \(i\) and item \(j\), and \(g_{i}\) and \(g_{j}\) denote the degrees of item \(i\) and item \(j\) in \(G\).
In the user-item interaction partitioning phase, we use the trained recommendation model to partition the user-item interaction into the corresponding environments by
\[e\left(u,i\right)=\arg\max_{e\in\mathcal{E}}\Gamma_{\left(e\right)}(u,i,\mathbf{ \Psi}_{i}|\Theta_{e}). \tag{9}\]
The obtained results \(\{\mathcal{R}_{\left(e\right)}|e\in\mathcal{E}\}\) are used in the training of the following invariant mask generation module.
### Invariant Mask Generation
Here, we introduce our invariant mask generation module, which takes multiple environments training data \(\{\mathcal{R}_{\left(e\right)}|e\in\mathcal{E}\}\) as input, and outputs the corresponding invariant mask \(\mathbf{\mathrm{m}}\). As mentioned above, we learn the invariant mask generation module together with the convert module to generate invariant and variant representations across environments. Following InvRL (Liu et al., 2017), we approximate \(\mathbf{\mathrm{m}}=\left(m_{1},m_{2},m_{3},...,m_{d}\right)^{T}\) using clipped Gaussian random variable parameterized by \(\mu=\left(\mu_{1},\mu_{2},\mu_{3},...,\mu_{d}\right)^{T}\) as
\[\mu_{i}=\max\{0,\min\{1,m_{i}+\epsilon\}\}, \tag{10}\]
where \(\epsilon\) is sampled from \(\mathcal{N}(0,\sigma^{2})\). With this approximation, the objective function of the invariant mask generation module can be written as
\[\begin{split}\mathcal{L}_{\text{mask}}&=w_{ERM} \mathbb{E}_{ee\in\mathcal{E}}\mathcal{L}^{e}+w_{IRM}\left\|\mathrm{Var}_{ee \in\mathcal{E}}\left(\nabla_{\Theta^{mask}}\mathcal{L}^{e}\right)\odot\mu \right\|^{2}+\frac{\lambda}{2}\left\|\mathbf{\mathrm{m}}\right\|^{2}\\ &=w_{ERM}\mathcal{L}_{ERM}+w_{IRM}\mathcal{L}_{IRM}+\frac{\lambda }{2}\left\|\mathbf{\mathrm{m}}\right\|^{2},\end{split} \tag{11}\]
where \(\lambda\) represents the weight of the regularization term, \(w_{ERM}\) and \(w_{IRM}\) represent the weights of \(\mathcal{L}_{ERM}\) and \(\mathcal{L}_{IRM}\), respectively. The first term is the ordinary recommended loss, which is the average loss within environment \(\mathcal{E}\) and can be viewed as the ERM loss, i.e.,
\[\mathcal{L}_{ERM}=\mathcal{L}(\Gamma^{mask}(u,i,\mu\odot\mathbf{h}_{i}| \Theta^{mask})|\mathcal{R}_{e}^{tr}), \tag{12}\]
where \(\mathbf{h}_{i}\) symbolizes the weighted representations, \(\Theta^{mask}\) denotes the parameters of \(\Gamma^{mask}\), \(\odot\) means dot product operation. The second term is the cross-environment constraint, which is the IRM loss. The last term is the regularization term.
To learns invariant masks based on the Pareto-optimal solution, architecturally, instead of just using invariant representations (Liu et al., 2017), we incorporate an attention mechanism, which empowers us to dynamically assign weights to both the invariant representations \(\mathbf{\Phi}_{i}\) and variant representations \(\mathbf{\Psi}_{i}\). This attention mechanism allows our model to focus on the most relevant representations from both invariant and variant representations. Formally, the weighted representations \(\mathbf{h}_{i}\) can be expressed as
\[\mathbf{h}_{i}=\alpha_{i}^{\Phi}\cdot\mathbf{\Phi}_{i}+\alpha_{i}^{\Psi}\cdot\mathbf{ \Psi}_{i}, \tag{13}\]
where \(\alpha_{i}^{\Phi}\) and \(\alpha_{i}^{\Psi}\) are implemented using multi-layer perceptron (MLP). Specifically, we first concatenate the collaborative embedding and content representations of users and items, and then use two MLPs, respectively, to obtain the weights of the variant and
Figure 1. The framework of PaInvRL, where V, A, and T denote the extracted visual representations, acoustic representations, and textual representations, respectively. The symbol \(\oplus\) represents the operation of weighted summation.
invariant representations, which can be formalized as
\[\begin{split} a_{i}^{\Phi}&=\mathrm{MLP}_{1}((\mathbf{p} _{u}^{(t)},\mathbf{p}_{u}^{(f)},\mathbf{t}_{i},\mathbf{f}_{i}]),\\ a_{i}^{\Psi}&=\mathrm{MLP}_{2}([\mathbf{p}_{u}^{(t)}, \mathbf{p}_{u}^{(f)},\mathbf{t}_{i},\mathbf{f}_{i}]),\end{split} \tag{14}\]
where \(\mathbf{t}_{i}\) and \(\mathbf{f}_{i}\) denote the collaborative and raw multimedia representations of item \(i\), and \(\mathbf{p}_{u}^{(t)}\) and \(\mathbf{p}_{u}^{(f)}\) denote the corresponding user representations. In such case, the recommendation model \(\Gamma(u,i,\mathbf{h}_{i})\) can be formalized as
\[\begin{split}\Gamma(u,i,\mathbf{h}_{i})=\Gamma(\mathbf{p}_{u}^{(t )},\mathbf{p}_{u}^{(f)},\mathbf{t}_{i},\mathbf{h}_{i})=\langle\mathbf{p}_{u}^ {(t)},\mathbf{t}_{i}\rangle+\langle\mathbf{p}_{u}^{(f)},\mathbf{W}\cdot \mathbf{h}_{i}\rangle,\end{split} \tag{15}\]
where \(\mathbf{W}\) refers to a projection matrix that is used to compress the dimension of the raw multimedia representations. To obtain the Pareto optimal invariant mask, we require to solve the minimization problem of the loss function \(\mathcal{L}_{mask}\) via an adaptive manner, where
\[\begin{split}\min_{w_{ERM},w_{IRM}}&\left\|w_{ERM} \nabla_{\mathbf{m}}\mathcal{L}_{ERM}+w_{IRM}\nabla_{\mathbf{m}}\mathcal{L}_{ IRM}\right\|_{2}^{2},\\ \text{s.t.}& w_{ERM}+w_{IRM}=1,w_{ERM}\geq 0,w_{ IRM}\geq 0,\end{split} \tag{16}\]
with an analytical solution
\[\begin{split} w_{ERM}^{*}=\frac{(\nabla_{\mathbf{m}}\mathcal{L}_{ IRM}(\mathbf{m})-\nabla_{\mathbf{m}}\mathcal{L}_{ERM}(\mathbf{m}))^{\top} \nabla_{\mathbf{m}}\mathcal{L}_{IRM}(\mathbf{m})}{\left\|\nabla_{\mathbf{m}} \mathcal{L}_{ERM}(\mathbf{m})-\nabla_{\mathbf{m}}\mathcal{L}_{IRM}(\mathbf{m} )\right\|_{2}^{2}},\end{split} \tag{17}\]
and we clip \(w_{ERM}^{*}\) to ensure \(0\leq w_{ERM}^{*}\leq 1\) after each iteration
\[\begin{split} w_{ERM}^{*}\leftarrow\max\{0,\min\{1,w_{ERM}^{*}\} \},\end{split} \tag{18}\]
and let \(w_{IRM}=1-w_{ERM}^{*}\). Finally, we update \(\mathbf{m}\) by
\[\begin{split}\mathbf{m}\leftarrow\mathbf{m}-s(w_{ERM}^{*}\nabla _{\mathbf{m}}\mathcal{L}_{ERM}+w_{IRM}^{*}\nabla_{\mathbf{m}}\mathcal{L}_{ IRM}+\lambda\mathbf{m}),\end{split} \tag{19}\]
where \(s\) is the step-size for invariant mask update.
To prove that the gradient-based update in Eq. (16) and Eq. (19) lead to Pareto optimality, i.e., there exists no \(\mathbf{m}^{\prime}\) such that \(\mathcal{L}_{ERM}(\mathbf{m}^{\prime})\leq\mathcal{L}_{ERM}(\mathbf{m})\) and \(\mathcal{L}_{IRM}(\mathbf{m}^{\prime})\leq\mathcal{L}_{IRM}(\mathbf{m})\), we follow (Caktor and Welling, 2016; Goyal et al., 2017; Goyal et al., 2017) to consider the following optimization problem
\[\begin{split}(\Delta\mathbf{m},\zeta)=&\arg\min_{ \zeta}\zeta+\frac{1}{2}\left\|\Delta\mathbf{m}\right\|_{2}^{2},\\ \text{s.t.}&(\nabla_{\mathbf{m}}\mathcal{L}_{ERM})^ {T}\Delta\mathbf{m}\leq\zeta,(\nabla_{\mathbf{m}}\mathcal{L}_{IRM})^{T} \Delta\mathbf{m}\leq\zeta.\end{split} \tag{20}\]
Then we claim the solution to this optimization problem is either \(\Delta\mathbf{m}=0\) and the resulting point satisfies the Karush-Kuhn-Tucker (KKT) conditions (i.e., no other solution in its neighborhood can have lower values in both \(\mathcal{L}_{ERM}\) and \(\mathcal{L}_{IRM}\), thus if we want to improve the performance for a specific task, the other task's performance will be deteriorated), or the solution gives a descent direction that improves both IID and OOD generalization by reducing \(\mathcal{L}_{ERM}\) and \(\mathcal{L}_{IRM}\) simultaneously.
In fact, the Lagrange function of Eq. (20) can be written as
\[\begin{split}&\mathcal{L}(\Delta\mathbf{m},\zeta,w_{ERM},w_{IRM})= \zeta+\frac{1}{2}\left\|\Delta\mathbf{m}\right\|_{2}^{2}\\ &+w_{ERM}((\nabla_{\mathbf{m}}\mathcal{L}_{ERM})^{T}\Delta \mathbf{m}-\zeta)+w_{IRM}((\nabla_{\mathbf{m}}\mathcal{L}_{IRM})^{T}\Delta \mathbf{m}-\zeta),\end{split} \tag{21}\]
where \(w_{ERM}\geq 0\) and \(w_{IRM}\geq 0\) are the Lagrange multipliers. Then
\[\begin{split}\frac{\partial\mathcal{L}}{\partial\Delta_{\mathbf{m }}}=&\Delta_{\mathbf{m}}-w_{ERM}\cdot\nabla_{\mathbf{m}}\mathcal{L}_{ ERM}-w_{IRM}\cdot\nabla_{\mathbf{m}}\mathcal{L}_{IRM}=0,\\ \Rightarrow&\Delta_{\mathbf{m}}=-w_{ERM}\cdot\nabla_{ \mathbf{m}}\mathcal{L}_{ERM}-w_{IRM}\cdot\nabla_{\mathbf{m}}\mathcal{L}_{ IRM},\\ \frac{\partial\mathcal{L}}{\partial\zeta}=& 1-w_{ERM}-w_{IRM}. \Rightarrow w_{ERM}+w_{IRM}=1.\end{split} \tag{22}\]
Notably, the dual problem of Eq. (20) is Eq. (16), and according to KKT condition, we have
\[\begin{split} w_{ERM}^{*}((\nabla_{\mathbf{m}}\mathcal{L}_{ERM})^ {T}\Delta\mathbf{m}^{*}-\zeta^{*})=0,\\ w_{IRM}^{*}((\nabla_{\mathbf{m}}\mathcal{L}_{IRM})^{T}\Delta \mathbf{m}^{*}-\zeta^{*})=0.\end{split} \tag{23}\]
Thus, if \(\Delta\mathbf{m}^{*}=0\), then \((\nabla_{\mathbf{m}}\mathcal{L}_{ERM})^{T}\Delta\mathbf{m}^{*}=(\nabla_{ \mathbf{m}}\mathcal{L}_{IRM})^{T}\Delta\mathbf{m}^{*}=0\). If \(\Delta\mathbf{m}^{*}\neq 0\), then we have \(-\left\|\Delta\mathbf{m}^{*}\right\|_{2}^{2}-\zeta^{*}=0\), which implies that \((\nabla_{\mathbf{m}}\mathcal{L}_{ERM})^{T}\Delta\mathbf{m}^{*}\leq\zeta^{*} \leq-\left\|\Delta\mathbf{m}^{*}\right\|_{2}^{2}\) and \((\nabla_{\mathbf{m}}\mathcal{L}_{IRM})^{T}\Delta\mathbf{m}^{*}\leq\zeta^{*}\leq- \left\|\Delta\mathbf{m}^{*}\right\|_{2}^{2}\), and reduces \(\mathcal{L}_{ERM}\) and \(\mathcal{L}_{IRM}\) simultaneously.
### Representation Convertion
Based on the invariant mask obtained by the invariant mask generation module, we use the convert module to divide the raw multimedia representations into variant representations and invariant representations. Specifically, the invariant representations are
\[\begin{split}\Phi_{i}=\mathbf{m}\odot\mathbf{f}_{i}.\end{split} \tag{24}\]
Correspondingly, the variant representations can be expressed as
\[\begin{split}\Psi_{i}=(1-\mathbf{m})\odot\mathbf{f}_{i},\end{split} \tag{25}\]
where \(\mathbf{m}\in[0,1]^{d}\) is the float invariant mask.
### Final Recommendation Model
By repeating \(T\) times the workflow shown in Figure 1 until convergence, stable invariant masks are generated. Thus, we learn the final recommendation model \(\Gamma^{*}(u,i,\Phi_{i}|\Theta^{*})\) parameterized by \(\Theta^{*}\) based on the invariant representations generated by the convert module. The learning objective shown in Eq. (1) can be rewritten as
\[\begin{split}\arg\min_{\Theta^{*}}\mathcal{L}(\Gamma^{*}(u,i,\Phi_{i} |\Theta^{*})|\mathcal{R}^{tr}).\end{split} \tag{26}\]
The whole training process of PalmvRL is described in Algorithm 1.
## 4. Experiments
In this section, we conduct experiments on three widely used real-world datasets to answer the following research questions:
* **RQ1**: Can PalInvRL outperform other recommendation methods in both IID and OOD tasks?
* **RQ2**: How masks incorporating attention mechanisms affect learned representations?
* **RQ3**: How does each component in \(\mathcal{L}_{\text{mask}}\) affect the performance of PalInvRL in both IID and OOD tasks?
* **RQ4**: How does the number of environments affect the performance of PalInvRL?
### Datasets
We conduct experiments on three publicly available real-world datasets: Movielens, Tiktok, and Kwai. The summary statistics of these datasets are shown in Table 1.
**Movielens.** This dataset is widely used in personalized recommendation tasks. The dataset is constructed by collecting movie titles and descriptions from the Movielens dataset1 and retrieving the corresponding trailers. The visual, acoustic, and textual representations were extracted from the pre-trained ResNet50 (He et al., 2017), VGGish (He et al., 2017), and Sentence2Vec (He et al., 2017), respectively.
Footnote 1: [https://movielens.org/](https://movielens.org/).
**Tiktok.** It is collected from the micro-video sharing platform TikTok2. It includes micro-videos with a duration of 3-15 seconds, along with video captions, user information, and user-item interactions. The multi-modal representations include visual, acoustic, and textual representations of micro-videos. All of the multi-modal representations are provided by the official.
**Kwai.** It is a large-scale micro-video dataset collected from the Kwai platform3. Similar to the TikTok dataset, it includes user information, micro-video content representations, and interaction data. We follow the previous work (He et al., 2017) to obtain the raw multimedia representations. It should be noticed that this dataset only includes visual representations.
Footnote 2: [https://ww.kwai.com/](https://ww.kwai.com/).
Footnote 3: [https://www.kwai.com/](https://www.kwai.com/).
### Experiment Setup
#### 4.2.1. Baselines
To verify the effectiveness of PalInvRL, we compare it with the following baseline methods:
**NGCF (Zhou et al., 2017).** It is based on graph neural networks that explicitly encode collaborative signals as higher-order connections by performing embedding propagation.
**UltraGCN (Wang et al., 2018).** It is an ultra-simplified GCN model that does not perform explicit message passing, but directly approximates the limit of infinite layer graph convolutions by constraining losses.
**LightGCN (He et al., 2017).** It is a graph-based model designed to improve the performance and efficiency of recommendations by simplifying the graph convolution networks.
**VBPR (He et al., 2017).** It is the first model that considers introducing visual representation into the recommendation system by concatenating visual embeddings with id embeddings as the item representations.
**MMGCN (Wang et al., 2018).** It is a model that builds on the message-passing idea of graph neural networks to generate user and micro-video-specific pattern representations to capture user preferences better.
**InvRL (He et al., 2017).** This model introduces IRM to multi-modal recommendations for the first time, which mitigates the effects of spurious correlations by learning invariant item representations.
**MMSSL (Wang et al., 2018).** This method solves the problem of label sparsity in multimedia recommendations by two-stage self-supervised learning to achieve modality-aware data scaling.
#### 4.2.2. Experiment Protocol and Details
Following the previous work (He et al., 2017), three widely-used metrics are adopted to evaluate the ranking performance: Recall@K (R@K), NDCG@K (N@K), and Precision@K (P@K). We set \(K=10\) in our experiments. All the experiments are implemented with PyTorch (Vaswani et al., 2017) and Adam is implemented as the optimizer. The embedding size is fixed to 64 for all models. For Movielens, we set \(d_{V}=2,048\), \(d_{A}=128\), and \(d_{T}=10\). For Tiktok, we set \(d_{V}=128\), \(d_{A}=128\), and \(d_{T}=128\). For Kwai, we only use visual representations and set \(d_{V}=4,096\). The batch size is set to 512 and the number of environments is set to 10. We also set the parameters \(\lambda\) in Eq. (11) to 1, and the hyper-parameters \(\eta\) and \(\kappa\) in Eq. (3) to 0.0001 and 0.01, respectively. The heterogeneous identification module, invariant mask generation module, and the final recommendation model are trained for 20, 40, and 500 epochs, respectively. To evaluate the performance of PalInvRL in both IID and OOD tasks, we first use UltraGCN to identify two environments using the heterogeneous identification module. We train the model in the environment that contains more data, and test the model in the environment that contains less data for the OOD task. We split the training set for the OOD task into two parts with 9:1 ratio to obtain the training set and test set for the IID task.
### Performance Comparison (RQ1)
We report the performance of various methods on all three datasets in Table 2, where the best-performing baselines are bolded. We have the following observations.
First, multi-modality-based methods outperform single-modality-based methods in both IID and OOD tasks, and MMSSL achieves the most competitive performance among all the baseline methods.
Second, in the OOD recommendation task, it shows that PalInvRL significantly outperforms other methods, due to PalInvRL learning invariant representations and identifying spuriously correlation. In addition, it should be noticed that PalInvRL outperforms InvRL, which is attributed to PalInvRL learning a better mask by considering Pareto optimization and weighting both invariant and variant representation using the attention mechanism.
Third, in the IID task, although InvRL achieved better performance than some single-modality-based methods like NGCF and LightGCN, its performance is not as good as that of other multi-modality-based methods like MMGCN. This is because InvRL only focuses on learning invariant representations, which leads to performance degradation in the IID task. However, the proposed method
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Dataset** & **\#Interactions** & **\#Items** & **\#Users** & **Sparsity** & \(d_{V}\) & \(d_{A}\) & \(d_{T}\) \\ \hline
**Movielens** & 1,239,508 & 5,986 & 55,485 & 99.63\% & 2,048 & 128 & 100 \\
**Tiktok** & 726,065 & 76,085 & 36,656 & 99.99\% & 128 & 128 & 128 \\
**Kwai** & 298,492 & 86,483 & 7,010 & 99.98\% & 2,048 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 1. The statistics of datasets. \(d_{V}\), \(d_{A}\), and \(d_{T}\) denote the dimensions of visual, acoustic, and textual modalities.
PalInvRL weights the ERM loss \(\mathcal{L}_{ERM}\) and IRM loss \(\mathcal{L}_{IRM}\) to ensure the learned representations are able to perform well in both IID and OOD tasks. Therefore, PalInvRL also achieves the best performance compared to other methods in the IID task.
Overall speaking, PalInvRL not only outperforms other baseline methods in the OOD task, but also has the best performance in the IID task. In addition, we conduct a more detailed experiment to compare PaInvRL, InvRL, and UltraGCN using Recall@K as the evaluation metric in the OOD task on all three datasets. The results are presented in Figure 2, which indicates that PaInvRL stably outperforms UltraGCN and InvRL across different K values, which further verifies the effectiveness of the proposed method.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Task**} & \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Modality**} & \multicolumn{3}{c}{**Movielens**} & \multicolumn{3}{c}{**Tiktok**} & \multicolumn{3}{c}{**Kwai**} \\ \cline{3-11} & & & **P@10** & **R@10** & **N@10** & **P@10** & **R@10** & **N@10** & **P@10** & **R@10** & **N@10** \\ \hline \multirow{6}{*}{**IIID**} & **NGC**[72] & Single & 0.0180 & 0.1355 & 0.0383 & 0.0138 & 0.0409 & 0.0513 & 0.0425 & 0.0487 & 0.0697 \\ & **UltraGCN**[47] & Single & 0.0126 & 0.1060 & 0.0418 & 0.0163 & 0.0437 & 0.0543 & 0.0459 & 0.0509 & 0.0729 \\ & **LightGCN**[16] & Single & 0.0215 & 0.1643 & 0.0554 & 0.0164 & 0.0444 & **0.0584** & 0.0496 & 0.0435 & 0.0688 \\ & **VBPR**[15] & Multi & 0.0176 & 0.1290 & 0.0400 & 0.0142 & 0.0409 & 0.0469 & 0.0409 & 0.0476 & 0.0682 \\ & **MMGCN**[79] & Multi & 0.0207 & 0.1613 & 0.0641 & 0.0154 & 0.0444 & **0.0584** & 0.0496 & 0.0535 & 0.0738 \\ & **InvRL**[10] & Multi & 0.0218 & **0.1681** & 0.0617 & 0.0213 & 0.0440 & 0.0576 & 0.0528 & 0.0549 & 0.0729 \\ & **MMSSL**[76] & Multi & 0.0237 & 0.1587 & 0.0572 & 0.0202 & 0.0443 & 0.0555 & 0.0523 & 0.0518 & 0.0748 \\ & **PaInvRL (ours)** & Multi & **0.0240** & 0.1660 & **0.0650** & **0.0229** & **0.0463** & 0.0578 & **0.0536** & **0.0595** & **0.0815** \\ \hline \multirow{6}{*}{**OOD**} & **NGC**[72] & Single & 0.0191 & 0.0733 & 0.0474 & 0.0048 & 0.0060 & 0.0153 & 0.0411 & 0.0828 & 0.1466 \\ & **UltraGCN**[47] & Single & 0.0212 & 0.0708 & 0.0508 & 0.0043 & 0.0061 & 0.0174 & 0.0321 & 0.0784 & 0.1464 \\ & **LightGCN**[16] & Single & 0.0159 & 0.0638 & 0.0412 & 0.0053 & 0.0082 & 0.0169 & 0.0420 & 0.0883 & 0.1331 \\ & **VBPR**[15] & Multi & 0.0165 & 0.0649 & 0.0396 & 0.0036 & 0.0057 & 0.0153 & 0.0324 & 0.0763 & 0.1504 \\ & **MMGCN**[79] & Multi & 0.0188 & 0.0732 & 0.0463 & 0.0044 & 0.0058 & 0.0161 & 0.0402 & 0.0831 & 0.1503 \\ & **InvRL**[10] & Multi & 0.0253 & 0.0791 & 0.0543 & 0.0059 & 0.0097 & 0.0226 & 0.0407 & 0.0862 & 0.1894 \\ & **MMSSL**[76] & Multi & 0.0225 & 0.0813 & 0.0537 & 0.0055 & 0.0098 & 0.0234 & 0.0462 & 0.1036 & 0.1682 \\ & **PaInvRL (ours)** & Multi & **0.0291** & **0.0825** & **0.0584** & **0.0067** & **0.0107** & **0.0252** & **0.0524** & **0.1113** & **0.2061** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Performance comparison on different datasets in terms of Recall@10, Precision@10, and NDCG@10.
Figure 3. Visualization of the masks on different modalities and corresponding patterns on all three datasets.
Figure 2. The performance comparison between UltraGCN, InvRL and PaInvRL on all three datasets using Recall@K as evaluation metric, where K is varying from 0 to 100 with step-size 10.
### Ablation Study (RQ2)
In ablation studies, we first investigate the effect of IRM loss \(\mathcal{L}_{IRM}\) and ERM loss \(\mathcal{L}_{ERM}\), which is used for training the invariant mask generation module. Then we discuss how the generated mask \(m\) works. Additionally, we conducted experiments to study the impact of environmental quantity on experimental performance.
We consider three cases in the ablation study: only with the ERM loss \(\mathcal{L}_{ERM}\), only with the IRM loss \(\mathcal{L}_{IRM}\), and with adaptively weighted ERM loss and IRM loss \(\mathcal{L}_{ERM}+\mathcal{L}_{IRM}\) across all three datasets. The experiment results are shown in Table 3. From this table, we can observe that when PalInvRL only with the IRM loss, it is able to achieve the best performance in the OOD task but perform the worst in the IID task. Meanwhile, when only using the ERM loss, it will perform best in the IID task but perform worst in the OOD task. It shows that if only focusing on a single task, though we will obtain a competitive result, the model performance is harmed in another task, which shows the necessity of considering two tasks simultaneously. When using weighted IRM loss and ERM loss together, it has competitive performance in both IID tasks and OOD tasks. Overall, using only one type of loss or simply using a hyper-parameter to weight them directly (i.e., InvRL) cannot achieve good recommendation performance. When we adaptively weight ERM loss and IRM loss together and obtain the weights from a Pareto optimal solution, it obtains competing recommendation performance. This can be attributed to the fact that our solution cannot be dominated by other solutions. In other words, there does not exist any solution that performs better than our solution on both IID and OOD tasks at the same time.
### In Depth Analysis (RQ3, RQ4)
**Study on the Generated Mask (RQ3).** To study the effect of the generated mask \(m\) in the invariant mask generation module, we visualize the invariant mask generated on three datasets, Movielens, Tiktok, and Kwai, as shown in Figure 3. According to the results in Figure 3, the generated masks show different distributions in different modalities, especially, for the Movielens and Tiktok datasets, which contain three different modal representations of visual, acoustic, and textual. Since the Kwai dataset has only one modal representation, the distribution of the masks varies subtly. Additionally, our method demonstrates a more uniform distribution across different modalities compared to InvRL (Deng et al., 2019). It can be attributed to the fact that PalInvRL learns a better mask by considering Pareto optimization during mask generation, while InvRL only considers a simple hyper-parameter to weight two losses together.
**Study on Number of Environments (RQ4).** To investigate the capacity of PalInvRL under different numbers of environments, we conduct several experiments on the Movielens dataset with different numbers of environments. The experimental results are shown in Figure 4. First, PalInvRL performs better under a moderate number of experiments. When the number of environments is small, we cannot effectively separate the variant and invariant information. When the number of environments is large, only a few samples are in each environment. Therefore, either too small or too large number will harm the performance of the proposed method.
## 5. Conclusions
In this paper, we provide a fresh perspective on the optimization dilemma in the IID-OOD generalization task of multimedia recommendation from a multi-objective optimization viewpoint. We propose a new Pareto-optimality-based invariant representation learning method, PalInvRL, which adaptively assigns the weights of ERM loss and IRM loss to obtain Pareto-optimal solutions. In contrast to previous approaches like InvRL, our gradient-based invariant mask generation method is shown to provide a descent direction that improves both IID and OOD generalization by reducing ERM and IRM losses simultaneously. This allows the final recommendation model trained on the learned invariant representations to achieve Pareto optimality in both IID and OOD recommendation tasks. Extensive experimental results show that our method achieves significant performance improvements compared to various baselines on three public datasets. In our future work, it is interesting to enhance the explainability of the learned invariant representations by developing a GNN-based explainer to learn causal effects on modality-aware user-item interaction graphs. This will help provide insights into how the invariant representations contribute to the recommendation performance and enable us to make more informed decisions in the recommendation process.
Figure 4. Experimental comparison of different environment numbers on IID and OOD recommendation tasks.
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline \multirow{3}{*}{**Task**} & \multirow{3}{*}{**Loss**} & \multicolumn{3}{c}{**Movielens**} & \multicolumn{3}{c}{**Tiktok**} & \multicolumn{3}{c}{**Kwai**} \\ \cline{3-10} & & **P@10** & **R@10** & **N@10** & **P@10** & **R@10** & **N@10** & **P@10** & **R@10** & **N@10** \\ \hline \multirow{3}{*}{**IID**} & \(\mathcal{L}_{ERM}\) & 0.0253 & 0.1068 & 0.0410 & 0.0263 & 0.0537 & 0.0643 & 0.0574 & 0.0619 & 0.0837 \\ & \(\mathcal{L}_{IRM}\) & 0.0141 & 0.1027 & 0.0311 & 0.0217 & 0.0385 & 0.0572 & 0.0562 & 0.0612 & 0.0831 \\ & \(\mathcal{L}_{ERM}\) + \(\mathcal{L}_{IRM}\) & 0.0240 & 0.1660 & 0.0650 & 0.0229 & 0.0463 & 0.0578 & 0.0536 & 0.0595 & 0.0815 \\ \hline \multirow{3}{*}{**OOD**} & \(\mathcal{L}_{ERM}\) & 0.0212 & 0.0583 & 0.0380 & 0.0058 & 0.0097 & 0.0234 & 0.0508 & 0.1026 & 0.1983 \\ & \(\mathcal{L}_{IRM}\) & 0.0353 & 0.1009 & 0.0693 & 0.0068 & 0.0103 & 0.0235 & 0.0575 & 0.1504 & 0.2522 \\ \cline{1-1} & \(\mathcal{L}_{ERM}\) + \(\mathcal{L}_{IRM}\) & 0.0291 & 0.0825 & 0.0584 & 0.0067 & 0.0107 & 0.0252 & 0.0524 & 0.1113 & 0.2061 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Performance comparison with different loss components in the IID and OOD tasks.
## Acknowledgments
This work was supported by grants from the National Major Science and Technology Projects of China (grant no: 2022YFB3303302), the National Natural Science Foundation of China (grant nos: 61977012, 62207007) and the Central Universities Project in China for the Digital Cinema Art Theory and Technology Lab at Chongqing University (grant nos: 2021CDJYGRH011, 2020CDJSK06PT14).
|
2307.12939 | The width of embedded circles | We develop a Morse-Lusternik-Schnirelmann theory for the distance between two
points of a smoothly embedded circle in a complete Riemannian manifold. This
theory suggests very naturally a definition of width that generalises the
classical definition of the width of plane curves. Pairs of points of the
circle realising the width bound one or more minimising geodesics that
intersect the curve in special configurations. When the circle bounds a totally
convex disc, we classify the possible configurations under a further geometric
condition. We also investigate properties and characterisations of curves that
can be regarded as the Riemannian analogues of plane curves of constant width. | Lucas Ambrozio, Rafael Montezuma, Roney Santos | 2023-07-24T17:08:32Z | http://arxiv.org/abs/2307.12939v2 | # The width of embedded circles
###### Abstract.
We develop a Morse-Lusternik-Schnirelmann theory for the distance between two points of a smoothly embedded circle in a complete Riemannian manifold. This theory suggests very naturally a definition of width that generalises the classical definition of the width of plane curves. Pairs of points of the circle realising the width bound one or more minimising geodesics that intersect the curve in special configurations. When the circle bounds a totally convex disc, we classify the possible configurations under a further geometric condition. We also investigate properties and characterisations of curves that can be regarded as the Riemannian analogues of plane curves of constant width.
L.A. is supported by CNPq - Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (309908/2021-3 - Bolsa PQ) and by FAPERJ - Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro (grant SEI-260003/000534/2023 - BOLSA E-26/200.175/2023 and grant SEI-260003/001527/2023 - APQ1 E-26/210.319/2023). R.M. is supported by Instituto Serrapilheira grant "New perspectives of the min-max theory for the area functional" and by CNPq - Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (311028/2020.9 - Bolsa PQ). R.S. is supported by Instituto Serrapilheira grant "New perspectives of the min-max theory for the area functional".
smooth embedded curve, that in fact contains no segment orthogonal to the boundary curve, see Figure 1 in [5].
The function \(\theta\in S^{1}\mapsto w(\theta)\in(0,+\infty)\) contains interesting information about of the geometry of \(\Gamma\). For instance, the _Cauchy-Crofton formula_ for convex curves in the Euclidean plane [28] computes the length of such rectifiable curves \(\Gamma\) from the information about the width of \(\Gamma\) in almost every direction. More precisely, if \(\Gamma\) is convex, we have
\[\int_{S^{1}}w(\theta)d\theta=2L(\Gamma).\]
An immediate and interesting consequence of this formula is the inequality
\[\frac{w(\Gamma)}{L(\Gamma)}\leq\frac{1}{\pi},\]
which is an equality if and only if \(\Gamma\) has the same width in all directions, that is, \(\Gamma\) is a _curve of constant width_. These curves are also characterised as those for which the width \(w(\Gamma)\) equals the diameter of \(\Gamma\).
More generally, consider a smoothly embedded circle \(\Gamma\) in a complete Riemannian surface. What is the right generalisation of the "width" for such objects? Can it also be realised as the length of a certain minimising geodesic intersecting the curve orthogonally? How does this notion of "width" compare to the diameter and the length of the curve? What should be the analogues of plane curves of constant width in this more general setup?
We investigate these and other related questions, adopting the perspective of a min-max variational theory for a non-local, geometric functional.
### Lusternik-Schnirelmann theory
The set \(\mathcal{P}\) of subsets of the circle \(S^{1}\) with at most two elements is a compact manifold that can be identified with the set of ordered pairs \((p,q)\in S^{1}\times S^{1}\) modulo the equivalence relation that identifies \((p,q)\) and \((q,p)\). In particular, \(\mathcal{P}\) is homeomorphic to a Mobius band, whose boundary points are precisely the singletons \(\{p\}\subset S^{1}\). The elements of \(\mathcal{P}\) will be denoted by \(\{p,q\}\) (where, of course, we may have \(p=q\)).
If \(\Gamma\) is a smoothly embedded circle in some complete Riemannian manifold \((M^{n},g)\), we define a functional \(\mathcal{D}:\mathcal{P}\to\mathbb{R}\) by setting
\[\mathcal{D}(\{p,q\})=d(p,q)\quad\text{for all}\quad p,q\in\Gamma,\]
where \(d:M\times M\to[0,+\infty)\) is the Riemannian distance of \((M^{n},g)\).
Clearly, \(\mathcal{D}\) is a non-negative continuous functional. While its absolute minimum is zero and attained precisely at the subsets with exactly one element, its maximum is just the _diameter_ of \(\Gamma\) in \((M^{n},g)\), that is, the maximum distance in \((M^{n},g)\) between pairs of points of \(\Gamma\).
This particular bounded continuous function on \(\mathcal{P}\) has more interesting properties, though. In fact, it passes down continuously to the quotient \(\mathcal{P}_{*}\) of \(\mathcal{P}\) by the relation that identifies all boundary points of \(\mathcal{P}\) as a single
point. By doing so, we can regard \(\mathcal{D}\) as a continuous non-negative function on the real projective plane \(\mathcal{P}_{*}\), which attains its minimum value zero and its maximum value equal to the diameter of \(\Gamma\) in \((M^{n},g)\).
Inspired by the Morse and Lusternik-Schnirelmann theories about the number of critical points of smooth functions on a real projective plane, one could hope to detect another "critical point" of \(\mathcal{D}\) at some "critical level" between the other two. The general notion of "width" of the curve \(\Gamma\) in \((M^{n},g)\) that we will propose captures precisely this intuition.
Before we continue, there are two issues of technical nature that are worth to be highlighted, because the geometric functional \(\mathcal{D}\) has two interesting features from a variational perspective that pose new difficulties for the min-max method.
The first feature is the non-local character of \(\mathcal{D}\). By this we just mean that the geometry of \((M^{n},g)\) must be known in order to compute the distance between pairs of points in \(\Gamma\). In that regard, one can make an analogy between the "critical value" that we are seeking to define and the first Steklov eigenvalue of a compact Riemannian surface with connected boundary. (_Cf._ Section 4.1 of the survey article [13]).
The second feature is the fact that the Riemannian distance function is not smooth in general. For instance, it is not smooth if there is a pair of points that is connected by two or more minimising geodesics. While in Euclidean space this does not happen, in other Riemannian manifolds this is a possibility.
Nevertheless, the Riemannian distance function is well-behaved enough so to allow a distinction between "critical" and "regular" points of \(\mathcal{D}\), and the development of a Morse theory. In the case of the distance function to a fixed point in a complete Riemannian manifold, the suitable notions were introduced by Grove and Shiohama [17], and yielded important results in Riemannian Geometry (for instance, the Diameter Sphere Theorem [17] and Gromov's Betti Number Theorem [15]). As it will become clearer from the discussion that follows, we will eventually prove that the "critical value" of \(\mathcal{D}\) predicted by the above informal discussion is indeed attained as the value of \(\mathcal{D}\) at some "critical point".
### Regular and critical points
Let \((M^{n},g)\) be a complete Riemannian manifold, and let \(\Gamma\) be a smoothly embedded circle in \((M^{n},g)\).
Given points \(p\) and \(q\) in \(M\), let \(\gamma:[0,a]\to M\) be a geodesic of \((M^{n},g)\) joining \(p\) and \(q\). If \(\gamma\) is not the trivial geodesic, we always assume it to be normalised, _i.e._\(|\gamma^{\prime}|=1\). A geodesic \(\gamma\) is called _minimising_ if its length \(L(\gamma)\) is equal to the Riemannian distance \(d(p,q)\) between \(p\) and \(q\) in \((M^{n},g)\).
By the first variation formula of the length, if \(c_{t}\) is a smooth variation of a geodesic \(c_{0}=\gamma\) by smooth curves in \(M\) joining points of \(\Gamma\), then
\[\frac{d}{dt}_{|_{t=0}}L(c_{t})=\langle v_{2},\gamma^{\prime}(a)\rangle- \langle v_{1},\gamma^{\prime}(0)\rangle=\langle v_{2},(\gamma^{\prime}(a))^{T }\rangle-\langle v_{1},(\gamma^{\prime}(0))^{T}\rangle, \tag{1.3.1}\]
where \(v_{1}\in T_{p}\Gamma\), \(v_{2}\in T_{q}\Gamma\) are the boundary values of the variational vector field \(dc_{t}/dt_{|t=0}\) along \(\gamma\), and \((\gamma(0)^{\prime})^{T}\), \((\gamma(a)^{\prime})^{T}\) denote orthogonal projections onto \(T\Gamma\). (We follow standard conventions and use \(\langle-,-\rangle\) for \(g\) sometimes).
As before, denote by \(\mathcal{P}\) the set of subsets \(\{x,y\}\subset\Gamma\) with at most two elements, and let \(\mathcal{D}:\mathcal{P}\to[0,+\infty)\) be the functional that assigns to each \(\{x,y\}\in\mathcal{P}\) the Riemannian distance between \(x\) and \(y\) in \((M^{n},g)\), \(\mathcal{D}(\{x,y\})=d(x,y)\). Motivated by the above remarks, we adopt the definition below.
**Definition 1.1**.: _The set \(\{p,q\}\subset\Gamma\) is called a regular point of \(\mathcal{D}\) if there exists a vector \((v_{1},v_{2})\in T_{p}\Gamma\times T_{q}\Gamma\) such that, for every minimising geodesic \(\gamma\) joining \(p=\gamma(0)\) to \(q=\gamma(a)\), we have_
\[\langle v_{2},\gamma^{\prime}(a)\rangle-\langle v_{1},\gamma^{\prime}(0) \rangle<0.\]
_The set \(\{p,q\}\subset\Gamma\) is called a critical point of \(\mathcal{D}\) if it is not a regular point._
Thus, if \(\{p,q\}\subset\Gamma\) is a regular value, there exists a vector field \(V\) on \(M\), that is tangent to \(\Gamma\), and whose flow \(\phi_{t}\) has the following property: For _every_ minimising geodesic \(\gamma\) in \((M^{n},g)\) joining \(p\) and \(q\), the curves \(\phi_{t}(\gamma)\) for small \(t>0\) are curves with extremities in \(\Gamma\) such that \(L(\phi_{t}(\gamma))<L(\gamma)=d(p,q)\). In particular, the extremities of the curves \(\phi_{t}(\gamma)\), for \(t>0\) small enough, are at distance strictly smaller than \(d(p,q)\).
From Definition 1.1, it is clear that all singletons \(\{p\}\subset\Gamma\) are trivially critical points. More interestingly, if \(p\) and \(q\) are the extremities of a _free boundary_ minimising geodesic, _i.e._ a minimising geodesic that is orthogonal to \(\Gamma\) at both extremities, then \(\{p,q\}\subset\Gamma\) is a critical point of \(\mathcal{D}\).
There are, however, many other types of critical points. For instance, if \(p\), \(q\in\Gamma\) are points that are joined by two minimising geodesics \(\gamma_{1},\gamma_{2}:[0,a]\to M\) that are _simultaneously stationary_ in the sense that there exists a constant \(c>0\) such that
\[(\gamma_{1}^{\prime}(0))^{T}=-c(\gamma_{2}^{\prime}(0))^{T}\quad\text{and} \quad(\gamma_{1}^{\prime}(a))^{T}=-c(\gamma_{2}^{\prime}(a))^{T},\]
then \(\{p,q\}\) is a critical point of \(\mathcal{D}\) as well. There are smoothly embedded circles in Riemannian manifolds such that all non-trivial critical points of \(\mathcal{D}\) bound a pair of simultaneously stationary minimising geodesics, while none of them are the extremities of a free boundary minimising geodesic, see Example 4 in Section 8.
Free boundary minimising geodesics and pairs of simultaneously stationary minimising geodesics will play an important role when \(\Gamma\) is the boundary of a totally convex disc inside a Riemannian surface, see Section 1.6.
Finally, we remark that when the distance function \(d\) of \((M^{n},g)\) restricted to pairs of different points is a smooth function, it is immediate to check that Definition 1.1 captures precisely the standard regular and critical points of the restriction of \(d\) to pairs of different points of \(\Gamma\). This is the case, for instance, of Cartan-Hadamard manifolds.
### The width of a curve
Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold \((M^{n},g)\). Following the discussion in Section
1.2, we propose a definition of the width of \(\Gamma\). First, we define certain closed loops in the projective plane \(\mathcal{P}_{*}\) that we call _sweepouts_.
**Definition 1.2**.: _A family of pairs of points \(\{p_{t},q_{t}\}\subset\Gamma\), \(t\in[0,1]\), is called a sweepout of \(\Gamma\) when the following conditions hold:_
1. \(p_{0}=q_{0}\) _and_ \(p_{1}=q_{1}\)_._
2. \(t\in[0,1]\mapsto p_{t}\in\Gamma\) _and_ \(t\in[0,1]\mapsto q_{t}\in\Gamma\) _are continuous maps._
3. \(p_{t}\) _and_ \(q_{t}\) _bound a closed arc_ \(C_{t}\subset\Gamma\) _in such way that_ \(t\in[0,1]\mapsto C_{t}\) _is a continuous map with_ \(C_{0}=\{p_{0}\}=\{q_{0}\}\) _and_ \(C_{1}=\Gamma\)_._
Here, the distance between arcs \(C_{1}\), \(C_{2}\subset\Gamma\) is measured by the length of the symmetric difference \(C_{1}\Delta C_{2}=(C_{1}\setminus C_{2})\cup(C_{2}\setminus C_{1})\).
Sweepouts can be regarded as homotopically non-trivial loops in the projective plane \(\mathcal{P}_{*}\). Indeed, this space is doubly covered by the set \(\mathcal{O}_{*}\) formed by ordered pairs \((C_{1},C_{2})\) consisting of closed arcs of the circle \(\Gamma\) such that \(\partial C_{1}=\partial C_{2}=C_{1}\cap C_{2}\). (Here, we abuse the notation and write \(\partial\{x\}=\{x\}\) and \(\partial\Gamma=\Gamma\), for convenience). The covering map is \((C_{1},C_{2})\in\mathcal{O}_{*}\mapsto\partial C_{1}=\partial C_{2}\in \mathcal{P}_{*}\). Then, a continuous path \(t\in[0,1]\mapsto\{p_{t},q_{t}\}\subset\mathcal{P}_{*}\) is a sweepout of \(\Gamma\) if and only if the continuous lift \(t\in[0,1]\mapsto(C_{t},\Gamma\setminus int(C_{t}))\in\mathcal{O}_{*}\), where \(\partial C_{t}=\{p_{t},q_{t}\}\) for \(t\in(0,1)\), starts at \((\{p_{0}\},\Gamma)\) and finishes at \((\Gamma,\{p_{1}\})\neq(\{p_{0}\},\Gamma)\).
**Definition 1.3**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold \((M^{n},g)\). The width of \(\Gamma\) is the number_
\[\mathcal{S}(\Gamma)=\inf_{\{p_{t},q_{t}\}\in\mathcal{V}}\;\max_{t\in[0,1]}d(p _{t},q_{t})\]
_where \(\mathcal{V}\) is the set of sweepouts \(\{p_{t},q_{t}\}\) of \(\Gamma\)._
Since the subset of points that divides \(\Gamma\) into two arcs of equal length form a compact subset of \(\mathcal{P}\), and since every sweepout of \(\Gamma\) contains points \(\{p_{t},q_{t}\}\subset\Gamma\) with this property because of condition \(iii)\), we have
\[\mathcal{S}(\Gamma)>0.\]
If \(\Gamma\) is the boundary of a region \(\Omega\) in a complete Riemannian surface \((M^{2},g)\), and if all minimising geodesics joining points of \(\Omega\) lie in \(\Omega\), the distance between pairs of points in \(\Gamma=\partial\Omega\) depends only on \((\Omega,g)\), and \(\mathcal{S}(\partial\Omega)\) can be regarded as a Riemannian invariant of \((\Omega,g)\).
### The basic min-max theorem
We are now ready to formulate the most basic version of the min-max theorem for the functional \(\mathcal{D}\).
**Theorem A**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold \((M^{n},g)\). Then \(\mathcal{S}(\Gamma)>0\) and there exists a critical point \(\{p,q\}\subset\Gamma\) of \(\mathcal{D}\) such that_
\[\mathcal{D}(\{p,q\})=\mathcal{S}(\Gamma).\]
An immediate consequence of Definition 1.1 is that, if some non-trivial critical point \(\{p,q\}\subset\Gamma\) bounds only one minimising geodesic, then this minimising geodesic meets \(\Gamma\) orthogonally both at \(p\) and at \(q\). Thus, if all pairs
of points of \((M^{n},g)\) are joined by a unique minimising geodesic, Theorem A guarantees that \(\Gamma\) is always cut orthogonally by some geodesic that is, at the same time, the curve of least length in \((M^{n},g)\) joining its extremities in \(\Gamma\). This is the case for Cartan-Hadamard manifolds, for example.
On the other hand, there are examples of curves \(\Gamma\) bounding convex regions of Riemannian surfaces such that \(\mathcal{S}(\Gamma)\) is not attained as the length of a free boundary minimising geodesic, see Example 4 in Section 8.
The preceding remarks already points to the fact that Theorem A detects a different geometric feature of certain Riemannian surfaces with convex, connected boundary \((\Omega,g)\) than other notions of width arising from min-max procedures designed to produce free boundary geodesics on such surfaces. Before we elaborate on this point in Section 1.10, we discuss what else can be said about the minimising geodesics bounded by a non-trivial critical point of \(\mathcal{D}\) in \(\partial\Omega\), when \(\Omega\) is a disc.
### The width of the boundary of discs
Let \(\Omega\) be a totally convex disc with smooth boundary inside a complete Riemannian surface \((M^{2},g)\). Recall that a subset of a Riemannian manifold is called _totally convex_ when every geodesic with extremities in the subset lies entirely in the subset. Thus, for most of our purposes, we may well forget about the ambient surface and work within \((\Omega,g)\). (A large class of examples of these objects is described in Example 2 in Section 8).
The boundary of a totally convex disc \((\Omega,g)\) is _convex_ in \(\Omega\) in the sense that the second fundamental form \(A\) of \(\partial\Omega\), with respect to the outward pointing unit normal \(N\), satisfies \(A(X,X)=\langle\nabla_{X}N,X\rangle\geq 0\) for all vector fields \(X\) that are tangent to \(\partial\Omega\). (If \(X\) has norm one, \(A(X,X)\) is nothing but the _geodesic curvature_ of \(\partial\Omega\) in \((\Omega,g)\)). If the strict inequality holds for all non-zero tangent vectors to the boundary, we say that the boundary of \(\Omega\) is _strictly convex_.
If the extremities of a geodesic \(\gamma\) lie in \(\partial\Omega\), we say that \(\gamma\) is _proper_ whenever it intersects \(\partial\Omega\) only at its extremities. By the convexity assumption on \(\Omega\), a non-proper geodesic joining two boundary points must be itself part of \(\partial\Omega\). In particular, if the boundary is strictly convex, then all geodesics joining boundary points are proper.
Under these geometric conditions, critical points of \(\mathcal{D}\) on \(\partial\Omega\) and the minimising geodesics they bound enjoy further properties. For instance, a uniqueness property holds (Proposition 2.8), and local minima of \(\mathcal{D}\) bound exactly one minimising geodesic (Proposition 2.9). Moreover, Theorem A can be refined to yield much more precise information about critical points of \(\mathcal{D}\) at the level \(\mathcal{S}(\partial\Omega)\), at least under a natural geometric condition that we will introduce shortly.
First, we need to recall the definition of the Morse index of a free boundary geodesic. Given a free boundary proper geodesic \(\gamma:[0,a]\to\Omega\) and a vector field \(X\) along \(\gamma\) that is normal to \(\gamma\) (and therefore tangent to \(\Gamma\) at the
extremities of \(\gamma\)), define
\[Q(X,X)=\int_{0}^{a}\left(|\nabla_{\gamma^{\prime}}X|^{2}-K|X|^{2} \right)dt\\ -A(X(\gamma(0)),X(\gamma(0)))-A(X(\gamma(a)),X(\gamma(a))),\]
where \(K\) is the Gaussian curvature of \((\Omega,g)\).
This quadratic expression in \(X\in\Gamma(N\gamma)\) appears when one computes the second variation of the length of a smooth family of smooth curves \(c_{t}\), with extremities in \(\partial\Omega\), starting at \(c_{0}=\gamma\) and with variational vector field \(X\) along \(\gamma\). In fact, in this case
\[\frac{d^{2}}{dt^{2}}_{|_{t=0}}L(c_{t})=Q(X,X). \tag{1.6.1}\]
The geodesic \(\gamma\) is called _free boundary stable_ if \(Q(X,X)\geq 0\) for every \(X\in\Gamma(N\gamma)\), and _free boundary unstable_ otherwise. The _free boundary index_ of the geodesic \(\gamma\) is the index of the quadratic form \(Q\). (If \(\gamma\) is minimising, then the second variation formula (1.6.1) implies \(Q(X,X)\geq 0\) for every \(X\in\Gamma(N\gamma)\) that vanishes at the extremities, but this condition does not imply free boundary stability).
For instance, consider \(X\) so that \(\{\gamma^{\prime}(s),X(\gamma(s))\}\) is an orthonormal basis for all \(t\). Then \(Q(X,X)<0\) if the Gaussian curvature of \((\Omega,g)\) is non-negative and its boundary is strictly convex, or if the Gaussian curvature is positive and the boundary is convex. In other words, under these geometric assumptions, no free boundary geodesic of \((\Omega,g)\) is free boundary stable.
The geometric assumption
\[(\star)\]
_no free boundary stable geodesic exists on \((\Omega,g)\)_
allows a rather detailed understanding of the possible configurations of minimising geodesics bounded by the critical points whose existence is guaranteed by Theorem A.
**Theorem B**.: _Let \((\Omega,g)\) be a totally convex disc with smooth boundary in a complete Riemannian surface. Assume \((\Omega,g)\) has property \((\star)\). Then every critical point \(\{p,q\}\subset\partial\Omega\) of \(\mathcal{D}\) with \(p\neq q\) satisfies_
\[d(p,q)\geq\mathcal{S}(\partial\Omega).\]
_Moreover, let \(\{p,q\}\subset\partial\Omega\) be a critical point with \(d(p,q)=\mathcal{S}(\partial\Omega)\). Then:_
* _If there exists a free boundary minimising geodesic joining_ \(p\) _and_ \(q\)_, then this geodesic has free boundary index one._
* _If two different minimising geodesics_ \(\gamma_{1},\gamma_{2}:[0,a]\to\Omega\) _joining the points_ \(p\) _and_ \(q\) _satisfy_ \[\langle(\gamma_{1}^{\prime}(0))^{T},(\gamma_{2}^{\prime}(0))^{T}\rangle\leq 0 \quad\text{and}\quad\langle(\gamma_{1}^{\prime}(a))^{T},(\gamma_{2}^{\prime}(a ))^{T}\rangle\leq 0,\] _and none of them is free boundary, then_ \(\gamma_{1}\) _and_ \(\gamma_{2}\) _are simultaneously stationary._
This theorem and its proof are reminiscent of some results of Marques and Neves [20]. There are examples of Riemannian discs that satisfy the assumptions of Theorem B and provide examples of both behaviours described in \(i)\) and \(ii)\), see Example 4 in Section 8.
On the other hand, if the assumption (\(\star\)) is dropped, there are examples where one finds critical points \(\{x,y\}\subset\partial\Omega\) of \(\mathcal{D}\) with \(d(x,y)<\mathcal{S}(\partial\Omega)\), see Example 5 in Section 8.
Plane convex regions with smooth boundary are totally convex regions of the plane, and among them the strictly convex ones satisfy assumption (\(\star\)). Therefore, a consequence of Theorem B is that the geometric invariant \(\mathcal{S}(\partial\Omega)\) of a strictly convex plane region \(\Omega\) with smooth boundary is equal to the width of \(\partial\Omega\) discussed in Section 1.1. This gives some justification to our choice of terminology. Moreover, since plane smooth convex curves can be approximated by plane smooth strictly convex curves (_e.g._ by flowing it a bit by the curve shortening flow), by a straightforward continuity argument we can actually conclude:
**Corollary 1.4**.: _Let \(\Omega\) be a compact, convex, regular domain of the Euclidean plane. Then \(\mathcal{S}(\partial\Omega)=w(\partial\Omega)=\min_{\theta\in S^{1}}w(\theta)\)._
Non-convex plane smooth curves might be such that \(\mathcal{S}(\Gamma)<w(\Gamma)\), though. For instance, a thin U-shaped curve \(\Gamma\) has strictly larger widths \(w(\theta)\) in all directions than the value of \(\mathcal{S}(\Gamma)\).
Another consequence of Theorem B is the following existence theorem, of independent interest:
**Corollary 1.5**.: _Let \((\Omega,g)\) be a totally convex disc with smooth boundary in a complete Riemannian surface. Assume \((\Omega,g)\) has property (\(\star\)). Then, either there exists a free boundary proper minimising geodesic of free boundary index one, or there exists a pair of simultaneously stationary minimising geodesics with extremities in \(\partial\Omega\)._
### Width, diameter and length
Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold. Clearly, the distance between two points of \(\Gamma\) is at most the diameter of \(\Gamma\), which by its turn is bounded by the length of the shortest arc of \(\Gamma\) bounded by them. Thus,
\[\mathcal{S}(\Gamma)\leq diam(\Gamma)\leq\frac{1}{2}L(\Gamma),\]
Notice that \(L(\Gamma)/2\) is nothing but the _(intrinsic) diameter_ of \((\Gamma,g_{|_{\Gamma}})\).
In Section 8, we describe examples where equality holds between the first and second numbers, or between the second and the third numbers, without all of them being equal, see Examples 4 and 8.
It is interesting to characterise when any of the above inequalities is an equality. The most interesting case is perhaps the equality between the width and the diameter, because plane curves of constant width have this property.
**Theorem C**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold._
1. _If there exists a continuous map_ \(\phi:\Gamma\to\Gamma\) _such_ \(d(x,\phi(x))=diam(\Gamma)\)_, then_ \(\mathcal{S}(\Gamma)=diam(\Gamma)\)_._
2. _If_ \(\mathcal{S}(\Gamma)=diam(\Gamma)\)_, then for every_ \(x\in\Gamma\) _there exists_ \(y\in\Gamma\) _such that_ \(d(x,y)=diam(\Gamma)\)_._
_Assume, moreover, that \(\Gamma\) is the boundary of a totally convex smoothly embedded disc. Then \(\mathcal{S}(\Gamma)=diam(\Gamma)\) if and only if there exists a continuous map \(\phi:\Gamma\to\Gamma\) such \(d(x,\phi(x))=diam(\Gamma)\)._
Notice that a continuous map \(\phi:\Gamma\to\Gamma\) such that \(d(x,\phi(x))=diam(\Gamma)\) is a _monotone_ homeomorphism of the circle \(\Gamma\), _i.e._ its lift \(\hat{\phi}:\mathbb{R}\to\mathbb{R}\) is an increasing homeomorphism of the real line.
The equality between the extrinsic and intrinsic diameters of a curve \(\Gamma\), on the other hand, is easily characterised. In fact, \(diam(\Gamma)=L(\Gamma)/2\) if and only if \(\Gamma\) is a closed geodesic formed by two minimising geodesics of the same length \(L(\Gamma)/2\), see Lemma 5.6.
Finally, we characterise the equality between the three geometric invariants above as follows:
**Theorem D**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold. The following assertions are equivalent:_
1. \(\mathcal{S}(\Gamma)=L(\Gamma)/2\)_._
2. _For every_ \(x\)_,_ \(y\in\Gamma\)_, the distance between_ \(x\) _and_ \(y\) _equals the length of the shortest arc of_ \(\Gamma\) _bounded by these two points._
In particular, under any of these conditions, \(\Gamma\) is a geodesic such that any points \(x\), \(y\in\Gamma\) bounding arcs of \(\Gamma\) of the same length lie at distance \(d(x,y)=L(\Gamma)/2\). Moreover, the pair \(\{x,y\}\) divides \(\Gamma\) in two minimising geodesics. (See also Remark 5.8).
It is interesting to contrast the objects described in Theorem D to Riemannian fillings of the circle [16]. Recall that a Riemannian filling of the circle is a Riemannian surface \((M^{2},g)\) with connected boundary such
\[d_{(M,g)}(x,y)=d_{(\partial M,g_{|_{\partial M}})}(x,y)\quad\text{for all} \quad x,y\in\partial M.\]
In Section 8, we describe several examples of fillings, see Example 9.
In our view, the comparison between the width and the boundary length of a curve is, to a certain extent, analogous to the comparison between the first Steklov eigenvalue and the boundary length of a compact surface [13], or even to the isosistolic/isodiastolic inequalities investigated in [2] and [3]. Theorem D, in particular, characterises curves attaining the absolute maximum of the scaling invariant quotient \(\mathcal{S}(\Gamma)/L(\Gamma)\). In contrast, no Riemannian surface with connected, strictly convex boundary is such that its boundary is a local maximum of \(\mathcal{S}/L\), see Proposition 5.9.
In Section 1.10, we discuss relations between the above three geometric invariants of \(\Gamma\) and other min-max geometric quantities. Before doing
that, let us derive an immediate consequence of the results of this section, regarding the set of critical points of \(\mathcal{D}\)
### On the number of critical points
Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold. Combining Theorems A and C, we can confirm that the functional \(\mathcal{D}\) on \(\mathcal{P}_{*}\) enjoys similar variational properties to the smooth maps \(\Phi:[v]\in\mathbb{RP}^{2}\mapsto\langle A(v),v\rangle/|v|^{2}\in\mathbb{R}\), where \(A:\mathbb{R}^{3}\to\mathbb{R}^{3}\) is a linear self-adjoint operator with one-dimensional kernel and non-negative eigenvalues. (_Cf._[19], Chapter 4, SS7, example 2).
**Theorem E**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold. Let \(\mathcal{D}\) be the distance function on pairs of points of \(\Gamma\). Then, there are two possibilities:_
1. \(0<\mathcal{S}(\Gamma)<diam(\Gamma)\)_, in which case_ \(\mathcal{D}\) _has at least two non-trivial critical points in_ \(\mathcal{P}_{*}\)_._
2. \(\mathcal{S}(\Gamma)=diam(\Gamma)\)_, in which case_ \(\mathcal{D}\) _has infinitely many non-trivial critical points in_ \(\mathcal{P}_{*}\)_._
_Assume, moreover, that \(\Gamma\) is the boundary of a smoothly embedded totally convex disc. Then, in case \((2)\), the set of all non-trivial critical points of \(\mathcal{D}\) is a continuously embedded, homotopically non-trivial circle in \(\mathcal{P}_{*}\)._
The estimate on the minimal number of non-trivial critical points is sharp, as the ellipses show.
### An application: involutive symmetry
We would like to illustrate how the concepts and methods introduced in this paper lead very naturally to generalisations of classical results about plane curves of constant width to a Riemannian setting.
A very simple result about plane curves of constant width is that circles centred at the origin are the only such curves that are invariant by the involution \(x\mapsto-x\). The generalisation we propose reads as follows:
**Theorem F**.: _Let \(\Omega\) be a totally convex disc with smooth boundary in a complete Riemannian surface \((M^{2},g)\). Assume that every two points of \(\partial\Omega\) are joined by a unique geodesic._
_If \((\Omega,g)\) admits an isometric involution with no fixed boundary points and_
\[\mathcal{S}(\partial\Omega)=diam(\partial\Omega),\]
_then the involution has a unique fixed point \(x_{0}\in\Omega\), \(\mathcal{S}(\partial\Omega)/2\) is not bigger than the injectivity radius of \((M^{2},g)\) at \(x_{0}\), and \(\Omega\) is the geodesic ball of \((M^{2},g)\) of diameter \(\mathcal{S}(\partial\Omega)=diam(\partial\Omega)\) and center \(x_{0}\)._
The assumption on the uniqueness of geodesics is satisfied by convex discs in Cartan-Hadamard surfaces and by convex discs in a hemisphere of an Euclidean sphere, for instance. The converse of Theorem F is discussed in Example 7 in Section 8. It is interesting to observe that there are discs satisfying all the hypotheses of Theorem F, except the property that every two
points of \(\partial\Omega\) are joined by a unique geodesic, and which are not rotationally symmetric, nor geodesic balls, see Example 6 in Section 8.
On the other hand, there are plane curves of constant width with reflection symmetries. There are also plane curves of constant width without any non-trivial symmetries at all [10]. In other words, among Riemannian discs with convex boundary such that the width of the boundary equals the diameter of the boundary, those described by Theorem F form a special class.
### Comparison between min-max invariants
Let \((\Omega,g)\) be a totally convex disc with smooth boundary in a complete Riemannian surface. We proved that if \(\Omega\) is a convex plane region, then \(\mathcal{S}(\partial\Omega)\) coincides with the width of the boundary curve in the sense of the narrowest slab containing this curve, see Corollary 1.4. There are other min-max quantities that also generalise the classical notion in this sense. Let us describe two of them.
A min-max construction to prove the existence of free boundary geodesics was considered by Xin Zhou in [32]. Define the number
\[E_{*}=\inf_{\{\alpha_{t}\}}\max_{t\in[0,1]}\,\int_{0}^{1}|\alpha_{t}^{\prime} (u)|^{2}du,\]
where \(\{\alpha_{t}\}\) is a continuous path of \(W^{1,2}\) maps \(\alpha_{t}:[0,1]\to\Omega\) such that
* the extremities \(\alpha_{t}(0)\) and \(\alpha_{t}(1)\) belong to \(\partial\Omega\);
* the map \((t,u)\in[0,1]^{2}\mapsto\alpha_{t}(u)\in\Omega\) is continuous;
* the maps \(\alpha_{0}\) and \(\alpha_{1}\) are constant maps; and
* \(\{\alpha_{t}\}\) is homotopic to a fixed path \(\{\overline{\alpha}_{t}\}\) with all the above properties.
Zhou proved that, if \(E_{*}>0\), then \(E_{*}\) is realised as the energy of a free boundary geodesic \(\gamma:[0,1]\to\Omega\). In particular, its length \(L(\gamma)=\sqrt{E_{*}}\) can be regarded as a min-max invariant \(w_{*}\), which is a critical value of the length functional. In the same article, Zhou also considered the more general setting in which \(\partial\Omega\) is replaced by a closed submanifold of general dimension and codimension, and without convexity assumptions. (For related work, see also [14], [31], [23] and [18]).
The free boundary setting of the Almgren-Pitts min-max theory for curves on surfaces was investigated by Donato [8] and Donato and the second named author [9]. Consider the min-max invariant
\[\omega(\Omega,g)=\inf_{\{c_{t}\}}\sup_{t\in[0,1]}L(c_{t}),\]
where the infimum is considered over the paths \(\{c_{t}\}\) of relative flat cycles modulo \(2\), which are homotopically non-trivial loops (with no base point fixed) in the space of relative cycles modulo \(2\). This definition allows for more sweepouts of \(\Omega\) than those considered in [32]. For instance, \(c_{t}\) can be a network of curves with extremities that lie either in \(\partial\Omega\) or at interior junctions at which an even number of curves meet. In particular, \(c_{t}\) could be a closed embedded curve. It is proven in [9] that, if the Gaussian curvature of \((\Omega,g)\) is non-negative and its boundary is strictly convex, then \(\omega(\Omega,g)\) is
realised either as the length of a free boundary geodesic, or as the length of a geodesic loop whose vertex lies at the boundary of \(\Omega\). (See [9], Theorem 1.1). Moreover, while \(\omega(\Omega,g)<L(\partial\Omega)\) always hold, examples show that \(\omega(\Omega,g)\) can be arbitrarily close to the boundary length (see [9], Section 6).
When \(\Omega\) is a strictly convex planar domain, all the three numbers \(\mathcal{S}(\partial\Omega)\), \(\omega(\Omega,euc)\) and \(w_{*}\) coincide. If the Gaussian curvature of \(\Omega\) is non-negative and its boundary is strictly convex, then \(\mathcal{S}(\partial\Omega)\leq\omega(\Omega,g)\leq w_{*}\). If moreover \(\mathcal{S}(\partial\Omega)=\omega(\Omega,g)\), then \(\mathcal{S}(\partial\Omega)=w_{*}\) as well, and these three numbers coincide with the length of some free boundary minimising geodesic. On the other hand, there are examples of such discs for which
\[\mathcal{S}(\partial\Omega)<\frac{L(\partial\Omega)}{2}<\omega(\Omega,g)<L( \partial\Omega)<w_{*}, \tag{1.10.1}\]
see Figure 1. The justification for all these assertions is given in Section 7.
These assertions show that \(\mathcal{S}(\partial\Omega)\) is more suitable for the formulation of the results of this article than the other two min-max quantities.
### Perspectives
It would be interesting to know whether some refinement of the min-max Theorems A and B could prove that smoothly embedded totally convex discs always contain either a free boundary minimising geodesic with free boundary index at most one, or a pair of minimising geodesics, with extremities on the boundary, that are simultaneously stationary. (_Cf_. Corollary 1.5).
It would also be interesting to see a systematic development of the Morse-Lusternik-Schnirelman theory of the distance function between pairs of points of compact embedded submanifolds of Riemannian manifolds, and a systematic investigation of the geometric meaning of the width-like invariants appearing in such theory in comparison to other geometric invariants.
We believe that the methods of this paper can be generalised to the functional that computes the area of solutions to the Plateau problem for a given Jordan curve in a sphere inside a Riemannian three-manifold. We intend to describe this generalisation in a future work.
Figure 1. The curves realising different min-max widths.
### Plan for the paper
In Section 2, we develop the basic theory of regular and critical points of the distance functional \(\mathcal{D}:\mathcal{P}_{*}\mapsto[0,+\infty)\) along the lines of [17] and [15]. (See also the excellent exposition of this theory by W. Meyer [22]. We point out that a related, but different notion of critical pairs of points of the distance function has been investigated in [1]). In particular, we prove the existence of gradient-like flows on the set of regular points (Proposition 2.5). We also prove that critical points satisfy special properties when the embedded circle bounds a totally convex smoothly embedded disc, see in particular Proposition 2.8 and 2.9.
Building on these preliminary results, we prove in Section 3 the basic min-max theorem asserting that the width \(\mathcal{S}\) is a critical value of \(\mathcal{D}\) (Theorem 3.1).
The basic min-max Theorem A is improved in Section 4, under the assumption that the circle is the boundary of a totally convex disc. The key argument is to show that, under the extra assumption that no free boundary stable minimising geodesic exist in the disc, every critical point belongs to a sweepout restricted to which the functional attains a strict maximum exactly at this critical point, and moreover this sweepout is "monotone" near this critical point (Propositions 4.4 and 4.5). Birkhoff's curve shortening process, adapted to curves with boundary as in [14], [32] and [9], is used here as a convenient technical device. This leads to Theorem B (Theorem 4.8).
Section 5 contains the characterisations of the equalities between the width and the diameter (Theorem 5.5), the diameter and half the length (Lemma 5.6), and the width and half the length of a smoothly embedded circle (Theorem 5.7). The combination of Theorem A and Theorem C yields the proof of the sharp estimate for the number of critical points of \(\mathcal{D}\), Theorem E, which is explained at the end of Section 5.
Section 6 is dedicated to the proof of Theorem F (Theorem 6.1), and Section 7 to the comparison between min-max quantities associated to Riemannian discs with non-negative Gaussian curvature and strictly convex boundary. Finally, Section 8 contains the several examples which illustrate different aspects of what was discussed in the rest of the paper.
## 2. Basic properties of regular and critical points
Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold \((M^{n},g)\). We denote by \(\mathcal{P}\) the set consisting of subsets \(\{x,y\}\) of \(\Gamma\) with at most two points, endowed with the obvious topology with respect to which \(\{x_{k},y_{k}\}\to\{x,y\}\) if and only if \(x_{k}\to x\) and \(y_{k}\to y\). Recall that \(\mathcal{P}\) is a smooth surface with boundary \(\partial P=\{\{p\}\in\mathcal{P}\,|\,p\in\Gamma\}\), diffeomorphic to a Moebius band. We identify \(\mathcal{X}\in T_{\{p,q\}}\mathcal{P}\) with sets \(\{\mathcal{X}(p),\mathcal{X}(q)\}\) where \(\mathcal{X}(p)\in T_{p}\Gamma\) and \(\mathcal{X}(q)\in T_{q}\Gamma\). The surface \(\mathcal{P}\setminus\partial\mathcal{P}\) inherits a Riemannian metric so that \(\langle\mathcal{X},\mathcal{Y}\rangle(\{p,q\})=\langle\mathcal{X}(p),\mathcal{ Y}(p)\rangle+\langle\mathcal{X}(q),\mathcal{Y}(q)\rangle\).
The quotient of \(\mathcal{P}\) by the identification of all boundary points as a single point \(0_{*}=[\{x,x\}]\) is a projective plane that we denote \(\mathcal{P}_{*}\).
We assume non-trivial geodesics are parametrized by arc length. When \(\gamma\) joins \(p\) and \(q\in\Gamma\), the orthogonal projections of outward-pointing conormal vectors of \(\gamma\) onto \(T\Gamma\) define an element \(\nu_{\gamma}^{T}=\{\nu_{\gamma}^{T}(p),\nu_{\gamma}^{T}(q)\}\) of \(T\mathcal{P}\). If \(\gamma:[0,a]\to M\) is such that \(\gamma(0)=p\) and \(\gamma(a)=q\), then
\[\nu_{\gamma}(p)=-\gamma^{\prime}(0)\quad\text{and}\quad\nu_{\gamma}(q)=\gamma^ {\prime}(a).\]
Denote by \(d(x,y)\) the Riemannian distance between points \(x\) and \(y\) in \((M^{n},g)\). We collect below some basic lemmas concerning the "Morse Theory" of the distance functional,
\[D:\{p,q\}\in\mathcal{P}\mapsto d(p,q)\in[0,+\infty),\]
developed according to the notions of regular and critical points introduced in Definition 1.1.
We begin with the statement of a simple compactness result.
**Lemma 2.1**.: _If \(\{p_{i},q_{i}\}\subset\Gamma\) converges to \(\{p,q\}\subset\Gamma\), then any sequence of minimising geodesics \(\gamma_{i}\) joining \(p_{i}\) and \(q_{i}\) has a converging subsequence to a minimising geodesic \(\gamma\) joining \(p\) and \(q\)._
Recall that regular points \(\{x,y\}\subset\Gamma\) of \(\mathcal{D}\) are such that \(x\neq y\). It is not difficult to check that if \(x\) and \(y\) are different points of \(\Gamma\) that lie in some geodesically convex ball of \((M^{n},g)\) and, moreover, bound a very small arc of \(\Gamma\), then \(\{x,y\}\subset\Gamma\) is a regular point of \(\mathcal{D}\). Thus, there exists \(\varepsilon>0\) such that no critical point of \(\mathcal{D}\) exists on \(\{\{x,y\}\in\mathcal{P}\,|\,0<d(x,y)<\varepsilon\}\).
**Lemma 2.2**.: _Suppose \(\{p,q\}\subset\Gamma\) is a regular point of \(\mathcal{D}\). Then, there exist disjoint open neighbourhoods \(U_{1}\) of \(p\) and \(U_{2}\) of \(q\) in \(\Gamma\) with the following property: there exist \(\theta>0\) and vector fields \(v_{1}\) on \(U_{1}\) and \(v_{2}\) on \(U_{2}\) that are tangent to \(\Gamma\) such that_
\[\langle v_{1}(x),\nu_{\gamma}^{T}(x)\rangle+\langle v_{2}(y),\nu_{\gamma}^{T} (y)\rangle\leq-\theta \tag{2.0.1}\]
_for every minimising geodesic \(\gamma\) joining a pair \(\{x,y\}\) with \(x\in U_{1}\) and \(y\in U_{2}\)._
Proof.: By Definition 1.1 and the compactness of minimising geodesics bounding the regular point \(\{p,q\}\) (Lemma 2.1), it follows that there exists a number \(\theta>0\) and a vector \((v_{1},v_{2})\in T_{p}\Gamma\times T_{q}\Gamma\) such that
\[\langle v_{1},\nu_{\gamma}^{T}(p)\rangle+\langle v_{2},\nu_{\gamma}^{T}(q) \rangle\leq-2\theta\]
for every minimising geodesic \(\gamma\) joining \(p\) and \(q\). Extend \(v_{1}\) and \(v_{2}\) smoothly and tangentially to \(\Gamma\) in some small neighbourhood of \(p\) and \(q\). (Notice that \(p\neq q\), so that there is no ambiguity). Then, using again Lemma 2.1, possibly after taking smaller neighbourhoods we find an open neighbourhood of \(\{p,q\}\) in \(\mathcal{P}\) in such way that the vector fields \(v_{1}\) and \(v_{2}\) satisfy
\[\langle v_{1}(x),\nu_{\gamma}^{T}(x)\rangle+\langle v_{2}(y),\nu_{\gamma}^{T} (y)\rangle\leq-\theta\]
for all \(\{x,y\}\) in this neighbourhood of \(\{p,q\}\) in \(\mathcal{P}\) and for all minimising geodesics \(\gamma\) joining such pairs \(\{x,y\}\)
**Corollary 2.3**.: _The critical points of \(\mathcal{D}\) form a compact subset of \(\mathcal{P}\)._
Proof.: In fact, by the Lemma 2.2, its complement (the set of regular points) is an open subset of \(\mathcal{P}\).
From now on, we denote by \(\mathcal{U}\) the set of regular points of \(\mathcal{D}\) in \(\mathcal{P}\).
**Lemma 2.4**.: _Let \(\mathcal{U}\subset\mathcal{P}\) be the open subset of regular points of \(\mathcal{D}\). There exists a unit vector field \(\mathcal{X}\in\text{T}\mathcal{U}\) such that \(\langle\mathcal{X},\nu_{\gamma}^{T}\rangle<0\) for every minimising geodesic \(\gamma\) joining a pair of points \(\{x,y\}\in\mathcal{U}\)._
Proof.: In a finite dimensional vector space with an inner product, linear combinations \(w=\lambda_{1}w_{1}+\ldots+\lambda_{k}w_{k}\) with \(\lambda_{i}\in[0,1]\) and \(\sum\lambda_{i}=1\) of vectors \(w_{i}\) that satisfy \(\langle w_{i},v\rangle<0\) for all \(i\) also satisfy \(\langle w,v\rangle<0\). Thus, we may glue the vector fields \(V\) constructed in Lemma 2.2 together, by means of a partition of unity subordinated to some locally finite cover of \(\mathcal{U}\) in \(\mathcal{P}\) consisting of open sets as in Lemma 2.2, and normalise the resulting vector field on \(\mathcal{U}\) so to obtain a unit vector field on \(\mathcal{U}\) with the desired properties.
**Proposition 2.5**.: _Let \(\mathcal{X}\) be a unit vector field on \(\mathcal{U}\subset\mathcal{P}\) as in Lemma 2.4 and let \(\mathcal{K}\) be a compact subset of \(\mathcal{U}\). Denote by \(\phi_{t}\) the flow of \(\mathcal{X}\) starting at points of \(\mathcal{U}\). Then, there exists \(\theta>0\) such that, for every pair of points \(\{x,y\}\in\mathcal{K}\),_
\[\mathcal{D}(\phi_{t}(\{x,y\}))\leq d(x,y)-\theta t\]
_and_
\[\mathcal{D}(\phi_{-t}(\{x,y\})\geq d(x,y)+\theta t\]
_as long as the flow starting at \(\{x,y\}\in\mathcal{K}\) exists in \(\mathcal{K}\)._
Proof.: Fix some compact \(\mathcal{L}\subset\mathcal{U}\) whose interior contains the given compact subset \(\mathcal{K}\subset\mathcal{P}\). The flow \(\phi_{t}\) of \(\mathcal{X}\) (and of \(-\mathcal{X}\)) starting at any point \(\{x,y\}\in\mathcal{K}\) exists for a uniformly positive time and \(\phi_{t}(\{x,y\})\) remains in \(\mathcal{L}\) for this short duration of time. As a consequence of Lemma 2.4 and the compactness of \(\mathcal{L}\subset\mathcal{U}\), there exists \(\theta>0\) such that \(\langle\mathcal{X},\nu_{\gamma}^{T}\rangle\leq-\theta\) for every \(\{x,y\}\in\mathcal{L}\) and every minimising geodesic \(\gamma\) joining \(x\) and \(y\).
Given \(\{p,q\}\in\mathcal{K}\), we write \(\phi_{t}\{p,q\}=\{p_{t},q_{t}\}\subset\Gamma\) for all \(t\) contained in the interval of existence of the flow starting at \(\{p,q\}\). Notice that \(p_{t}\) and \(q_{t}\) are smooth functions of \(t\). (There is no ambiguity since \(p_{t}\neq q_{t}\) as long as the flow exists in \(\mathcal{U}\)). The proof of the Lemma will be finished as soon as we check that the continuous function \(h(t)=d(p_{t},q_{t})\), defined for those values of \(t\) such that \(\{p_{t},q_{t}\}\) lies in \(\mathcal{K}\), satisfies the differential inequality \(h^{\prime}\leq-\theta\) in the sense of support functions.
In order to check this claim, we proceed as follows. Given \(t_{0}\) in the domain of \(h\), choose once and for all some minimising geodesic \(\gamma\) of \((M^{n},g)\) joining \(p_{t_{0}}\) and \(q_{t_{0}}\), and some smooth variation \(\gamma_{t}\) of \(\gamma_{0}=\gamma\) by curves with extremities \(p_{t_{0}+t}\) and \(q_{t_{0}+t}\). Then, define the smooth function \(\hat{h}(t)=L(\gamma_{t})\) for all \(t\) sufficiently small so that the definition makes sense.
Since \(\gamma=\gamma_{0}\) is minimising,
\[\hat{h}(0)=L(\gamma)=d(p_{t_{0}},q_{t_{0}})=h(t_{0}).\]
Since \(\gamma_{t}\) is a curve in \((M^{n},g)\) joining the points \(p_{t_{0}+t}\) and \(q_{t_{0}+t}\), we have \(\hat{h}(t)=L(\gamma_{t})\geq d(p_{t_{0}+t},q_{t_{0}+t})=h(t_{0}+t)\). Thus, \(\hat{h}\) is a support function for \(h\) at \(t_{0}\), which moreover satisfies
\[\hat{h}^{\prime}(0)=\frac{d}{dt}_{|_{t=0}}L(\gamma_{t})=\big{\langle}\frac{d} {dt}_{|_{t_{0}}}p_{t},\nu_{\gamma}(p_{t_{0}})\big{\rangle}+\big{\langle}\frac{ d}{dt}_{|_{t_{0}}}q_{t},\nu_{\gamma}(q_{t_{0}})\big{\rangle}\]
by the first variation formula (1.3.1) for the geodesic \(\gamma=\gamma_{0}\). By construction, \(\mathcal{X}(\{p_{t_{0}},q_{t_{0}}\})=\frac{d}{dt}_{|_{t=t_{0}}}\phi_{t}(\{p_ {t},q_{t}\})=\{\frac{d}{dt}_{|_{t_{0}}}p_{t},\frac{d}{dt}_{|_{t_{0}}}q_{t}\}\). Hence,
\[\hat{h}^{\prime}(0)=\langle\mathcal{X}(p_{t_{0}}),\nu_{\gamma}^{T}(p_{t_{0}}) \rangle+\langle\mathcal{X}(q_{t_{0}}),\nu_{\gamma}^{T}(q_{t_{0}})\rangle\leq-\theta.\]
The claim follows, and this finishes the proof of the proposition.
As an immediate corollary, we have that boundary points at distance \(diam(\Gamma)\) are critical points of \(\mathcal{D}\). More generally:
**Corollary 2.6**.: _Any local maximum or local minimum \(\{p,q\}\subset\Gamma\) of \(\mathcal{D}\) is a critical point of \(\mathcal{D}\)._
To finish this section, we discuss some properties of minimising geodesics bounding a pair \(\{x,y\}\subset\Gamma\) of critical points of \(\mathcal{D}\) when \(\Gamma\) bounds a totally convex disc \(\Omega\) in a complete Riemannian surface.
Two-dimensional discs are special from a topological point of view because embedded curves joining boundary points divide \(\Omega\) into two components, by Jordan Theorem. We always orient it so that the induced orientation on \(\partial\Omega\) is the counter-clockwise orientation.
Since minimising geodesics joining the same two points cannot intersect except at their extremities, the topological observation above implies that, for every \(\{p,q\}\subset\Gamma\) with \(p\neq q\), there exists (possibly equal) minimising geodesics \(\gamma_{+}\) and \(\gamma_{-}\), with extremities \(p\) and \(q\), such that every other minimising geodesic between \(p\) and \(q\) lie in between the region bounded by \(\gamma_{+}\) and \(\gamma_{-}\). The pair \(\gamma_{+}\) and \(\gamma_{-}\) is unique up to relabelling. We call them the _extremal minimising geodesics_ of the pair \(\{p,q\}\).
Of course, \(p\) and \(q\) are joined by a unique minimising geodesic if and only if \(\gamma_{+}=\gamma_{-}\). Furthermore, \(\gamma_{+}\) or \(\gamma_{-}\) may be in fact arcs of \(\partial\Omega\). Notice that the only scenario where both extremal minimising geodesics bounded by \(p\) and \(q\) are arcs of \(\partial\Omega\) is when \(\partial\Omega\) is a closed geodesic formed by two minimising arcs with extremities \(p\) and \(q\).
**Proposition 2.7**.: _Let \((\Omega,g)\) be a totally convex disc with smooth boundary in some complete Riemannian surface, and denote by \(T\) the unit tangent vector field to \(\partial\Omega\) that is compatible with its orientation._
_Let \(\{p,q\}\subset\partial\Omega\) be a non-trivial critical point of \(\mathcal{D}\), and denote by \(\gamma_{+}\) and \(\gamma_{-}\) the extremal minimising geodesics joining \(p\) and \(q\), labelled in such way that \(\langle\nu_{\gamma_{+}}(p),T(p)\rangle\leq\langle\nu_{\gamma_{-}}(p),T(p)\rangle\)._
_Then_
\[\langle\nu_{\gamma_{+}}(p),T(p)\rangle\leq 0\leq\langle\nu_{\gamma_{-}}(p),T(p)\rangle\]
_and_
\[\langle\nu_{\gamma_{-}}(q),T(q)\rangle\leq 0\leq\langle\nu_{\gamma_{+}}(q),T(q)\rangle.\]
Proof.: If \(\gamma_{-}=\gamma_{+}\), then there is only one minimising geodesic joining \(p\) and \(q\), and it must be a free boundary minimising geodesic because \(\{p,q\}\subset\partial\Omega\) is a critical point of \(\mathcal{D}\). In this case, there is nothing else to be proven.
From now on, we therefore assume \(\gamma_{+}\neq\gamma_{-}\). Observe that since all minimising geodesics joining \(p\) and \(q\) must lie in the region of \(\Omega\) bounded between \(\gamma_{-}\) and \(\gamma_{+}\), every minimising geodesic \(\gamma\) joining these two points is such that the number \(\langle\nu_{\gamma},T(p)\rangle\) belongs to the closed interval between the numbers \(\langle\nu_{\gamma_{+}},T(p)\rangle\) and \(\langle\nu_{\gamma_{-}},T(p)\rangle\) (which is contained in \([-1,1]\)). For, otherwise, the minimising geodesic \(\gamma\) issues from \(p\) enters the complement of the region bounded between \(\gamma_{-}\) and \(\gamma_{+}\) and must stay there until it reaches \(q\), because it cannot cross either \(\gamma_{+}\) or \(\gamma_{-}\). But this contradicts the extremal property of these two minimising geodesics. A similar assertion holds, of course, at the point \(q\).
**Claim**: The interval between the numbers \(\langle\nu_{\gamma_{+}}(p),T(p)\rangle\) and \(\langle\nu_{\gamma_{-}}(p),T(p)\rangle\) contains zero, and the interval between the numbers \(\langle\nu_{\gamma_{+}}(q),T(q)\rangle\) and \(\langle\nu_{\gamma_{-}}(q),T(q)\rangle\) contains zero.
Suppose not. Then, at the point \(p\) we have \(\langle\nu_{\gamma_{+}}(p),T(p)\rangle<0\) and \(\langle\nu_{\gamma_{-}}(p),T(p)\rangle<0\), say. It follows from the preceding observations that the vector \((v_{1},v_{2})=(T(p),0)\in T_{p}\partial\Omega\times T_{q}\partial\Omega\) is such that \(\langle v_{1},\nu_{\gamma}^{T}\rangle+\langle v_{2},\nu_{\gamma}^{T}\rangle= \langle T(p),\nu_{\gamma}^{T}\rangle<0\) for all minimising geodesics joining \(p\) and \(q\). But this contradicts the assumption that \(\{p,q\}\subset\partial\Omega\) is a critical point of \(\mathcal{D}\). Making obvious modifications, the same argument rules out all other possibilities except those described in the claim.
Next, recall that we labelled \(\gamma_{+}\) and \(\gamma_{-}\) in such way that \(\langle\nu_{\gamma_{+}}(p),T(p)\rangle\leq\langle\nu_{\gamma_{-}}(p),T(p)\rangle\) holds. Hence, it follows from the Claim that
\[\langle\nu_{\gamma_{+}}(p),T(p)\rangle\leq 0\leq\langle\nu_{\gamma_{-}}(p),T(p)\rangle. \tag{2.0.2}\]
The Claim also implies that there are two possibilities for the numbers \(\langle\nu_{\gamma_{\pm}}(q),T(q)\rangle\). If \(\langle\nu_{\gamma_{+}}^{T}(q),T(q)\rangle\leq 0\leq\langle\nu_{\gamma_{-}}^{T}(q),T(q)\rangle\), then it it is easy to see that this angle conditions together with (2.0.2) imply that the different geodesics \(\gamma_{+}\) and \(\gamma_{-}\) cross each other at some point inside the disc \(\Omega\). Since they are minimising geodesics joining \(p\) and \(q\), this is not possible. Therefore the other possibility holds, namely
\[\langle\nu_{\gamma_{-}}(q),T(q)\rangle\leq 0\leq\langle\nu_{\gamma_{+}}(q),T(q)\rangle,\]
as we wanted to show.
Using Proposition 2.7, we can prove a uniqueness result that will play a key role in Sections 4 and 5.
**Proposition 2.8**.: _Let \((\Omega,g)\) be a totally convex Riemannian disc with smooth boundary in a complete Riemannian surface. If \(\{p,q\}\), \(\{p,r\}\subset\partial\Omega\) are critical points of \(\mathcal{D}\) with \(p\neq q\) and \(p\neq r\), then \(q=r\)._
Proof.: Suppose, by contradiction, that \(q\neq r\). Let \(\{T,N\}\subset T_{p}\Omega\) be an orthonormal basis so that \(N\) points inside \(\Omega\) and \(T\) defines the induced orientation of \(T_{p}\partial\Omega\). Relabelling the points \(q\) and \(r\), if necessary, we may assume that \(p\), \(q\) and \(r\) appears in this order as we move in the counter-clockwise direction along the boundary.
Since \(\{p,q\}\) is critical, one of its extremal minimising geodesics, say \(\gamma_{-}\), will issue from \(p\) with \(\langle\gamma_{-}^{\prime}(0),T\rangle\leq 0\), by Proposition 2.7. Similarly, since \(\{p,r\}\) is also critical, one of its extremal minimising geodesics, say \(\gamma_{+}\), will issue from \(p\) with \(\langle\gamma_{+}^{\prime}(0),T\rangle\geq 0\), by Proposition 2.7.
By the relative position of \(r\) with respect to \(p\) and \(q\), the point \(r\) does not lie in the component \(A\) of \(\Omega\setminus\gamma_{-}\) that contains the (oriented) arc of \(\Gamma\) between \(p\) and \(q\). Similarly, the point \(q\) does not lie in the component \(B\) of \(\Omega\setminus\gamma_{+}\) that contains the (oriented) arc of \(\Gamma\) between \(r\) and \(p\).
Since \(\gamma_{-}\) and \(\gamma_{+}\) are not equal (because we are assuming by contradiction that their extremities \(q\) and \(r\) are not equal), their tangent vectors at \(p\) are not equal. Since \(\langle\gamma_{-}^{\prime}(0),T\rangle\leq 0\leq\langle\gamma_{+}^{\prime}(0),T\rangle\), there are two possibilities. Either \(0<\langle\gamma_{+}^{\prime}(0),T\rangle\) so that \(\gamma_{+}^{\prime}(0)\) points inside \(A\), in which case \(\gamma_{+}\) must cross \(\gamma_{-}\) somewhere in \(\Omega\) before reaching \(r\in\Omega\setminus\overline{A}\), a contradiction. Or \(\langle\gamma_{-}^{\prime}(0),T\rangle<0\) so that \(\gamma_{-}^{\prime}(0)\) points inside \(B\), and a contradiction arises similarly. The proposition follows.
We remark that the distance function \(d_{p}\) to a given boundary point \(p\), when restricted to \(\partial\Omega\), \((d_{p})_{|_{\partial\Omega}}\), may have more than one critical point in \(\partial\Omega\), say \(q\neq q^{\prime}\). For instance, consider a smooth plane convex curve that contains an arc of circle centred at some point \(p\) of the curve. What Proposition 2.8 says is that, when \((\Omega,g)\) is a totally convex disc with _smooth_ boundary in a complete Riemannian surface, then such point \(p\) cannot be part of two different non-trivial critical points \(\{p,q\},\{p,q^{\prime}\}\subset\partial\Omega\) of the distance function \(\mathcal{D}\). (The Reuleaux triangle shows that some smoothness assumption is essential).
To finish this section, we use the same Proposition 2.7 to characterise the minimising geodesics bounding a non-trivial local minimum of \(\mathcal{D}\).
**Proposition 2.9**.: _Let \((\Omega,g)\) be a totally convex Riemannian disc with smooth boundary in a complete Riemannian surface. If \(\{p,q\}\subset\partial\Omega\) is a non-trivial local minimum of \(\mathcal{D}\), then there exists only one minimising geodesic joining \(p\) and \(q\), and this minimising geodesic is moreover free boundary stable._
Proof.: Let \(\gamma_{+}\) and \(\gamma_{-}\) be the extremal minimising geodesics joining \(p\) and \(q\), labelled as in Proposition 2.7. We will first show, by a contradiction argument, that \(\gamma_{+}=\gamma_{-}\).
If \(\gamma_{+}\neq\gamma_{-}\), then at least one of the inequalities in Proposition 2.7 is strict. Let us say \(\gamma_{+}\) satisfies \(\langle\nu_{\gamma_{+}}(p),T(p)\rangle<0\). (The other cases are analogous).
Then, by the first variation formula (1.3.1), any vector field tangent to \(\partial\Omega\) extending \(T(p)\) with support in a sufficiently small neighbourhood of \(p\) that do not contain \(q\neq p\) generates a variation of \(\gamma_{+}\) by curves with extremities of the form \(\{x,q\}\subset\partial\Omega\), for some \(x\in\partial\Omega\) arbitrarily close to \(p\), that are strictly shorter than \(\gamma_{+}\). But then there are points \(\{x,q\}\) arbitrarily close to \(\{p,q\}\) with \(d(x,q)<L(\gamma_{+})=d(p,q)\), a contradiction with the minimising property of \(\{p,q\}\).
Thus, as claimed, the extremal minimising geodesics are the same geodesic. Since \(\{p,q\}\) is a non-trivial critical point of \(\mathcal{D}\), the unique minimising geodesic \(\gamma_{+}=\gamma_{-}\) joining them meets \(\partial\Omega\) orthogonally. Since \(\{p,q\}\) is a local minimum of \(\mathcal{D}\), it follows easily from the second variation formula (1.6.1) that \(\gamma_{+}=\gamma_{-}\) is free boundary stable.
## 3. The basic min-max theorem
The width of a smoothly embedded circle \(\Gamma\) in a complete Riemannian manifold is the number
\[\mathcal{S}(\Gamma)=\inf_{\{p_{t},q_{t}\}\subset\mathcal{V}}\max_{t\in[0,1]}d (p_{t},q_{t}),\]
where \(\mathcal{V}\) is the set of sweepouts \(\{p_{t},q_{t}\}\) of \(\Gamma\), see Definition 1.2. The key property of sweepouts is that, when they are regarded as continuous paths in the projective plane \(\mathcal{P}_{*}\) associated to \(\Gamma\), they are homotopically non-trivial loops based on the absolute minimum of \(\mathcal{D}\). Thus, isotopies of \(\mathcal{P}_{*}\) that fix a neighbourhood of the point \(0_{*}=[\{p\}]\in\mathcal{P}_{*}\) deform any sweepout into another sweepout.
Using the gradient-like flow constructed on compact subsets of the set of regular values of \(\mathcal{D}\) in \(\mathcal{P}\) (Proposition 2.5), it is then easy to deduce Theorem A. (_Cf._[24]).
**Theorem 3.1**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold. Then \(\mathcal{S}(\Gamma)>0\) and there exists a critical point \(\{p,q\}\subset\Gamma\) of \(\mathcal{D}\) such that_
\[d(p,q)=\mathcal{S}(\Gamma).\]
Proof.: As seen in Section 1.4, there exists \(\varepsilon>0\) such that \(\mathcal{S}(\Gamma)>2\varepsilon\). Suppose, by contradiction, that there exists \(\delta>0\) such that the compact subset
\[\mathcal{K}=\{\{x,y\}\in\mathcal{P}\,|\,\mathcal{S}(\Gamma)-\varepsilon\leq d (x,y)\leq\mathcal{S}(\Gamma)+\delta\}\]
contains no critical points of \(\mathcal{D}\). Multiply the vector field \(\mathcal{X}\) from Lemma 2.4 by an appropriate smooth non-negative cut-off function \(\rho\) with \(|\rho|\leq 1\) that equals \(1\) on \(\mathcal{K}\) and has support on a sufficiently small open neighbourhood of \(\mathcal{K}\), which is still contained in the set of regular points, but is disjoint from \(\{\mathcal{D}<\mathcal{S}(\Gamma)-2\varepsilon\}\). By Proposition 2.5, the flow of such vector field on \(\mathcal{P}_{*}\) retracts \(\{\mathcal{D}\leq\mathcal{S}(\Gamma)+\delta\}\) into \(\{\mathcal{D}\leq\mathcal{S}(\Gamma)-\varepsilon\}\), in such way that no point of \(\{\mathcal{D}<\mathcal{S}(\Gamma)-2\varepsilon\}\) moves. But then, a sweepout \(\{p_{t},q_{t}\}\) with \(\max_{t\in[0,1]}d(p_{t},q_{t})<\mathcal{S}(\Gamma)+\delta\) (which exists, by definition
of \(\mathcal{S}(\Gamma)\)) can be deformed continuously to another sweepout \(\{\hat{p}_{t},\hat{q}_{t}\}\) with \(\max_{t\in[0,1]}d(\hat{p}_{t},\hat{q}_{t})<\mathcal{S}(\Gamma)-\varepsilon\), a contradiction with the definition of \(\mathcal{S}(\Gamma)\).
Hence, the set \(\mathcal{K}\) contains a critical point of \(\mathcal{D}\). Since both \(\delta\) and \(\varepsilon\) can be made arbitrarily small, and the set of critical points is compact (Lemma 2.3), passing to the limit of a suitable subsequence of critical points we find a critical point \(\{p,q\}\) of \(\mathcal{D}\) such that \(d(p,q)=\mathcal{S}(\Gamma)>0\).
## 4. The width of the boundary of discs
In this section, we consider only embedded circles that are the boundary of a compact totally convex surface \((\Omega,g)\) with smooth boundary in some complete Riemannian surface. We will also work under the extra geometric assumption:
\((\star)\) _no free boundary stable geodesics exists on \((\Omega,g)\)._
It is well-known that this condition implies \(\Omega\) to be diffeomorphic to a disc.
**Lemma 4.1**.: _Let \((\Omega,g)\) be a compact totally convex region with smooth boundary in a complete Riemannian manifold. If no free boundary minimising geodesic is free boundary stable, then \(\Omega\) is a disc._
Proof.: If there are at least two boundary components \(\partial_{1}\Omega\) and \(\partial_{2}\Omega\), then minimisation of the length of curves with one extremity in \(\partial_{1}\Omega\) and the other in \(\partial_{2}\Omega\) produces a proper minimising geodesic that meets \(\partial_{1}\Omega\) and \(\partial_{2}\Omega\) orthogonally, as can be seen by the first variation formula. If, as we are assuming, its index is not zero, then, by the second variation formula (1.6.1), some variation of this geodesic would be by curves, with one extremity in \(\partial_{1}\Omega\) and the other in \(\partial_{2}\Omega\), with strictly less length. But this a contradiction.
If the surface \(\Omega\) has just one boundary component but is not orientable, then its orientable double cover \(\hat{\Omega}\) has two boundary components. Minimising the length among curves on \((\Omega,g)\) with extremities in \(\partial\Omega\) that lift to a curve joining the two different components of \(\partial\hat{\Omega}\) produces similarly a free boundary proper minimising geodesic with extremities in \(\Omega\) that is free boundary stable, a contradiction.
Finally, if \(\Omega\) is orientable and has connected boundary and genus \(g\geq 1\), then again minimisation of length among curves with extremities in \(\Omega\) and going trough the holes of \(\Omega\) produces a proper minimising geodesic that is free boundary stable, a contradiction.
Hence, the only topology that is not contradictory with property \((\star)\) is the topology of an orientable genus zero surface with connected boundary. In other words, \(\Omega\) is a disc.
Without loss of generality, we assume from now on that \(\Omega\) is a disc. As we will see, the importance of property \((\star)\) lies on the fact that it rules out non-trivial local minima of \(\mathcal{D}\). (See Proposition 2.9).
Before we continue, we need to recall a few properties of unstable free boundary geodesics \(\gamma\) in \((\Omega,g)\). By definition, a normal vector field \(X\) exists
along such \(\gamma\) such that \(Q(X,X)<0\). By virtue of the second variation formula (1.6.1), a small deformation of \(\gamma\) by the flow of some extension of \(X\) by a vector field that is tangent to \(\partial\Omega\) is a deformation by curves in \(\Omega\) with extremities in \(\partial\Omega\) that are strictly shorter than \(\gamma\).
The next lemma refines this observation, in the sense that we may choose the vector field \(X\) in such way that the curves move in a definite direction inside the disc \(\Omega\).
**Lemma 4.2**.: _Let \(\gamma:[0,a]\to\Omega\) be a free boundary geodesic joining points \(p\) and \(q\) in \(\partial\Omega\), and let \(\{\gamma^{\prime}(s),X(\gamma(s))\}\) be an orthonormal frame along \(\gamma\). If \(\gamma\) is not free boundary stable, then there exists a smooth positive function \(u\) on \(\gamma\) such that \(Q(uX,uX)<0\)._
Proof.: The index form of \(\gamma\) can then be thought as acting on functions,
\[Q(\phi,\phi):=Q(\phi X,\phi X) =\int_{0}^{a}|\phi^{\prime}(s)|^{2}-K(\gamma(s))\phi^{2}(s)ds\] \[\quad-A(X(p),X(p))\phi^{2}(0)-A(X(q),X(q))\phi^{2}(a).\]
By standard techniques, the minimisation of \(Q(\phi,\phi)/\int_{0}^{a}\phi(s)^{2}ds\) among non-zero functions in \(C^{\infty}([0,a])\) yields a solution \(u\neq 0\) of
\[\begin{split} u^{\prime\prime}+Ku+\lambda_{1}u&=0 \quad\text{on}\quad(0,a),\\ -u^{\prime}(0)&=A_{\Gamma}(X(p),X(p))u(0),\\ u^{\prime}(a)&=A_{\Gamma}(X(q),X(q))u(a).\end{split} \tag{4.0.1}\]
for the constant
\[\lambda_{1}=\inf\Big{\{}\frac{Q(\phi,\phi)}{\int_{0}^{a}\phi(s)^{2}ds}\,|\, \phi\in C^{\infty}([0,a]),\phi\neq 0\Big{\}}.\]
Also, conversely, any function \(u\neq 0\) such that \(Q(u,u)=\lambda_{1}\int_{0}^{a}u^{2}ds\) is a non-trivial solution of (4.0.1) which is, moreover, either strictly positive or such that \(-u\) is strictly positive on \([0,a]\).
If \(\gamma\) is not stable, then we clearly have \(\lambda_{1}<0\), and the corresponding positive solution \(u\) of (4.0.1) satisfies \(Q(u,u)=\lambda_{1}\int_{0}^{a}\phi^{2}(s)ds<0\).
Let \(\gamma_{1}\) and \(\gamma_{2}\) be minimising geodesics on \((\Omega,g)\) with extremities \(p_{1}\), \(q_{1}\) and \(p_{2}\), \(q_{2}\), respectively. Assume that \(p_{1}\), \(p_{2}\), \(q_{2}\) and \(q_{1}\) are four different points that appear in that order as we traverse \(\partial\Omega\) in the counter-clockwise direction. Because the geodesics are minimising, they cannot intersect each other. Thus, one of them belongs entirely to one of the components of the disc \(\Omega\) that the other one defines.
The next result shows that this configuration is not compatible with property (\(\star\)). Thus, in a sense, the minimising geodesics bounding critical points of \(\mathcal{D}\) on the boundary of Riemannian discs with property (\(\star\)) satisfies some sort of "_Frankel property_", _cf. [11]_, Section 2, and [12], Lemma 2.4.
**Proposition 4.3**.: _Let \((\Omega,g)\) be a totally convex Riemannian disc with smooth boundary in a complete Riemannian surface. Assume \((\Omega,g)\) has property
\((\star)\). If \(\{p,q\}\), \(\{r,s\}\subset\partial\Omega\) are non-trivial critical points of \(\mathcal{D}\) such that \(\{p,q\}\cap\{r,s\}=\emptyset\), then \(r\) and \(s\) lie in different connected components of \(\partial\Omega\setminus\{p,q\}\). In particular, every minimising geodesic joining \(p\) and \(q\) meets every minimising geodesic joining \(r\) and \(s\)._
Proof.: Since \(\Omega\) is a disc, it is enough to show that property \((\star)\) precludes the possibility that \(r\) and \(s\) lie in the same component of \(\partial\Omega\setminus\{p,q\}\).
Suppose, by contradiction, that \(\{r,s\}\) lies in one of the boundary arcs determined by \(\{p,q\}\). Renaming the points, we may assume that they appear in the following order \(p\to r\to s\to q\), as we go around \(\Gamma\) in the counter-clockwise direction.
Let \(\gamma_{+}\) be the extremal minimising geodesic joining \(p\) and \(q\) such that \(\langle\nu_{\gamma_{+}}(p),T(p)\rangle\leq 0\leq\langle\nu_{\gamma_{+}}(q),T(q)\rangle\), and let \(\gamma_{-}\) the extremal minimising geodesic joining \(r\) and \(s\) such that \(\langle\nu_{\gamma_{-}}(r),T(r)\rangle\geq 0\geq\langle\nu_{\gamma_{-}}(s),T(s)\rangle\). (Such angle conditions are met, recall Proposition 2.7). Notice that these two geodesics cannot intersect, and therefore they bound a compact region \(K\) of the disc \(\Omega\).
If any of them is free boundary, because of property \((\star)\), we may use the positive function given in Lemma 4.2 to construct an appropriate variation of that geodesic by curves with extremities in different components of \(K\cap\partial\Omega\) that are, moreover, strictly shorter than that geodesic. Hence, the distance between the extremities of some curves in the variation is strictly smaller than \(d(p,q)\) or \(d(r,s)\). If any of them is not free boundary, combining the angle conditions satisfied by \(\gamma_{+}\) and \(\gamma_{-}\) at \(\partial\Omega\) and the first variation formula, we can also produce variations of these curves in an appropriate direction so that the extremities of the varying curves lie in different components of \(K\cap\partial\Omega\) and are, moreover, at distance strictly smaller than \(d(p,q)\) or \(d(r,s)\), because the curves in the variation are strictly shorter than \(\gamma_{+}\) and \(\gamma_{-}\).
Hence, in any of the four possible cases for \(\gamma_{+}\) and \(\gamma_{-}\), after minimising distances between pair of points in different components of \(K\cap\partial\Omega\), we find a local minimum \(\{x,y\}\subset\partial\Omega\) of \(\mathcal{D}\) that has extremities in different components of \(int(K)\cap\partial\Omega\). By Proposition 2.9, \(\{x,y\}\) bounds a free boundary stable minimising geodesic. But this contradicts the assumption that \(\Omega\) satisfies property \((\star)\).
Thus, the configuration \(p\to r\to s\to q\) leads to a contradiction. It follows that \(r\) and \(s\) must lie in different boundary arcs determined by \(p\) and \(q\), as we wanted to prove.
Before stating the next key propositions, it is convenient to introduce some terminology.
Let \(\{c_{t}\}\), \(t\in I\), be a smooth family of smoothly embedded curves in the disc \(\Omega\), whose extremities lie in \(\partial\Omega\) at their extremities, and such that \(I\) is a proper subinterval of \([0,1]\). (The interval may be closed, or not, in any of its extremities). The family \(\{c_{t}\}\) is called _strictly monotone_ if there exists a point \(x\in\partial\Omega\), that is not an extremity of any of the curves \(c_{t}\), such that either of the following conditions hold:
(1) all curves \(c_{t}\) are proper, and the closed regions \(\Omega_{t}\) of the disc \(\Omega\) determined by the curves \(c_{t}\) that contain the given point \(x\) are _strictly nested_, _i.e._ either \(\Omega_{t}\subset int(\Omega_{u})\) for all \(t<u\) in \(I\) or \(\Omega_{u}\subset int(\Omega_{t})\) for all \(t<u\) in \(I\).
(2) all curves \(c_{t}\) are arcs of \(\partial\Omega\), and the closed arcs that contain \(x\) are _strictly nested_, in the sense that \(c_{t}\subset int(c_{u})\) for all \(t<u\) in \(I\) or \(c_{u}\subset int(c_{t})\) for all \(t<u\) in \(I\).
This definition does not depend on the choice of the point \(x\in\partial\Omega\).
Finally, a sweepout \(\{p_{t},q_{t}\}\) of \(\partial\Omega\) is called _regularly strictly monotone on an interval \(I\subset[0,1]\)_ when there exists a strictly monotone smooth family of curves \(\{c_{t}\}\), \(t\in I\), all of them either properly embedded in \(\Omega\) or else contained in \(\partial\Omega\), such that the extremities of \(c_{t}\) are the points \(p_{t}\) and \(q_{t}\) for every \(t\in I\).
**Proposition 4.4**.: _Let \((\Omega,g)\) be a totally convex Riemannian disc with smooth boundary in a complete Riemannian surface. Assume that \((\Omega,g)\) has property (\(\star\) *> 2)._
_Let \(\{p,q\}\subset\partial\Omega\) be a critical point of \(\mathcal{D}\) with \(p\neq q\). If \(p\) and \(q\) are joined by a free boundary minimising geodesic \(\gamma\), then there exists a sweepout \(\{p_{t},q_{t}\}\) of \(\partial\Omega\) that has the following properties:_
* \(p_{1/2}=p\) _and_ \(q_{1/2}=q\)_._
* \(d(p_{t},q_{t})<d(p,q)\) _for every_ \(t\neq 1/2\)_._
* _For all_ \(t\) _in a small open interval containing_ \(1/2\)_, the sweepout is regularly strictly monotone with_ \(\{p_{t},q_{t}\}=\partial c_{t}\) _for a smooth family of smooth curves in_ \(\Omega\) _with_ \(c_{1/2}=\gamma\)_._
* _In a small neighbourhood of_ \(t=1/2\)_,_ \(t\mapsto L(c_{t})\) _has a strictly negative second derivative at the critical point_ \(t=1/2\)_._
Proof.: Choose a positive orthonormal frame \(\{\gamma^{\prime}(s),X(\gamma(s))\}\) along \(\gamma\). Since \((\Omega,g)\) has property (\(\star\) *> 2), we may extend the vector field given in Lemma 4.2 to a vector field on \(\Omega\) with compact support, that is tangent to \(\partial\Omega\), and whose flow \(\phi\) has the following properties: For every \(t\in(-\delta,\delta)\) sufficiently small, the curves \(c_{t+1/2}=\phi_{t}(\gamma)\) are a smooth family of smooth curves with extremities in \(\partial\Omega\) that is strictly monotone, satisfy \(c_{1/2}=\phi_{0}(\gamma)=\gamma\), and is such that there exists \(\theta>0\) so that
\[d(p_{t},q_{t})\leq L(c_{t})\leq L(c_{1/2})-\theta(t-1/2)^{2}=d(p,q)-\theta(t- 1/2)^{2}\]
for all \(t\) in a sufficiently small interval \((1/2-\delta,1/2+\delta)\), by virtue of the second variation formula (1.6.1). Notice in particular that the extremities \(\{p_{t},q_{t}\}\) of \(c_{t}\) satisfy the properties \(i)\), \(iii)\) and \(iv)\), but also \(ii)\) for parameters close enough to \(t=1/2\).
Thus, in order to prove the proposition, it is enough to extend the smoothly monotone family \(\{p_{t},q_{t}\}\), \(t\in(1/2-\delta,1/2+\delta)\), to a sweepout of \(\partial\Omega\) in such way that \(d(p_{t},q_{t})<d(p,q)\) for every \(t\in[0,1]\). This will be achieved by iterating the Birkhoff chord shortening process.
Recall that the Birkhoff chord shortening process describes how to shorten piecewise smooth curves with extremities in \(\partial\Omega\), of uniformly bounded length,
while keeping the extremities in \(\partial\Omega\), in a continuous way. Roughly speaking, the process has two steps. In the first step, the process takes a (possibly immersed) piecewise smooth curve and sends it to a (possibly immersed) piecewise smooth curve that is made of minimising geodesics between its vertices, and that intersects \(\partial\Omega\) orthogonally. In the second step, the process produces a continuous interpolation between the initial and the final curve. In both steps, the basic operation is either the substitution of a short piece of the curve by the minimising geodesic between the extremities of the piece, or, in case one of the extremities lies in \(\partial\Omega\), it is the substitution of that short piece of the curve by the shortest path between the other vertex and \(\partial\Omega\). Thus, along the process, the length of the curve does not increase. We refer the reader to [14], [32] and [9] for detailed descriptions. (_Cf._ Section 2 of [6], where a concise description of the standard Birkhoff curve shortening process in the case of closed curves is given).
It is well-known that the iteration of the process either terminates at a point, or converges to a free boundary geodesic, which may be immersed. (See, for instance, the Lemma in page 71 in [14]).
In the situation we are analysing, we have control on where curves move during the process. In fact, denote by \(\Omega_{1}\) and \(\Omega_{2}\) the two closed regions of the disc \(\Omega\) determined by \(\gamma\), labelled so that \(c_{1/2-\delta}\) belongs to \(\Omega_{1}\) and \(c_{1/2+\delta}\) belongs to \(\Omega_{2}\). Since \(\gamma\) is a free boundary minimising geodesic, both regions are locally convex in the sense that the minimising geodesic joining two points of \(\Omega_{i}\), or joining a point of \(\Omega_{i}\) that is sufficiently close to \(\partial\Omega\) to the nearest point in \(\partial\Omega\), still lie in \(\Omega_{i}\). Thus, the Birkhoff chord shortening process starting on any of the curves \(c_{\pm\delta+1/2}\) will remain in the respective region \(\Omega_{i}\).
**Claim:** The iteration of the Birkhoff chord shortening process starting at both curves \(c_{\pm\delta+1/2}\) terminates at points.
The Claim is proven by contradiction. Suppose it is not true. Then, the Birkhoff chord shortening process starting at \(c_{\delta+1/2}\), say, produces an immersed free boundary geodesic \(\alpha\) of \((\Omega,g)\). (The other case is analogous). By virtue of the previous remarks on the convexity of \(\Omega_{2}\), the geodesic \(\alpha\) actually lies in \(\Omega_{2}\). Also, the extremities of \(\alpha\) are disjoint from \(\{p,q\}\), because otherwise the two free boundary geodesics \(\alpha\) and \(\gamma\) would coincide, which is absurd since \(L(\alpha)\leq L(c_{1/2+\delta})<L(\gamma)\). Even more, since \(\alpha\) lies in one side of \(\gamma\), it cannot intersect \(\gamma\) at any other points either, because otherwise \(\alpha\) and \(\gamma\) would have the same tangent at some point and so be the same geodesic.
Thus, there exists a connected region \(R\subset\Omega_{2}\) of the disc \(\Omega\) such that \(\partial R\) is the union of \(\gamma\), pieces of \(\alpha\) (or the even the whole \(\alpha\) in case \(\alpha\) is embedded) and the two arcs of \(\partial\Omega\) bounded between the extremities of \(\gamma\) and \(\alpha\). In particular, \(\partial R\) is a piecewise smooth curve, whose pieces are either geodesic (\(\gamma\) and pieces of the immersed curve \(\alpha\)) or have non-negative
geodesic curvature (the arcs of \(\partial\Omega\)), and whose vertices have angles either \(\pi/2\) (if the vertex lies in \(\partial\Omega\)) or strictly smaller than \(\pi\) from inside \(R\) (if the vertex is a point of self-intersection of \(\alpha\)).
Next, minimise the length of curves in \(R\) with extremities in different components of \(\partial R\cap\partial\Omega\). By the previous observations about the local convexity of the region \(R\), the standard replacement by broken geodesics yields another minimising sequence which converges to a minimising smooth curve \(\beta\subset R\), with positive length.
Since the removal of immersed loops decreases length, it is clear that the minimising curve \(\beta\) is embedded. Also, by virtue of the first variation formula of length, it is straighforward to check that \(\beta\) is a geodesic that meets \(\partial\Omega\) orthogonally. Since \(L(\beta)\leq L(\alpha)<L(\gamma)\), arguing similarly as before we conclude that \(\beta\) has no point in common with \(\gamma\).
To finish the contradiction argument, two cases must be considered. The first case is when the extremities of \(\beta\) are also disjoint from the extremities of \(\alpha\). In this case, we may use the second variation formula (1.6.1) together with the minimisation property enjoyed by \(\beta\) to conclude that \(\beta\) is a free boundary stable geodesic with extremities in different arcs of \(\partial R\setminus(\gamma\cup\alpha)\). But this scenario is ruled out by property (\(\star\)).
The other possibility is that \(\beta\) has at least one extremity in common with \(\alpha\). Since these two curves are free boundary geodesics, they must coincide. _A posteriori_, we conclude that \(\alpha\) must be embedded, and minimise length among curves that lie inside the region \(R\) and have extremities in \(\partial R\cap\partial\Omega\). By the second variation formula and Lemma 4.2, we conclude that \(\alpha\) must be a free boundary stable geodesic. And this is again a contradiction with property (\(\star\)).
In conclusion, in all cases, the failure of the claim leads to the existence of a free boundary stable geodesic in \((\Omega,g)\), which contradicts property (\(\star\)). The claim follows.
In view of the shortening properties of the iterated Birkhoff chord shortening process, it follows immediately from our initial construction and the Claim that a sweepout with the desired properties \(i)-iv)\) exist, as we wanted to prove.
The next proposition has a similar proof. Notice that it applies, in particular, to a pair of extremal minimising geodesics whose extremities are a non-trivial critical point of \(\mathcal{D}\), as soon as none of them is free boundary.
**Proposition 4.5**.: _Let \((\Omega,g)\) be a totally convex Riemannian disc with smooth boundary in a complete Riemannian surface. Assume that \((\Omega,g)\) has property \((\star)\). Denote by \(T\) the unit tangent vector field to \(\partial\Omega\) that is compatible with its orientation._
_Let \(\{p,q\}\subset\partial\Omega\) be a critical point of \(\mathcal{D}\) with \(p\neq q\). Suppose \(p\) and \(q\) are joined by two minimising geodesics \(\gamma_{1}\) and \(\gamma_{2}\) such that_
\[\langle\nu^{T}_{\gamma_{2}}(p),T(p)\rangle\leq 0\leq\langle\nu^{T}_{\gamma_{1}} (p),T(p)\rangle,\]
_and_
\[\langle\nu_{\gamma_{1}}^{T}(q),T(q)\rangle\leq 0\leq\langle\nu_{\gamma_{2}}^{T}(q),T(q)\rangle,\]
_Finally, suppose none of the two geodesics \(\gamma_{1}\) and \(\gamma_{2}\) is free boundary._
_Then there exists a sweepout \(\{p_{t},q_{t}\}\) of \(\partial\Omega\) that has the following properties:_
* \(p_{1/2}=p\) _and_ \(q_{1/2}=q\)_._
* \(d(p_{t},q_{t})<d(p,q)\) _for every_ \(t\neq 1/2\)_._
* _For all_ \(t\leq 1/2\) _sufficiently close to_ \(1/2\)_, the sweepout is regularly strictly monotone with_ \(\{p_{t},q_{t}\}=\partial b_{t}\) _for a family of curves with_ \(b_{1/2}=\gamma_{1}\) _and_ \(\partial b_{t}\) _belonging to the the arc of_ \(\partial\Omega\setminus\{p,q\}\) _into which_ \(-T(p)\) _and_ \(T(q)\) _points for_ \(t<1/2\)_._
* _For all_ \(t\geq 1/2\) _sufficiently close to_ \(1/2\)_, the sweepout is regulate geodesicrly strictly monotone with_ \(\{p_{t},q_{t}\}=\partial c_{t}\) _for a family of curves with_ \(c_{1/2}=\gamma_{2}\) _and_ \(\partial c_{t}\) _belonging to the the arc of_ \(\partial\Omega\setminus\{p,q\}\) _into which_ \(T(p)\) _and_ \(-T(q)\) _points for_ \(t<1/2\)_._
* _There exists a constant_ \(\theta>0\) _such that_ \(d(p_{t},q_{t})\leq d(p,q)-\theta|t-1/2|\) _near_ \(t=1/2\)_._
Proof.: Let \(\Omega_{1}\) be the closed component of \(\Omega\) determined by \(\gamma_{1}\) that does not contain \(\gamma_{2}\), and let \(\Omega_{2}\) be the closed component of \(\Omega\) determined by \(\gamma_{2}\) that does not contain \(\gamma_{1}\).
Since neither \(\gamma_{1}\) nor \(\gamma_{2}\) is free boundary, it follows from the assumptions that
\[\langle-T(p),\nu_{\gamma_{1}}^{T}(p)\rangle+\langle T(q),\nu_{\gamma_{1}}^{T} (q)\rangle<0 \tag{4.0.2}\]
and
\[\langle T(p),\nu_{\gamma_{2}}^{T}(p)\rangle+\langle-T(q),\nu_{\gamma_{2}}^{T} (q)\rangle<0. \tag{4.0.3}\]
Notice that the vectors \(-T(p)\) and \(T(q)\) point towards \(\Omega_{1}\). Extend these two vectors to a vector field \(X\) along \(\gamma_{1}\) in such way that \(X\) points towards \(\Omega_{1}\) at all points, and then extend this vector field to a vector field of \(\Omega\) that is tangent to \(\partial\Omega\) and vanishes outside a compact set. The flow of this vector field, \(\phi\), generates curves \(b_{t}=\phi_{1/2-t}(\gamma_{1})\) with extremities \(\{p_{t},q_{t}\}\) in \(\partial\Omega\) that are mutually disjoint and lie in \(\Omega_{1}\), at least for all \(t\in[1/2-\delta,1/2]\), for some \(\delta>0\) sufficiently small. By construction, the family \(\{b_{t}\}\) is strictly monotone. (Notice that if \(\gamma_{1}\) is proper, then all \(b_{t}\) are proper if \(\delta\) is sufficiently small, whereas if \(\gamma_{1}\) is an arc of \(\partial\Omega\), then all \(b_{t}\) are boundary arcs as well).
By the first variation formula (1.3.1) at the geodesic \(\gamma_{1}=b_{0}\), and by (4.0.2), there exists some \(\theta>0\) such that
\[d(p_{t},q_{t})\leq L(b_{t})\leq L(\gamma_{1})-\theta(1/2-t)=d(p,q)-\theta(1/2- t).\]
for all \(t\in[1/2-\delta,1/2]\), possibly after decreasing \(\delta\) if necessary. Thus, in particular, the extremities \(\{p_{t},q_{t}\}=\partial b_{t}\) satisfy properties \(i)\), \(ii)\), \(iii)\) and \(v)\) for all \(t\in[1/2-\delta,1/2]\).
Arguing similarly about \(\gamma_{2}\), we can extend the vectors \(T(p)\) and \(-T(q)\), that point towards \(\Omega_{2}\), to some vector field in \(\Omega\) with compact support,
tangent to \(\partial\Omega\), and whose flow, \(\psi\), will generate curves \(c_{t}=\psi_{t-1/2}(\gamma_{2})\) with extremities \(\{p_{t},q_{t}\}\) in \(\partial\Omega\) that are mutually disjoint and lie in \(\Omega_{2}\), at least if \(t\geq 1/2\) is small enough. Arguing as before using the first variation formula (1.3.1) and (4.0.3), we check that the strictly monotone family \(\{p_{t},q_{t}\}=\partial c_{t}\) satisfies properties \(i)\), \(ii)\), \(iv)\) and \(v)\) for all \(t\in[1/2,1/2+\delta]\), after possibly decreasing \(\delta\) and \(\theta\).
Gluing together these two families, we construct a continuous family \(\{p_{t},q_{t}\}\subset\partial\Omega\), \(t\in[1/2-\delta,1/2+\delta]\), that satisfies properties \(i)\), \(iii)\), \(iv)\) and \(v)\), and thus also \(ii)\) for parameters close enough to \(t=1/2\).
If one of the geodesics \(\gamma_{i}\) is an arc of \(\partial\Omega\), it is obvious how to continue the sweepout on the side of \(\Omega_{i}\) without ever increasing the distance between the points of the sweepout: just shrink the arc inside \(\partial\Omega\cap\Omega_{i}\) continuously to a point. Thus, in order to prove the proposition, it remains to consider the case where one of the extremal minimising geodesics, say \(\gamma_{2}\), is not an arc of \(\partial\Omega\cap\Omega_{2}\). (Up to relabelling, the argument is the same if the other geodesic is not a boundary arc).
By the angle conditions between \(\gamma_{2}\) and \(\partial\Omega\), it is clear that \(\Omega_{2}\) is locally convex in the sense that that the minimising geodesic joining two points of \(\Omega_{2}\), or joining a point of \(\Omega_{2}\) sufficiently close to \(\partial\Omega\) to the nearest point in \(\partial\Omega\), still lies in \(\Omega_{2}\). Thus, the Birkhoff chord shortening process starting at the curve \(c_{\delta+1/2}\subset\Omega_{2}\) will stay in the region \(\Omega_{2}\).
If the process terminates at a point, then we have a family of boundary points to form (half of a) sweepout with the desired properties. If it does not, then the process terminates at an immersed free boundary geodesic \(\alpha\) in \(\Omega_{2}\). By assumption, \(\gamma_{2}\) forms angles \(\leq\pi/2\) with \(\partial\Omega\) from within \(\Omega_{2}\). Hence, \(\alpha\) cannot have an extremity in common with \(\gamma_{2}\), otherwise they would coincide, a contradiction with the assumption that \(\gamma_{2}\) is not free boundary. Since \(\alpha\) lies on one side of \(\gamma_{2}\), by the same reason it cannot touch it at any interior point either.
Thus, \(\gamma_{2}\) and \(\alpha\) determine a region \(R\) whose boundary consists of arcs of geodesics or curves with non-negative geodesic curvature, which moreover meet at angles \(\pi/2\) at \(\partial\Omega\) and strictly less than \(\pi\) from within \(R\) at self-intersection points of \(\alpha\). Minimise the length of curves in \(R\) with extremities in different components of \(R\cap\partial\Omega\). By the convexity properties of \(R\), there exists a minimising curve \(\beta\subset R\) in this class with positive length.
As argued in the proof of the Claim in Proposition 4.4, \(\beta\) is an embedded free boundary geodesic that has no point in common with \(\gamma_{2}\). If the extremities of \(\beta\) are disjoint from the extremities of \(\alpha\), then by the second variation formula (1.6.1) and the minimisation property enjoyed by \(\beta\) we conclude that \(\beta\) is a free boundary stable geodesic with extremities in different arcs of \(\partial R\setminus(\gamma\cup\alpha)\), a contradiction with by property (\(\star\)). The other possibility is that \(\beta\) has at least one extremity in common with \(\alpha\), in which case these two free boundary geodesics coincide. _A posteriori_, we conclude that \(\alpha\) is embedded, and has the least length among curves that lie inside the region \(R\) and have extremities in different components \(\partial R\cap\partial\Omega\). By the second
variation formula (1.6.1) and Lemma 4.2, \(\alpha\) must be a free boundary stable geodesic, which is again a contradiction with property (\(\star\)).
Therefore, in any of the cases considered the family \(\{p_{t},q_{t}\}\) can indeed be extended, via the Birkhoff curve shortening process, to a sweepout of \(\partial\Omega\) so that \(t=1/2\) is the only maximum of \(d(p_{t},q_{t})\), and all other properties described in \(i)-v)\) are satisfied. This finishes the proof.
**Proposition 4.6**.: _Let \((\Omega,g)\) be a totally convex Riemannian disc with smooth boundary in a complete Riemannian surface. Let \(\{p,q\}\subset\partial\Omega\) be a critical point of \(\mathcal{D}\) with \(d(p,q)>0\). Assume \(\{p_{t},q_{t}\}\) is a sweepout of \(\partial\Omega\) that satisfies properties \(i)-iv)\) described in Proposition 4.4._
_If \(p\) and \(q\) are joined by a free boundary minimising geodesic \(\gamma\) of free boundary index \(\geq 2\), then there exists a sweepout \(\{p^{\prime}_{t},q^{\prime}_{t}\}\) of \(\partial\Omega\) such that \(d(p^{\prime}_{t},q^{\prime}_{t})<d(p,q)\) for every \(t\in[0,1]\)._
Proof.: By the hypotheses, there exists a sweepout \(\{p_{t},q_{t}\}\) of \(\partial\Omega\) that satisfies \(i)\), \(ii)\), \(iii)\) and \(iv)\) in Proposition 4.4. In particular, in small interval \((1/2-\delta,1/2+\delta)\), the sweepout \(\{p_{t},q_{t}\}\) is regularly strictly monotone, that is, \(\{p_{t},q_{t}\}=\partial c_{t}\) with \(c_{1/2}=\gamma\) for some smoothly varying strictly monotone family of smooth curves \(c_{t}\).
By construction, and by the second variation formula (1.6.1), \(L(c_{t})\) is a function that has a critical point at \(t=1/2\), and a strictly negative second derivative at that point. Since the free boundary index of \(\gamma\) is at least two, this negative subspace for the index form \(Q\) belongs to a two-dimensional subspace of \(\Gamma(N\gamma^{\prime})\) where \(Q\) is negative definite. Choose some \(Y\in\Gamma(N\gamma^{\prime})\) in this subspace in such way that
\[(Y(p),Y(q))\quad\text{and}\quad\left(\frac{d}{dt}_{|_{t=1/2}}p_{t},\frac{d}{ dt}_{|_{t=1/2}}q_{t}\right)\]
are linearly independent vectors in \(T_{p}\partial\Omega\times T_{q}\partial\Omega\), extend \(Y\) to a vector field in \(\Omega\) with compact support that is tangent to \(\partial\Omega\), and denote by \(\psi_{s}\) the flow generated by \(Y\).
By construction, \(L(\psi_{s}(c_{t}))\) is a function of two variables \((t,s)\in(1/2-\delta,1/2+\delta)\times(-\infty,\infty)\) that is smooth, has a critical point at \((1/2,0)\) (by the first variation formula (1.3.1), since \(c_{0}=\gamma\) is a free boundary geodesic), and its Hessian is negative definite at \((1/2,0)\) (by the second variation formula (1.6.1), property \(iv)\) and our choice of vector field \(Y\)). Thus, \((t,s)=(1/2,0)\) is a unique maximum. Hence, possibly after deceasing \(\delta\) if necessary, we can find a continuous function \(s:t\in[0,1]\to[0,\eta)\), that is zero outside the interval \([1/2-\delta,1/2+\delta]\), so that
\[d(p_{t}^{s(t)},q_{t}^{s(t)})<d(p,q)\quad\text{for all}\quad t\in[1/2-\delta,1/ 2+\delta].\]
The family \(\{p_{t}^{s(t)},q_{t}^{s(t)}\}\), which coincides with the original one for \(t\notin[1/2-\delta,1/2+\delta]\) by construction, is then a sweepout of \(\partial\Omega\) such that
\[\max_{t\in[0,1]}d(p_{t}^{s(t)},q_{t}^{s(t)})<d(p,q),\]
as we wanted to construct.
**Proposition 4.7**.: _Let \((\Omega,g)\) be a totally convex Riemannian disc with smooth boundary in a complete Riemannian surface. Assume \((\Omega,g)\) has property \((\star)\). Denote by \(T\) be the unit tangent vector field to \(\partial\Omega\) that is compatible with its orientation._
_Let \(\{p,q\}\subset\partial\Omega\) be a critical point of \(\mathcal{D}\) with \(p\neq q\). Suppose \(p\) and \(q\) are joined by two minimising geodesics \(\gamma_{1}\) and \(\gamma_{2}\), neither of them free boundary, such that_
\[\langle\nu^{T}_{\gamma_{2}}(p),T(p)\rangle\leq 0\leq\langle\nu^{T}_{\gamma_{1}}(p ),T(p)\rangle,\]
_and_
\[\langle\nu^{T}_{\gamma_{1}}(q),T(q)\rangle\leq 0\leq\langle\nu^{T}_{\gamma_{2}} (q),T(q)\rangle.\]
_Assume \(\{p_{t},q_{t}\}\) is a sweepout of \(\partial\Omega\) that satisfies properties \(i)-v)\) described in Proposition 4.5 with respect to the geodesics \(\gamma_{1}\) and \(\gamma_{2}\)._
_If \(\gamma_{1}\) and \(\gamma_{2}\) are not simultaneously stationary, then there exists a sweepout \(\{p^{\prime}_{t},q^{\prime}_{t}\}\) of \(\partial\Omega\) such that \(d(p^{\prime}_{t},q^{\prime}_{t})<d(p,q)\) for every \(t\in[0,1]\)._
Proof.: By the hypotheses, there exists a sweepout \(\{p_{t},q_{t}\}\) of \(\partial\Omega\) that satisfies \(i)\), \(ii)\), \(iii)\) and \(iv)\) in Proposition 4.5. In particular, in small intervals \((1/2-\delta,1/2]\) and \([1/2,1/2+\delta)\), the sweepout \(\{p_{t},q_{t}\}\) is regularly strictly monotone, where the family of curves \(b_{t}\) with \(\partial b_{t}=\{p_{t},q_{t}\}\) ends at \(\gamma_{1}\) in the first interval, and the family of curves \(c_{t}\) with \(\partial c_{t}=\{p_{t},q_{t}\}\) starts at \(\gamma_{2}\) in the second interval.
Suppose, by contradiction, that \(\gamma_{1}\) and \(\gamma_{2}\) are not simultaneously stationary. Then, there exists \(v_{1}\in T_{p}\partial\Omega\) and \(v_{2}\in T_{q}\partial\Omega\) such that
\[\langle v_{1},\nu^{T}_{\gamma_{1}}(p)\rangle+\langle v_{2},\nu^{T}_{\gamma_{1 }}(q)\rangle<0 \tag{4.0.4}\]
and
\[\langle v_{1},\nu^{T}_{\gamma_{2}}(p)\rangle+\langle v_{2},\nu^{T}_{\gamma_{2 }}(q)\rangle<0. \tag{4.0.5}\]
Extend \(v_{1}\) and \(v_{2}\) to tangent vector field \(X\) on \(\Omega\), that is tangent to \(\partial\Omega\) on boundary points and whose support intersected with \(\partial\Omega\) is contained in the union of the arcs of \(\partial\Omega\) covered by the points \(p_{t}\) and \(q_{t}\) for every \(t\in[1/2-\delta,1/2+\delta]\). (There is no ambiguity because \(p\neq q\)).
Let \(\phi_{s}\) be the flow of \(X\) on \(\Omega\), and write \(\phi_{s}(\{p_{t},q_{t}\})=\{p^{s}_{t},q^{s}_{t}\}\) for all \(t\in[0,1]\) and all \(s\in[0,+\infty)\). Notice that \(\{p^{s}_{t},q^{s}_{t}\}=\{p_{t},q_{t}\}\) if \(t\notin[1/2-\delta,1/2+\delta]\).
The function \(L(\phi_{s}(c_{t}))\) is a smooth function in the variables \((t,s)\in[1/2,1]\times[0,+\infty)\) near \((t,s)=(1/2,0)\). It follows immediately from (4.0.5) and the first variation formula (1.3.1) that the curve \(c_{1/2}=\gamma_{2}\) varies by curves \(\phi_{s}(c_{1/2})\) in such way that the first derivative of the length in the \(s\) variable is strictly negative at \(s=0\). By continuity, the same assertion about the variation of \(c_{t}\) by the curves \(\phi_{s}(c_{t})\) is true for all \(t\geq 1/2\) sufficiently close to \(t=1/2\). Hence, for some small \(0<\delta^{\prime}<\delta\) and some small \(\eta>0\),
\[d(p^{s}_{t},q^{s}_{t})\leq L(\phi_{s}(c_{t}))<L(c_{t})\leq d(p,q)\]
for all \(t\in[1/2,1/2+\delta^{\prime}]\) and \(0<s<\eta\).
By a similar reasoning about the function \(L(\phi_{s}(b_{t}))\) in the variables \((t,s)\in[0,1/2]\times[0+\infty)\), which is smooth near \((t,s)=(1/2,0)\), the curve \(b_{1/2}=\gamma_{1}\) varies by curves \(\phi_{s}(b_{1/2})\) in such way that the first derivative of the length in the \(s\) variable at \(s=0\) is strictly negative, as a consequence of (4.0.4) and the first variation formula (1.3.1). By continuity, the same is true about the variation of \(b_{t}\) by the curves \(\phi_{s}(b_{t})\) for all \(t\geq 1/2\) sufficiently close to \(t=1/2\). Thus, decreasing \(\delta^{\prime}\) and \(\eta\) if necessary, we guarantee that the inequalities
\[d(p_{t}^{s},q_{t}^{s})\leq L(\phi_{s}(b_{t}))<L(b_{t})\leq d(p,q)\]
holds as well for all \(t\in[1/2-\delta^{\prime},1/2]\) and \(0<s<\eta\).
Therefore the continuous map
\[(t,s)\in[1/2-\delta^{\prime},1/2+\delta^{\prime}]\times[0,\eta]\mapsto d(p_{t }^{s},q_{t}^{s})\in[0,d(p,q)]\]
is such that \((1/2,0)\) is the unique maximum \(d(p,q)\). Now, it is possible to chose some continuous function \(s:t\in[0,1]\to[0,\eta)\), that is zero outside the interval \([1/2-\delta^{\prime},1/2+\delta^{\prime}]\), in such way that
\[d(p_{t}^{s(t)},q_{t}^{s(t)})<d(p,q)\quad\text{for all}\quad t\in[1/2-\delta^{ \prime},1/2+\delta^{\prime}].\]
The family \(\{p_{t}^{s(t)},q_{t}^{s(t)}\}\), which coincides with the original one for \(t\notin[1/2-\delta^{\prime},1/2+\delta^{\prime}]\), is therefore a sweepout of \(\partial\Omega\) such that
\[\max_{t\in[0,1]}d(p_{t}^{s(t)},q_{t}^{s(t)})<d(p,q).\]
We are now ready for the proof of Theorem B.
**Theorem 4.8**.: _Let \((\Omega,g)\) be a totally convex Riemannian disc with smooth boundary in some Riemannian manifold. Assume \((\Omega,g)\) has property \((\star)\). Then every critical point \(\{x,y\}\subset\partial\Omega\) of \(\mathcal{D}\) with \(x\neq y\) is such that_
\[d(x,y)\geq\mathcal{S}(\partial\Omega).\]
_Moreover, let \(\{p,q\}\) be a critical point with \(d(p,q)=\mathcal{S}(\partial\Omega)\). Then,_
* _if there exists a free boundary minimising geodesic joining_ \(p\) _and_ \(q\)_, then this geodesic has free boundary index one._
* _if two minimising geodesics_ \(\gamma_{1}\) _and_ \(\gamma_{2}\) _joining the points_ \(p\) _and_ \(q\) _satisfy_ \(\langle\nu_{\gamma_{1}}^{T}(p),\nu_{\gamma_{2}}^{T}(p)\rangle\leq 0\) _and_ \(\langle\nu_{\gamma_{1}}^{T}(q),\nu_{\gamma_{2}}^{T}(q)\rangle\leq 0\)_, and none of them is free boundary, then_ \(\gamma_{1}\) _and_ \(\gamma_{2}\) _are simultaneously stationary._
Proof.: By Propositions 4.4 and 4.5, every critical point \(\{p,q\}\subset\partial\Omega\) of \(\mathcal{D}\) with \(p\neq q\) is such that \(d(p,q)=\max_{t\in[0,1]}d(p_{t},q_{t})\) for some sweepout \(\{p_{t},q_{t}\}\) of \(\partial\Omega\) with \(p_{1/2}=p\) and \(q_{1/2}=q\). By definition of \(\mathcal{S}(\partial\Omega)\), the inequality \(\mathcal{S}(\partial\Omega)\leq d(p,q)\) follows immediately.
If \(\{p,q\}\subset\partial\Omega\) is a critical point with \(d(p,q)=\mathcal{S}(\partial\Omega)>0\), then there are two possibilities. Either \(p\) and \(q\) are joined by a free boundary geodesic, or there exists at least two minimising geodesics joining \(p\) and \(q\), none of them free boundary, that satisfy \(\langle\nu_{\gamma_{1}}^{T}(p),\nu_{\gamma_{2}}^{T}(p)\rangle\leq 0\) and \(\langle\nu_{\gamma_{1}}^{T}(q),\nu_{\gamma_{2}}^{T}(q)\rangle\leq 0\)
(Proposition 2.7). In the first case, the index must be one, otherwise using Proposition 4.6 we would be able to construct a sweepout that violates the definition of \(\mathcal{S}(\partial\Omega)\). In the second case, Proposition 4.7 guarantees that any such pair of geodesics \(\gamma_{1}\) and \(\gamma_{2}\) are simultaneously stationary, for the same reason. This finishes the proof.
_Remark 4.9_.: The attentive reader will have noticed that the arguments in this section proves Theorem 4.8 under the slightly weaker assumption that there are no free boundary stable geodesic with length \(\leq\mathcal{S}(\partial\Omega)\). We leave the details to the interested reader.
## 5. Width, diameter and length
The following general lemma about involutive homeomorphisms of the circle will be used several times in what follows. Recall that a homeomorphism \(\phi:S^{1}\to S^{1}\) is called _monotone_ when \(\phi(x)\) turns around \(S^{1}\) in the same direction as \(x\) does.
**Lemma 5.1**.: _Let \(\phi:S^{1}\to S^{1}\) be a continuous involution without fixed points. Then \(\phi\) is a monotone homeomorphism, and \(\phi\) induces a two-to-one cover \(p\in S^{1}\mapsto\{p,\phi(p)\}\in\mathcal{P}_{*}\) of a homotopically non-trivial loop in \(\mathcal{P}_{*}\)._
Proof.: By assumption, \(\phi\) is continuous and satisfies \(\phi\circ\phi=id_{S^{1}}\). In particular, \(\phi:S^{1}\to S^{1}\) is a homeomorphism.
Suppose, by contradiction, that \(\phi\) is not monotone. Then there exists \(\{p,\phi(p)\}\) such that one of the arcs \(C\) of \(S^{1}\) determined by it contains a some \(\{q,\phi(q)\}\) in its interior.
Let \(\{\overline{q},\phi(\overline{q})\}\) be the subset in \(C\) such that the closed arc \(C_{\overline{q}}\) with \(\partial C_{\overline{q}}=\{\overline{q},\phi(\overline{q})\}\) contained in \(C\) has the least possible length. Since \(\phi\) has no fixed points, possibly after interchanging \(\overline{q}\) and \(\phi(\overline{q})\) we may assume that \(p\), \(\overline{q}\), \(\phi(\overline{q})\) and \(\phi(p)\) appear in that order as we traverse \(S^{1}\) in the counterclockwise direction. Moreover, \(L(C_{\overline{q}})>0\) and \(\phi(x)\notin C_{\overline{q}}\) for every \(x\in C_{\overline{q}}\).
If \(q^{\prime}\in C_{\overline{q}}\) is sufficiently close to \(\overline{q}\), then \(\phi(q^{\prime})\) is sufficiently close to \(\phi(\overline{q})\), and lies outside \(C_{\overline{q}}\). Hence, in this case \(\phi(q^{\prime})\) must lie in the arc inside \(C\) joining \(\phi(\overline{q})\) to \(\phi(p)\). Since \(C_{\overline{q}}\) is connected, and \(\phi(C_{\overline{q}})\) is therefore a connected set that contains \(\phi(\overline{q})\), \(\phi(q^{\prime})\) and \(\overline{q}=\phi(\phi(\overline{q}))\), it follows that \(\phi(C_{\overline{q}})\supset\Gamma\setminus C_{\overline{q}}\). In particular, there exists \(q^{\prime\prime}\in C_{\overline{q}}\) such that \(\phi(q^{\prime\prime})=\phi(p)\). Applying the involution \(\phi\), we conclude that \(q^{\prime\prime}=p\), a contradiction.
Monotonicity implies that \(\phi\) interchanges the two arcs of \(S^{1}\) bounded by a pair \(\{x,\phi(x)\}\in\Gamma\) as \(x\) moves towards \(\phi(x)\). Thus, this path lifts to an open continuous path in the oriented double cover of \(\mathcal{P}_{*}\). In other words, this path is homotopically non-trivial in \(\mathcal{P}_{*}\). The proposition follows.
**Proposition 5.2**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold. Suppose that there exists a continuous map \(p\in\Gamma\mapsto\phi(p)\in\Gamma\) be such that \(d(p,\phi(p))=diam(\Gamma)\) for every \(p\in\Gamma\). Then \(\mathcal{S}(\Gamma)=diam(\Gamma)\)_
Proof.: By Lemma 5.1, if we fix a point \(x\in\Gamma\) and denote by \(\Gamma_{0}\) one of the closed arcs of \(\Gamma\) bounded by \(x\) and \(\phi(x)\), then the continuous map \(\overline{\phi}:y\in\Gamma_{0}\mapsto\{y,\phi(y)\}\in\mathcal{P}_{*}\) is a homotopically non-trivial loop in \(\mathcal{P}_{*}\), as much as any sweepout of \(\Gamma\). Since any two homotopically non-trivial curves on a real projective plane must intersect, any sweepout of \(\Gamma\) must intersect \(\overline{\phi}(\Gamma_{0})\) in \(\mathcal{P}_{*}\). But this just means that any sweepout \(\{p_{t},q_{t}\}\) of \(\Gamma\) must contain a pair \(\{p_{t_{0}},\phi(p_{t_{0}})\}\). Thus, for any sweepout of \(\Gamma\), we have \(\max_{t\in[0,1]}d(p_{t},q_{t}))\geq d(p_{t_{0}},\phi(p_{t_{0}})))=diam(\Gamma)\).
By definition of \(\mathcal{S}(\Gamma)\), and the fact that this number is bounded from above by \(diam(\Gamma)\), the proposition follows.
Moving towards the converse statement, we prove a series of propositions and lemmas. First, we prove a weak converse of Proposition 5.2.
**Lemma 5.3**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian surface. If \(\mathcal{S}(\Gamma)=diam(\Gamma)\), then for every \(p\in\Gamma\) there exists \(q\in\Gamma\) such that \(d(p,q)=diam(\Gamma)\)._
Proof.: Pick a point \(p\in\Gamma\), and let \(t\in[0,1]\mapsto p_{t},q_{t}\in\Gamma\) be continuous maps such that \(p_{t}=p\) for every \(t\in[0,1]\), and \(q_{t}\) traverses \(\Gamma\) in the counter-clockwise direction while moving from \(q_{0}=p\) to \(q_{1}=p\). Then \(\{p_{t},q_{t}\}_{t\in[0,1]}\) is a sweepout of \(\Gamma\). By continuity, the maximum distance between points \(\{p_{t},q_{t}\}\) is attained at some \(t_{0}\in[0,1]\). Under the assumption that \(\mathcal{S}(\Gamma)=diam(\Gamma)\), we have therefore \(diam(\Gamma)=\mathcal{S}(\Gamma)\leq d(p_{t_{0}},q_{t_{0}})\leq diam(\Gamma)\). Hence, \(\{p=p_{t_{0}},q_{t_{0}}\}\subset\Gamma\) is a critical point of \(\mathcal{D}\) with \(d(p,q_{t_{0}})=diam(\Gamma)\).
Using Proposition 2.8, the previous lemma can be improved.
**Proposition 5.4**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold that is the boundary of smoothly embedded totally convex disc. If \(\mathcal{S}(\Gamma)=diam(\Gamma)\), then, for every point \(p\in\Gamma\), there exists a unique \(\phi(p)\in\Gamma\) such that \(d(p,\phi(p))=diam(\Gamma)\). Moreover, the map \(p\in\Gamma\mapsto\phi(p)\in\Gamma\) is a monotone homeomorphism._
Proof.: By Lemma 5.3, for any given \(p\in\Gamma\), the set of points at distance \(diam(\Gamma)\) from \(p\) is non-empty. It is also clearly compact. Thus, taking into account the orientation of \(\Gamma\), there are uniquely defined points \(d_{-}(p)\) and \(d_{+}(p)\) in \(\Gamma\) at distance \(diam(\Gamma)\) from \(p\) such that \(d_{+}(p)\) is closest to \(p\) as we turn around \(\Gamma\) in the counter-clockwise direction, and \(d_{-}(p)\) is furthest from \(p\) as we turn around \(\Gamma\) in the counter-clockwise direction.
Since the pairs \(\{p,d_{+}(p)\}\) and \(\{p,d_{-}(p)\}\) are both critical points of \(\mathcal{D}\) and \(p\neq d_{-}(p)\) and \(p\neq d_{+}(p)\), the equality \(d_{+}(p)=d_{-}(p)\) is an immediate consequence of Proposition 2.8. Thus, a map \(\phi:\Gamma\rightarrow\Gamma\) with \(d(p,\phi(p))=diam(\Gamma)\) for all \(p\in\Gamma\) is well-defined. Notice also that this map satisfies \(\phi(\phi(p))=p\) for all \(p\in\Gamma\), for the same reason.
To check the continuity of \(\phi\), let \(\{p_{i}\}\subset\Gamma\) be a sequence converging to a point \(p\in\Gamma\). Every subsequence of the sequence \(\{\phi(p_{i})\}\subset\Gamma\) has a subsequence converging to some \(q\in\Gamma\). Since
\(diam(\Gamma)\), we have \(q=\phi(p)\), by uniqueness of the point at distance \(diam(\Gamma)\) from \(p\) established earlier in this proof. Hence, \(\{\phi(p_{i})\}\) converges to \(\phi(p)\). Since \(p\) is arbitrary, \(\phi\) is continuous at every point of \(\Gamma\).
The last assertion of the lemma follows now immediately from Lemma 5.1.
Combining the previous results, we prove Theorem C.
**Theorem 5.5**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian surface._
1. _If there exists a continuous map_ \(\phi:\Gamma\to\Gamma\) _such_ \(d(x,\phi(x))=diam(\Gamma)\)_, then_ \(\mathcal{S}(\Gamma)=diam(\Gamma)\)_._
2. _If_ \(\mathcal{S}(\Gamma)=diam(\Gamma)\)_, then for every_ \(x\in\Gamma\) _there exists_ \(y\in\Gamma\) _such that_ \(d(x,y)=diam(\Gamma)\)_._
_Assume, moreover, that \(\Gamma\) is the boundary of a smoothly embedded totally convex Riemannian disc. Then \(\mathcal{S}(\Gamma)=diam(\Gamma)\) if and only if there exists a continuous map \(\phi:\Gamma\to\Gamma\) map such \(d(x,\phi(x))=diam(\Gamma)\)._
Proof.: Assertion \(a)\) was proven in Proposition 5.2. Assertion \(b)\) follows from Proposition 5.3. The final equivalence follows from Proposition 5.4.
The next elementary lemma characterises the case where the extrinsic and intrinsic diameters of \(\Gamma\) are the same.
**Lemma 5.6**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold. The following assertions are equivalent:_
1. \(diam(\Gamma)=L(\Gamma)/2\)_._
2. \(\Gamma\) _is a geodesic formed by two minimising geodesics of length_ \(L(\Gamma)/2\)_._
Proof.: Assume \(i)\). Let \(p\), \(q\in\Gamma\) be such that \(d(p,q)=diam(\Gamma)\). Notice that \(\Gamma\) is the union of two arcs \(\gamma_{1}\), \(\gamma_{2}\) with extremities \(p\) and \(q\), which we label so that \(L(\gamma_{1})\leq L(\gamma_{2})\). Since \(diam(\Gamma)=d(p,q)\leq L(\gamma_{1})\leq L(\Gamma)/2\), it follows from \(i)\) that \(\gamma_{1}\) must be a minimising geodesic joining \(p\) and \(q\) of length \(L(\Gamma)/2\). Then, \(\gamma_{2}\) also has length \(L(\Gamma)/2=L(\Gamma)-L(\gamma_{1})\), and the same argument shows that \(\gamma_{2}\) is also a minimising geodesic joining the same pair of points. This proves \(ii)\).
Assume \(ii)\). Since the two geodesics forming \(\Gamma\) are minimising and have length \(L(\Gamma)/2\), their extremities \(p\) and \(q\) satisfy \(d(p,q)=L(\Gamma)/2\). Since \(L(\Gamma)/2\) is an upper bound for the distance of any pair of boundary points, \(diam(\Gamma)=d(p,q)=L(\Gamma)/2\), which is \(i)\).
Finally, we analyse the equality \(\mathcal{S}(\Gamma)=L(\Gamma)/2\). (Theorem D).
**Theorem 5.7**.: _Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold. The following assertions are equivalent:_
1. \(\mathcal{S}(\Gamma)=L(\Gamma)/2\)_._
2. _For every_ \(x\)_,_ \(y\in\Gamma\)_, the distance between_ \(x\) _and_ \(y\) _equals the length of the shortest arc of_ \(\Gamma\) _bounded by these two points._
Proof.: We begin by proving that \(ii)\) implies \(i)\). Every sweepout \(\{p_{t},q_{t}\}\) of \(\Gamma\) satisfies
\[\max_{t\in[0,1]}d_{(\Gamma,g_{|_{\Gamma}})}(p_{t},q_{t})=L(\Gamma)/2,\]
and the maximum is attained. The equality \(d(x,y)=d_{(\Gamma,g_{|_{\Gamma}})}(x,y)\) for all \(\{x,y\}\subset\Gamma\) thus implies, by the definition of \(\mathcal{S}(\Gamma)\), that \(\mathcal{S}(\Gamma)=L(\Gamma)/2\).
Conversely, suppose \(i)\). In particular, \(diam(\Gamma)=\mathcal{S}(\Gamma)\). By Lemma 5.3, given any \(x\in\Gamma\) there exists \(\phi(x)\in\Gamma\) such that \(d(x,\phi(x))=diam(\Gamma)=\mathcal{S}(\Gamma)=L(\Gamma)/2\). This point \(\phi(x)\) is the unique point of \(\Gamma\) with this property, because the two boundary arcs determined by \(x\) and \(\phi(x)\) must then have the same length. Also, for any pair of boundary points \(x\) and \(y\), the shortest arc of \(\Gamma\) joining \(x\) and \(y\) is minimising as well, since it is part of one of the arcs bounded by \(x\) and \(\phi(x)\), which are both minimising because \(d(x,\phi(x))=L(\Gamma)/2\). Thus, \(d(x,y)=d_{(\Gamma,g_{|_{\Gamma}})}(x,y)\), as we wanted to prove.
_Remark 5.8_.: As a consequence of Theorem 5.7, a smoothly embedded circle whose width equals half its length is a closed embedded geodesic that has the following property as well: any of its periodic Jacobi fields must either never vanish or vanish exactly at a pair \(\{x,y\}\subset\Gamma\) bounding two arcs of \(\Gamma\) of the same length. In fact, if not there would be two conjugate points strictly between some minimising geodesic arc between two points \(\{p,q\}\) such that \(d(p,q)=L(\Gamma)/2\), a contradiction. Therefore the Morse index of this closed embedded geodesic is either zero or one. Both cases are actually possible, see Examples 1 and 10 in Section 9.
Collecting all the results proven so far, we are ready to prove Theorem E.
Proof of Theorem E.: Let \(\Gamma\) be a smoothly embedded circle in a complete Riemannian manifold. According to Corollary 2.6 and Theorem A (Theorem 3.1), if \(0<\mathcal{S}(\Gamma)<diam(\Gamma)\), then there are at least two critical points of \(\mathcal{D}\) in \(\mathcal{P}\), one attaining the width and the other attaining the diameter of \(\Gamma\).
If, however, \(\mathcal{S}(\Gamma)=diam(\Gamma)\), then Proposition 5.4 implies that every \(x\in\Gamma\) belongs to a pair \(\{x,y\}\subset\Gamma\) such that \(d(x,y)=diam(\Gamma)\). In particular, there are infinitely many non-trivial critical points of \(\mathcal{D}\) in \(\Gamma\).
If, moreover, \(\Gamma\) is the boundary of a smoothly embedded totally convex disc, more can be said. Proposition 5.4 guarantees that there exists a monotone homeomorphism \(\phi:\Gamma\to\Gamma\) such that \(\{x,\phi(x)\}\) is a critical point of \(\mathcal{D}\) with \(d(x,\phi(x))=diam(\Gamma)\) for every \(x\in\Gamma\). By the uniqueness property of critical points of \(\mathcal{D}\) on the boundary of totally convex discs (Proposition 2.8), there are no other critical points of \(\mathcal{D}\) besides the pairs \(\{x,\phi(x)\}\), \(x\in\Gamma\). Viewed in \(\mathcal{P}_{*}\), the set of pairs \(\{x,\phi(x)\}\) is an embedded circle that represents a homotopically non-trivial loop in \(\mathcal{P}_{*}\), see Lemma 5.1. Theorem E is therefore proven.
To finish this section, we explain a construction of independent interest, which shows that no Riemannian disc with strictly convex boundary can be
a local maximum of \(\mathcal{S}(\partial M)/L(\partial M)\), among Riemannian discs with strictly convex boundary.
**Proposition 5.9**.: _Let \((M^{2},g)\) be a compact Riemannian surface with connected, strictly convex boundary. Then, there exists a smooth one parameter family of conformal Riemannian metrics \(g_{s}=exp(f_{s})g\), \(s\in[0,\epsilon)\), and a positive constant \(A>0\) such that \(g_{0}=g\) and_
\[\frac{\mathcal{S}(\partial M,g_{s})}{L(\partial M,g_{s})}\geq\frac{\mathcal{S }(M,g)}{L(\Gamma,g)}+As+o(s)\quad\text{as $s$ goes to zero}.\]
Proof.: Denote by \(V_{\delta}\) the collar neighbourhood of radius \(\delta>0\) of \(\partial M\) in \((M^{2},g)\), where \(\delta\) is a sufficiently small constant that will be specified later. Fix some non-negative number \(s\), and let \(\phi_{s}\in C^{\infty}(M)\), \(s\geq 0\), be a smooth one-parameter family with \(\phi_{0}\equiv 1\) such that, for all \(s\geq 0\),
* \(\phi_{s}(x)=1\) if \(d(x,\partial M)\leq\delta/3\).
* \(\phi_{s}(x)=(1+s)^{2}\) if \(d(x,\partial M)\geq 2\delta/3\)
* \(\phi_{s}(x)\in[1,(1+s)^{2}]\) for all \(x\in M\)
The metrics we are looking for will be of the form \(g_{s}=\phi_{s}g\). By virtue of \(i)\), all metrics \(g_{s}\) coincide in \(V_{\delta}\). In particular, \(\partial M\) is strictly convex in \((M^{2},g_{s})\) and \(L(\partial M,g_{s})=L(\partial M,g)\) for every \(s\geq 0\). Thus, it will be enough to show that \(\mathcal{S}(\partial M,g_{s})\) increases at least linearly near \(s=0\).
We begin with some simple estimates relating distances measured according to \(g_{s}\) and \(g_{0}=g\). Let \(c:[0,1]\to M\) be any curve joining points \(x\), \(y\in M\). By property \(iii)\), the length of \(c\) with respect to the conformal metrics \(g_{s}\) and \(g\) can be compared as follows:
\[L(c,g)\leq L(c,g_{s})\leq(1+s)L(c,g).\]
Since \(c\) is arbitrary,
\[d_{g}(x,y)\leq d_{g_{s}}(x,y)\leq(1+s)d_{g}(x,y).\]
In particular, if \(\gamma\) is a minimising geodesic of \((M^{2},g)\) joining points \(x\), \(y\in M\),
\[d_{g_{s}}(x,y)\geq d_{g}(x,y)=L(\gamma,g)\geq\frac{1}{1+s}L(\gamma,g_{s}). \tag{5.0.1}\]
The first claim will allow us to specify a convenient \(\delta>0\).
**Claim 1**: for every \(\varepsilon>0\), there exists \(\delta>0\) with the following property: For every minimising geodesic \(\gamma:[0,a]\to M\) of \((M^{2},g)\) such that \(\gamma(0)\in\partial M\) and \(L(\gamma,g)\geq\varepsilon\), there exists \(t\in(0,a]\) such that \(d(\gamma(t),\partial M)\geq\delta\).
In order to prove the Claim, we argue by contradiction: if not, there exists some \(\varepsilon>0\) and a sequence of minimising geodesics \(\gamma_{i}:[0,a_{i}]\to M\) of \((M^{2},g)\), with \(\gamma_{i}(0)\in\partial M\) and \(L(\gamma_{i})\geq\varepsilon\), such that they lie entirely in the collar neighbourhood \(V_{1/i}\). Passing to the limit, these geodesics converge to an arc of \(\partial M\) of length \(\geq\varepsilon>0\) that is geodesic as a limit of geodesics. But this contradicts the assumption that \(\partial M\) is strictly convex in \((M^{2},g)\).
In view of Claim 1, we define \(\varepsilon=\mathcal{S}(\partial M,g)/10\) and choose the corresponding \(\delta=\delta(\varepsilon)>0\) once and for all.
Next, it will be important to have uniform control for distances measured with respect to \(g_{s}\), at least for sufficiently small \(s\). This is provided by the following refinement of Claim 1, whose proof is entirely analogous.
**Claim 2**: there exists \(s_{0}>0\) with the following property: If \(s\in[0,s_{0}]\), then for every minimising geodesic \(\gamma_{s}\) in \((M^{2},g_{s})\) with \(\gamma_{s}(0)\in\partial M\) and \(L(\gamma_{s},g_{s})\geq 3\varepsilon/2\), there exists \(t\in(0,3\varepsilon/2]\) such that \(d_{g}(\gamma_{s}(t),\partial M)\geq\delta\).
From now on, we assume the parameter \(s\) belongs to the interval \([0,s_{0}]\) determined by Claim 2.
The next claim gives an estimate on the length of the piece of a minimising geodesic, joining sufficiently far apart boundary points, outside the collar neighbourhood \(V_{\delta}\). As expected, this piece is fairly long.
**Claim 3**: Let \(p\), \(q\in\partial M\) be such that \(d_{g}(p,q)\geq\frac{4}{5}\mathcal{S}(\partial M,g)\). If \(\gamma\) is a minimising geodesic in \((M^{2},g)\) joining \(p\) and \(q\), then \(L(\gamma\setminus V_{\delta},g)\geq 3\mathcal{S}(\partial M,g)/5\).
In fact, let \(\gamma:[0,\ell]\to M\) be a minimising geodesic of \((M^{2},g)\) joining the boundary points \(p\) and \(q\) in \((M^{2},g)\). Since \(\ell=L(\gamma)\geq d_{g}(p,q)>2\varepsilon\), we may apply Claim 1 at both extremities and conclude that there exists \(t_{1}\in(0,\varepsilon]\) and \(t_{2}\in[\ell-\varepsilon,\ell)\) such that \(d(\gamma(t_{i}),\partial M)\geq\delta\). By the strict convexity of \(\partial M\), a sufficiently small collar neighbourhood of \(\partial M\) is foliated by strictly convex curves of \((M^{2},g)\). Thus, by the maximum principle,
\[\gamma([t_{1},t_{2}])\subset\{x\in M\,:\,d(x,\partial M)\geq\delta\}.\]
Therefore
\[L(\gamma\setminus V_{\delta},g)\geq t_{2}-t_{1}\geq\ell-\varepsilon-\varepsilon =L(\gamma)-2\varepsilon=d_{g}(p,q)-2\varepsilon.\]
The claim now follows immediately from the definition of \(\varepsilon\) and the lower bound on \(d_{g}(p,q)\).
Repeating the proof above using Claim 2 for sufficiently long minimising geodesics of \((M^{2},g_{s})\) joining boundary points, we deduce the following similar estimate:
**Claim 4**: Let \(p\), \(q\in\partial M\) be such that \(d_{g_{s}}(p,q)>3\varepsilon\). If \(\gamma_{s}\) is a minimising geodesic in \((M^{2},g_{s})\) joining \(p\) and \(q\), then \(L(\gamma_{s}\setminus V_{\delta},g_{s})\geq d_{g_{s}}(p,q)-3\varepsilon\).
We are now ready to make the final estimate.
**Claim 5**: every sweepout \(\{p_{t},q_{t}\}\), \(t\in[0,1]\), of \(\partial M\) satisfies
\[\max_{t\in[0,1]}d_{g_{s}}(p_{t},q_{t})\geq\left(1+\frac{s}{2(1+s)}\right) \mathcal{S}(\partial M,g).\]
In fact, by definition of \(\mathcal{S}(\partial M,g)\), any sweepout of \(\partial M\) must contain a pair \(\{p,q\}\subset\partial M\) such that \(d_{g}(p,q)\geq 4\mathcal{S}(\partial M,g)/5\). We estimate the distance between such points \(p\) and \(q\) in \((M^{2},g_{s})\) as follows. Let \(\gamma_{s}\) be a minimising geodesic in \((M^{2},g_{s})\) joining \(p\) and \(q\). By properties \(i)\) and \(ii)\) of the definition of \(g_{s}\),
\[L(\gamma_{s}\cap V_{\delta},g_{s})\geq L(\gamma_{s}\cap V_{\delta},g)\quad \text{and}\quad L(\gamma_{s}\setminus V_{\delta},g_{s})=(1+s)L(\gamma_{s} \setminus V_{\delta},g).\]
Hence,
\[d_{g_{s}}(p,q) =L(\gamma_{s},g_{s})\] \[=L(\gamma_{s}\cap V_{\delta},g_{s})+L(\gamma_{s}\setminus V_{ \delta},g_{s})\] \[\geq L(\gamma_{s}\cap V_{\delta},g)+(1+s)L(\gamma_{s}\setminus V _{\delta},g)\] \[=L(\gamma_{s},g)+sL(\gamma_{s}\setminus V_{\delta},g)\] \[\geq d_{g}(p,q)+sL(\gamma_{s}\setminus V_{\delta},g).\]
By (5.0.1), we then have
\[d_{g_{s}}(p,q)\geq d_{g}(p,q)+\frac{s}{1+s}L(\gamma_{s}\setminus V_{\delta},g _{s}) \tag{5.0.2}\]
Finally, since \(d_{g_{s}}(p,q)\geq d_{g}(p,q)\geq 4\mathcal{S}(\partial M,g)/5>3\varepsilon\), we can use Claim 4 and the definition of \(\varepsilon\) so to estimate
\[L(\gamma_{s}\setminus V_{\delta},g_{s}) \geq d_{g_{s}}(p,q)-3\varepsilon\] \[\geq d_{g}(p,q)-3\varepsilon\] \[\geq\frac{4}{5}\mathcal{S}(\partial M,g)-\frac{3}{10}\mathcal{S} (\partial M,g) \tag{5.0.3}\] \[\geq\frac{1}{2}\mathcal{S}(\partial M,g).\]
Combining (5.0.2) and (5.0.3), we conclude that the inequality
\[d_{g_{s}}(p,q)\geq d_{g}(p,q)+\frac{s}{2(1+s)}\mathcal{S}(\partial M,g)\]
holds for any pair \(\{p,q\}\) of the sweepout that satisfies \(d(p,q)\geq 4\mathcal{S}(M,g)/5\). Since, by definition of \(\mathcal{S}(\partial M,g)\), there are such pairs with \(d(p,q)\) arbitrarily close to \(\mathcal{S}(M,g)\) in any sweepout, the Claim follows.
By the definition of \(\mathcal{S}(\partial M,g_{s})\), the proposition is now an immediate consequence of Claim 5.
## 6. Involutive symmetry
Let \(\Omega\) be a totally convex disc with smooth boundary in a complete Riemannian surface \((M^{2},g)\). In this section, we make an extra assumption about \(\partial\Omega\), namely:
* _every two points of_ \(\partial\Omega\) _are joined by only one geodesic._
Our purpose is to characterise such discs when the width and the diameter of the boundary coincide, and when there exists an isometric involution \(A:(\Omega,g)\to(\Omega,g)\) that does not fix any point of \(\partial\Omega\). This is Theorem F.
**Theorem 6.1**.: _Let \(\Omega\) be a totally convex disc with smooth boundary in a complete Riemannian surface \((M^{2},g)\). Assume that every two points of \(\partial\Omega\) are joined by a unique geodesic, that \(\mathcal{S}(\partial\Omega)=diam(\partial\Omega)\), and that \((\Omega,g)\) admits an isometric involution \(A\) with no fixed boundary points._
_Then the involution \(A\) has a unique fixed point \(x_{0}\in\Omega\), \(\mathcal{S}(\partial\Omega)/2\) is not bigger than the injectivity radius of \((M^{2},g)\) at \(x_{0}\), and \(\Omega\) is the geodesic ball of \((M^{2},g)\) with center at \(x_{0}\) and diameter \(\mathcal{S}(\partial\Omega)=diam(\partial\Omega)\)._
Proof.: By Lemma 5.1, the restriction of the isometric involution \(A:\Omega\to\Omega\) to \(\Gamma\) is a monotone homeomorphism. As a continuous map from the disc to itself, \(A\) has fixed points (by Brouwer Theorem), and none of them lies in the boundary (by assumption). Let \(x_{0}\in\Omega\) denote a fixed point of \(A\) that is closest to \(\partial\Omega\).
Since \(A\) is an involutive isometry, \(DA(x_{0}):(T_{x_{0}}\Omega,g_{x_{0}})\to(T_{x_{0}}\Omega,g_{x_{0}})\) is a linear isometry such that \(DA(x_{0})\circ DA(x_{0})=Id_{T_{x_{0}}\Omega}\). In particular, the two eigenvalues of \(DA(x_{0})\) belong to \(\{\pm 1\}\).
Let \(y_{0}\) be a point of \(\partial\Omega\) such that \(d(y_{0},x_{0})\) is the minimum distance between points of \(\partial\Omega\) and \(x_{0}\). It follows from the first variation formula (1.3.1) that any minimising geodesic joining \(x_{0}\) and \(y_{0}\) is orthogonal to \(\partial\Omega\) at \(y_{0}\). Thus, there is only one such geodesic from \(x_{0}\) to \(y_{0}\). Call it \(\gamma_{0}:[0,a]\to\Omega\), where \(a=d(y_{0},x_{0})\).
Consider the piecewise smooth curve \(C\) made of the two arcs \(\gamma_{0}\) and \(A(\gamma_{0})\). The extremities of \(C\) are the boundary points \(y_{0}\) and \(A(y_{0})\). This curve is moreover embedded, otherwise the two minimising geodesics issuing from \(x_{0}\) that constitute it would intersect at some point \(x\neq x_{0}\) in the interior of \(\Omega\).
Hence, the curve \(C\) divides the disc \(\Omega\) into two connected components (by Jordan Theorem). Since \(A\) restricted to the boundary permutes the arcs determined by \(y_{0}\) and \(A(y_{0})\), \(A\) permutes the two components of \(\Omega\setminus C\). Thus, no point in any of these components is fixed by \(A\). Even more, \(DA(x_{0})\) cannot fix any non-zero vector of \(T_{x_{0}}\Omega\) except possibly \(\gamma_{0}^{\prime}(0)\) and its multiples. However, if it does so, \(\gamma_{0}\) is equal to \(A\circ\gamma_{0}\), which is absurd because their end-points are \(y_{0}\neq A(y_{0})\).
This has two consequences. First, no other point of \(\Omega\) besides \(x_{0}\) is fixed by the involution \(A\). Second, the number \(1\) is not an eigenvalue of \(DA(x_{0})\). Therefore
\[DA(x_{0})=-Id_{T_{x_{0}}M}. \tag{6.0.1}\]
Using (6.0.1), it is immediate to see that the concatenated curve \(C\) is smooth at \(x_{0}\). It is also orthogonal to \(\partial\Omega\) at \(y_{0}\) and \(A(y_{0})\). Since there exists, by property \((\star\star)\), only one geodesic joining these two boundary points, we conclude that \(C\) is a free boundary minimising geodesic joining \(y_{0}\) and \(A(y_{0})\)
In particular, \(\{y_{0},A(y_{0})\}\subset\partial\Omega\) is a non-trivial critical point of \(\mathcal{D}\), which moreover satisfies
\[\mathcal{S}(\partial\Omega)=d(y_{0},A(y_{0}))=L(C)=L(\gamma_{0})+L(A(\gamma_{0}) )=2d(y_{0},x_{0}). \tag{6.0.2}\]
The first equality in (6.0.2) is a consequence of the assumption \(\mathcal{S}(\partial\Omega)=diam(\partial\Omega)\). (Recall Proposition 2.8 and Proposition 5.4).
Continuing the argument, let \(y\) be any boundary point. Let \(\gamma:[0,b]\to\Omega\) be some minimising geodesic joining \(x_{0}\) and \(y\). Arguing as before, it follows from (6.0.1) that the two minimising arcs \(\gamma\) and \(A(\gamma)\) are part of an embedded geodesic passing through \(x_{0}\) with extremities \(y\) and \(A(y)\) in \(\partial\Omega\). By property \((\star\star)\), it must be the unique geodesic joining \(y\) and \(A(y)\). In particular, it is minimising. Hence,
\[2d(y,x_{0})=L(\gamma)+L(A(\gamma))=d(y,A(y))\leq diam(\partial\Omega). \tag{6.0.3}\]
Since by assumption \(\mathcal{S}(\partial\Omega)=diam(\partial\Omega)\), and by construction \(d(y_{0},x_{0})\leq d(y,x_{0})\) for all \(y\in\partial\Omega\), it follows from (6.0.2) and (6.0.3) that
\[d(y,x_{0})=\mathcal{S}(\partial\Omega)/2\quad\text{for every}\quad y\in \partial\Omega.\]
Now that we know that the distance between \(x_{0}\) and boundary points is constant, using the first variation formula (1.3.1) it is straightforward to check that a minimising geodesic joining \(x_{0}\) and any \(y\in\partial\Omega\) is orthogonal to \(\partial\Omega\) at \(y\), and is, therefore, unique.
Next, we argue that \(\Omega\) is a metric and geodesic ball in \((M^{2},g)\) of radius \(r:=\mathcal{S}(\partial\Omega)/2\) and center \(x_{0}\). Let \(\exp\) denote the exponential map of \((M^{2},g)\). Let \(N\) be the inward-pointing unit normal vector field along \(\partial\Omega\). We have proven above that, for every \(y\in\partial\Omega\), the curve \(t\in[0,r]\mapsto exp_{y}(tN(y))\in\Omega\) is the unique geodesic joining \(y\) to \(x_{0}\), which is therefore minimising. In particular, if \(y\neq y^{\prime}\) in \(\partial\Omega\), then these curves meet only at \(x_{0}\).
As a consequence of this, the map
\[e:(y,t)\in\partial\Omega\times[0,r]\mapsto exp_{y}(tN(y))\in\Omega\]
descends to an injective, continuous map on the disc \(\partial\Omega\times[0,r]/\sim\). (Here, \((y,t)\sim(y^{\prime},t^{\prime})\Leftrightarrow t=t^{\prime}=r\)). This continuous map between discs clearly restricts to a homemomorphism between the respective boundaries. By a standard topological argument, we conclude that \(e\) is surjective as well.
After proving all this, it is straightforward to check that \(\exp_{x_{0}}:\overline{B_{r}(0)}\subset T_{x_{0}}M\to\Omega\) is a bijection, that all points of \(\Omega\) lie within distance \(r\) from \(x_{0}\), and that all points of \(\partial\Omega\) lie at distance \(r\) from \(x_{0}\). In other words, \(\Omega\) is, in fact, a geodesic ball of \((M^{2},g)\) centred at \(x_{0}\) of radius \(r=\mathcal{S}(\partial\Omega)/2\leq inj_{x_{0}}(M,g)\), as we wanted to prove.
## 7. Comparison between min-max quantities
Let \((M^{2},g)\) be a Riemannian disc with non-negative curvature and strictly convex boundary. Recall Section 1.10 for the relevant definitions of the min-max quantities \(\omega(M,g)\) and \(w_{*}\) associated to \((M^{2},g)\). The aim of this section
is to briefly describe some possible comparisons between these two quantities and \(\mathcal{S}(\partial M)\).
We begin by explaining the example depicted in Figure 1 in Section 1.10 in details. Let \(\rho:[0,+\infty)\to[0,+\infty)\) be a smooth function such that \(\rho(0)=0\), \(\rho^{\prime}_{+}(0)=+\infty\), \(\rho^{\prime}>0\) and \(\rho^{\prime\prime}\leq 0\). Assume that \(\rho(x)=mx+n\), for every \(x\geq x_{0}\), where \(m,n>0\). Let \(0<x_{0}<x_{1}\). Consider the Riemannian disc \((M^{2},g)\) obtained by the rotation of the graph of the function \(y=\rho(x)\), \(x\in[0,x_{1}]\), around the \(x\)-axis. (See Figure 1 in Section 1.10).
The Gaussian curvature of \((M^{2}g)\) is non-negative, and \(\partial M\) has strictly positive geodesic curvature. Thus, we can regard it as totally convex disc inside some complete Riemannian surface. (_Cf._ Example 2 in Section 8).
As it is always the case, \(\omega(M,g)<L(\partial M)\). (This inequality has been proven in [9], Section 6). On the other hand, all free boundary geodesics necessarily pass through the tip of \(M\). Hence, if we now choose \(m\) small and \(x_{1}\) large enough, we guarantee that any free boundary geodesic is longer than \(\partial M\). The curve \(\beta\) in Figure 1 represents one free boundary geodesic, whose length coincides with \(w_{*}\).
The main theorem of [9] implies that \(\omega(M,g)\) is realised by the length of a geodesic loop \(\alpha\) based at a boundary point, also depicted in Figure 1. By the rotational symmetry, \(S(\partial M)=diam(\partial M)\). (_Cf._ Example 3 in Section 8). Hence, the endpoints of the two extremal minimising geodesics \(\gamma_{-}\) and \(\gamma_{+}\) in Figure 1 determine a pair of points in \(\partial M\) which is a critical point of \(\mathcal{D}\) realising \(S(\partial M)\). Notice that the velocity vectors of \(\gamma_{\pm}\) point inside the open region between \(\partial M\) and \(\alpha\), because they cannot be part of the loop \(\alpha\), or be part of the strictly convex boundary curve, or cross \(\alpha\) by virtue of their minimisation property. Thus, \(\partial M\) and \(\alpha\) act as barriers while minimimising the length of curves with extremities equal to the extremities of \(\gamma_{\pm}\).
Therefore the inequalities (1.10.1) hold in this case, that is,
\[S(\partial M)<\frac{L(\partial M)}{2}<\omega(M,g)<L(\partial M)<w_{*}.\]
We finish this section with the proof of a general comparison, of independent interest.
**Theorem 7.1**.: _Let \((M^{2},g)\) be a Riemannian disc with non-negative Gaussian curvature and strictly convex boundary. Then_
\[S(\partial M)\leq\omega(M,g)\leq w_{*}.\]
_Moreover, if \(S(\partial M)=\omega(M,g)\), then \(S(\partial M)=w_{*}\) and the three numbers are equal to the length of a free boundary minimising geodesic._
Proof.: Both inequalities can be obtained as consequences of the existence of the sweepouts constructed in Sections 3 and 4 of [9] using the curve shortening process of Birkhoff adapted to the free boundary setting. Indeed, we know that \(w_{*}=L(\beta)\) for some free boundary geodesic \(\beta\), by [32]. Proposition 4.4 implies that \(M\) can be swept out by a path of curves \(\{c_{t}\}\) such that
\(L(c_{t})\leq L(\beta)\). These curves have extremities in \(\partial M\) and form a sweepout of \(M\) that is admissible for \(\omega(M,g)\). Therefore, \(\omega(M,g)\leq w_{*}\).
If \(\omega(M,g)\) is realised as the length of a free boundary geodesic, the same argument produces a path \(\{c_{t}\}\) which induces a sweepout \(\{\partial c_{t}\}\) of \(\partial M\) in the sense of Definition 1.2. In this case, we conclude that \(S(\partial M)\leq\omega(M,g)\).
Alternatively, according to [9], \(\omega(M,g)=L(\alpha)\) is realised as the length of a geodesic loop similarly to the configuration of Figure 1. Let then \(R\) be the region bounded by \(\partial M\cup\alpha\). We can use Lemma 3.1 of [9] to sweep \(R\) out by curves \(\{c_{t}\}\) of lengths bounded by \(L(\alpha)\). This is achieved by an application of the adapted Birkhoff's shortening process starting with the loop \(\alpha\). As before, \(\{\partial c_{t}\}\) defines a sweepout of \(\partial M\) that is admissible to estimate \(\mathcal{S}(\partial M)\), which implies that \(S(\partial M)\leq\omega(M,g)\).
Notice that the curves \(c_{t}\) with length close to \(L(\alpha)\) are not minimising. Indeed, \(c_{0}=\alpha\) and this curve has the same initial and final point. In particular, \(\mathcal{D}(\partial\alpha)=0\). Similarly, \(\mathcal{D}(\partial c_{t})\) is close to zero for small values of \(t>0\). Therefore, if \(\omega(M,g)\) is realised as the length of a loop, then we actually have \(S(\partial M,g)<\omega(M,g)\).
To finish the proof, suppose now that \(S(\partial M)=\omega(M,g)\). By the previous analysis, these two numbers are realised as the length of a free boundary geodesic \(\beta\). It follows that \(w_{*}\) must coincide with the first two invariants. Recall that the sweepout \(\{c_{t}\}\) of \(M\) obtained before satisfies also \(L(c_{t})<L(c_{1/2})=L(\beta)\) for all \(t\neq 1/2\). If \(\beta\) is not minimising, then \(\mathcal{D}(\partial\beta)<L(\beta)\). This implies that the maximum of \(\mathcal{D}(\partial c_{t})\) is strictly smaller than \(L(\beta)=\omega(M,g)=S(\partial M)\), a contradiction. In conclusion, \(\beta\) is a free boundary minimising geodesic, and \(\partial\beta\) is a critical point of \(\mathcal{D}\).
## 8. Examples
**Example 1**.: The equator of the two-dimensional Euclidean sphere of radius one is a smoothly embedded curve of length \(2\pi\). Any critical point of \(\mathcal{D}\) consists of a pair of antipodal boundary points, which are joined by minimising geodesics of length \(\pi\) that form a set parametrised by a circle. Thinking of the equator as the boundary of the southern hemisphere, it is clear that any pair of antipodal points is joined by a unique free boundary minimising geodesic, while any two geodesics that lie in different components of the hemisphere determined by the free boundary geodesic are simultaneously stationary. The width of the equator is equal to half of its length. This closed embedded geodesic has Morse index one.
**Example 2**.: Let \((M^{2},g)\) be a Riemannian surface with compact boundary that is strictly convex in the sense that the geodesic curvature vector of \(\partial M\) points strictly inside \(M\). Then \((M^{2},g)\) can be isometrically embedded into a larger complete Riemannian manifold in such way that the complement of \(M\) is foliated by circles with geodesic curvature vector pointing towards \(M\). (Proof: move \(M\) inside by the gradient flow of the distance to the boundary, pull back the metric, and blow up the collar region into ends that
have the property above, by a suitable conformal factor). By a curvature comparison argument for curves that lie on one side of the other and intersect tangentially, the latter condition implies that \(M\) contains all geodesics of this larger manifold with extremities in \(M\). In other words, \((M^{2},g)\) is a totally convex Riemannian surface with smooth compact boundary in some complete Riemannian surface. In this case, the width of each boundary component of \(\partial M\) is a geometric invariant of \((M^{2},g)\).
**Example 3**.: Let \((M^{2},g)\) be a Riemannian disc with strictly convex boundary as in Example 2. Assume, in addition, that \((M^{2},g)\) is rotationally symmetric. We claim that all non-trivial critical points of \(\mathcal{D}\) are of the form \(\{x,A(x)\}\subset\partial M\), where \(A:M\to M\) represents the rotation by 180 degrees. In particular, \(d(x,A(x))=diam(\partial M)=S(\partial M)\) for all \(x\in\partial M\).
Indeed, let \(\{x,y\}\subset\partial M\) be an arbitrary critical point of \(\mathcal{D}\). If \(R_{\theta}:M\to M\) is a rotation by \(\theta\in[0,2\pi)\), then \(\{R_{\theta}(x),R_{\theta}(y)\}\subset\partial M\) is also a critical point of \(\mathcal{D}\), because \(R_{\theta}\) is an isometry. The uniqueness property (Proposition 2.8) then implies that if \(\theta_{0}\) is such that \(R_{\theta_{0}}(x)=y\), then \(R_{\theta_{0}}(y)=x\). Therefore \(R_{2\theta_{0}}(x)=x\), _i.e._\(y=R_{\theta_{0}}(x)=A(x)\) is the rotation of \(x\) by 180 degrees. The claim follows.
**Example 4**.: Given \(a>0\), let
\[\mathcal{E}_{a}=\left\{(x,y,z)\in\mathbb{R}^{3}\,|\,x^{2}+y^{2}+\frac{z^{2}}{a ^{2}}=1\right\}\]
be an ellipsoid of revolution around the \(z\) axis. Given \(\delta>0\), the disc \(M_{a,\delta}=\mathcal{E}_{a}\cap\{z\geq\delta\}\) has non-negative Gaussian curvature and strictly convex boundary. According to Example 3, the critical points of \(\mathcal{D}\) are of the form \(\{(x,y,\delta),(-x,-y,\delta)\}\subset\partial M\) and satisfy
\[d((x,y,\delta),(-x,-y,\delta))=\mathcal{S}(\partial M_{a,\delta})=diam( \partial M_{a,\delta}).\]
For all \(a>0\) and \(\delta>0\), the critical points bound an unstable free boundary geodesic that goes through the point \((0,0,a)\).
For \(0<a<1\), the critical points bound exactly one minimising geodesic, which is the free boundary geodesic with length \(<L(\partial M_{a,\delta})/2\).
For \(a>1\), if \(\delta>0\) is sufficiently small, then none of the free boundary geodesics are minimising, because they all go through \((0,0,a)\) and therefore have length strictly bigger than half the length of \(\partial M_{a,\delta}\). The boundary points \((x,y,\delta)\) and \((-x,-y,\delta)\) bound two minimising geodesics that are simultaneously stationary (in fact, one is the reflected image of each other, by the reflection that fixes the free boundary geodesic joining \((x,y,\delta)\) and \((-x,-y,\delta)\)).
**Example 5**.: Let \(\Omega\) be the plane region bounded by an ellipse determined by the equation \(x^{2}/a^{2}+y^{2}/b^{2}=1\), where \(a\geq b>0\) will be chosen. Let \(R_{1},R_{2}\subset\Omega\) be the regions represented in Figure 2. Assume that \(\Omega\setminus(R_{1}\cup R_{2})\) is the union of a small collar neighbourhood of \(\partial\Omega\) with a small neighbourhood of the segment representing the minor axis of the ellipse.
Let \(g\) be a Riemannian metric on \(\Omega\) that coincides with the Euclidean flat metric outside a small neighbourhood of \(R_{1}\cup R_{2}\) contained in \(\Omega\), and is very large compared to the Euclidean metric in \(R_{1}\cup R_{2}\) (in the sense that the size of tangent vectors at points in \(R_{1}\cup R_{2}\) measured according to \(g\) are much larger than their Euclidean sizes). In particular, the metric is Euclidean near \(\partial\Omega\). Clearly, \(\Omega\) is a totally convex disc with smooth boundary in \((\mathbb{R}^{2},g)\), where \(g\) extends as the Euclidean outside \(\Omega\). We claim that \((\Omega,g)\) contains a minimising geodesic \(\gamma\) that is free boundary stable, and that
\[L(\gamma)<2b<\mathcal{S}(\partial\Omega),\]
as soon as \(b\) and \(a\) are chosen appropriately.
In order to see this, let \(p_{i}\) and \(q_{i}\) be points in \(\partial\Omega\) as indicated in Figure 2. Let \(A\) be the closed arc of the ellipse from \(q_{3}\) to \(q_{2}\) in the counterclockwise direction, and \(B\) be the arc from \(p_{2}\) to \(p_{3}\). Let \(q\in A\) and \(p\in B\) be a pair of points realising the distance between \(A\) and \(B\) with respect to the metric induced by \(g\). If the arcs \(A\) and \(B\) are large enough compared to \(2b\), then \(q\in int(A)\), and \(p\in int(B)\). Indeed, a path joining \(p_{2}\) or \(p_{3}\) to the arc \(A\) either crosses the region \(R_{1}\cup R_{2}\), where the metric is large, or goes around this region. Making \(p_{2}\) and \(p_{3}\) far enough from the \(y\)-axis and from the vertices of the ellipse, we conclude that the distance with respect to \(g\) from \(p_{2}\) or \(p_{3}\) to \(A\) is larger than \(2b\). On the other hand, since the metric is flat near the minor axis we have \(d(A,B)<2b\). This shows that \(p\in int(B)\). Similarly we have \(q\in int(A)\). In particular, \(\{p,q\}\) is a local minimum of \(\mathcal{D}\), which implies that there exists a free boundary minimising geodesic \(\gamma\) from \(p\) to \(q\) which is free boundary stable (Proposition 2.9) and \(L(\gamma)<2b\).
Let \(C\) be the arc of the ellipse from \(q_{1}\) to \(p_{1}\) in the counterclockwise direction, and \(D\) be the arc from \(p_{4}\) to \(q_{4}\). Assume that \(p_{1}\) is far from \(p_{2}\), \(q_{1}\) is far from \(q_{2}\), and both \(p_{1}\) and \(q_{1}\) are far from the tip of the ellipse compared to the value \(2b\). Therefore, \(d(p_{1},q_{1})\) is also large compared to \(2b\). Moreover, if \(y\in\partial\Omega\) and \(d(p_{1},y)<3b\), then \(y\) belongs to the open arc of the ellipse from \(q_{1}\) to \(p_{2}\). The points \(p_{3}\), \(p_{4}\), \(q_{3}\), and \(q_{4}\) are centrally symmetric to \(p_{2}\), \(p_{1}\), \(q_{2}\), and \(q_{1}\), respectively. Therefore they have analogous properties.
Finally, we can argue that the width of \(\partial\Omega\) in \((\Omega,g)\) is strictly bigger than \(2b\). Fix a sweepout \(\{p_{t},q_{t}\}\) of \(\partial\Omega\). Arguing as in the proof of Proposition 5.2, we conclude that every sweepout contains a pair \(\{x,-x\}\) of centrally
Figure 2.
symmetric points of \(\partial\Omega\). If \(x\) belongs to one of the arcs \(C\) or \(D\), then \(-x\) belongs to the other, and \(d(x,-x)\geq 3b\). If \(x\) belongs to the arc from \(p_{1}\) to \(p_{4}\), then \(-x\) belongs to the arc from \(q_{4}\) to \(q_{1}\). Very the pair \(\{x,-x\}\) continuously by pairs of the sweepout until the first time one has a pair \(\{p_{t_{0}},q_{t_{0}}\}\) with a point in \(C\cup D\). This happens because otherwise \(p_{t}\) and \(q_{t}\) would be trapped in the disjoint arcs from \(q_{4}\) to \(q_{1}\), and from \(p_{1}\) to \(p_{4}\). Without loss of generality, suppose \(p_{t_{0}}=p_{1}\). Our choice implies that \(q_{t_{0}}\) belongs to the arc from \(q_{4}\) to \(q_{1}\). The property obtained in the preceding paragraph implies that \(d(p_{t_{0}},q_{t_{0}})\geq 3b\). Since the sweepout is arbitrary, we conclude that \(S(\partial\Omega,g)>2b\).
**Example 6**.: Consider again the discs \(M_{a,\delta}\) inside the ellipsoid \(\mathcal{E}_{a}\) from Example 4, where \(a>1\) and \(\delta>0\) is sufficiently small. Pick \(\varepsilon>0\) and denote by \(B_{\varepsilon}\) the open metric ball around \((0,0,a)\) of radius \(\varepsilon>0\). If \(\varepsilon>0\) is sufficiently small depending only on \(a\) and \(\delta\), no curve in \(M_{a,\delta}\) that joins points \((x,y,\delta)\) and \((-x,-y,\delta)\) and intersects \(B_{\varepsilon}\) is minimising. Thus, no modification of the metric with compact support in \(B_{\varepsilon}\) changes the value of \(\mathcal{D}\) on pairs of points in \(\partial M_{a,\delta}\). In fact, one may even attach handles and cross caps inside this ball, without changing the fact that the width and the diameter of \(\partial M_{a,\delta}\), as the boundary of these compact totally convex Riemannian surfaces with different topologies, are equal.
In particular, we may even arrange this modification of the metric supported on \(B_{\varepsilon}\) in such way that the map \(A:x\in\Omega_{a,\delta}\mapsto-x\in\Omega_{a,\delta}\) is still an isometry without fixed points in \(\partial\Omega_{a,\delta}\), while \(\Omega_{a,\delta}\) is not rotationally symmetric.
**Example 7**.: There are many other interesting examples of smoothly embedded discs in Riemannian manifolds whose boundaries satisfy one of the equivalent properties described in Theorem C (Theorem 5.5). For instance, let \(x_{0}\) be a point in a complete Riemannian surface \((M^{2},g)\), and assume \(r>0\) is smaller than the convexity radius of \((M^{2},g)\) at \(x_{0}\). Then the geodesic ball \(B_{r}(x_{0})\) is diffeomorphic to a disc and \(\partial B_{r}(x_{0})\) is a smoothly embedded, strictly convex circle, and moreover the minimising geodesics joining points of \(B_{r}(x_{0})\) lie in \(B_{r}(x_{0})\) and are unique. (This is stronger than \((\star\star)\)). Given a unit vector \(v\in T_{x_{0}}M\), the geodesic \(t\in[-r,r]\mapsto\exp_{x_{0}}(tv)\in M\) is the unique minimising geodesic joining the points \(\exp_{x_{0}}(\pm rv)\). It has length \(2r\). The obvious continuous map \(\exp_{x_{0}}(-rv)\in\partial B_{r}\mapsto exp_{x_{0}}(rv)\in\partial B_{r}\) is therefore a smooth map such that \(d(\exp_{x_{0}}(-rv),\exp_{x_{0}}(rv))=2r\). Since by the triangle inequality we have \(diam(\partial B_{r})=2r\), it follows from Proposition 5.2 that \(\mathcal{S}(\partial B_{r})=diam(\partial B_{r})\).
This class of examples also shows that the characterisation in Theorem F (Theorem 6.1) is sharp, because if a sufficiently small ball \(B\) has an isometric involution fixing the center of \(B\), then it has properties \((\star\star)\) and \(\mathcal{S}(\partial B)=diam(B)\). Notice that such ball is not necessarily rotationally symmetric.
**Example 8**.: Consider the ellipsoid \(\mathcal{E}_{a}\) of Example 4. When \(a>1\), the vertical geodesic \(\Gamma=\mathcal{E}_{a}\cap\{x=0\}\) satisfies \(\mathcal{S}(\Gamma)<diam(\Gamma)=L(\Gamma)/2\). This
can be seen from the estimates for the horizontal sweepouts determined by the intersection of \(\Gamma\) with horizontal planes, and from Lemma 5.6.
**Example 9**.: When \(a>1\), the limit \(\delta\to 0\) of the examples described in Example 6 are examples of fillings of the unit circle \(\{(x,y)\in\mathbb{R}^{2}\,|\,x^{2}+y^{2}=1\}\). This is a particular example of a much more general construction that yields many examples of smoothly embedded circles \(\Gamma\) with \(\mathcal{S}(\Gamma)=L(\Gamma)/2\) in Riemannian spheres. (_Cf._[16]).
Let \(g\) be a Riemannian metric on the real projective plane \(\mathbb{RP}^{2}\). Let \(\gamma\) be the simple closed geodesic that realises the least length among homotopically non-trivial loops of \((\mathbb{RP}^{2},g)\). The metric \(g\) lifts to a Riemannian metric on the sphere \(S^{2}\), and \(\gamma\) lifts to a simple closed geodesic \(\Gamma\) in \((S^{2},g)\). We claim that \(\mathcal{S}(\Gamma)=L(\Gamma)/2\). In fact, pick any point \(x\in\Gamma\), and consider the lift of \(\gamma\) to \((S^{2},g)\) based at \(x\). Since \(\gamma\) is homotopically non-trivial, the lift is an open arc of length \(L(\gamma)=L(\Gamma)/2\) that ends at a point \(\phi(x)\). Any other curve \(c\) joining \(x\) and \(\phi(x)\) in \((S^{2},g)\) projects down to a homotopically non-trivial loop, whose length is therefore at least \(L(\gamma)=L(\Gamma)/2\). Thus, \(d(x,\phi(x))=L(\Gamma)/2\). In particular, \(diam(\Gamma)=L(\Gamma)/2\). Since the map \(x\in\Gamma\mapsto\phi(x)\in\Gamma\) is continuous (it is just the restriction to \(\Gamma\) of the deck transformation of \(S^{2}\) that produces \(\mathbb{RP}^{2}\) as a quotient), the claim now follows by Theorem C (Theorem 5.5).
**Example 10**.: Consider the cylinder \(S^{1}\times\mathbb{R}\) endowed with the flat product metric. Since it is a complete surface foliated by geodesics \(\Gamma_{t}=S^{1}\times\{t\}\), \(t\in\mathbb{R}\), it is straightforward to check that the distance between points in \(\Gamma_{t}\) is realised by the shortest arc of \(\Gamma_{t}\) determined by these points. By Theorem C (Theorem 5.5), the curves \(\Gamma_{t}\) satisfies \(S(\Gamma_{t})=L(\Gamma_{t})/2\). Notice that, as closed geodesics, \(\Gamma_{t}\) have Morse index zero.
An example of smoothly embedded circle with the same properties inside a compact surface can be obtained by capping the cylinder sufficiently far from \(\Gamma_{0}\) on both sides, so that geodesics with extremities in \(\Gamma_{0}\) that leave the cylindrical region have length bigger than \(L(\Gamma_{0})/2\).
**Example 11**.: For every \(a\neq 1\), the ellipse
\[\Omega_{a}=\Big{\{}(x,y)\in\mathbb{R}^{2}\,|\,x^{2}+\frac{y^{2}}{a^{2}}=1 \Big{\}}\]
bounds a disc with strictly convex boundary that contains only two non-trivial critical points of \(\mathcal{D}\), namely, \(\{(-1,0),(1,0)\}\) and \(\{(0,-a),(0,a)\}\).
**Example 12**.: Going beyond space forms, different notions of convex subsets of constant width in Riemannian manifolds have been proposed. For instance, generalizations of some classical results for convex bodies in the Euclidean space are obtained in [7], under much more restrictive convexity conditions than the ones ever assumed in this paper.
The works of Robertson [27] and Bolton [4] proposed a different generalisation, which does not involve any convexity assumption and has a more topological flavour. They call a hypersurface \(N\) of a complete Riemannian
manifold a _transnormal hypersuface_ when every geodesic that intersects \(N\) orthogonally at one point intersects \(N\) orthogonally at every other point of intersection. These hypersurfaces enjoy interesting topological and geometric properties (see [27], [4], [25], [26], [29] and [30] for a non-exhaustive list of works on this subject).
If a transnormal circle \(\Gamma\) in a Riemannian surface is the boundary of a compact totally convex region \((\Omega,g)\), then for every point \(x\) there exists a unique \(\phi(x)\neq x\) in \(\Gamma\) such that \(\{x,\phi(x)\}\) bounds a unique free boundary geodesic. Indeed, every such \(\Omega\) contains a free boundary geodesic \(\gamma\). Let \(\{p,q\}=\partial\gamma\). Since \(\gamma\) crosses \(\partial\Omega\) transversally at \(p\) and \(q\), if \(\tilde{p}\in\partial\Omega\) is sufficiently close to \(p\), then the geodesic \(\tilde{\gamma}\) starting at \(\tilde{p}\) normal to \(\partial\Omega\) will cross the boundary curve a second time at a point \(\tilde{q}\) near \(q\). The transnormality of \(\partial\Omega\) implies that the portion of \(\tilde{\gamma}\) connecting \(\tilde{p}\) and \(\tilde{q}\) is a free boundary geodesic. The compactness of the set of proper free boundary geodesics finishes the argument.
Moreover, by the first variation formula (1.3.1), all the free boundary geodesics connecting \(x\) and \(\phi(x)\) have the same length. Also, \(\phi(\phi(x))=x\). The map \(x\in\Gamma\mapsto\phi(x)\in\Gamma\) is therefore a monotone homeomorphism (Lemma 5.1).
It may happen that these free boundary geodesics are not minimising geodesics, so it is not clear that \(\{x,\phi(x)\}\) are critical points of \(\mathcal{D}\). However, if moreover \((\Omega,g)\) has property \((\star\star)\), then it is clear that these free boundary geodesics are minimising, so that \(\{x,\phi(x)\}\), \(x\in\Gamma\), are in fact critical points of \(\mathcal{D}\). By Theorem C (Theorem 5.5), it follows that \(\mathcal{S}(\partial\Omega)=diam(\partial\Omega)\) in this case.
The class of examples discussed in Example 6 contains examples of discs with strictly convex boundary with \(\mathcal{S}(\partial\Omega)=diam(\partial\Omega)\) that are not transnormal. In what follows, we describe three examples of curves that are not transnormal, and which nevertheless have the same width and diameter.
Let \(\mathcal{E}=\mathcal{E}(a,b,c)\) be an ellipsoid \(x^{2}/a^{2}+y^{2}/b^{2}+z^{2}/c^{2}=1\), with \(a>b>c\), all close to \(1\). This surface has exactly simple three simple closed geodesics, the curve \(\Gamma=\mathcal{E}\cap\{x=0\}\) being the shortest one. We claim that \(S(\Gamma)=diam(\Gamma)=L(\Gamma)/2\), but \(\Gamma\) is not transnormal. In fact, notice that \(\mathcal{E}\) is the Riemannian lift of a Riemannian \(\mathbb{RP}^{2}\), and that \(\Gamma\) is the lift of the shortest homotopically non-trivial curve in that \(\mathbb{RP}^{2}\). As in Example 9, the first part of the claim follows. On the other hand, since the two symmetric halves of the orthogonal geodesics \(\mathcal{E}\cap\{y=0\}\) and \(\mathcal{E}\cap\{z=0\}\) to \(\Gamma\) have different lengths, \(\Gamma\) is not transnormal.
Next we explain how to obtain an example from those of Example 6. Recall that these examples have a rotationally symmetric part, and a region where the metric is modified. In the present context, it is convenient to modify the metric in such a way that the tip \(B_{\varepsilon}\) is the graph of a positive function \(z=f(x,y)\) over a disc, with two points of maximum at \((\alpha,0)\) and \((-\alpha,0)\) and one saddle point at \((0,0)\). Assume that the disc \(\Omega\) is invariant under the reflections with respect to the \(xz\)-plane and the \(yz\)-plane. This
surface resembles a mountain with rotationally symmetric base, and two peaks. The symmetries imply that the curves \(\gamma_{1}=\Omega\cap\{y=0\}\) and \(\gamma_{2}=\Omega\cap\{x=0\}\) are free boundary geodesics. The curve \(\gamma_{1}\) goes up the two peaks, while \(\gamma_{2}\) has maximum height at saddle point. If \(L(\gamma_{1})\neq L(\gamma_{2})\), then \(\partial\Omega\) cannot be transnormal.
Finally, we present an example in higher codimension. Let \(\Gamma\) be a smoothly embedded circle in the Euclidean space that is contained in the unit sphere and is centrally symmetric. Since \(d(x,y)=||x-y||\leq 2\) and \(d(x,-x)=2\) for every \(x\), \(y\in\Gamma\), we have \(diam(\Gamma)=2\). Moreover, the map \(\phi:\Gamma\to\Gamma\) defined by \(\phi(x)=-x\) is continuous and satisfies \(d(x,\phi(x))=2\). Thus, Theorem C (Theorem 5.5) implies that \(S(\Gamma)=diam(\Gamma)\). However, not all such curves \(\Gamma\) are transnormal in \((\mathbb{R}^{3},can)\).
We exemplify one of such curves in Figure 3. On the left part of the figure, we see the curve \(\Gamma\) inside the unit sphere and two points \(A\), \(B\subset\Gamma\) such that the line segment from \(A\) to \(B\) is perpendicular at \(A\), but not at \(B\). On the right part of the figure, this property can be better visualised. The depicted great circles in the unit sphere which are normal to \(\Gamma\) at \(A\) and \(B\) define the planes that contain all segments orthogonal to \(A\) and \(B\), respectively. These circles cross \(\Gamma\) in \(6\) points, \(A\) and \(B\) included. However, the line segment connecting \(A\) and \(B\) is not normal at \(B\), since \(A\) is not contained in the normal circle of \(\Gamma\) at \(B\).
|
2303.16070 | Understanding the temperatures of H3+ and H2 in diffuse interstellar
sightlines | The triatomic hydrogen ion H3+ is one of the most important species for the
gas phase chemistry of the interstellar medium. Observations of H3+ are used to
constrain important physical and chemical parameters of interstellar
environments. However, the temperatures inferred from the two lowest rotational
states of H3+ in diffuse lines of sight - typically the only ones observable -
appear consistently lower than the temperatures derived from H2 observations in
the same sightlines. All previous attempts at modelling the temperatures of H3+
in the diffuse interstellar medium failed to reproduce the observational
results. Here we present new studies, comparing an independent master equation
for H3+ level populations to results from the Meudon PDR code for photon
dominated regions. We show that the populations of the lowest rotational states
of H3+ are strongly affected by the formation reaction and that H3+ ions
experience incomplete thermalisation before their destruction by free
electrons. Furthermore, we find that for quantitative analysis more than two
levels of H3+ have to be considered and that it is crucial to include radiative
transitions as well as collisions with H2. Our models of typical diffuse
interstellar sightlines show very good agreement with observational data, and
thus they may finally resolve the perceived temperature difference attributed
to these two fundamental species. | Jacques Le Bourlot, Evelyne Roueff, Franck Le Petit, Florian Kehrein, Annika Oetjens, Holger Kreckel | 2023-03-28T15:47:20Z | http://arxiv.org/abs/2303.16070v1 | Understanding the temperatures of H\({}_{3}^{+}\) and H\({}_{2}\) in diffuse interstellar sightlines
###### Abstract
The triatomic hydrogen ion H\({}_{3}^{+}\) is one of the most important species for the gas phase chemistry of the interstellar medium. Observations of H\({}_{3}^{+}\) are used to constrain important physical and chemical parameters of interstellar environments. However, the temperatures inferred from the two lowest rotational states of H\({}_{3}^{+}\) in diffuse lines of sight - typically the only ones observable - appear consistently lower than the temperatures derived from H\({}_{2}\) observations in the same sightlines. All previous attempts at modeling the temperatures of H\({}_{3}^{+}\) in the diffuse interstellar medium failed to reproduce the observational results.
Here we present new studies, comparing an independent master equation for H\({}_{3}^{+}\) level populations to results from the Meudon PDR code for photon dominated regions. We show that the populations of the lowest rotational states of H\({}_{3}^{+}\) are strongly affected by the formation reaction and that H\({}_{3}^{+}\) ions experience incomplete thermalization before their destruction by free electrons. Furthermore, we find that for quantitative analysis more than two levels of H\({}_{3}^{+}\) have to be considered and that it is crucial to include radiative transitions as well as collisions with H\({}_{2}\). Our models of typical diffuse interstellar sightlines show very good agreement with observational data, and thus they may finally resolve the perceived temperature difference attributed to these two fundamental species.
Astrochemistry; Interstellar Medium; Triatomic hydrogen; Molecular clouds +
Footnote †: journal: Nuclear Physics
,, and
## 1 Introduction
The triatomic hydrogen ion H\({}_{3}^{+}\) is one of the main drivers of interstellar chemistry in the gas phase [1]. It is formed very efficiently in the interstellar medium by collisions between hydrogen molecules and hydrogen molecular ions
\[\rm H_{2}+H_{2}^{+}\longrightarrow H_{3}^{+}+H\,. \tag{1}\]
Owing to the comparatively low proton affinity of H\({}_{2}\), triatomic hydrogen readily reacts with many of the neutral atomic and molecular species present in interstellar environments. By donating a proton in exothermic ion-neutral collisions of the type
\[\rm H_{3}^{+}+X\longrightarrow XH^{+}+H_{2}\,, \tag{2}\]
triatomic hydrogen often initiates gateway processes, which subsequently enable the formation of more complex molecules in interstellar space (here the X stands for any neutral atomic or molecular collision partner). In particular, reactions between H\({}_{3}^{+}\) ions and neutral O, and C atoms will lead to the formation of OH\({}^{+}\), and CH\({}^{+}\) respectively, and thus facilitate the introduction of the heavier atomic species into the chemical networks.
The relevance of H\({}_{3}^{+}\) for interstellar chemistry was recognized already in early quantitative models of interstellar clouds [2, 3]. After the breakthrough work of Oka [4], who identified the infrared spectrum of the H\({}_{3}^{+}\) fundamental vibrational band in the laboratory, H\({}_{3}^{+}\) was found in both dense [5] and diffuse lines of sight [6], confirming the role of ion-neutral chemistry in space.
In the meantime, triatomic hydrogen has been detected in various interstellar sightlines, including the Galactic center, as well as extra-galactic sources and planetary atmospheres (see [7] for a recent review of H\({}_{3}^{+}\) astronomy). Moreover, owing to its seemingly simple formation and destruction mechanisms, H\({}_{3}^{+}\) observations have been used to constrain important astrophysical parameters like, e.g., the cosmic ray ionization rate [8, 9, 10, 11] and the temperatures and densities of the molecular gas in the vicinity of the Galactic center [12]. Submillimeter observations of H\({}_{2}\)D\({}^{+}\), an isotopic variant of triatomic hydrogen, have been used to infer the minimum age of a star-forming molecular cloud [13].
While the hydrogen molecule H\({}_{2}\) is by far the most abundant molecule in space, the bulk of H\({}_{2}\) molecules in colder environments is difficult to observe with ground-based telescopes. Direct observation of H\({}_{2}\) in diffuse and translucent interstellar clouds are obtained either from satellite absorption observations of electronic transitions in the ultraviolet regime (see, e.g., [14, 15]) towards bright stellar sources or by infra-red quadrupolar electric emission of its rovibrational spectrum in bright and dense photodissociation regions (PDRs).
However, attempts to understand the observed column densities \(N_{0}\) and \(N_{1}\) of the two lowest rotational states of H\({}_{2}\) (with \(J=0\) and \(J=1\), respectively) and the column densities \(N_{(1,1)}\) and \(N_{(1,0)}\) of the two lowest states of H\({}_{3}^{+}\) (with \((J,G)=(1,1)\) and \((J,G)=(1,0)\), respectively) in the same lines of sight led to a surprise [16]. The temperature derived from the lowest states of H\({}_{3}^{+}\), \(T_{12}\left(\mathrm{H}_{3}^{+}\right)=32.9/\ln(2\,N_{(1,1)}/N_{(1,0)})\,\)K appeared systematically lower than the temperature of the lowest states of H\({}_{2}\), \(T_{01}=170.5/\ln(9\,N_{0}/N_{1})\,\)K, which is usually found to be in equilibrium with the gas kinetic temperature [15, 16]. While the initial studies used very simple model calculations for the H\({}_{3}^{+}\) abundances and thermalization processes, later models, employing large chemical networks, focused principally on the chemical evolution of the ortho/para forms of H\({}_{2}\) and H\({}_{3}^{+}\) abundances1 without considering detailed collisional excitation mechanisms nor introducing thermal balance considerations [17].
Footnote 1: In fact, only the lowest _para_ and _ortho_ levels are considered in these studies.
Here, we present a different approach and introduce first a master equation describing the evolution of rotational levels of H\({}_{3}^{+}\), based on updated rate coefficients for collisional and chemical processes at fixed density and temperature, which can be solved for steady state at definite physical conditions. This procedure is able to reproduce the observational trends and temperature differences on a quantitative level, and it allows for an analysis of the contributions of the various processes. We further introduce the same mechanisms in our Meudon PDR code [18], which solves both the chemical and thermal equilibrium of the cloud and add their contribution to the equilibrium state of H\({}_{2}\), where photodissociation, collisional excitation and chemical formation/destruction mechanisms are considered together.
The paper is organized as follows: Section 2 presents a brief overview of nuclear spin and the rotational levels of H\({}_{3}^{+}\) and H\({}_{2}\). In Section 3 we describe the most important processes driving the _ortho-para_ ratio of H\({}_{3}^{+}\). A master equation for H\({}_{3}^{+}\) state populations is presented in Section 4 together with the corresponding results on the excitation temperature of H\({}_{3}^{+}\). We compare our model results to astronomical observations in Section 5, where we also introduce the modifications
included in the PDR code to account for the _ortho-para_ character of H\({}_{3}^{+}\). The paper concludes with a brief discussion in Section 6.
## 2 Nuclear spin and rotational states of H\({}_{3}^{+}\) and H\({}_{2}\)
Both H\({}_{2}\) and H\({}_{3}^{+}\) exist in two different nuclear spin configurations. For the lowest rotational state of H\({}_{2}\), with \(J=0\), the proton spins are anti-parallel and add up to \(I=0\). This configuration is denoted as \(para\)-H\({}_{2}\) (or \(p\)-H\({}_{2}\)). The next highest level is \(J=1\), and, owing to the requirement that the total wave function has to change sign under the permutation of both protons, all H\({}_{2}\) states with odd rotational quantum numbers have parallel nuclear spin configurations with \(I=1\), which is denoted as \(ortho\)-H\({}_{2}\) (or \(o\)-H\({}_{2}\)). Likewise, all even rotational quantum numbers in H\({}_{2}\) can be attributed to \(p\)-H\({}_{2}\) with \(I=0\). Figure 1 shows the three lowest levels of H\({}_{2}\) (with \(J\leq 2\)) and their respective energies expressed as \(k_{b}T\) (in units of kelvin).
The excitation temperature of the two lowest states of H\({}_{2}\) is defined as
\[T_{01}=\frac{\Delta E_{01}/k_{b}}{\ln(g_{1}/g_{0}\,\cdot N_{0}/N_{1})}\,, \tag{3}\]
where \(\Delta E_{01}/k_{b}=170.476\,\)K stands for the energy difference between the states, \(g_{1}/g_{0}=9\) is the ratio of the multiplicities of the states with \(J=1\) and \(J=0\), respectively, and \(N_{1}\) and \(N_{0}\) denote their populations. An analogous equation can be given for the excitation temperature \(T_{02}\) between the states with \(J=2\) and \(J=0\), for which \(\Delta E_{02}/k_{b}=509.864\,\)K and \(g_{2}/g_{0}=5\), and both temperatures are usually found to be consistent with one another [15], giving a good proxy of the kinetic gas temperature. Typical values for \(T_{01}\) in these diffuse sightlines range from 50 to 70 K [15, 19, 20, 21].
The nuclear spin configurations of H\({}_{3}^{+}\) are also denoted by _para_ and _ortho_ for \(I=1/2\) and \(I=3/2\), respectively. As with H\({}_{2}\), the symmetry of the nuclear spin wave function imposes restrictions on the rotational quantum numbers. The relevant quantum number is usually denoted by \(G\), which is connected to the projection of the angular momentum (from both vibrational and rotational motion) onto the molecular symmetry axis (see the review by [22] for quantum numbers, symmetries and selection rules). Since we are only concerned with molecules in the vibrational ground state here, the quantum number \(G\) can be regarded as equivalent with the projection \(K\) of the rotational angular momentum (denoted by quantum number \(J\)) onto the normal axis of the molecular plane, implying \(G=K\). The symmetry of the total wave function requires \(G=3n\) (where \(n\) is an integer) for the \(I=3/2\) or _ortho_ levels of H\({}_{3}^{+}\), while all other levels (effectively fulfilling \(G=3n\pm 1\)) are of the _para_ configuration, with \(I=1/2\). The right-hand side of Figure 1 shows all H\({}_{3}^{+}\) rotational levels with \(J\leq 3\) and their respective energies. Because of the Pauli exclusion principle, pure rotational levels with even \(J\) and \(G=0\) do not exist. This rules out the nominal \((J,G)=(0,0)\) ground state and the \((J,G)=(2,0)\) state, which are both indicated in the graph by dotted lines for completeness. Since we are almost exclusively concerned with rotational states in the vibrational ground state of H\({}_{3}^{+}\), we will from now on refer to all H\({}_{3}^{+}\) states by giving their \((J,G)\) quantum numbers only, e.g., \((1,1)\) and \((1,0)\) for the lowest _para_ and _ortho_ states, respectively.
With the notable exception of the Galactic center [12, 23, 24], only the lowest \((1,1)\)_para_-state of H\({}_{3}^{+}\) and the lowest _ortho_-state \((1,0)\) have ever been detected in the interstellar medium. Consequently, the H\({}_{3}^{+}\) excitation temperature, \(T_{12}(\)H\({}_{3}^{+})\), derived from observations is usually calculated using the column density ratio of these two states
\[T_{12}(\mbox{H}_{3}^{+})=\frac{\Delta E/k_{b}}{\ln((g_{1,0}/g_{1,1})\,\cdot N_{ (1,1)}/N_{(1,0)})}\,, \tag{4}\]
where \(\Delta E/k_{b}=32.86\,\)K, and \(g_{1,0}=12\) and \(g_{1,1}=6\) denote the total degeneracies of the _ortho_- and _para_-state, respectively. Most observations yield values for \(T_{12}(\rm H_{3}^{+})\) between 20 and 40 K [16, 17], systematically lower than the \(\rm H_{2}\) excitation temperature \(T_{01}\). As _para_- and _ortho_-states of \(\rm H_{2}\) and \(\rm H_{3}^{+}\) can not be inter-converted by radiative transitions, we now describe the various collisional and reactive processes allowing _ortho_/_para_ exchange.
## 3 Processes controlling the para fraction of \(\rm H_{3}^{+}\)
### Formation of \(\rm H_{3}^{+}\)
\(\rm H_{3}^{+}\) formation via the \(\rm H_{2}^{+}\) + \(\rm H_{2}\) reaction (Eq. 1) has been studied for many years by various experimental techniques. A recent compilation of the results can be found in [25]. Particularly noteworthy is a recent study that employs excited Rydberg molecules in a merged beams approach to reach collision energies between \(5-60\,\)K [26]. The results are in very good agreement with the previous measurements [27] conducted at somewhat higher energies. A recommended fit to the overall cross section can be found in [25], yielding values at low temperature that are slightly higher than the corresponding values derived from the classical Langevin collision rate. We have converted this cross section to a thermal rate coefficient
\[k_{\rm form}=2.27\ 10^{-9}\times\left(\frac{T}{300}\right)^{-0.06}\ \ \ {\rm cm ^{3}\,s^{-1}}\,, \tag{5}\]
Figure 1: Level scheme of all rotational levels with \(J\leq 2\) for \(\rm H_{2}\) and with \(J\leq 3\) for \(\rm H_{3}^{+}\). The level energy (in cm\({}^{-1}\)) is given on the y-axis at the left-hand-side, while the corresponding values in K are given on the right-hand-side of the graph. _Para_ levels are in blue and _ortho_ levels in red. Black arrows show the possible radiative transitions. Stable and metastable levels are shown with a heavier line. The symmetry-forbidden levels \((0,0)\) and \((2,0)\) are shown by dotted lines for completeness. The origin of the energies is fixed at the lowest permitted level.
with \(T\), the gas kinetic temperature, in kelvin. This rate shows only a weak dependence on temperature. Its value is slightly larger than the constant rate of \(2.08\times 10^{9}\) cm\({}^{3}\) s\({}^{-1}\) that is reported in most astrochemistry databases (and based on a previous experimental study [28], which was conducted at room temperature).
To determine the nuclear spin of the H\({}_{3}^{+}\) ions created by reaction (1), we refer to the selection rules outlined by Oka [29]. While this scheme is based on pure angular momentum algebra, and thus does not take any energetic barriers or other restrictions of the reaction into account, we consider this to be a very good approximation for the highly exothermic barrier-less ion neutral reaction between H\({}_{2}^{+}\) and H\({}_{2}\). To quantify the outcome of the reaction in terms of nuclear spin, it is convenient to use the _para_-fractions of both molecular species derived from the densities \(n(p\)-H\({}_{2})\) and \(n(o\)-H\({}_{2})\) of the _para_- and _ortho_-species of H\({}_{2}\), and \(n(p\)-H\({}_{3}^{+})\) and \(n(o\)-H\({}_{3}^{+})\) of H\({}_{3}^{+}\), respectively. The _para_-fraction for both species is then denoted by
\[p_{2}=\frac{n(p\mbox{-}\mbox{H${}_{2}$})}{n(p\mbox{-}\mbox{H${}_{2}$})+n(o \mbox{-}\mbox{H${}_{2}$})} \tag{6}\]
and
\[p_{3}=\frac{n(p\mbox{-}\mbox{H${}_{3}^{+}$})}{n(p\mbox{-}\mbox{H${}_{3}^{+}$}) +n(o\mbox{-}\mbox{H${}_{3}^{+}$})}\,. \tag{7}\]
Following the argumentation by Crabtree et al. [16], we assume that the cosmic ray ionization of H\({}_{2}\) (constituting the main source of ionization that initiates the H\({}_{3}^{+}\) formation) does not affect the nuclear spin of the molecule, and therefore the _para_-fraction of H\({}_{2}^{+}\) has the same value \(p_{2}\).
With these assumptions the \(p\)-H\({}_{3}^{+}\) fraction at formation, \(p_{3}^{f}\), can be derived as a function of \(p_{2}\), using the nuclear spin branching fractions given in [29] (for details see Table 4 in [16]), resulting in a simple linear dependence of the form
\[p_{3}^{f}=\frac{1+2\,p_{2}}{3}\,. \tag{8}\]
The values of \(p_{2}\) and \(p_{3}^{f}\) are displayed as a function of temperature in Figure 2 for H\({}_{2}\) in thermal equilibrium.
We further assume that the rotational levels of \(p\)-H\({}_{3}^{+}\) and \(o\)-H\({}_{3}^{+}\) are each populated at formation according to their Boltzmann distribution at a temperature corresponding to 2/3 of the reaction exothermicity ([30]). Given the low energy of the relevant levels, this amounts in effect at using rates proportional to the statistical weight of the level.
### Thermalizing collisions of H\({}_{3}^{+}\)
Collisions with electrons, He, H and H\({}_{2}\) contribute to the energy exchange between the rovibrational levels of H\({}_{3}^{+}\), but only collisions with H\({}_{2}\) and H may change the nuclear spin of the H\({}_{3}^{+}\) ions. To the best of our knowledge, of the above collision partners, only for collisions with H\({}_{2}\) and electrons detailed information is available in the literature.
#### 3.2.1 Collisions between H\({}_{3}^{+}\) and H\({}_{2}\)
Before any detailed study of H\({}_{3}^{+}\) collision rates with H\({}_{2}\) was available, Oka and Epp [31] suggested to use the Langevin expressions to describe the excitation of H\({}_{3}^{+}\) detected towards the Galactic center. However, as a result of the five identical Fermion nuclei involved in these collisions, [29, 32, 33]
pointed out that one should consider the nuclear spin dependence of the total wavefunction. Three different possibilities have to be accounted for
\[\mathrm{H}_{3}^{+}+\tilde{\mathrm{H}}_{2} \longrightarrow \mathrm{H}_{3}^{+}+\tilde{\mathrm{H}}_{2}\] identity \[\sim\mathrm{inelastic\ collision}, \tag{9a}\] \[\longrightarrow \mathrm{H}_{2}+(\mathrm{H}\tilde{\mathrm{H}}_{2})^{+}\] proton \[-\mathrm{hop}\] (9b) \[\longrightarrow \mathrm{H}\tilde{\mathrm{H}}+(\tilde{\mathrm{H}}\mathrm{H}_{2})^{+}\] exchange \[\sim\mathrm{reactive\ collision}. \tag{9c}\]
Specific selection rules on the nuclear spins of the products can be derived, following [29, 34], and they have been used to model hydrogen plasma experiments [16]. The calculations of [32] compared favourably to ion trap measurements of the nuclear spin equilibrium of \(\mathrm{H}_{3}^{+}\) and \(\mathrm{H}_{2}\) at low temperature [35], although these studies did not allow for detailed comparisons of the absolute rate coefficients.
For our models, we have adopted the rate coefficients resulting from the most advanced theoretical treatment of the \(\mathrm{H}_{3}^{+}\) - \(\mathrm{H}_{2}\) reaction so far, which was presented by [36]. Their approach is similar to the method of [32], but refined by a dynamical bias that is introduced through a scrambling matrix, accounting for the relative probabilities of the identity/hop/exchange channels. The probabilities are calculated using quasi-classical trajectory calculations, based on a global \(\mathrm{H}_{5}^{+}\) potential energy surface [37]. We employ a set of rate coefficients that covers collisions of the lowest 24 rotational states of \(\mathrm{H}_{3}^{+}\) with _ortho_-\(\mathrm{H}_{2}\) and _para_-\(\mathrm{H}_{2}\) (in their lowest rotational states) and temperatures up to \(500\,\mathrm{K}\), which were kindly provided by O. Roncero. Those rate coefficients are also currently used in our PDR model calculations [18].
Figure 2: Fraction \(p_{2}\) of \(p\)-\(\mathrm{H}_{2}\) and \(p_{3}^{f}\) of nascent \(p\)-\(\mathrm{H}_{3}^{+}\) (Eq. 8). The typical range of diffuse cloud kinetic temperature is outlined in yellow.
#### 3.2.2 Collisions between \(\rm H_{3}^{+}\), \(\rm He\) and \(\rm H\)
He and \(\rm H\) are additional collision partners that should be taken into account when describing the excitation equilibrium of \(\rm H_{3}^{+}\), but, to the best of our knowledge, no detailed studies are presently available on these systems. Collisions of \(\rm H_{3}^{+}\) with \(\rm He\) can not modify the nuclear spin configuration of \(\rm H_{3}^{+}\), and we approximate the corresponding collision rates by taking the rates for \(p\)-\(\rm H_{2}\) and scaling them with the reduced mass factor (while vetoing channels that might change the nuclear spin of \(\rm H_{3}^{+}\)), as described previously [38].
Collisions of \(\rm H\) with \(\rm H_{3}^{+}\) may lead to proton exchange or inelastic collisions, similar to the channels discussed for the \(\rm H_{3}^{+}+H_{2}\) reaction. However, since we are not aware of more detailed information on this process, we will use the collisional rates with \(p\)-\(\rm H_{2}\) as a proxy for collisions with \(\rm H\). We checked that this choice has no major influence on our model calculations for the conditions studied here.
#### 3.2.3 Inelastic collisions between \(\rm H_{3}^{+}\) and electrons
Electronic collisions with \(\rm H_{3}^{+}\) preserve the _ortho-para_ character of \(\rm H_{3}^{+}\). They have been computed by [39] and are included in the present model. Until very recently no laboratory measurements of the change of rotational states in electron collisions were available for any molecular ion to benchmark the theoretical approach. But a recent measurement of low-energy electron collisions with \(\rm CH^{+}\) allowed for a comparison between experiment and theory for the lowest rotational states [40], which revealed very good agreement.
### Dissociative recombination of \(\rm H_{3}^{+}\)
In diffuse gas, the main destruction reaction of \(\rm H_{3}^{+}\) is the dissociative recombination (DR) with electrons.
\[\rm H_{3}^{+}+e^{-} \longrightarrow \rm H+H+H, \tag{10a}\] \[\longrightarrow \rm H_{2}+H. \tag{10b}\]
This is an important chemical reaction for interstellar chemistry, and as such, has attracted a lot of attention, as well as some controversy. More than 30 independent experimental studies of the DR rate coefficient of \(\rm H_{3}^{+}\) have been published, with outcomes that differ by orders of magnitude (there are a number of reviews on this topic, see, e.g, [41, 42, 43]). In summary, the present consensus is that the absolute rate of the \(\rm H_{3}^{+}\) DR rate coefficient is correctly derived from the storage ring merged beams measurements [43]. Theoretical studies identified the Jahn-Teller effect as a driver for the recombination process at low temperatures [44, 45, 46].
Both astrochemistry databases, UMIST2 and KIDA3, report rate coefficients with branching ratios of 2/3 for reaction (10a) and 1/3 for reaction (10b), based on storage ring experiments, and the total DR reaction rate coefficient is given as
Footnote 2: available at [http://www.udfa.net](http://www.udfa.net)
Footnote 3: available at [https://kida.astrochem-tools.org](https://kida.astrochem-tools.org)
\[\alpha_{\rm DR}^{tot}=6.70\times 10^{-8}\,\left(\frac{T}{300}\right)^{-0.52}\, \rm cm^{3}\,s^{-1}\,. \tag{11}\]
However, there are recent suggestions concerning a possible difference between the DR rate coefficients of the two nuclear spin modifications of \(\rm H_{3}^{+}\). These were first pointed out in the storage ring
experiments of [47, 48], but the values are dependent on the actual rotational level populations, which could not be determined precisely. Subsequent plasma experiments found an even stronger effect at low temperature [49], and updated theoretical studies [46] predict that at low temperature the difference between the rate coefficients for the two nuclear spin modifications may exceed an order of magnitude, with \(p\)-H\({}_{3}^{+}\) recombining much faster than \(o\)-H\({}_{3}^{+}\). Two different theoretical values are explicitly reported in [50], supporting plasma studies. We have derived an analytic expression from these results, which we can describe by the formulae
\[\alpha_{\rm DR}^{tot}(p\mbox{-H}_{3}^{+}) = 5.25\times 10^{-8}(T/300)^{-0.75}\,{\rm cm}^{3}\,{\rm s}^{-1}, \tag{12a}\] \[\alpha_{\rm DR}^{tot}(o\mbox{-H}_{3}^{+}) = 6\times 10^{-8}\,{\rm cm}^{3}\,{\rm s}^{-1}\qquad T\leq 250\,{\rm K}\] \[= \alpha_{\rm DR}(p\mbox{-H}_{3}^{+})\qquad\qquad T\geq 250\,{\rm K}.\]
Table 1 summarizes the values of the principal reaction rate coefficients introduced in the present study, where we have assumed the same branching ratios for reactions (10a) and (10b) as described above.
## 4 Master equation for H\({}_{3}^{+}\) state populations
In this section, we describe the master equation for the level populations of H\({}_{3}^{+}\), and we show that H\({}_{3}^{+}\) chemistry and specific molecular properties result in an excitation temperature that is lower than the kinetic temperature of the gas.
The demonstration starts from a full treatment of the differential state equations, including all possible processes, i.e., collisional and radiative transitions as well as chemical state-to-state formation and destruction reactions. Solving these equations, the next section (4.1) will show that we can indeed recover a value for the \(T_{12}(\mbox{H}_{3}^{+})\) excitation temperature that is systematically below the gas kinetic temperature, similar to observational results. The next sub-section explores how sensitive these equations are to the number of levels included in the computation. This allows to understand why at least 5 levels must be included for quantitative results. It also shows that the range \([25:50]\,{\rm K}\) is much less sensitive to the number of levels included. By restricting our analysis to this temperature range, we can derive a qualitative analytical approximation with only two levels (Section 4.3). The resulting expression shows that selective formation of \(p\)-H\({}_{3}^{+}\) by \(p\)-H\({}_{2}\) followed by incomplete thermalization is responsible for the deviation from thermal equilibrium.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Ref & Reaction & \multicolumn{2}{c}{\(A\)\({}^{*}\)} & \(\beta\) & Comment \\ & & & \multicolumn{2}{c}{(cm\({}^{3}\) s\({}^{-1}\))} & & \\ \hline
[25] & H\({}_{2}^{+}+\mbox{H}_{2}\) & \(\longrightarrow\) & H\({}_{3}^{+}+\mbox{H}\) & 2.27 \(\times\) 10\({}^{-9}\) & -0.06 & computed from x-section \\
[50] & \(p\mbox{-H}_{3}^{+}+e^{-}\) & \(\longrightarrow\) & H\({}+\mbox{H}+\mbox{H}\) & 3.5 \(\times\) 10\({}^{-8}\) & -0.75 & present fit \\
[50] & \(p\mbox{-H}_{3}^{+}+e^{-}\) & \(\longrightarrow\) & H\({}_{2}+\mbox{H}\) & 1.75 \(\times\) 10\({}^{-8}\) & -0.75 & present fit \\
[50] & \(o\mbox{-H}_{3}^{+}+e^{-}\) & \(\longrightarrow\) & H\({}+\mbox{H}+\mbox{H}\) & 4.0 \(\times\) 10\({}^{-8}\) & - & present fit, T \(\leq\) 250K \\
[50] & \(o\mbox{-H}_{3}^{+}+e^{-}\) & \(\longrightarrow\) & H\({}+\mbox{H}+\mbox{H}\) & 3.5 \(\times\) 10\({}^{-8}\) & -0.75 & present fit, T \(\geq\) 250K \\
[50] & \(o\mbox{-H}_{3}^{+}+e^{-}\) & \(\longrightarrow\) & H\({}_{2}+\mbox{H}\) & 2.0 \(\times\) 10\({}^{-8}\) & - & present fit, T \(\leq\) 250K \\
[50] & \(o\mbox{-H}_{3}^{+}+e^{-}\) & \(\longrightarrow\) & H\({}_{2}+\mbox{H}\) & 1.75 \(\times\) 10\({}^{-8}\) & -0.75 & present fit, T \(\geq\) 250K \\ \hline \end{tabular} \({}^{*}\) The reaction rate coefficient, \(k\), is expressed as \(A\times(T/300)^{\beta}\).
\end{table}
Table 1: Principal chemical reactions for H\({}_{3}^{+}\) formation and destruction.
### Differential equations
The variation over time of the density of a level \(i\) (\(i\in[1,N]\)) of \(\mathrm{H}_{3}^{+}\), \(n_{i}\), can be described by
\[\frac{dn_{i}}{dt} =k_{\mathrm{form},i}\,n\left(\mathrm{H}_{2}^{+}\right)\,n\left( \mathrm{H}_{2}\right)-\alpha_{\mathrm{DR},i}\,n_{i}\,n\left(e^{-}\right) \text{(formation and destruction)} \tag{13}\] \[+\sum_{j\neq i}k_{ji}^{p}\,n\left(\mathrm{H}_{2}\right)\,p_{2}\,n _{j}-\sum_{j\neq i}k_{ij}^{p}\,n\left(\mathrm{H}_{2}\right)\,p_{2}\,n_{i} \text{(collisions with $p$-H}_{2}\right)\] \[+\sum_{j\neq i}k_{ji}^{o}\,n\left(\mathrm{H}_{2}\right)\,\left(1- p_{2}\right)n_{j}-\sum_{j\neq i}k_{ij}^{o}\,n\left(\mathrm{H}_{2}\right)\, \left(1-p_{2}\right)n_{i} \text{(collisions with $o$-H}_{2}\right)\] \[+\sum_{j\neq i,X}k_{ji}^{X}\,n\left(\mathrm{X}\right)\,n_{j}- \sum_{j\neq i,X}k_{ij}^{X}\,n\left(\mathrm{X}\right)\,n_{i} \text{(collisions with other species $X$)}\] \[+\sum_{i<j}A_{ji}\,n_{j}-\sum_{i>j}A_{ij}\,n_{i} \text{(radiative transitions)}\]
with \(k_{\mathrm{form},i}\) and \(\alpha_{\mathrm{DR},i}\) denoting the state-dependent chemical formation and destruction rates, \(k_{ij}^{p,o}\) the collisional excitation/de-excitation rates with \(p\)-\(\mathrm{H}_{2}\) and \(o\)-\(\mathrm{H}_{2}\), and \(k_{ij}^{X}\) denote collision rates where \(X\) stands for H, He, \(e^{-}\). \(A_{ij}\) are the radiative emission transition probabilities.
We underline two important points:
* The state-dependent chemical formation and destruction rates are introduced in the master equation and have to be evaluated. It is important to note that the formation rate of a particular level can differ from its destruction rate (see Section 3.3).
* The main formation process of \(\mathrm{H}_{3}^{+}\) is a highly exothermic reaction, which preferentially populates high energy levels. Thus, chemical formation can be regarded as an excitation mechanism.
Introducing \(x_{i}\), the relative populations of \(\mathrm{H}_{3}^{+}\), so that \(n_{i}=x_{i}\,n\left(\mathrm{H}_{3}^{+}\right)\), the total destruction rate is \(\alpha_{\mathrm{DR}}=\sum_{i}x_{i}\,\alpha_{\mathrm{DR},i}\). The total formation rate is \(k_{\mathrm{form}}=\sum_{i}k_{\mathrm{form},i}\). Since the relative populations \(x_{i}\) are unknown until the differential equations are solved, it is not possible to compute the destruction rate beforehand, as long as the \(\alpha_{\mathrm{DR},i}\) differ for individual states, and thus \(n\left(\mathrm{H}_{3}^{+}\right)\) cannot be computed independently. At steady state,\(\frac{dn_{i}}{dt}=0\) and the system of equations to solve is
\[\left(\sum_{i>j}A_{ij}+n\left(\mathrm{H}_{2}\right)\,\sum_{j\neq i }\left(k_{ij}^{o}\left(1-p_{2}\right)+k_{ij}^{p}\,p_{2}\right)+\sum_{j\neq i,X }n\left(\mathrm{X}\right)\,k_{ij}^{X}+\alpha_{\mathrm{DR},i}\,n\left(e^{-} \right)\right)\,x_{i}\\ -\sum_{i<j}A_{ji}\,x_{j}-n\left(\mathrm{H}_{2}\right)\,\sum_{j\neq i }\left(k_{ji}^{o}\left(1-p_{2}\right)+k_{ji}^{p}\,p_{2}\right)\,x_{j}-\sum_{j \neq i,X}n\left(\mathrm{X}\right)\,k_{ij}^{X}\,x_{j}-k_{\mathrm{form},i}\,R=0 \,, \tag{14}\]
with \(R=\frac{n\left(\mathrm{H}_{2}^{+}\right)n\left(\mathrm{H}_{2}\right)}{n\left( \mathrm{H}_{3}^{+}\right)}\). The value of \(R\) is initially unknown, as the total destruction rate of \(\mathrm{H}_{3}^{+}\) requires the knowledge of the relative level populations of the molecular ion. So, we explicitly add the conservation equation
\[\sum_{i}x_{i}=1\,. \tag{15}\]
We get a system of \(N+1\) equations with \(N+1\) unknowns, which is easily solved if the densities of \(\mathrm{H}_{2}\), \(\mathrm{H}_{2}^{+}\), \(e^{-}\) and the temperature are known or assumed. All other quantities are rate coefficients
or parameters that are derived from experiment or theory.
#### Derivation of \(\mathrm{H}_{2}^{+}\) density and electronic fraction
It is possible to reduce the set of equations further by estimating \(n\left(\mathrm{H}_{2}^{+}\right)\) and \(n\left(e^{-}\right)\). The main formation and destruction reactions of \(\mathrm{H}_{2}^{+}\) are:
\[\mathrm{H}_{2}+\mathrm{CRP} \longrightarrow\mathrm{H}_{2}^{+}+e^{-} \zeta\ (\mathrm{s}^{-1}), \tag{16a}\] \[\mathrm{H}_{2}^{+}+\mathrm{H}_{2} \longrightarrow\mathrm{H}_{3}^{+}+\mathrm{H} k_{\mathrm{form}}\ (\mathrm{cm}^{3}\,\mathrm{s}^{-1}),\] (16b) \[\mathrm{H}_{2}^{+}+e^{-} \longrightarrow\mathrm{H}+\mathrm{H} \alpha^{\mathrm{H}_{2}^{+}}\ (\mathrm{cm}^{3}\,\mathrm{s}^{-1}), \tag{16c}\]
where CRP represents cosmic ray particles and \(\zeta\) the corresponding ionization rate of \(\mathrm{H}_{2}\). At steady state, this leads to (with \(\alpha^{\mathrm{H}_{2}^{+}}\) the dissociative recombination rate of \(\mathrm{H}_{2}^{+}\))
\[n\left(\mathrm{H}_{2}^{+}\right)=\frac{\zeta\,n\left(\mathrm{H}_{2}\right)}{ k_{\mathrm{form}}\,n\left(\mathrm{H}_{2}\right)+\alpha^{\mathrm{H}_{2}^{+}}\,n \left(e^{-}\right)}. \tag{17}\]
This removes one parameter from the system.
As for \(n\left(e^{-}\right)\), in diffuse gas it is often assumed to be equal to the density of \(\mathrm{C}^{+}\). However, for high cosmic ray ionization rates, as, e.g., in the Central Molecular Zone (CMZ) of our galaxy, Le Petit et al. [11] have shown that protons may contribute significantly to the ionization fraction. Here we compute the electronic density considering both \(\mathrm{H}^{+}\) and \(\mathrm{He}^{+}\) as described in App. B, which involves the relative abundances of all atoms with ionization potential below the Lyman cutoff (assumed to be fixed) and the UV radiation field 4. With these derivations of \(n\left(\mathrm{H}_{2}^{+}\right)\) and \(n\left(e^{-}\right)\), the system formed by Eq. (14) and Eq. (15) depends on five astrophysical parameters: \(n_{\mathrm{H}}\)5, \(T\), \(f_{m}\), \(G_{0}\), and \(\zeta\), where we introduced \(f_{m}\) the molecular fraction, \(f_{m}=2\,n(\mathrm{H}_{2})/n_{\mathrm{H}}\).
Footnote 4: The UV radiation field strength \(G_{0}\) is defined in [51]. It controls the interstellar grain charge, which impacts the recombination of \(\mathrm{H}^{+}\) and \(\mathrm{He}^{+}\).
Footnote 5: \(n_{\mathrm{H}}\), the proton density is expressed as \(=n(\mathrm{H})+2\,n(\mathrm{H}_{2})+n(\mathrm{H}^{+})\).
### Numerical solution of the coupled equations
We developed ExcitH3p, a FORTRAN code that solves the \(\mathrm{H}_{3}^{+}\) coupled system of equations (Eq. 14) for any number of levels, and then computes the two-level excitation temperature (Eq.4)
\[T_{12}\left(\mathrm{H}_{3}^{+}\right)=32.86\,\mathrm{K}/\ln\left(\frac{2\,x_{1 }}{x_{2}}\right). \tag{18}\]
Here \(x_{1}\) and \(x_{2}\) denote the relative populations of the \(\left(1,1\right)\) ground state and the first excited state \(\left(1,0\right)\), respectively.
In practice, the 24 energetically lowest rotational levels of \(\mathrm{H}_{3}^{+}\) are included in the model. Those are the levels for which radiative emission rates as well as collisional excitation/de-excitation rates are known or have been estimated. The highest level in this framework is the \(\left(7,6\right)\)_ortho_ level, located 2190 K above the ground state. We find that the solution to the set of equations (14) recovers very nicely the full results obtained with the Meudon PDR code for diffuse line of sight conditions. The advantage of the master equation approach is that it is much less computationally expensive, and allows for a rapid exploration of the parameter space and the various possible hypotheses concerning the less well-known physical processes.
Results derived by the FORTRAN routine are presented in Figure 3 for a range of typical values of the cosmic ionization rate \(\zeta\) and gas kinetic temperature \(T\). The total gas density is set to \(n_{\rm H}=10^{2}\,\)cm\({}^{-2}\) and the molecular fraction to \(f_{m}=0.8\). We find that in the entire parameter space, \(T_{12}({\rm H}_{3}^{+})\), the excitation temperature given by the two lowest \({\rm H}_{3}^{+}\) levels, is systematically lower than the kinetic temperature. The overall magnitude of the discrepancy between the two temperatures is in agreement with the observations [10; 17]. This outcome is a sole property of the microscopic excitation and de-excitation mechanisms of the individual quantum levels of \({\rm H}_{3}^{+}\), which are detailed above. Figure 3 also shows that a maximum of about 44 K is reached for \(T_{12}({\rm H}_{3}^{+})\) at kinetic temperatures around 100 K. The occurrence of such a maximum is remarkable. We have verified that the maximum is still there, although somewhat different, when the dissociative recombination rate coefficients of _para_ and _ortho_-\({\rm H}_{3}^{+}\) are the same: \(T_{12}({\rm H}_{3}^{+})^{max}=38\) K for T\({}_{gas}\) = 150 K under the same physical conditions. We come back to that point in Section 4.2.1 when discussing the influence of the number of \({\rm H}_{3}^{+}\) levels included in the model.
Figure 4 shows the influence of density, showing that \(T_{12}\left({\rm H}_{3}^{+}\right)\) is not sensitive to this parameter up to densities of more than \(10^{3}\,\)cm\({}^{-3}\) if the gas temperature is below \(\sim\) 80K.
#### 4.2.1 Influence of the number of \({\rm H}_{3}^{+}\) levels included in the model
Interesting conclusions can be drawn from the examination of the dependence of our results on the number \(N\) of \({\rm H}_{3}^{+}\) levels included in the model. Except for the Galactic center, only the two lowest rotational levels of \({\rm H}_{3}^{+}\) have been detected in the ISM so far. Consequently, the interpretation of astrophysical observations is usually restricted to these two levels. Figure 5 displays the computed \(T_{12}({\rm H}_{3}^{+})\) value as a function of the kinetic temperature \(T\), for different values of \(N\). We perform this comparison for the following parameters: \(\zeta=5\,10^{-16}\,\)s\({}^{-1}\), \(n_{\rm H}=100\,\)cm\({}^{-3}\), and \(f_{m}=0.8\). We find that the curve corresponding to \(N=2\) is far from the values obtained with 24 levels, except for a small range at very low temperatures. Nevertheless, even with \(N=2\), \(T_{12}({\rm H}_{3}^{+})\) is systematically below the kinetic temperature. \(N=5\) is the minimum needed for acceptable results, and convergence is reached for \(N=10\) for the physical conditions considered here. It is important to realise that \(N=5\) involves the metastable \((3,3)\) level, representing a possible sink for
Figure 4: The excitation temperature \(T_{12}({\rm H}_{3}^{+})\) as a function of the gas kinetic temperature \(T\) and density \(n_{\rm H}\) for \(\zeta=10^{-16}\,{\rm s}^{-1}\), a molecular fraction \(f_{m}=0.8\) and considering the first 24 levels of \({\rm H}_{3}^{+}\).
Figure 5: \(T_{12}({\rm H}_{3}^{+})\) as a function of the number of levels taken into account in the coupled set of equations (14). We assume values of \(n_{\rm H}=10^{2}\,{\rm cm}^{-3}\), \(\zeta=5\,10^{-16}\,{\rm s}^{-1}\), \(f_{m}=0.8\). The straight line in gray is the first diagonal showing complete thermalization.
population.
The decrease of \(T_{12}(\mathrm{H}_{3}^{+})\) with increasing gas kinetic temperature above \(100\,\mathrm{K}\) seems puzzling at first. However, it can be understood by considering the various possible decay paths from the two excited _para_ levels \((2,2)\) (corresponding to relative population \(x_{3}\) in our enumeration) and \((2,1)\) (corresponding to \(x_{4}\)), and the lowest excited _ortho_ level \((3,3)\) (corresponding to \(x_{5}\)). It is important to note that the \((3,3)\) level is metastable with an infinite radiative lifetime. Hence it can only be depopulated in collisional processes. Examination of the collision rates with \(\mathrm{H}_{2}\) shows that its branching ratios towards _ortho_ and _para_ are approximately equal in the \(30-300\,\mathrm{K}\) temperature range. Both _para_ levels \((2,2)\) and \((2,1)\), on the other hand, can decay both radiatively and collisionally. The critical densities of these levels, given by the ratio of the radiative emission rate and the total collisional de-excitation rate coefficient, are a few thousand cubic centimeters. Thus, at densities of a few hundred cubic centimeters, relevant for the astrophysical observations discussed here, radiative decay of _para_ levels to the ground state is much more likely than collisional de-excitation, while the ground _ortho_ level (1,0) is underpopulated compared to a thermal Boltzmann distribution, as population may be trapped in the \((3,3)\) level. This leads to a perceived overpopulation of the lowest _para_ state, compared to the lowest _ortho_ state. Chemical formation can populate efficiently all levels due to the large exothermicity of the reaction. So, increasing \(N\) leads to more open channels to populate levels that decay to \((3,3)\) and inhibits population of the lowest _ortho_ state \((1,0)\).
### Two-level approximation
In a restricted range of kinetic temperatures around \(T\simeq 25-50\,\mathrm{K}\), the two-level case is a fair approximation to the full system, as seen in Figure 5. It allows to considerably simplify the system and to derive an analytic expression for \(x_{2}/x_{1}\) that offers the opportunity to highlight the key microscopic mechanisms at work.
We restrict our study to molecular hydrogen collisions and introduce \(k_{12}=k_{12}^{o}\left(1-p_{2}\right)+k_{12}^{p}\,p_{2}\), the total collisional excitation rate due to \(\mathrm{H}_{2}\) from level 1 to level 2 (and the equivalent for \(k_{21}\)). Then, the coupled equations system (Eq. 14) reduces to:
\[\left(n\left(\mathrm{H}_{2}\right)\,k_{12}+\alpha_{\mathrm{DR},1} \,n\left(e^{-}\right)\right)\,x_{1}-n\left(\mathrm{H}_{2}\right)\,k_{21}\,x_{ 2} =k_{\mathrm{form},1}\,R \tag{19a}\] \[-n\left(\mathrm{H}_{2}\right)\,k_{12}\,x_{1}+\left(n\left( \mathrm{H}_{2}\right)\,k_{21}+\alpha_{\mathrm{DR},2}\,n\left(e^{-}\right) \right)\,x_{2} =k_{\mathrm{form},2}\,R \tag{19b}\]
Introducing \(k_{\mathrm{form}}=k_{\mathrm{form},1}+k_{\mathrm{form},2}\), we can compute the \(x_{2}/x_{1}\) ratio:
\[\frac{x_{2}}{x_{1}}=\frac{n\left(\mathrm{H}_{2}\right)\,k_{12}\,k_{\mathrm{ form}}+\alpha_{\mathrm{DR},1}\,n\left(e^{-}\right)\,k_{\mathrm{form},2}}{n \left(\mathrm{H}_{2}\right)\,k_{21}\,k_{\mathrm{form}}+\alpha_{\mathrm{DR},2} \,n\left(e^{-}\right)\,k_{\mathrm{form},1}}\,, \tag{20}\]
The factor \(R\) cancels out and the ratio \(x_{2}/x_{1}\) does not depend on \(n\left(\mathrm{H}_{3}^{+}\right)\). We introduce the electronic fraction \(x_{e}=n\left(\mathrm{e}^{-}\right)/n\) and the configuration-specific formation rates of \(\mathrm{H}_{3}^{+}\) with \(k_{\mathrm{form},1}=p_{3}^{f}\,k_{\mathrm{form}}\) and \(k_{\mathrm{form},2}=\left(1-p_{3}^{f}\right)k_{\mathrm{form}}\). Furthermore, we apply detailed balance to the \(\mathrm{H}_{3}^{+}\)-\(\mathrm{H}_{2}\) collisional rates, yielding \(k_{12}=\frac{g_{2}}{g_{1}}\,\exp\left(-\frac{E_{21}}{T}\right)\,k_{21}\). Finally, we get the following expression
\[\frac{x_{2}}{x_{1}}=\frac{g_{2}}{g_{1}}\,\exp\left(-\frac{E_{21}}{T}\right)\, \left[\frac{1+\left(1-p_{2}\right)\,\frac{4}{3}\,\frac{\alpha_{\mathrm{DR},1} }{k_{21}}\,\frac{x_{e}}{f_{m}}\,\frac{g_{1}}{g_{2}}\,\exp\left(\frac{E_{21}}{T} \right)}{1+\left(1+2\,p_{2}\right)\,\frac{2}{3}\,\frac{\alpha_{\mathrm{DR},2} }{k_{21}}\,\frac{x_{e}}{f_{m}}}\right]=\frac{g_{2}}{g_{1}}\,\exp\left(-\frac{E _{21}}{T_{12}\left(\mathrm{H}_{3}^{+}\right)}\right)\,. \tag{21}\]
Apart from the kinetic temperature \(T\), the \(x_{2}/x_{1}\) ratio depends on the collisional and dissociative recombination rate coefficients, the molecular fraction \(f_{m}\) of \(\mathrm{H}_{2}\), and the electronic fraction \(x_{e}\). It is
independent of the cosmic ray ionization rate \(\zeta\) and of the _total_ formation reaction rate coefficient of H\({}_{3}^{+}\), but - crucially - not of the branching ratios, which effectively enter the equation through the H\({}_{2}\)_para_ fraction \(p_{2}\).
We can interpret the term in square brackets as a correction factor to the Boltzmann value for the kinetic temperature. At the low temperatures considered here (\(25-50\,\)K) the value of \(p_{2}\) ranges from 0.6 to 1 (see Figure 2), and numerical analysis reveals that the \((1-p_{2})\) and \((1+2\,p_{2})\) pre-factors in the nominator and the denominator are sufficient to keep the overall correction factor smaller than unity in the entire temperature range, resulting in a lowering of the \(\frac{x_{2}}{x_{1}}\) ratio compared to the thermal value (in agreement with astronomical observations). The extent of the deviation from thermal equilibrium depends critically on the ratios \(\frac{\alpha_{\text{DR},1}}{k_{21}}\) and \(\frac{\alpha_{\text{DR},2}}{k_{21}}\) of the electron recombination rates to the collisional rate coefficients. If we, for the sake of argument, extrapolate the formula using an artificially large value for \(k_{21}\) - corresponding to very efficient collisional thermalization - the entire correction factor tends toward unity, and we will recover the ratios given by the gas kinetic temperature. The same is obviously true for very small DR rate coefficients \(\alpha_{\text{DR},1}\) and \(\alpha_{\text{DR},2}\).
The picture that emerges is that the excitation temperature of H\({}_{3}^{+}\) appears lower than the nominal gas temperature because of incomplete thermalization. The formation process strongly favors the formation of \(p\)-H\({}_{3}^{+}\) at low temperatures, as the _para_-fraction \(p_{2}\) of H\({}_{2}\) is large. The electron recombination process then removes H\({}_{3}^{+}\) before it can reach thermal equilibrium in collisions with H\({}_{2}\). While we stress that these conclusions based on the two-level approximation are only valid in a limited temperature range, and that for quantitative results more levels need to be considered, we can reproduce the general trend very well using the master equation approach described above. For artificially enlarged H\({}_{3}^{+}-\text{H}_{2}\) collisional rate coefficients (or sufficiently reduced electron recombination rates) the calculations reach complete thermal equilibrium between the excitation temperature \(T_{12}\left(\text{H}_{3}^{+}\right)\) and the kinetic temperature \(T\).
In essence, the nuclear spin restrictions of the H\({}_{3}^{+}\) formation reaction produce an over-proportional amount of H\({}_{3}^{+}\) in the _para_ configuration, and thermalization in collisions with H\({}_{2}\) is too slow to reach equilibrium before the ions are destroyed by free electrons. This is to be contrasted to the situation for H\({}_{2}\), where the destruction process is much slower compared to thermalizing collisions (see Appendix A for details).
## 5 Comparison to observations
### Master equation approach
To validate our approach, we compare the results of the master equation approach (Eq. 14) with observations. Table 2 presents the H\({}_{3}^{+}\) excitation temperature reported in the literature for a few local diffuse clouds as well as the corresponding H\({}_{2}\) excitation temperatures. Data for H\({}_{2}\) come from Copernicus [19] and FUSE [20; 21] satellite observations. The proton densities reported in this table are those published in the papers reporting the H\({}_{2}\) data. Determination of diffuse cloud density with observations of H and H\({}_{2}\) is not straightforward, because a fraction of the hydrogen atoms observed on the line of sight may not be related to the diffuse clouds to which H\({}_{2}\) belongs. So, these densities are to be seen as order of magnitude estimates, and should be considered only as indicative.
Using the ExcitH3p program that solves Eq. (14), we compute \(T_{12}\left(\text{H}_{3}^{+}\right)\) for all the lines of sight. We assume a cosmic ray ionization rate of \(\zeta=10^{-16}\,\text{s}^{-1}\) and a molecular fraction of \(f_{m}=0.8\). For the gas density, the values in Tab. 2 are used, and for the gas temperature we use the values derived from H\({}_{2}\) observations \(T_{01}^{obs}\left(\text{H}_{2}\right)\).
_Model results and sensitivity to the DR rate coefficients_
Figure 6 shows the results for three different hypothesis for \(\alpha_{DR}(\text{$o$-H}_{3}^{+})\), and the value provided by equation (12a) for \(\alpha_{DR}(\text{$p$-H}_{3}^{+})\). We see a quasi-linear variation of \(T_{12}\left(\text{H}_{3}^{+}\right)\) with \(T_{01}\), which is well reproduced by the models. Using \(\alpha_{DR}\) from equation. (12b) (red circles) leads to temperatures which are too high, while using the same rate for \(\text{$o$-H}_{3}^{+}\) and \(\text{$p$-H}_{3}^{+}\) (pink circles) leads to temperatures which are too low. An empirical adjustment using \(\alpha_{\text{DR}}(\text{$p$-H}_{3}^{+})\) from Eqs. (12a) and \(\alpha_{\text{DR}}(\text{$o$-H}_{3}^{+})=\alpha_{\text{DR}}(\text{$p$-H}_{3}^{+ })/1.5\) (green circles) gives a very satisfying result, given that no attempt has been made to optimize other parameters.
There is a single noticeable exception: X Per. However, \(T_{12}\left(\text{H}_{3}^{+}\right)\) of this line of sight suffers from a particularly large error bar as listed in Table 2. A high excitation temperature of \(\text{H}_{3}^{+}\) can only be reached for a kinetic temperatures close to 100 K with our models, as can be seen in Figure 3.
We note that typically 10 to 12 % of \(\text{H}_{3}^{+}\) ions are in excited stable or metastable levels above the respective lowest _ortho_ or _para_ levels, such as \((3,3)\), \((4,4)\), \((5,5)\), \((6,6)\) and \((7,7)\).
### Full PDR model
For comparison and validation, we compare the results of Section 5.1 to those obtained with our full PDR code for the case of \(\zeta\) Per. We do not seek a complete model of this line of sight, and do not try to optimize the free parameters to reproduce other observations than \(\text{H}_{2}\) and \(\text{H}_{3}^{+}\) excitation.
We use our previous study dedicated to \(\zeta\) Per [52] as a starting point, but we also account for the many updates made since then ([11, 18, 53]). For the present study, we have introduced in the Meudon PDR code the _ortho/para_ dependence of the formation reaction of \(\text{H}_{3}^{+}\) via \(\text{H}_{2}\) + \(\text{H}_{2}^{+}\) collisions and the nuclear spin dependence of the dissociative recombination rate coefficient, in addition to the other excitation mechanisms of \(\text{H}_{3}^{+}\). The collisional excitation of \(\text{H}_{2}\) by \(\text{H}^{+}\), that has been revisited by [54] for highly rovibrationally excited levels, has also been updated. Observations suggest a total visual extinction \(A_{\text{V}}=0.9\,\text{mag}\) (\(R_{\text{V}}=2.8\), \(E_{B-V}=0.32\), \(N_{\text{H}}/E_{B-V}=5.2\,10^{21}\,\text{cm}^{-2}\)). In our models the cloud is illuminated from both sides by the standard ISRF (\(G_{0}=1\)).
We consider two different scenarios
**Model A**: Constant density and constant temperature,
**Model B**: Constant density, with computation of the thermal balance.
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline & & \(n_{\text{H}}\) & \(T_{01}^{obs}\left(\text{H}_{2}\right)\) & \(T_{12}^{obs}\left(\text{H}_{3}^{+}\right)\) & \(N\left(\text{H}_{3}^{+}\right)\) & \(N\left(\text{H}_{2}\right)\) & References \\ & \(\left(\text{cm}^{-3}\right)\) & (K) & (K) & \(\left(10^{13}\,\text{cm}^{-2}\right)\) & \(\left(10^{20}\,\text{cm}^{-2}\right)\) & \\ HD154368 & 240 & \(51\pm 8\) & \(20\pm 4\) & 9.37 & 14.4 & [10, 16, 20] \\ HD73882 & 520 & \(51\pm 6\) & \(23\pm 3\) & 9.02 & 12.9 & [10, 16, 20] \\ HD27778 & 62 Tau & 280 & \(55\pm 7\) & \(29\pm 4\) & 6.49 & 6.23 & [17, 20] \\ HD24398 & \(\zeta\) Per & 215 & \(57\pm 6\) & \(28\pm 4\) & 6.26 & 4.75 & [10, 16, 19] \\ HD24534 & X Per & 325 & \(57\pm 4\) & \(46^{+21}_{-13}\) & 7.34 & 8.38 & [10, 16, 20] \\ HD41117 & \(\chi^{2}\) Ori & 200 & \(60\pm 7\) & \(29\pm 13\) & 5.29 & 4.90 & [17, 21] \\ HD110432 & 140 & \(68\pm 5\) & \(30\pm 2\) & 5.22 & 4.37 & [10, 16] \\ HD210839 & \(\lambda\) Cep & 115 & \(72\pm 6\) & \(34\pm 2\) & 7.58 & 6.88 & [10, 20] \\ HD43384 & 9 Gem & _120\({}^{*}\)_ & \(74\pm 15\) & \(38\pm 11\) & 4.07 & 7.36 & [17, 21] \\ \hline \multicolumn{7}{l}{\({}^{*}\) value unavailable. Educated guess only.} \\ \end{tabular}
\end{table}
Table 2: Selection of sightlines with both \(\text{H}_{2}\) and \(\text{H}_{3}^{+}\) observations.
Both models A and B are fairly standard. To account for the presence of purely atomic gas along the line of sight, we use an \(A_{\rm V}=0.7\) for both models A and B, allowing to account for the molecular hydrogen abundance.
The results are summarised in Table 3. The examples shown here have been selected from an evaluation of the following \(\chi^{2}\), where \(\sigma\) are the observational uncertainties
\[\chi^{2}=\frac{1}{4}\,\left(\frac{(N_{Obs}({\rm H}_{2})-N_{Mod}( {\rm H}_{2}))^{2}}{\sigma_{N({\rm H}_{2})}^{2}}+\frac{(N_{Obs}({\rm H}_{3}^{+} )-N_{Mod}({\rm H}_{3}^{+}))^{2}}{\sigma_{N({\rm H}_{3}^{+})}^{2}}+\right.\\ \left.\frac{(T_{01,Obs}({\rm H}_{2})-T_{01,Mod}({\rm H}_{2}))^{2 }}{\sigma_{T_{01}({\rm H}_{2})}^{2}}+\frac{(T_{12,Obs}({\rm H}_{3}^{+})-T_{12,Mod}({\rm H}_{3}^{+}))^{2}}{\sigma_{T_{12}({\rm H}_{3}^{+})}^{2}}\right). \tag{22}\]
The proposed ionization rates are rather high, but in line with the other recent evaluations based on H\({}_{3}^{+}\) and OH\({}^{+}\)abundances [9, 10, 55]. We note that our previous estimate of the cosmic ionization rate towards \(\zeta\) Per was somewhat lower, as we included in that study additional constraints provided by OH and HD column densities. The constraints imposed by the excitation temperature of H\({}_{2}\) and H\({}_{3}^{+}\) are not sensitive to the cosmic ionization rate of H\({}_{2}\), as shown in Fig. 3. We then recover the predictions based on molecular ion observations.
Using the recombination rate from Eq. (12b), the excitation temperature of H\({}_{3}^{+}\) is slightly overestimated, as found in the previous Section. Applying the same empirical correction to the recombination rate leads to a lower excitation temperature, without any impact on H\({}_{2}\). The total amount of H\({}_{3}^{+}\) is lower due to the overall larger recombination rate. The resulting \(\chi^{2}\) varies accordingly. This illustrates the very high sensitivity of H\({}_{3}^{+}\) to the exact value of the electron recombination rate.
Figure 6: Computed H\({}_{3}^{+}\) excitation temperature \(T_{12}({\rm H}_{3}^{+})\) as a function of the observed H\({}_{2}\) excitation temperature for the 9 lines of sight given in Tab. 2. Sightlines with the same \(T_{01}\) have been shifted by 0.1 K for clarity. Three different hypothesis are used for the dissociative recombination rate of \(\rm o\)-H\({}_{3}^{+}\) with electrons (see text).
The situation now is much better than it used to be, but smaller uncertainties are still needed for quantitative analysis. Note that we do not claim to estimate these rates from observational data.
Overall, this comparison validates the excitation model presented in Section 4 and shows that the temperatures of H\({}_{3}^{+}\) and H\({}_{2}\) in the diffuse ISM can, in fact, be predicted considering a a strongly reduced set of reactions.
## 6 Discussion and conclusion
In this paper, we review all physical processes relevant to formation, excitation and destruction of H\({}_{3}^{+}\) in the diffuse interstellar medium. We provide references to the best data available to date.
We show that a \(0D\) statistical model of H\({}_{3}^{+}\) level populations, including formation/destruction terms and updated collisional excitation/deexcitation processes, allows to explain the low excitation temperature observed in diffuse clouds that has puzzled observers and modelers alike for twenty years. In particular, we show that it is mandatory to include state-to-state chemical formation and destruction processes to explain the departure from thermal equilibrium (Boltzmann ratio) observed for the two lowest levels of H\({}_{3}^{+}\). Specifically, the formation of \(p\)-H\({}_{3}^{+}\) by \(p\)-H\({}_{2}\) at temperatures below 70 K is efficient. Considering the individual levels of H\({}_{3}^{+}\), we find that reactive collisions with H\({}_{2}\) are generally too slow - when compared to spontaneous radiative decay and the fast destruction by electron recombination - to bring the populations into equilibrium with the gas kinetic temperature. These results are confirmed by an updated version of the Meudon PDR code that includes the specific _ortho_/_para_-dependence of the formation/destruction reactions of H\({}_{3}^{+}\) in addition to the radiative/collisional excitation balance of that molecular ion.
While the formation process may be primarily responsible for the increased population of \(p\)-H\({}_{3}^{+}\) levels at low temperature, it is important to note that the consideration of different classes of processes - radiative transitions as well as chemical reactions with H\({}_{2}\) - is required to achieve quantitative results. We find that all attempts to simplify our master equation further and remove more processes from our models lead to significant changes in the H\({}_{3}^{+}\) excitation temperature and impair the agreement with the observational data.
Moreover, we show that the inclusion of rotationally excited levels, besides the respective _ortho_ and _para_ ground states that are usually considered, has substantial implications for the population of the first two levels and thus for \(T_{12}\left(\mathrm{H}_{3}^{+}\right)\), and we find that at least 10 levels should be included in the coupled equations to get a converged result. Our models suggest that typically more than 10 % of H\({}_{3}^{+}\) ions are in metastable excited states, which may be observable in absorption towards bright stars or quasi-stellar objects.
Finally, we stress that some key processes are still either badly determined or completely un
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Obs. & \(N(\mathrm{H}_{2})\) & \(T_{01}\) & \(N(\mathrm{H}_{3}^{+})\) & \(T_{12}(\mathrm{H}_{3}^{+})\) & \(\chi^{2}\) \\ \hline \(\zeta\) Per & 4.75(20) & 57\(\pm\) 6 & 6.26(13) & 28 \(\pm\) 4 & \\
[10] & \(\pm 0.95(20)\) & & \(\pm 0.52(13)\) & & \\ \hline Models & \multicolumn{3}{c}{\(n_{\mathrm{H}}=150\,\mathrm{cm}^{-3}\), \(\zeta=8\,10^{-16}\,\mathrm{s}^{-1}\)} \\ \hline A & 5.00(20) & 56.9 & 5.62(13) & 31.0 & 0.54 \\ B & 5.00(20) & 62.3 & 6.30(13) & 33.7 & 0.73 \\ \hline \multicolumn{5}{c}{ Same parameters, but \(\alpha_{DR}^{o}=\alpha_{DR}^{p}/1.5\)} \\ \hline A & 5.00(20) & 56.9 & 4.58(13) & 23.0 & 3.0 \\ B & 5.00(20) & 62.3 & 5.18(13) & 26.0 & 1.4 \\ \hline \end{tabular}
\end{table}
Table 3: PDR model results. Column densities are in cm\({}^{-2}\) and excitation temperatures in K. Numbers in parenthesis are power of 10.
known. In particular, precise quantitative computation of H\({}_{3}^{+}\) excitation will not be possible as long as we still lack accurate state specific recombination rates with electrons at temperatures between 20 and 200 K. Numerical manipulations of the rate coefficients shown in Section 5 reveal the sensitivity of \(T\left(\mathrm{H}_{3}^{+}\right)\) to the ratio \(\alpha_{DR}^{\alpha}/\alpha_{DR}^{p}\). However, it is well-known how dangerous it can be to try to infer reaction rates from observational results, and we do not claim that the rates used here are the final word. Another class of processes that are lacking accurate description are collision rates of H\({}_{3}^{+}\) with He and H. In particular, exchange of hydrogen atoms during collisions with H may impact the _ortho_ to _para_ ratio of H\({}_{3}^{+}\). Here we used estimated values for the rate coefficients scaled from reaction rates with \(p\)-H\({}_{2}\) in order to include these processes in the models. While our results seem not to depend strongly on the exact choice of the estimated rate coefficients, more accurate values for these reactions are clearly desirable.
Despite the remaining limitations, our models for the first time are able to account for the observed H\({}_{3}^{+}\) excitation temperature in diffuse cloud sightlines. This marks a major step in our understanding of interstellar hydrogen chemistry, providing a framework of state-selective chemistry for two of the most important and fundamental molecular gas phase species.
## Appendix A Two-level approximation for H\({}_{2}\)
As first recognized by ([56, 57]) from Copernicus observations, \(T_{01}\) of H\({}_{2}\) is an excellent proxy for the kinetic temperature \(T\) in diffuse clouds, where collisions with protons allow the two rotational levels to reach thermal equilibrium, in absence of any radiative transition. This is not true anymore in dense cloud conditions, where the _ortho_-to-_para_ ratio is expected to be very far from thermal equilibrium [58], and where collisions with H\({}_{3}^{+}\) modify the excitation balance. We discuss rapidly the conditions of validity of this feature through a simple 2-level approximation. Transitions between \(J=0\) and \(J=1\) occur only through reactive collisions with H\({}^{+}\) with rates \(k_{01}\) and \(k_{10}\) (reactive collisions with other species (H, H\({}_{3}^{+}\)) are negligible here). Besides collisions, these two levels are populated by direct formation on grains with rates \(k_{f0}\) and \(k_{f1}\) or depopulated by photodissociation with rates \(d_{0}\) and \(d_{1}\). Other chemical reactions have only a minor impact. The resulting balance equations are
\[\left(k_{01}\,n\left(\mathrm{H}^{+}\right)+d_{0}\right)x_{0}-k_{10}\,n\left( \mathrm{H}^{+}\right)x_{1}=k_{f0}\,n\left(\mathrm{H}\right)\,\frac{n_{\mathrm{ H}}}{n\left(\mathrm{H}_{2}\right)},\]
\[\left(k_{10}\,n\left(\mathrm{H}^{+}\right)+d_{1}\right)x_{1}-k_{01}\,n\left( \mathrm{H}^{+}\right)x_{0}=k_{f1}\,n\left(\mathrm{H}\right)\,\frac{n_{\mathrm{ H}}}{n\left(\mathrm{H}_{2}\right)}.\]
This system can be solved for \(x_{0}\) and \(x_{1}\) and leads to
\[\frac{x_{1}}{x_{0}}=\frac{k_{01}\,n\left(\mathrm{H}^{+}\right)\,k_{f0}+\left( k_{01}\,n\left(\mathrm{H}^{+}\right)+d_{0}\right)\,k_{f1}}{\left(k_{10}\,n\left( \mathrm{H}^{+}\right)+d_{1}\right)\,k_{f0}+k_{10}\,n\left(\mathrm{H}^{+} \right)\,k_{f1}}.\]
In the temperature range from 50 to 100K appropriate do diffuse and translucent cloud conditions, the Boltzmann factor \(\frac{g_{1}}{g_{0}}\,\exp\left(-\frac{E_{10}}{T}\right)=\,9\,\exp\left(-\frac{ 170.5}{T}\right)\) varies from 0.5 to 1.6. So, \(k_{01}\) and \(k_{10}\) remain close to one another. Furthermore, the formation rates \(k_{f0}\) and \(k_{f1}\) are close to one another and simplify. The ratio can be arranged using detailed balance as:
\[\frac{x_{1}}{x_{0}}=\frac{g_{1}}{g_{0}}\,\exp\left(-\frac{E_{10}}{T}\right)\, \frac{1+\frac{d_{0}}{2\,k_{01}\,n\left(\mathrm{H}^{+}\right)}}{1+\frac{d_{1}} {2\,k_{10}\,n\left(\mathrm{H}^{+}\right)}}\]
In regions of low radiation field where most diffuse clouds are found the H/H\({}_{2}\) transition is very close to the edge of the cloud, as shown in [59]. Hence, in most of the region that builds H\({}_{2}\) column density the dissociation rates \(d_{0}\) and \(d_{1}\) are about 4 orders of magnitude lower than the products \(k_{10}\,n\left(\mathrm{H}^{+}\right)\) and \(k_{01}\,n\left(\mathrm{H}^{+}\right)\). Thus, the correction factor coming from the chemistry is very close to 1 and the ratio \(\frac{x_{1}}{x_{0}}\) gives a very good measure of the kinetic temperature.
## Appendix B Computation of the electronic fraction
Ionization balance can be solved analytically for diffuse cloud conditions. Due to ultraviolet photons, all metals with an ionization threshold below 13.6 eV are ionized, providing a minimal electronic abundance of \(\delta_{M}\,n_{\mathrm{H}}\), where \(\delta_{M}\) is the fraction of relevant metals (mostly C and S, with traces of Si and other heavier species). In the following, we take \(\delta_{M}=1.55\,10^{-4}\).
Additional electrons come from H\({}^{+}\) and He\({}^{+}\) resulting from the balance between ionization via cosmic rays and recombination with electrons and grains.We follow here for the most part the presentation of [60], Section 13.6, extended to include He. The balance equations are
\[\zeta_{\rm H}\,n\left({\rm H}\right)=\alpha_{rr}\left({\rm H}^{+}\right)\,n \left({\rm H}^{+}\right)\,n\left(e^{-}\right)+\alpha_{gr}\left({\rm H}^{+} \right)\,n_{\rm H}\,n\left({\rm H}^{+}\right),\]
\[\zeta_{\rm He}\,n\left({\rm He}\right)=\alpha_{rr}\left({\rm He}^{+}\right)\,n \left({\rm He}^{+}\right)\,n\left(e^{-}\right)+\alpha_{gr}\left({\rm He}^{+} \right)\,n_{\rm H}\,n\left({\rm He}^{+}\right).\]
Where \(\alpha_{rr}\) is the radiative recombination rate, and \(\alpha_{gr}\) the rate of recombination on grains. \(\zeta_{\rm H}\) and \(\zeta_{\rm He}\) are the cosmic ray ionization rates of H and He, respectively. With respect to H\({}_{2}\), we use \(\zeta_{\rm H}=0.77\,\zeta\), including secondary ionization, and \(\zeta_{\rm He}=0.5\,\zeta\). The total abundance of electrons is
\[n\left(e^{-}\right)=\delta_{\rm M}\,n_{\rm H}+n\left({\rm H}^{+}\right)+n \left({\rm He}^{+}\right).\]
For all abundances, we write \(x\left({\rm X}\right)=n\left({\rm X}\right)\,n_{\rm H}\). We take \(x\left({\rm He}\right)=\delta_{\rm He}\), with \(\delta_{\rm He}=0.1\) and compute H abundance from the molecular fraction \(f_{m}\)
\[n\left({\rm H}\right)=\left(1-f_{m}-x\left({\rm H}^{+}\right)\right)\,n_{\rm H}.\]
The resulting system of two equations can be written in a more compact form by using \(X\) for hydrogen and \(Y\) for helium
\[\alpha_{rr}^{X}\,n_{\rm H}\,X^{2}+\alpha_{rr}^{X}\,n_{\rm H}\,X\,Y+\left( \zeta_{\rm H}+\alpha_{rr}^{X}\,\delta_{\rm M}\,n_{\rm H}+\alpha_{gr}^{X}\,n_{ \rm H}\right)\,X=\zeta_{\rm H}\,\left(1-f_{m}\right),\]
\[\alpha_{rr}^{Y}\,n_{\rm H}\,X\,Y+\alpha_{rr}^{Y}\,n_{\rm H}\,Y^{2}+\left( \alpha_{rr}^{Y}\,\delta_{\rm M}+\alpha_{gr}^{Y}\right)\,n_{\rm H}\,Y=\zeta_{ \rm He}\,\delta_{\rm He}.\]
This system is easily solved using a Newton-Raphson scheme once the rates are known.
Radiative electronic recombination rates are taken from [61], the relevant coefficients are given in Table 1
\[\alpha_{rr}=A\times\left[\sqrt{\frac{T}{T_{0}}}\left(1+\sqrt{\frac{T}{T_{0}}} \right)^{1-BB}\times\left(1+\sqrt{\frac{T}{T_{1}}}\right)^{1+BB}\right]\]
with
\[BB=B+C\,\exp\left(-\frac{T_{2}}{T}\right).\]
Electronic recombination on grains, comes from [62]
\[\alpha_{gr}\left(X^{+},\frac{G_{0}}{n\left(e^{-}\right)},T\right)=\frac{10^{- 14}\,C_{0}}{1+C_{1}\,\psi^{C_{2}}\,\left(1+C_{3}\,T^{C_{4}}\,\psi^{-C_{5}-C_{6 }\,\ln T}\right)}\]
with
\[\psi=\frac{G_{0}\,\sqrt{T}}{n\,(e^{-})}.\]
Here, \(G_{0}\) is the Inter Stellar Radiation Field (ISRF) intensity in units of Draine's ISRF. Coefficients \(C_{0}\) to \(C_{6}\) are given in Table (B2).
## Appendix C Fortran code
The code solving for H\({}_{3}^{+}\) populations from Eq. 14 is available at Meudon ISM Services Platform. Compilation requires a modern Fortran 90 compiler (gfortran will do) and access to the LAPACK library. The later is usually provided with all standard compilers. Otherwise, it is self contained.
The code takes very few input parameters: (\(n_{\rm H}\), \(T\), \(f_{m}\), \(I_{m}\), \(\zeta\)) and the number of levels used, from the command line or redirection of a small input file. It uses the latest data available, as described in this paper. Comments in the source file, coupled to this paper, should be enough for easy use and adaptation.
## Disclosure statement
No conflict of interest.
## Funding
ER, JLB & FLP were supported in part by the Programme National "Physique et Chimie du Milieu Interstellaire" (PCMI) of CNRS/INSU with INC/INP, co-funded by CEA and CNES. FK, AO & HK acknowledge financial support by the Max Planck Society. |
2310.16148 | Yin Yang Convolutional Nets: Image Manifold Extraction by the Analysis
of Opposites | Computer vision in general presented several advances such as training
optimizations, new architectures (pure attention, efficient block, vision
language models, generative models, among others). This have improved
performance in several tasks such as classification, and others. However, the
majority of these models focus on modifications that are taking distance from
realistic neuroscientific approaches related to the brain. In this work, we
adopt a more bio-inspired approach and present the Yin Yang Convolutional
Network, an architecture that extracts visual manifold, its blocks are intended
to separate analysis of colors and forms at its initial layers, simulating
occipital lobe's operations. Our results shows that our architecture provides
State-of-the-Art efficiency among low parameter architectures in the dataset
CIFAR-10. Our first model reached 93.32\% test accuracy, 0.8\% more than the
older SOTA in this category, while having 150k less parameters (726k in total).
Our second model uses 52k parameters, losing only 3.86\% test accuracy. We also
performed an analysis on ImageNet, where we reached 66.49\% validation accuracy
with 1.6M parameters. We make the code publicly available at:
https://github.com/NoSavedDATA/YinYang_CNN. | Augusto Seben da Rosa, Frederico Santos de Oliveira, Anderson da Silva Soares, Arnaldo Candido Junior | 2023-10-24T19:48:07Z | http://arxiv.org/abs/2310.16148v1 | # Yin Yang Convolutional Nets: Image Manifold Extraction by the Analysis of Composites
###### Abstract
Computer vision in general presented several advances such as training optimizations, new architectures (pure attention, efficient block, vision language models, generative models, among others). This have improved performance in several tasks such as classification, and others. However, the majority of these models focus on modifications that are taking distance from realistic neuroscientific approaches related to the brain. In this work, we adopt a more bio-inspired approach and present the Yin Yang Convolutional Network, an architecture that extracts visual manifold, its blocks are intended to separate analysis of colors and forms at its initial layers, simulating occipital lobe's operations. Our results shows that our architecture provides State-of-the-Art efficiency among low parameter architectures in the dataset CIFAR-10. Our first model reached 93.32% test accuracy, 0.8% more than the older SOTA in this category, while having 150k less parameters (726k in total). Our second model uses 52k parameters, losing only 3.86% test accuracy. We also performed an analysis on ImageNet, where we reached 66.49% validation accuracy with 1.6M parameters. We make the code publicly available at: [https://github.com/NoSavedDATA/YinYang_CNN](https://github.com/NoSavedDATA/YinYang_CNN).
## 1 Introduction
The field of neural computer vision have presented a great advancement, for instance, improved neural network architectures were proposed. Some of these architectures include: mobile model families as MobileNet [8], EfficientNet V2 [21] and RegNet [14]; two-branch neural networks for semantic segmentation, as BiSeNet [25], Deep Dual-Resolution networks [12] and SeaFormer [24]; pure attention mechanisms applied into image classification, such as ViT [4] and MaxViT [22]; image generative models like Stable Diffusion [16] and DALL-E-2
[15]; and lastly, vision-language models, as CLIP [13]. The majority of these architectures focuses on increasing model efficiency by improving the micro-architecture - there is, by making adjustments relative to the inside of a network block, as in mobile model families. Some of these architecture also leverage the potential of CNNs and Transformers into high level computer vision tasks, as image generation or vision-language models creation.
However, none of the famous modern architectures are trying to reach a more realistic neuroscientific approach in respect to brain occipital (visual cortex) mechanisms. In this regard, we base our research on two neuroscientific findings about function specialized occipital lobe areas, that is, edge detection that happens on V1 area [5] and color processing at V4 area [1]. Also, this form of specialization is also observed in the human eye, in which rod components are related to white and black processing [2], and cone components to color processing [3].
In this research, we present Yin Yang Convolutional Net (YYNet), a neural network model which makes adjustments relative to the global scale of the model. This is performed in the macro-architecture by aggregating blocks (or single purpose networks) that extracts visual manifolds by doing a separate analysis of colors and forms from its input. We find that this architecture provides State-of-the-Art (SOTA) efficiency among low parameter architectures applied to the low data and image resolution dataset, CIFAR-10, using less parameters and less training epochs than existing models.
## 2 Related Work
Vision neural network research have grown in quality at a very fast pace. In this section, we present related architectures and their given tasks. Related work can be categorized into five approaches: mobile networks; two branch networks; transformers; vision-language models; generative models.
Some of this approaches are usually divided in three parts: stem, stage and head. The stem is usually single convolution with stride 2, but may present optional extra convolutions. The stage contains the main architecture of the model, that can be divided into blocks with layers, in which each stage block shares hyperparameters (number of channels and extra hyperparameters) across all its layers. The head may or not contain convolutions and then it is followed by average pooling, an optional linear layer and the final classification linear layer followed by a softmax. This is the case for Mobile Nets and MaxViT.
### Mobile Networks
Mobile network based approaches focus on building efficient models with fewer parameters. The authors of MobileNet V2 [17], shown in Figure 1, propose residual inverted bottleneck blocks, a micro-architecture that became more efficient than the reference model ResNet [6]. The objective of this micro-architecture is to reduce data dimensionality in a way that the manifold spans the entire space
of lower dimensional sub-spaces. They do so by inserting inverted bottlenecks at each block, instead of keeping the channels number constant at repeating blocks as in ResNet. That is, they first expand the number of channels for a given factor (e.g. 4), then they apply convolutions in this higher dimension and finally go back into the original dimension, similar to feed-forward networks in the transformer architecture [23]. On their architecture, they first increase the channels number with 1x1 kernels, then use 3x3 depth-wise kernels on the higher dimension and, lastly, they reduce the dimension back to what it was before with a 1x1 kernel. The reason to use 1x1 kernels follwed by 3x3 is because the authors of Mobile Net V1 [9] found it more efficient than directly expanding using 3x3 kernels.
Further, on Mobile Net V3 [8], they improve Mobile Net V2 efficiency by applying a Squeeze and Excitation [10] mechanism after the 3x3 depth-wise convolution. We will refer to the block of Mobile Net V3 as the MBConv.
Besides, in EfficientNet V2 [21], the authors have changed the original architecture of MobileNet V3 into the Fused MBConv and interpolated this new block with the original MBConv, finding the best parameter configuration given their applied model search. They have also proposed slight modifications to the compound scaling method of EfficientNet [20], which consists in adjusting model depth, channel number and image size to find the best scale inside a family of models.
### Two-branch Neural Networks
Two-branch based models allow the signal to travel along two different paths, conventionally after a shared stem - except for BiSeNet. This is done to extract manifold efficiently. An example of different paths for signal propagation can be seem in BiSeNet [25] and Deep Dual-Resolution Networks [12], these archi
Figure 1: Mobile Net Micro-Architectures [8]
tectures are built for the purpose of semantic segmentation, in which one of the branches is shallow and wide, extracting local details, and the other is deep and narrow, capturing high-level semantics. For simplicity, we show only the Deep Dual-Resolution Network at Figure 2.
Another example of two branch network is SeaFormer [24], that increases the efficiency by using mobile transformers, from which the embedding of the stem backbone is processed and fused at multiple steps with mobile transformer blocks embeddings, as can be seen at Figure 3. Our work is inspired by these architectures.
### Vision Transformers
Vision Transformers apply transformer encoder blocks to explore the attention paradigm into vision tasks. On ViT [4], as shown in Figure 4, the authors do this by dividing the image into multiple embedding patches of equal window size, in which each patch embedding is cross-attended with attention to all other
Figure 3: SeaFormer [24].
Figure 2: Deep Dual-Resolution Network [12].
patches. They also reserve a patch embedding for classification (usually referred as "cls" token in language models), which does not come from the image, and so aggregates global information about other patches. The advantage of using a pure transformers architecture is that the multiple heads mechanism turns it possible to apply tensor parallelism [19].
Then, MaxViT [22] enhances ViT with more efficient attention mechanisms for the vision paradigm, the authors use Axial Attention for local details and Grid Attention for global interactions between pixels. They also append a MBConv at the start of each block.
## 3 Architecture
Yin-Yang Net uses a **micro-architecture** presented in Figure 5. In each repeating block, we start with a sub-block of ResNet and then use \(n\) sub-blocks of MBConv from Mobile Net V3. We found this configuration has better accuracy when training solely with MBConv given our hyperparameters, and it is more efficient during training than using only ResNet blocks. At the two branch layers, stride 2 is applied on the last or the first sub-block according to the micro-architecture type, Yin or Yang, respectively. The single branch layers use stride 2 on the ResNet block.
Our work is inspired by two branch architectures. However, our approach differs from classical two branch networks in 2 aspects. First, instead of building a shallow and a deep branch for details and semantics extraction, YYNet uses the same number of layers and channels at blocks on the same level, but stride 2 at different parts of these layers. Second, in our work, there is no common stem backbone to the branches, the stem is the focus of this paper, where different manifolds are analyzed.
The Yin branch has the purpose of form analysis. Yin blocks can use the first channel of the input or the mean of all channels. We found that both configurations perform well. For simplicity, when the first channel approach is
Figure 4: ViT [4].
in use, the first block receives the red channel in the network input layer. This way, as there is no other color to extract, it is obliged to the task of extracting the form manifold. Also, in order to focus on local/higher scale details, a strategy of later stride 2 is used, meaning that the last MBConv of each block applies striding.
On the other hand, the Yang branch analyzes colors. With that in mind, as colors in nearly pixel are generally the same, we use an early stride 2 on this branch to remove color redundancy and only care about different colors interactions. That is, the first MBConv on each block applies striding. This block resembles standard single branch architectures, as its input is the three RGB channels and early stride 2 is applied.
Then, at the **macro-architecture** level, as shown in Figure 6, we send the same input to both these micro-architectures. Their final embeddings have the same shape, as they have the same number of layers and channels, with the exception of the first sub-block of each micro-architecture. We then apply a Fusion Gate mechanism adapted for our embeddings.
We apply a Fusion Gate, similar to SeaFormer and Multimodal Chain-of-Thought [27]. For this, we use an embedding fusion mechanism at outputs from Yin and Yang branches, each branch having a different embedding meaning. This is done to unify both embeddings, or the manifold, as presented in (1).
\[SP_{X}=A_{Y}+I_{Y} \tag{1}\]
\(X\) represents the input of one network and \(Y\) represents an output, \(A\) represents the Yang blocks, \(I\) represents the Yin blocks, \(\odot\) represents a Hadamard product (elementwise product). We have tested several combinations of \(A\) and \(I\) (as described in Section 4). The operations presented in (1) are those with best performance. On CIFAR-10, this gated fusion yielded a slightly better accuracy than the concatenation - about less than 0.5%, yet halving the channels number compared to concatenation.
After this, we send the embedding into Single Path blocks, that consists of a
Figure 5: Micro-architecture
sub-block of ResNet with stride 2 on the first convolution and then \(n\) sub-blocks of MBConv with no stride 2 (except for CIFAR-10, as described in Section 4). We use GELU [7] as the activation function of any sub-block. As the head of our model, we use average pooling, flattening, a linear layer, GELU, dropout and the final classification linear layer followed by a softmax.
## 4 Experiments and Results
### Experiments
We tested multiple fusion approaches to combine Yin and Yang outputs. We selected the approach presented in Equation 1. However, Table 1 presents other approaches tested in this work. We performed 3 runs with batch size 512 on CIFAR-10 for each approach and report the mean and standard deviation.
\begin{table}
\begin{tabular}{l c c} \hline
**Formula** & **Mean** & **STD** \\ \hline A*(1-I) & 87.61 & 0.09 \\ A*I + A+I & 87.63 & 0.08 \\ A*(1-I) + A-I & 87.81 & 0.19 \\ A*I & 87.87 & 0.23 \\ A*(1-I) + A+I & 87.98 & 0.22 \\ A+I & 88.21 & 0.46 \\ \hline \end{tabular}
\end{table}
Table 1: Fusion Approach
Figure 6: Macro-architecture.
We then reduced the batch size into 64 for the final model on CIFAR-10. We do this because batch normalization seems to work better in batch sizes on the range of 50 to 100 [26]. We use a smaller batch size in ImageNet due to computational constraints.
We used gradient clip of 1 and mixed precision. At 25% of training, we activate a exponential moving average with a multiplier of 0.1 for the averaged model parameter and 0.9 for current model parameter. Also, one of the graphs at [11] shows that there are specific values of weight decay that works better with specific values of learning rate. We therefore use an adaptive value for the weight decay. At the end of each epoch, we set the weight decay to be equal to the learning rate*1.56. The input resolution for CIFAR-10 is 32x32 and ImageNet is 224x224. Further hyperparameters and settings are provided in Table 2. Specific Hyperparameters adjustments for CIFAR-10 and ImageNet are presented on Table 3. We designed 3 models for CIFAR-10 and one for ImageNet.
Regarding CIFAR-10, we use a constant channel number over all the sub-blocks, exploring 3 models variants. They have 1 Yin Yang layer with 3 MBConvs and 1 single branch layer with 2 MBConvs. We apply an extra stride 2 at the first MBConv of the single branch layer on CIFAR-10 networks. The linear layer before the classification layer on CIFAR-10 has 40 neurons.
\begin{table}
\begin{tabular}{l r r} \hline \hline
**Hyperparameter** & **YYNet Small** & **YYNet** \\ \hline Optimizer & AdamW [11] & AdamW \\ LR Scheduler & One Cycle & One Cycle \\ Max LR & 1e-2 & 18e-4 \\ Epochs & 40 & 300 \\ Batch Size & 64 & 32 \\ GPU & RTX 2060 & RTX 2080 TI \\ Dataset & CIFAR-10 & ImageNet \\ \hline \hline \end{tabular}
\end{table}
Table 2: General Hyperparameters and Settings
\begin{table}
\begin{tabular}{l r r} \hline \hline
**Hyperparameter** & **CIFAR-10** & **ImageNet** \\ \hline YY Starting Channels & (16, 32, 64) & 16 \\ SP Starting Channels & (16, 32, 64) & 64 \\ Channels added per MBConv & 0 & 2 \\ Extra SP stride 2 & Yes & No \\ YY Layers & 1 & 1 \\ SP Layers & 1 & 4 \\ YY MBConv per Layer & 3 & 3 \\ SP MBConv per Layer & 2 & 2 \\ Pre-Classification Linear Neurons & 40 & 500 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Dataset Specific Hyperparameters
Regarding ImagetNet, we start with 16 channels, then a constant number of 2 channels is added at each MBConv on the Yin and Yang branches. After that, a fixed number of channels 64 is provided for the first single branch sub-block. We then continue adding channels after each block. We also use a single layer and 3 MBConvs for the Yin Yang branches and 2 MBConvs for the single branch sub-blocks, but in this dataset we use 4 layers of single-branch and no extra stride 2 is applied. The linear layer before the classification layer has 500 neurons.
### Results
We conduct experiments with the small version of YYNet on CIFAR-10, in which we reach State-of-the-Art (SOTA) at model efficiency for models with few parameters. These results are provided in Table 4.
We also test a model version on the ImageNet validation dataset. We do not provide results on the test dataset, since ImageNet team only send results on the test set when the challenge is open. Comparison with similar size models are provided in Table 5. Our model uses 6% less parameters than [24] reaching an validation accuracy only 1.2% smaller. Regarding with MobileNet V3 [8], the authors do not provide validation accuracy but we assume the validation accuracy is similar to test accuracy. In this case, our model is considerable smaller and presents similar efficiency.
## 5 Conclusion and Discussion
In this work, we took inspiration in neuroscience research to model efficient neural networks. We developed a two branch stem for CNNs intended to analyze
\begin{table}
\begin{tabular}{l r r} \hline
**Model** & **Test Accuracy** & **Parameters** \\ \hline ExquisiteNetV2 [28] & 92.52 & 890,000 \\ YYNet Small 64 channels (ours) & **93.32** & 726,274 \\ kMobileNet 16ch [18] & 89.81 & 240,000 \\ YYNet Small 32 channels & 91.91 & 191,330 \\ YYNet Small 16 channels & 89.46 & **52,882** \\ \hline \end{tabular}
\end{table}
Table 4: CIFAR-10
\begin{table}
\begin{tabular}{l r r r} \hline
**Model** & **Validation Acc** & **Test Acc** & **Parameters** \\ \hline MobileNet V3 (Small) [8] & - & 67.4 & 2.5M \\ SeaFormer (Tiny) [24] & 67.7 & 67.9 & 1.7M \\ YYNet (ours) & 66.49 & - & 1.6M \\ \hline \end{tabular}
\end{table}
Table 5: ImageNet
colors and shapes of images separately. Our model reached State-of-the-Art results on CIFAR-10 considering models with few parameters. We reached 93.32% test accuracy with 726k parameters, 0.8% more than the older SOTA in this category, while having close to 150k less parameters. Our model with 52k parameters, 17 times smaller than ExquisiteNet, lose only 3.86% test accuracy. We reached 66.49% validation accuracy on ImageNet with 1.6M parameters.
Future work includes parameter search for the Yin Yang network at the ImageNet dataset. Yin branch would also benefits from this parameter search, as it applies stride 2 relatively late in this branch, increasing processing cost.
We also plan to investigate if our architecture is useful for other tasks beyond classification. One example is applying YYNets in generative AIs such as Stable Diffusion, by changing the latent space currently generated by U-Nets [16]. Another possible use is combining YYNets shape and color separation with architectures such as ViT, i.e, adding gray-scale input patches or queries.
|
2302.01625 | Stability of local tip pool sizes | In distributed ledger technologies (DLTs) with a directed acyclic graph (DAG)
data structure, a block-issuing node can decide where to append new blocks and,
consequently, how the DAG grows. This DAG data structure is typically
decomposed into two pools of blocks, dependent on whether another block already
references them. The unreferenced blocks are called the tips. Due to network
delay, nodes can perceive the set of tips differently, giving rise to local tip
pools.
We present a new mathematical model to analyse the stability of the different
local perceptions of the tip pools and allow heterogeneous and random network
delay in the underlying peer-to-peer communication layer. Under natural
assumptions, we prove that the number of tips is ergodic, converges to a
stationary distribution, and provide quantitative bounds on the tip pool sizes.
We conclude our study with agent-based simulations to illustrate the
convergence of the tip pool sizes and the pool sizes' dependence on the
communication delay and degree of centralization. | Sebastian Müller, Isabel Amigo, Alexandre Reiffers-Masson, Santiago Ruano-Rincón | 2023-02-03T09:52:58Z | http://arxiv.org/abs/2302.01625v1 | # Stability of local tip pool sizes
###### Abstract.
In distributed ledger technologies (DLTs) with a directed acyclic graph (DAG) data structure, a block-issuing node can decide where to append new blocks and, consequently, how the DAG grows. This DAG data structure is typically decomposed into two pools of blocks, dependent on whether another block already references them. The unreferenced blocks are called the tips. Due to network delay, nodes can perceive the set of tips differently, giving rise to _local_ tip pools.
We present a new mathematical model to analyse the stability of the different local perceptions of the tip pools and allow heterogeneous and random network delay in the underlying peer-to-peer communication layer. Under natural assumptions, we prove that the number of tips is ergodic, converges to a stationary distribution, and provide quantitative bounds on the tip pool sizes. We conclude our study with agent-based simulations to illustrate the convergence of the tip pool sizes and the pool sizes' dependence on the communication delay and degree of centralization.
Key words and phrases:distributed queueing system, DAG-based distributed ledgers, stochastic process, stationarity, ergodicity
## 1. Introduction
A major challenge in distributed systems is the _relativity of simultaneity_ and the fact that whether two spatially separated events occur simultaneously or in a particular order is not absolute but depends on the local perceptions of the participants. To fight this phenomenon, classical approaches in distributed ledger technologies (DLTs) such as Bitcoin [27] typically use a totally ordered data structure, a blockchain, to find consensus on the order of the events. However, this design creates a bottleneck, e.g. a miner or validator, through which each transaction must pass. And even in this solution, due to network delay, block creation can happen concurrently at different parts of the network, leading to bifurcations of the chain that must be resolved. This resolution is typically made by the longest-chain rule [27], or some variant of the heaviest sub-tree [39].
In blockchain-like DLTs, the system's throughput is artificially limited to guarantee the system's security so that each block propagates to all the participants before the next block is created. The blocks are created by miners or validators, and the blockchain can be seen as a three-step process. In the first step, a client sends a transaction to the block producers, then a particular block producer, also called the "leader", proposes a block containing a batch of transactions, and in the last step, validators validate the block.
A more novel approach that addresses the limited throughput problem and the bifurcation of the chain problem of distributed ledgers uses a directed acyclic graph (DAG)
instead of a chain to encode the dependencies of the blocks. For instance, protocols like SPECTRE [37], Byteball [5], Algorand [13], PHANTOM [38], Prism [3], Aleph [14], Narwhal [8], and IOTA [34]) were proposed to improve the performance of distributed ledgers. The consensus mechanism and the writing access in a DAG-based system can be conceptually different from the one in a linear blockchain system, and the transaction throughput is potentially no longer limited. For instance, in DAG-based protocols like Aleph [14] and Narwhal [8], only a predefined set of nodes can add new blocks to the ledger, while in IOTA [26], every participant has writing access.
We consider the more general model where every participant can add blocks to the data structure, referring to at least two previous blocks. This property reduces the update of the ledger to two steps: one node proposes a block to the ledger and waits for the other nodes to validate it, i.e., by adding a new block referencing them. This collaborative design in which all participants play the same role promises to mitigate (or even solve) several problems of the blockchain design, e.g., mining races [9], centralisation [25], miner extractable value [7], and negative externalities [36]. However, the parallelism in adding new blocks to the ledger implies that local perceptions of the nodes may differ much more than in the traditional blockchain design.
In this paper, we give a mathematical model describing the evolution of the local number of unreferenced blocks, or tips, in a distributed ledger and prove their stability. More precisely, we prove the stationarity and ergodicity of the number of tips. Except for [20], this model is new, as previous research neglected the difference between local perceptions due to heterogeneous network delays. In [20], a similar, but much more restrictive, model has been considered with deterministic delay, deterministic arrival of blocks and discrete time. This paper considers a continuous time model with random block creation and random delays. In the next section, we give an informal description of the model.
### Informal description
We consider a network of nodes that manage a distributed database. In cryptocurrency applications, this database is called a ledger, but the model could potentially be applied to other use cases of collaborative databases. The data consists of blocks that contain atomic data in the sense that either the entire block is added to the database or all the information in the block is discarded. The distributed ledger is assumed to be built using two fundamental mechanisms:
**Sharing mechanism:** Each node aims to create new blocks and inform the other nodes about these blocks. The information about the blocks is passed via a gossip protocol on an underlying communication layer. Specifically, each node is only directly connected to a subset of the other nodes. Once a node has created a block and added it to its local database, it broadcasts it to a random subset of its neighbours. As soon as a node receives a block that it has not yet received, it adds this block to its database and forwards it to a random subset of its neighbours.
**Reference mechanism:** The blocks in the database (which we referred to also as _vertices_) are connected to each other by references. The rule is that each newly created block must refer to up to \(k\geq 2\) already existing blocks. The meaning of these references can depend on the specific use case of the protocol. For example, in cryptocurrency applications, a reference of a block means that the node issuing the referencing transaction verifies the previous blocks. Verification includes semantic and syntactical checks of the block content. In addition, referencing a block can be used for validation and consensus building; see IOTA, [33], and IOTA 2.0, [26]. In distributed-queuing theory, blocks can correspond to different jobs. Referencing can then imply that the issuing node handles or will handle the jobs in the referenced blocks. The way nodes choose previously referenced
blocks has an impact on the performance of the system. In particular, previously referenced blocks should no longer be referenced. Instead, the focus should be on referencing non-referenced blocks, which we call tips.
Regarding the reference mechanism, we can note that the delay between nodes has a huge impact on the performance of the reference mechanism. Indeed, it is instructive to consider the extreme case where all nodes have the same perception of the database. This can be the case when the creation of a block is instantaneous, i.e., there is no delay between selecting the references of the block and sending it to the neighbours, and all neighbours receive the new blocks without delay. Suppose we start the database with one block (the genesis) and assume that no blocks can be created simultaneously. In that case, there will always be only one tip (non-referenced block), as each block is referenced by precisely one other block. However, this situation changes drastically if there is a delay between the selection of references and the time when all nodes have received the new block. In this case, the blocks are created concurrently, and the blocks can be referenced by more than one other block. Thus, a prior, it is no longer clear whether the system is in a stationary regime or the number of tips explodes. In this paper, we propose a mathematical procedure to model the different local tip tools and prove the stability of their sizes under standard synchrony assumptions.
### Contributions
This paper has three major contributions:
1. We formalize the above description of the (distributed) protocol using an appropriate stochastic process. This is the first continuous-time model for local perceptions of a DAG-based distributed ledger together with the communication on the underlying peer-to-peer network.
2. Our main result, Theorem 3.10, is a _formal proof_ of the stability of the local tip pool sizes. The proof relies on an asymptotic drift analysis, Theorem 3.1, that allows, together with a regeneration structure, to obtain qualitative results on the stationarity and ergodicity of the local tip pools.
3. Finally, through Monte-Carlo simulations, we provide more quantitative results highlighting the influence of the protocols environment on the differences in the local perceptions.
### Related work
To the best of our knowledge, S. Popov introduced the first mathematical model on DAG-based ledgers [33]. Popov's analysis is based on a global and perfect observation of the existing blocks. The communication delay is assumed to be homogeneous, and newly created blocks can be referenced only after a given constant network delay. The author heuristically obtains a formula for the expected number of tips assuming that the tip pool size is stationary.
Under the above assumptions of the existence of a central node, a lot of works have extended the work of Popov, studying non-Poisson arrival rate [23], fluid limit approximations of the evolution of the number of tips [11, 12], discrete-time model [4], and simulation-based works [21, 28]. One of the main drawbacks of all these works is that they do not consider heterogeneous delays between nodes.
Three recent works have introduced different types of heterogeneous delays and studied the evolution of the number of tips under such conditions. First, a simulator of DAG-based distributed ledgers with delays in the transmission of information between nodes has been proposed in [41]. From a more theoretical perspective, the authors in [31] have studied the impact of heterogeneous delays coming from different processing times of the blocks and not due to _propagation of information delay_. They also assume the existence of a central node which maintains a stable version of the ledger and have not
considered the different views of each node in the network. Our work is building on the model proposed in [20]. In that paper, the authors model the evolution of the number of tips using coupled stochastic processes in discrete time. However, [20] makes strong assumptions; the delay between two nodes is deterministic and constant over time and the number of issued blocks by each node at each discrete time-step is constant. Under these conditions, they prove the stability of the stochastic process using super martingale arguments and drift analysis.
## 2. Notations and setting
### Peer-to-peer network
There are several factors that should be considered when modelling a peer-to-peer (P2P) network, including the number and distribution of participants, the speed and capacity of the network connections, and the rules governing the creation and exchange of data.
We consider a peer-to-peer network with \(N\) nodes and denote the set of nodes by \(\mathcal{N}:=\{1,\ldots,N\}\). These nodes can create (or issue) and exchange blocks of data without the need of a central authority.
Nodes communicate their blocks on the P2P network, leading to communication delays and, thus, different local perceptions of the system's state. The network latency is the time it takes for a block to travel from the source node to the destination node and can be affected by a number of factors, including the distance between the two nodes, the speed of the network connection, and the amount of traffic on the network at the time the message is sent. Network latency is an important factor in the performance of a communication network, as it can affect the speed at which information is transmitted and the overall reliability of the network.
Thus, latency plays a crucial role in our model. We allow these delays to be random, asymmetric, and different for different nodes. More precisely, the delay between a node \(i\) and a node \(j\), for a given block \(b\), is described by a random variable \(\delta_{j}^{(b)}(i)\) with values in \(\mathbb{R}_{+}\). These delays are supposed to be i.i.d. in the following sense: for every block \(b\) issued by a node \(i\) the delay \(\delta_{i}^{(b)}(j)\) is independently distributed as \(\Delta_{i}(j)\).
\begin{table}
\begin{tabular}{|l|l|} \hline Variable & Description \\ \hline \hline \(\mathcal{N}:=\{1,\ldots,N\}\) & set of nodes \\ \(\lambda_{i}\in\mathbb{R}_{+}\) & block issuance rate of node \(i\) \\ \(\lambda:=\sum_{i=1}^{N}\lambda_{i}\) & total block issuance rate \\ \(\delta_{j}^{(b)}(i)\) & random variable describing \\ & latency from node \(j\) to node \(i\) for block \(b\) \\ \(\Delta_{j}(i)\) & latency distribution from node \(j\) to node \(i\) \\ \(\Delta\in\mathbb{R}_{+}\) & maximal latency between two nodes \\ \hline \(k\) & number of blocks to be referenced by a new block \\ \hline \(\operatorname{pool}_{n}^{(i)}\) & tip pool of node \(i\) at time \(t_{n}\) \\ \(\operatorname{pool}_{n}^{(c)}\) & common tip pool at time \(t_{n}\) \\ \(\operatorname{pool}_{n}^{(o)}\) & tips of the perfect observer at time \(t_{n}\) \\ \(X_{n}^{(i)}:=|\operatorname{pool}_{n}^{(i)}|\) & size of the tip pool of node \(i\) \\ \(X_{n}^{(c)}:=|\operatorname{pool}_{n}^{(c)}|\) & size of the common tip pool \\ \(X_{n}^{(o)}:=|\operatorname{pool}_{n}^{(o)}|\) & size of the tip pool of the perfect observer \\ \hline \end{tabular}
\end{table}
Table 1. Table of notations
The nature of the random distribution \(\Delta_{i}(j)\) is important in the context of distributed systems. In a fully synchronous system, the distributions \(\Delta_{i}(j)\) are almost surely bounded, and the bound is known and used in the protocol. In a fully asynchronous system, there is no fixed upper bound on the delays; the distributions \(\Delta_{i}(j)\) have infinite support. As a result, a fully asynchronous system relies less on precise timing and can tolerate a higher degree of latency or delay. This can make it more resilient and less prone to failure, but it can also make it less efficient for applications that require low latency and high reliability.
The concept of partial synchrony in a distributed system refers to a system that falls between a fully synchronous system and a fully asynchronous system; we refer to [10] for more details.
**Assumption 2.1** (Partial Synchronicity).: _There exists some \(\Delta<\infty\) such that_
\[\mathbb{P}(\Delta_{i}(j)\leq\Delta)=1,\forall i,j\in\mathcal{N}.\]
_The exact value of \(\Delta\) is unknown, and its value is not used in the protocol design._
This assumption means that there is a finite (but unknown) time for a node to receive information from another node. Usually, distributed ledgers are using P2P networks as a means to exchange information.
Nodes communicate directly with each other, rather than through a central server, to exchange information; in our situation, this information consists of blocks. One approach to exchanging information in a P2P network is through a technique called "gossiping". In gossiping, a node sends a piece of information to a (random) subset of its neighbours, and each of those neighbours then sends the information to a subset of their own neighbours, and so on. This can allow for the rapid dissemination of information throughout the network, even if some nodes are offline or unable to communicate directly with each other, ensuring a finite time to transmit information between two nodes.
### Block issuance
The blocks are created or issued by the participating nodes. We model this issuance by a Poisson point process. More precisely, each node \(i\in\mathcal{N}\) issues blocks according to a given Poisson point process of intensity \(\lambda_{i}\). In other words, the intervals between issued blocks are distributed as \(Exp(\lambda_{i})\), where the parameter \(\lambda_{i}\) corresponds to the issuance rate of node \(i\). We define \(\lambda:=\sum_{i=1}^{N}\lambda_{i}\) to be the total block issuance rate.
We define a marked point process \(\xi=(t_{n},\kappa_{n})_{n\in\mathbb{N}}\) on \(\mathbb{R}^{+}\) that will describe the time of the creation of the blocks in the network. The times \(t_{n}\) in the marked process \(\xi\) are given by a Poisson point process on the line and the marks \(\kappa_{n}\) consist of the following
\[\kappa_{n}=(\text{blockID}_{n},\text{Ref}_{n},\text{nodeID}_{n},\text{delay}_{ n}), \tag{1}\]
where:
* \(\text{blockID}_{n}\) is the id of the \(n\)-th block;
* \(\text{Ref}_{n}\) is the list of references of the \(n\)th block;
* \(\text{nodeID}_{n}\) is the id of the node who created the \(n\)th block;
* \(\text{delay}_{n}\) is a (random) vector of network delays. It describes the times it takes for the other nodes to receive the \(n\)th block.
In other words, at time \(t_{n}\) the \(n\)th block with ID \(\text{blockID}_{n}\) is created by node \(\text{nodeID}_{n}\). This block refers to \(\text{Ref}_{n}\) previous blocks and is delayed by the random vector \(\text{delay}_{n}\).
We describe the construction of these marks in more detail. The variable \(\text{blockID}_{n}\) identifies the issued block and is uniformly distributed in \([0,1]\). This is a usual assumption that is justified by the fact that in practice the block ids are deduced from cryptographic hash functions. The \(\text{nodeID}_{n}\) describes the node ID of the issuing node, it is independent
(of the rest of the process) and identically distributed on \(\mathcal{N}\). More precisely, we have that
\[\mathbb{P}(\mathrm{nodeID}_{n}=i)=\frac{\lambda_{i}}{\lambda},\forall i\in \mathcal{N}. \tag{2}\]
Every new block references \(k\) previous blocks; they are chosen uniformly (with replacement) among all blocks that have not yet been referenced, a.k.a. tips. More precisely, once a node \(i\) issues a new block it references \(k\) blocks (sampled uniformly with replacement) from its local tip pool. The references of the \(n\)th block are written as \(\mathrm{Ref}_{n}=(\mathrm{ref}_{1},\ldots,\mathrm{ref}_{k})\), where each \(r_{i}\) is a blockID of a previous block. The references are not independent of the previous history of the process. More precisely, we denote \((\Omega,\mathcal{F},\mathbb{P})\) the underlying probability space and let \(\mathcal{F}_{n}=\sigma((t_{1},\kappa_{1}),\ldots,(t_{n},\kappa_{n}))\) be the filtration corresponding to the marked Poisson process. Then, the "\(\mathrm{Ref}_{n}\)-mark" is not independent (in contrast to the other marks) of \(\mathcal{F}_{n-1}\). In the next section, we give more details on the tip selection and the different local perceptions of the nodes.
The variable \(\mathrm{delay}_{n}\) defined as: \(\mathrm{delay}_{n}=(\delta_{\mathrm{nodeID}_{n}}^{(\mathrm{blockID}_{n})}(j))_ {j\in\mathcal{N}}\), describes the delay between \(t_{n}\) (the issuance time of the block) and the arrival time of the block at each of the other nodes. It is therefore a random vector and the delays are i.i.d. given \(\mathrm{nodeID}_{n}\) and supposed to satisfy Assumption 2.1.
### Tip selection and dynamics
In this section, we describe the different (local) perceptions of the nodes; namely of the issued blocks known by the node and whether these blocks are already referenced. For our purposes, it is enough to observe the process only at (block issuance) times \(t_{1},t_{2},\ldots\). The set of blocks created up to time \(t_{n}\) is defined by
\[\mathrm{Blocks}_{n}:=\bigcup_{k=1}^{n}\mathrm{blockID}_{k}. \tag{3}\]
The set of blocks created between \(t_{\ell}\) and \(t_{m}\) is denoted by
\[\mathrm{Blocks}_{\ell,m}:=\bigcup_{k=\ell}^{m}\mathrm{blockID}_{k}. \tag{4}\]
Due to the communication delay, these blocks are not immediately visible to all nodes. For every node \(i\), we define the set of all visible blocks at time \(t_{n}\) as
\[\mathrm{visBlocks}_{n}(i):=\bigcup_{k:t_{k}+\mathrm{delay}_{k}(i)<t_{n}} \mathrm{blockID}_{k} \tag{5}\]
and the set of all visible references as
\[\mathrm{visRef}_{n}(i):=\bigcup_{k:t_{k}+\mathrm{delay}_{k}(i)<t_{n}}\mathrm{ Ref}_{k}, \tag{6}\]
where we treat \(\mathrm{Ref}_{k}\) not as a vector but as a set.
**Definition 2.2** (Different tip pools).: _The local tip pool from node \(i\in\mathcal{N}\) at time \(t_{n}\) is defined as_
\[\mathrm{pool}_{n}(i)=\mathrm{visBlocks}_{n}(i)\setminus\mathrm{ visRef}_{n}(i). \tag{7}\]
_The common tip pool at time \(t_{n}\) is defined as_
\[\mathrm{pool}_{n}^{(c)}:=\bigcap_{i\in\mathcal{N}}\mathrm{pool}_{n}(i). \tag{8}\]
_The (perfect) observer tip pool at time \(t_{n}\) is defined as_
\[\mathrm{pool}_{n}^{(o)}:=\mathrm{Blocks}_{n}\setminus\bigcup_{k=1}^{n}\mathrm{ Ref}_{k} \tag{9}\]
**Definition 2.3** (Tip pool sizes).: _We denote by \(X_{n}^{(i)}:=|\mathrm{pool}_{n}(i)|\) the number of tips at node \(i\) at time \(t_{n}\). We also define the common tip pool size \(X_{n}^{(c)}=|\mathrm{pool}_{n}^{(c)}|\). We denote by \(X_{n}^{(o)}=|\mathrm{pool}_{n}^{(o)}|\) the number of tips of the perfect observer._
The process starts at time \(n=0\) with one tip called the genesis. More precisely, we set
\[\mathrm{pool}_{0}^{(o)}=\mathrm{pool}_{0}^{(c)}=\mathrm{pool}_{0}^{(i)}\ \forall i\in\mathcal{N},X_{0}^{(o)}=1. \tag{10}\]
The different tip pool sizes can be defined for all positive real times and can be seen as continuous time stochastic processes. Due to the delay, the local and common tip pool sizes may even change at times different to the ones given by the point process. However, since nodes do issue blocks only at times \(t_{1},t_{2},\ldots\) we only observe the processes at these times.
Since we assume \(\delta_{i}(i)=0\) we have that \(X_{n}^{(c)}\leq X_{n}^{(o)}\). To see this, note that the observer has zero delays and perceives the blocks right after their creation. Hence, once a node takes tips out of its local tip pool, these are immediately deleted from the observer tip pool and the newly issued block is added to the observer tip pool. The newly referenced blocks are also removed, immediately, from the common tip pool, but the new block is added to the common tip pool only after all nodes receive it.
A crucial observation is that we also have a lower estimate conditioned on the number of blocks recently issued \(L_{n}:=|\mathrm{Blocks}_{t_{n}-\Delta,t_{n}}|\). \(L_{n}\) can also be interpreted as the number of all possible non-visible blocks. This definition of \(L_{n}\) also implies that the selected tips at time step \(n\) by each node will only depend on the tips at \(n-L_{n}\) known by the observer and the new blocks issued between \(n-L_{n}\) and \(n\).
**Lemma 2.4**.: _For all \(L\in\mathbb{N}\) we have that_
\[\mathbb{P}\left(X_{n}^{(c)}\geq X_{n}^{(o)}-(k+1)L|L_{n}=L\right)=1,\quad \forall n\in\mathbb{N}, \tag{11}\]
_and_
\[\mathbb{P}\left(X_{n}^{(i)}\leq X_{n}^{(o)}+kL\ \forall i\in\mathcal{N}|L_{n}=L \right)=1,\quad\forall n\in\mathbb{N}, \tag{12}\]
Proof.: We have \(X_{n}^{(o)}\leq X_{n-L}^{(o)}+L\) as, in the worst case, none of the \(L\) recently added blocks removed a tip from the tip pool. Assumption 2.1 (Partial Synchronicity assumption) implies that all tips from time \(n-L\) are perceived/known by any other node at time \(n\). During this time, at most \(kL\) tips could have been removed from their local tip pool. Hence, in the best case \(X_{n}^{(c)}\) is equal to \(X_{n-L}^{(o)}-kL\). Therefore, almost surely, given \(L_{n}=L\), we obtain
\[X_{n}^{(c)}\geq X_{n-L}^{(o)}-kL\geq X_{n}^{(o)}-(k+1)L. \tag{13}\]
For the second claim, it suffices to observe that all blocks that have been tips in the observer tip pool at time \(n-L\) are visible to every node \(i\) at time n, and at most \(L\) new tips could have been added to the local tip pool. Hence,
\[X_{n}^{(i)}\leq X_{n-L}^{(o)}+L. \tag{14}\]
At every block creation, at most \((k-1)\) tip can be removed from the observer tip pool since every new block becomes a tip. Hence,
\[X_{n-L}^{(o)}-(k-1)L\leq X_{n}^{(o)}, \tag{15}\]
and the second claim follows.
## 3. Stability of tip pool sizes
We start with our central result on the asymptotic negative drift of the observer tip pool size. This first result will show that when \(X_{n}^{(0)}=x\) is large, our stochastic process becomes a super-martingale. Therefore, we can use tools coming from martingale theory to obtain upper bounds on the distribution tail of \(X_{n}^{(0)}\).
**Theorem 3.1** (Asymptotic negative drift).: _There exist \(K\in\mathbb{N}\) and \(\varepsilon>0\) such that_
\[\mathbb{E}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x\right]\leq-\varepsilon,\quad\forall x\geq K. \tag{16}\]
Proof.: Recall that \(L_{n}=|\text{Blocks}_{t_{n}-\Delta,t_{n}}|\) and write
\[\mathbb{E}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x\right]= \sum_{L=0}^{\infty}\mathbb{E}\left[X_{n+1}^{(o)}-X_{n}^{(o)}lvertX_{n}^{(o)}= x;L_{n}=L\right]\mathbb{P}\left(L_{n}=L\right)\] \[=0\cdot\mathbb{P}\left(L_{n}=0\right)\] \[+\sum_{L=1}^{\tilde{L}}\mathbb{E}\left[X_{n+1}^{(o)}-X_{n}^{(o)} |X_{n}^{(o)}=x;L_{n}=L\right]\mathbb{P}\left(L_{n}=L\right)\] \[+\sum_{L=\tilde{L}+1}^{\infty}\mathbb{E}\left[X_{n+1}^{(o)}-X_{n} ^{(o)}|X_{n}^{(o)}=x;L_{n}=L\right]\mathbb{P}\left(L_{n}=L\right) \tag{17}\]
with \(\tilde{L}\) such that \(\mathbb{P}(0<L_{n}\leq\tilde{L})\geq 2\mathbb{P}(L_{n}>\tilde{L})\) for all \(n\in N\). Note that the existence of such a random variable \(\tilde{L}\) follows from the stationarity of \(L_{n}\). The last summand is bounded above by
\[\sum_{L=\tilde{L}+1}^{\infty}\mathbb{E}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^ {(o)};L_{n}=L\right]\mathbb{P}\left(L_{n}=L\right)\leq 1\cdot\mathbb{P}(L_{n}> \tilde{L}), \tag{18}\]
since, in the worst case, a new tip is added to the observer tip pool.
To control the second summand, we suppose that \(K>2(k+1)\tilde{L}\). Lemma 2.4 implies that there are at least \(X_{n}^{(o)}-(k+1)L\) common tips at time \(n\) for all \(L\leq\tilde{L}\) and \(X_{n}^{(o)}\geq K\). The block at time \(n\) will be issued by some node \(i\), the probability that this node chooses at least two tips from the common tip pool is therefore larger than
\[\frac{X_{n}^{(c)}}{X_{n}^{(i)}}\cdot\frac{X_{n}^{(c)}-1}{X_{n}^{( i)}} \geq\frac{X_{n}^{(o)}-(k+1)L}{X_{n}^{(i)}}\cdot\frac{X_{n}^{(o)}-( k+1)L-1}{X_{n}^{(i)}}\] \[\geq\frac{X_{n}^{(o)}-(k+1)L}{X_{n}^{(o)}+L}\cdot\frac{X_{n}^{(o)} -(k+1)L-1}{X_{n}^{(o)}+L}\] \[\geq\frac{K-(k+1)L}{K}\cdot\frac{K-(k+1)L-1}{K}=:p(K,L), \tag{19}\]
where we use the second statement of Lemma 2.4 in the second estimate and \(X_{n}^{(o)}\geq K\) for the last bound. We obtain
\[\sum_{L=1}^{\tilde{L}} \mathbb{E}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x;L_{n}=L \right]\mathbb{P}\left(L_{n}=L\right)\] \[\leq\sum_{L=1}^{\tilde{L}}\left(-1\cdot p(K,L)+1\cdot(1-p(K,L)) \right)\mathbb{P}(L_{n}=L)\] \[=\sum_{L=1}^{\tilde{L}}\left(1-2p(K,L)\right)\mathbb{P}(L_{n}=L)\] \[\xrightarrow[K\to\infty]{}-\mathbb{P}(0<L_{n}\leq\tilde{L}). \tag{20}\]
Finally, we obtain that
\[\mathbb{E}\left[X_{n+1}^{(o)}-X_{n}^{(o)}\big{|}X_{n}^{(o)}=x\right] \leq 0+(-\mathbb{P}(0<L_{n}\leq\tilde{L})+\tilde{\varepsilon}+ \mathbb{P}(L_{n}>\tilde{L})\] \[\leq-\frac{1}{2}\mathbb{P}(0<L_{n}\leq\tilde{L})+\tilde{ \varepsilon}, \tag{21}\]
with \(\tilde{\varepsilon}<\frac{1}{2}\mathbb{P}(0<L_{n}\leq\tilde{L})\) and \(K\) sufficiently large. This yields Inequality (16) with \(\varepsilon=-\frac{1}{2}\mathbb{P}(0<L_{n}\leq\tilde{L})+\tilde{\varepsilon}\).
### Bounds on hitting-times and tails
The last theorem has several important and well-known consequences as ergodicity and concentration type of results. Our first focus is on general bounds on hitting times and tails. The drift condition (16) suggests that \(X_{n}^{(o)}\) should eventually cross below \(K\) and not lie too far above \(K\) most of the time. In the following, we give quantitative results of this intuition. These results are essentially straightforward implications of (16) together with the fact that the increments of \(X_{n}^{(o)}\) are bounded. In this work, we do not strive for optimal results but prefer to gather classical results that follow from [15] and define the necessary terms to apply the results.
Let us first observe that the increments of \(X_{n}^{(o)}\) are bounded; the number of tips is increased at most by \(k\) and decreased at most by \(k-1\) at each time step. Let \(Z\) be a random variable that stochastically dominates the increments \(|X_{n+1}^{(o)}-X_{n}^{(o)}|\) for all \(n\). In our case, we use \(Z=k\), which is deterministic and not random.
For \(\lambda>0\) define
\[c:=c(\lambda):=\sum_{j=2}^{\infty}\frac{\lambda^{j-2}}{j!}\mathbb{E}[Z^{j}]= \frac{e^{k\lambda}-(1-\lambda k)}{\lambda^{2}}, \tag{22}\]
and
\[D:=\mathbb{E}[e^{\lambda Z}]=e^{\lambda k}. \tag{23}\]
As suggested in [15] we choose
\[0<\eta:=\eta(\lambda,\varepsilon)<\min\left\{\lambda,\frac{\varepsilon}{2c} \right\}\text{ and }\rho:=\rho(\lambda,\varepsilon):=1-\frac{1}{2}\eta \varepsilon\in(0,1), \tag{24}\]
where \(\varepsilon\) is the constant in Inequality (16). We define
\[\tau_{K,m}:=\min\{n\geq 0:X_{m+n}^{(o)}\leq K\} \tag{25}\]
the return time after \(m\) to the set \(\{1,\ldots,K\}.\) Note that here \(K\) is from Inequality (16). In our notation, we rewrite [15, Theorem 2.3].
**Theorem 3.2** (Hitting-time and tail bounds).: _Under Assumption 2.1 we have that_
\[\mathbb{E}[e^{\eta X^{(o)}_{m+n}}|\mathcal{F}_{m}] \leq\rho^{n}e^{\eta X^{(o)}_{m}}+\frac{1-\rho^{n}}{1-\rho}De^{\eta K},\] \[\mathbb{E}[s^{\tau_{K,m}}|\mathcal{F}_{m}] \leq e^{\eta(X^{(o)}_{m}-K)}\frac{s-1}{1-\rho s}+1,\quad 1<s<\rho^{-1},\] \[\mathbb{P}(X^{(o)}_{m+n}\geq M|\mathcal{F}_{m}) \leq\rho^{n}e^{\eta(X^{(o)}_{m}-M)}+\frac{1-\rho^{n}}{1-\rho}De^{ \eta(K-M)},\] \[\mathbb{P}(\tau_{K,m}>n|\mathcal{F}_{m}) \leq e^{\eta(X^{(o)}_{m}-K)}\rho^{n}. \tag{26}\]
**Remark 3.3**.: _The case \(\mathcal{F}_{m}=\mathcal{F}_{o}\) gives the bounds for the original model starting with a "genesis block" at time \(n=0\). A crucial fact, however, is that the bounds are uniform on \(m\), indicating a memoryless property of the process. This will be used to construct a regeneration structure in Section 3.2._
**Remark 3.4**.: _Similar bounds as in Theorem 3.2 are also valid for the local tip pool sizes \(X^{(i)}_{n}.\) This is due to Lemma 2.4 and the fact that the random variables \(L_{n}\) have exponential moments. This holds for all concentration and stability results on the tip pool sizes in this section._
We also obtain bounds on the occupation-time from [15, Theorem 3.1]. We start with the observation that
\[\liminf_{n\to\infty}\mathbb{P}(X^{(o)}_{n}<M)\geq p_{0} \tag{27}\]
with
\[p_{0}:=p_{0}(M):=1-\frac{1}{1-\rho}De^{\eta(K-M)}. \tag{28}\]
**Theorem 3.5** (Occupation-time bounds).: _Under Assumption 2.1 for every \(\varepsilon^{\prime}\) there exists some constants \(C\) and \(\gamma<1\) such that_
\[\mathbb{P}\left(\frac{1}{n}\sum_{j=1}^{n}\mathbf{1}\{X^{(o)}_{n}<M\}\leq p_{0} (1-\varepsilon^{\prime})\right)\leq C\gamma^{n},\quad\forall n\geq 1, \tag{29}\]
_where \(p_{0}\) is given in (28)._
It follows from the above results or directly from [30, Theorem 1] that all moments of \(X^{(o)}_{n}\) are bounded.
**Theorem 3.6**.: _Let \(\varepsilon\) and \(K\) be the constants from Theorem 3.1 and suppose Assumption 2.1 holds. Then, for every \(r>0\) there exists some constant \(c=c(r,\varepsilon,K)\) such that_
\[\mathbb{E}\left[\left(X^{(o)}_{n}\right)^{r}\right]\leq c,\ \forall n\in \mathbb{N}. \tag{30}\]
_The same statement holds true for the local tip pool sizes \(X^{(i)}_{n}\) and constants \(c=c(r,\varepsilon,K,i)\) depending additionally on \(i\)._
**Remark 3.7**.: _We want to note that [30] also provides bounds on the constant \(c\). However, as these bounds are rather implicit and do not straightforwardly lead to an explicit formula, we do not address the question of finding the optimal bounds in the present work._
### Regeneration structure, ergodicity, and stationarity
The asymptotic negative drift and the bounds of the previous section do not immediately imply the ergodicity of the various tip pool sizes since the processes are not Markov processes. In this section, we construct a regeneration structure that allows proving (mean) ergodicity and stationarity. The main idea behind this construction is quite natural and is first sketched informally. The trajectory of the process \(X_{n}^{(c)}\) will be decomposed into independent and identically distributed pieces. Processes of this kind are known as regenerative processes, see e.g., [2, Chapter VI]. Let us consider the indicator function of the event that all nodes are synchronized and there is only one active tip:
\[\operatorname{sync}_{n}:=\mathbf{1}\left\{\operatorname{pool}_{n}^{(o)}= \operatorname{pool}_{n}^{(c)}=\operatorname{pool}_{n}^{(i)}\,\,\forall i\in \mathcal{N},X_{n}^{(o)}=1\right\}. \tag{31}\]
We construct the sequences of time \(\tau_{n}\) where the nodes are in the synchronized state. More precisely, let \(\tau_{0}:=0\) and inductively for \(k>0\)
\[\tau_{k} := \inf\{n>\tau_{k-1}:\operatorname{sync}_{n}=1\}. \tag{32}\]
We start with the observation that the process \(X_{n}^{(o)}\) is \(\mathcal{F}_{n}\)-measurable for all \(n\) but not necessarily a Markov chain. However, we have the following "decoupling properties".
**Lemma 3.8**.: _We have that for every \(x>1\) there exists some constant \(c_{x}\) such that_
\[\mathbb{P}\left(X_{n+1}^{(o)}=x-1,t_{n+1}-t_{n}>\Delta|X_{n}^{(o)}=x\right) \geq c_{x}. \tag{33}\]
_Furthermore, for every \(x>1\) there exists some constants \(d_{x}\) and \(n_{x}\) such that_
\[\mathbb{P}\left(\operatorname{sync}_{n+n_{x}}=1|X_{n}^{(o)}=x\right)\geq d_{x}. \tag{34}\]
Proof.: Under the assumption that no new blocks are issued between \(t_{n}\) and \(t_{n}+\Delta\) (which happens with a positive probability independent of \(\mathcal{F}_{n}\)), all nodes will share the same perception of the tip pool. The node that issues the next block will choose only two distinct tips with positive probability. As this probability only depends on \(x\) the first claim follows. The second claim follows by recursively applying the first.
**Lemma 3.9**.: _The regeneration times \(\tau_{n}\) are almost surely finite and for any \(k\in\mathbb{N}\) and any subset sets \(A\in\mathbb{N}^{\mathbb{N}}\) we have_
\[\mathbb{P}\left(\left(X_{\tau_{k}+n}^{(o)}\right)_{n\in\mathbb{N}}\in A \right)=\mathbb{P}\left(\left(X_{n}^{(o)}\right)_{n\in\mathbb{N}}\in A\right). \tag{35}\]
_In particular, \((\tau_{k+1}-\tau_{k}),k\in\mathbb{N},\) are i.i.d. random variables under \(\mathbb{P}\), and, in addition, have some exponential moments. The random variables_
\[M_{k}:=\max\left\{\tau_{k}\leq n\leq\tau_{k+1}:X_{n}^{(c)}\right\},k\in \mathbb{N}, \tag{36}\]
_are i.i.d. and have some exponential moments._
Proof.: We start by verifying that the first return time \(\tau_{1}\) is a.s. finite. Let \(K\) be from Inequality (16) and define \(A:=\{K-k,\ldots,K\}\). Now, by Lemma 3.8 we have that there exists some \(d_{A}:=\max_{x\in A}d_{x}\) and \(n_{A}:=\max_{x\in A}n_{x}\) such that
\[\mathbb{P}(\exists m\leq n_{A}:\operatorname{sync}_{n+m}=1|X_{n}^{(o)}\in A) \geq d_{A}. \tag{37}\]
Hence, whenever our process is in the "state" \(A\), we have a positive probability of regenerating. If we regenerate we have that \(\tau_{1}\) is finite; if we are not successful, then \(X_{n+n_{A}}^{(o)}\leq K+kn_{A}\) and Theorem 3.2, see also Remark 3.3, ensures that we return to the set \(A\) in a time with exponential moments. Therefore, it takes a geometrically distributed number of such trials to regenerate.
The claim (35) for \(k=1\) follows from the observation that if (at time \(n\)) the event
\[\left\{\operatorname{pool}_{n}^{(o)}=\operatorname{pool}_{n}^{(c)}= \operatorname{pool}_{n}^{(i)}\ \forall i\in\mathcal{N},X_{n}^{(o)}=1\right\}\]
occurs, all nodes have the same information on the state of the system and the state equals the state at time \(0\) together with the "memorylessness property" of the exponential random variables in the underlying Poisson point process. Recursively, we obtain the a.s. finiteness of the \(\tau_{k}\) and Equality (35) for all \(k.\) The exponential moments of \(\tau_{k+1}-\tau_{k}\) follow from (35) and Theorem 3.2. The Claim (36) follows from the fact that the increments of \(X_{n}^{(o)}\) are bounded and that \(\tau_{k+1}-\tau_{k}\) has exponential moments.
The previous two lemmas allow us to show that it is possible to view the stochastic process \(\{X_{n}^{(o)}\}\) as a regenerative process, e.g., see [2]. In the next theorem, using the regenerative structure of \(\{X_{n}^{(o)}\}\), we prove the convergence in \(L^{2}\) of the ergodic average of \(X_{n}^{(o)}\) and \(X_{n}^{(i)}\) for all \(i.\)
**Theorem 3.10** (Mean ergodicity and stationarity).: _Under Assumption 2.1 there exist some constants \(\mu^{(o)},\mu^{(i)},i\in\mathcal{N}\), such that_
\[\frac{1}{n}\sum_{k=1}^{n}X_{k}^{(o)}\ \underset{n\to\infty}{\longrightarrow}\mu^{(o)} \tag{38}\]
_and_
\[\frac{1}{n}\sum_{k=1}^{n}X_{k}^{(i)}\ \underset{n\to\infty}{\longrightarrow}\mu^{(i )},\forall i\in\mathcal{N} \tag{39}\]
_almost surely and in \(L^{2}\) (mean square sense). Moreover, \(X_{n}^{(o)}\) and \(X_{n}^{(i)},i\in\mathcal{N},\) converge in distribution to some random variables \(X^{(o)}\) and \(X^{(i)},i\in\mathcal{N}.\)_
Proof.: The law of large numbers for i.i.d. sequences, applied to \((\tau_{n+1}-\tau_{n})_{n\in\mathbb{N}}\), yields
\[\frac{\tau_{n}}{n}\ \to\ \mathbb{E}[\tau_{2}-\tau_{1}] \tag{40}\]
Define \(k(n)=\max\{k\in\mathbb{N}_{0}:\,\tau_{k}\leq n\}\). Clearly, \(k(n)\to\infty\) as \(n\to\infty\). Further,
\[\frac{n}{k(n)}\ =\ \frac{n}{\tau_{k(n)}}\frac{\tau_{k(n)}}{k(n)}.\]
The second factor tends to \(\mathbb{E}[\tau_{2}-\tau_{1}]\)\(\mathbb{P}\)-a.s. as \(n\to\infty\) by (40). Regarding the first factor, observe that \(\tau_{k(n)}\leq n\leq\tau_{k(n)+1}\) and, therefore,
\[1\ \leq\ \frac{n}{\tau_{k(n)}}\ \leq\ \frac{\tau_{k(n)+1}}{\tau_{k(n)}}\ \to\ 1\quad \mathbb{P}\text{-a.s. as }n\to\infty.\]
Consequently, \(\lim_{n\to\infty}n/k(n)=\mathbb{E}[\tau_{1}-\tau_{0}]\)\(\mathbb{P}\)-a.s. The convergence also holds for all \(L^{p}\), \(p\geq 1\), which can be shown similarly by using the exponential moments of \(\tau_{k+1}-\tau_{k}\) and Holder's Inequality. We can now decompose the sum as
\[\frac{1}{n}\sum_{k=1}^{n}X_{k}^{(c)}=\frac{k(n)}{n}\frac{1}{k(n)} \sum_{k=1}^{\tau_{k(n)}}X_{k}^{(c)}+\frac{1}{n}\sum_{k=\tau_{k(n)}+1}^{n}X_{k} ^{(c)}. \tag{41}\]
The first summand becomes
\[\frac{k(n)}{n}\frac{1}{k(n)}\sum_{k=1}^{\tau_{k(n)}}X_{k}^{(c)} = \frac{k(n)}{n}\frac{1}{k(n)}\sum_{k=1}^{k(n)}\tilde{X}_{k}^{(c)}, \tag{42}\]
with
\[\tilde{X}_{k}^{(c)}:=\sum_{j=\tau_{k(n)-1}+1}^{\tau_{k(n)}}X_{j}^{(c)}.\]
Due to Lemma 3.9 the random variables \(\tilde{X}_{k}^{(c)},k\in\mathbb{N}\), are i.i.d. with exponential moments, and hence,
\[\frac{k(n)}{n}\frac{1}{k(n)}\sum_{k=1}^{\tau_{k(n)}}X_{k}^{(c)}\underset{n\to \infty}{\longrightarrow}\mu_{c},\]
for some constant \(\mu_{c}\) and convergence a.s. and in \(L^{2}\). It remains to treat the second term on the right-hand side of (41). We have
\[\frac{1}{n}\sum_{k=\tau_{k(n)}+1}^{n}X_{k}^{(c)}\leq\frac{1}{n}(\tau_{k(n)+1}- \tau_{k(n)})M_{k} \tag{43}\]
and hence, using (36), we see that this terms converges a.s. and in mean to \(0\). Note that the convergence in \(L^{2}\) can be seen using the Cauchy criteria, e.g., [16, Proposition 2.9], together with the Cauchy-Schwarz Inequality. It remains to prove convergence in distribution. For this, let us note that we constructed a so-called regeneration structure, and, hence, the convergence follows directly from [2, Corollary 1.5]. The proofs for the local tip pool sizes are analogous.
## 4. Experimental results
We provide quantitative simulation results to further demonstrate the tip pools' stability. While our theoretical results do provide stability of the local tip pools, they do not allow us to compare the different perceptions and how they depend on the model parameters. We thus evaluate the impact of delay on the tip pools for different scenarios through simulations. The simulations are performed in an open-source simulator [17] also used in [24]. This simulator simulates both communications over the peer-to-peer layer and blocks creations. The statistical analysis of the data is done with the software R (4.1.2), and the package "ggstatsplot" [29].
We use a gossip protocol to model the network latency on top of a network topology with a small diameter. More precisely, we use a Watts-Strogatz network [40] with mean degree \(10\) and re-wiring probability \(1\). The gossip algorithm forwards the new blocks to all its neighbours in the Watts-Strogatz network. The delay for each of these connections on the P2P layer is independent and uniformly distributed in the interval \([\delta_{min},\delta_{max}]\).
We model the different issuance rates of the nodes in the network using the Zipf empirical law with parameter \(s\), [35]. This is motivated by the fact that in a real-world scenario with heterogeneous weights, the Zipf law is frequently observed, e.g., see [1, 18, 22]. Note that, with Zipf's law, a homogeneous network, e.g., can be modelled for \(s=0\), while the higher the \(s\), the more heterogeneous or centralized the weight distribution becomes.
### Heterogeneous rates
The issuing rates of the \(N=100\) nodes are Zipf-distributed with parameter \(s\), i.e.,
\[\lambda_{i}=\frac{i^{-s}}{\sum_{j=1}^{N}j^{-s}}\lambda, \tag{44}\]
where \(\lambda\) is the total issuance rate.
We have set the other parameters of our numerical experiments as follows: the number of references \(k=8\). This choice of \(k=8\) is made since it is in the "middle" on a logarithmic scale of the extreme cases \(2^{0}\) and \(2^{7}\). If \(k=1\) we obtain a tree and if \(k\) is
close to the number of nodes, then the number of tips is generally very small. Moreover, \(k=8\) is the value considered in [24].
The network latency between two peers in the P2P network is modelled by a uniform random variable with \(\delta_{min}=20ms\), \(\delta_{max}=180ms\). It is a common assumption to consider the mean latency to be close to \(100\)ms. Moreover, most delays in wide area networks and the Internet fall into our interval, e.g., see [19]. The total block issuance rate is set to \(\lambda=500\) blocks per second (BPS). The local tip pools are measured in the simulation every \(50ms\), and every simulation lasts for \(60\) seconds.
Let us first consider the case of a heterogeneous node activity, \(s=1\). In this scenario, Node 1 issues blocks at a rate of \(96\) BPS, Node 2 with a rate of \(48\) BPS, and the two slower nodes, Node 99 and \(100\) issue with rates around \(1\) BPS.
In Figures 0(a) and 1(a), we present the different perceptions of the tip pool sizes for these nodes.
### Homogeneous rates
We consider the homogeneous case, where every node issues blocks with the same rate, i.e. \(s=0\). The other parameters are set as before. The results in Figures 0(b) and 1(b) show that the local tip pools have similar sizes. Comparing these results with the results in the heterogeneous setting above, Figure 1(a), we can also note that the size of the tip pools decreases with the system's centralisation, i.e. higher values of \(s\).
### Randomness of delay
In the last section, we identified that different issuing rates might considerably affect the local tip pools. A natural explanation is that the average delay of high-frequency nodes is much smaller than those of lower frequencies. In previous heuristic results, [20], it was observed that the random distribution of the delay might already impact the tip pool sizes. Consequently, optimal bounds on the tip pool sizes must contain more information than only the mean delay. We illustrate this effect by performing the same simulations as above for \(s=0\) but keeping the message delay constant with \(100ms\), see Figure 0(c) and 1(c). In this case, we see larger tip pools than in the case with more "randomness". This effect is also present for heterogeneous rates, but we omit the figures for brevity.
## 5. Discussion and extensions
This paper presents a DAG-based distributed ledgers model that considers variable and heterogeneous network delays. It is a continuous time model with random arrivals of new blocks and random communication delays between the nodes. First, we have proven asymptotic negative drift of the tip pool sizes, 3.1, that implies concentration results, Theorem 3.2. A regeneration structure then led to the stationarity and ergodicity of the tip pool sizes, Theorem 3.10. Finally, using Monte-Carlo simulations, we showcase the impact of the rate distribution and the randomness of delays on the evolution of the local tip pool sizes. Let us discuss possible extensions of our work.
**Different type of delays:** As already mentioned in subsection 1.3, a different type of delay (time to validate a block) has been studied in [31]. One natural way to incorporate such delays is to include an additional mark in the Poisson point process that encodes the block type. The delays of a block then also depend on its type. While our obtained results carry over to this more general situation, understanding how these delays impact the tip pool sizes is more challenging as it requires more quantitative results.
**Quantitative results** We obtained qualitative results about the stability. For real-world applications, quantitative bounds are essential. The most important measure is the expected tip pool size. Previous results, [20, 31, 6], and our simulations show that
Figure 1. Tip pool sizes of the top and bottom nodes with \(N=100\) nodes for different scenarios. Randomness in the delay results in smaller tip pool sizes. Heterogeneity in the rates results in more disparate and smaller tip pool sizes.
Figure 2. Comparison of the local tip pool sizes; \(N=100\) nodes, different scenarios.
Figure 3. Comparison of the local tip pool sizes; \(N=100\) nodes, different scenarios.
the tip pool size depends on the distribution of the delays. Hence, explicit formulas for the expected tip pool size seem currently out of reach. A more feasible approach is to obtain meaningful upper and lower bounds on the tip pool sizes. Moreover, Figures 1 and 2 show the fast convergence to the stationary regime, and it seems achievable to obtain quantitative bounds on this speed of convergence as described in Remark 3.7.
**Extreme values and large deviations:** In Theorem 3.2, we derived an upper bound on the probability that \(X_{k}^{(o)}\) is greater than a given value \(L\). Such a result is important from an application perspective because we can quantify the risk that the number of tips is too high at a given instant. The probabilities of deviating from the mean are usually expressed by large deviation results and the distribution of the maximal values by extreme value results. The regeneration structure introduced in Section 3.2 offers an i.i.d. decomposition of the underlying process and, with the exponential moment bounds, strongly suggests the validity of a large deviation principle and an extreme value theorem. We refer to [32] for details on how to obtain a large deviation principle from a regeneration structure and to [2], Chapter IV Section 4, for more details on extreme value theory for regenerative processes.
**General arrival point processes:** In our model, the assumption of a pure Poisson point process is not necessary, and the results seem to carry over the stationary and ergodic point processes. A more realistic model, for instance, is to consider stationary point processes with minimal distance between the points; so-called hard care point processes.
|
2308.14515 | Spreading of a viscoelastic drop on a solid substrate | We study the spreading of viscous and viscoelastic drops on solid substrates
with different wettability. In the early stages of spreading, we find that the
viscoelastic drop spreads with faster and a different power law than the
Newtonian drop (i.e. aqueous glycerine solution) for the same zero shear rate
viscosity. We argue that the effect of viscoelasticity is only observable for
experimental time scales in the order of the internal relaxation time of the
polymer solution or longer times. Near the contact line, the effective
viscosity is lower for the viscoelastic drop than for the Newtonian drop.
Together with its shear rate dependency, this difference in effective viscosity
can explain the different spreading dynamics. We support our experimental
findings with a simple perturbation model that qualitatively agrees with our
findings. | Peyman Rostami, Mathis Fricke, Simon Schubotz, Himanshu Patel, Reza Azizmalayeri, Güunter K. Auernhammer | 2023-08-28T12:03:02Z | http://arxiv.org/abs/2308.14515v2 | # Spreading of a viscoelastic drop on a solid substrate
###### Abstract
We study the spreading of viscous and viscoelastic drops on solid substrates with different wettability. In the early stages of spreading, we find that the viscoelastic drop spreads faster and has a different power law than the Newtonian drop (i.e. aqueous glycerine solution) for the same zero shear rate viscosity. We argue that the effect of viscoelasticity is only observable for experimental time scales in the order of the internal relaxation time of the polymer solution or longer times. Near the contact line, the effective viscosity is lower for the viscoelastic drop than for the Newtonian drop. Together with its shear rate dependency, this difference in effective viscosity can explain the different spreading dynamics. We support our experimental findings with a simple perturbation model that qualitatively agrees with our findings.
D 1
E remarkRemark
Drop spreading, Viscoelastic liquids
## 1 Introduction
For at least the last two centuries, the interaction of droplets with surfaces has been studied quantitatively. In retrospective outstanding work was done by Young for static wetting (Young (1805)), Worthington for drop impact (Worthington Arthur & Reynolds (1883)) and drop spreading over different surfaces (Hardy (1919); Shuttleworth & Bailey (1948); Fox & Zisman (1950)). Drop spreading and its dynamics play an essential role in many industrial applications, from printing to coating (Hoath (2016); Sankaran & Rothstein (2012); Glasser _et al._ (2019)). The spreading of Newtonian drops has been the subject of an extensive research over the last two decades (Biance _et al._ (2004); Bonn _et al._ (2009); Bird _et al._ (2008); Snoeijer & Andreotti (2013)). For low-viscosity drops, the key finding is that the spreading dynamics consist of two regimes; an inertial and a viscous dominated regime (Biance _et al._ (2004)).
The boundary condition has an important influence on the calculation of the flow field close to the contact line and thus on the viscous dissipation. By assuming a non-slip condition, the contact line motion is solved by Moffatt (1964). The assumption of non-slip condition
leads to a divergence of the viscous stress due to the hydrodynamic singularity at the moving contact line (Huh & Scriven (1971_a_); Tanner (1979); Huh & Scriven (1971_b_); Huh & Mason (1977)). Various solutions have been proposed to this problem in molecular scale (Blake & Haynes (1969)), hydrodynamic models by (Cox (1986),Voinov (1976) and Shikhmurzaev (1997, 2020)) and by including the evaporation (Rednikov & Colinet (2012)). For the fast processes, the dynamics can be modeled by the hydrodynamic model with a slip length that generates a lower cut-off length below which the liquid and solid velocities are allowed to differ in the vicinity of the contact line and/or the substrate.
Consider a drop of an initial radius \(R\), volume of \(V\), density \(\rho\), viscosity \(\eta\), and surface tension \(\sigma\) which gently touches a solid substrate, it starts spreading with velocity of \(u\). In the early stage of spreading, inertia is assumed to be dominant (Biance _et al._ (2004)). By writing a force balance between inertial \(\frac{d}{dt}((\rho V)u)\) and capillary forces \(\sim\sigma r\), one can derive the spreading rate, Eq. (1.1), where the radius of the wetted on the substrate area is \(r\).
\[\left(\frac{r}{R}\right)^{2}=t\sqrt{\frac{\sigma R^{3}}{\rho}} \tag{1.1}\]
In a second regime, the viscous dissipation near the contact line is the rate-limiting process when the drop shape is close to a spherical cap. For the viscous dominated regime (Tanner regime) of drop spreading, the Cox-Voinov relation for the dynamic contact angle in case of perfect wetting is \(\theta^{3}\sim\eta\frac{\pi t}{\sigma}\), and the conservation of volume, \(r^{3}\theta\sim V\), results in the spreading dynamic in the viscous regime. This relation is known as Tanner's law \(r\sim R\left(\frac{\sigma t}{\eta R}\right)^{\frac{1}{10}}\)(Tanner (1979)). It should be noted that Cox-Voinov was originally developed for a final contact angle \(\theta=0\), but was later shown to be valid for higher contact angles up to \(100^{\circ}\)(Fermigier & Jenffer (1991); Petrov _et al._ (2003)). By equating the radius from the inertial and viscous regimes, the transition between these two regimes can be calculated: \(\tau_{iv}\sim(\frac{\rho\sigma R}{\eta^{2}})^{\frac{1}{8}}\sqrt{\frac{\rho R^{ 3}}{\sigma}}\) (Biance _et al._ (2004)).
The above models work reasonable for low viscosity drops (e.g. water). For the early stages of high-viscosity drop spreading, there are several conflicting results (Carlson _et al._ (2011, 2012); Eddi _et al._ (2013)). Carlson _et al._ (2012) stated that the drop spreading dynamics still follow the power-law type of spreading with same exponent but with a friction factor (\(\mu_{f}\)) as a correction factor for the prefactor of the power law, \(\frac{r}{R}\sim(\frac{\sigma t}{R\mu_{f}})^{\frac{1}{2}}\). Eddi _et al._ (2013) argue that, for high viscous liquids, the inviscid solution is not valid anymore, so they solve Stokes flow in this case. An important approach is to use the assumed analogy between the merging of identical drops (Eggers _et al._ (1999)) and the spreading of drops on a substrate. Eddi _et al._ (2013) used a logarithmic model to scale their experimental data \(r\simeq-\frac{1}{4\pi}\frac{\sigma}{\eta}t\ln\left(\frac{t}{R}\right)\).
Despite many industrial applications (e.g. printing), the early drop spreading of viscoelastic fluids has not been extensively studied. In most of the studies, the viscous dominated drop spreading (late stage) is studied experimentally and numerically (Carre & Eustache (2000); Betelu & Fontelos (2003); Liang _et al._ (2009); Iwamatsu (2017)Jalalal _et al._ (2021); Iwamatsu (2017)). The viscous spreading exponent (\(\alpha\)) is correlated with the rheological exponent \(n\) in these models. Just very recently, the early stage of drop spreading of shear thinning fluids is studied by two groups (Yada _et al._ (2023); Bouillant _et al._ (2022)). Both groups reported that the early stage of drop spreading (regardless of polymer concentration and molar mass) follows the same trend as low viscosity drop spreading (e.g. water drops). The considered time scale of experiments, in both cases are bellow inertia-capillary time (\(\tau_{ic}=\sqrt{\frac{\rho R^{3}}{\sigma}}\)) which is in the order of few milliseconds for millimetric drops.
Viscoelastic materials combine an elastic component and a viscous component in their properties. When a polymer solution is stretched, in the beginning only the elastic part contributes to the dynamics and after a characteristic time the viscous part part becomes relevant (Costanzo _et al._ (2016)). To observer the effect of viscoelasticity, the experimental time scale should be in order of viscoelastic time scale, i.e. polymer relaxation time. In a simple approach, the viscosity of polymer solutions can be described by the Cross-model (Gastone _et al._ (2014); Subbaraman _et al._ (1971); Cross (1965)). It is shown that when shear is applied to a viscoelastic material, it takes several times of the relaxation time of the sub chain to reach a steady state. The relaxation time depends on the polymer concentration or/and molar mass (number of entanglements) (Costanzo _et al._ (2016); Vereroudakis _et al._ (2023)).
\[\eta=\frac{\eta_{0}-\eta_{\infty}}{1+(\tau_{ve}\dot{\gamma})^{m}}+\eta_{\infty} \tag{2}\]
Here \(\dot{\gamma}\) is the shear rate, \(\tau_{ve}\) and \(m\) are fluid parameters (polymer relaxation time and rheological exponent), and \(\eta_{0}\) and \(\eta_{\infty}\) are zero and infinite shear rate viscosities respectively. By increasing the polymer concentration and/or polymer molar mass, the polymer relaxation time \(\tau_{ve}\) increases and the rheological exponent \(m\) decreases (see SI).
In this contribution, we follow the hypothesis that three time scales should be considered; the inertia-viscous (\(\tau_{iv}\)) cross over time, the inertia-capillary cross over time (\(\tau_{ic}\)) and polymer relaxation time (\(\tau_{ve}\)). To illustrate these time scales, we calculate them for water and an aqueous PEO solution (\(4\%(w/w)\) ) of the molar mass of (\(6\times 10^{5}(\frac{g}{mol})\)). For a millimetric water drop we get, \(\tau_{iv}\sim 15\,\mathrm{ms}\), \(\tau_{ic}\sim 3.7\,\mathrm{ms}\) and \(\tau_{ve}\sim 0\), and for the polymer solution \(\tau_{iv}\sim 2.8\,\mathrm{ms}\), \(\tau_{ic}\sim 4\,\mathrm{ms}\) and \(\tau_{ve}\sim 22\,\mathrm{ms}\). In this contribution, we study some effects of these changes in the order of the time scales and provide a simple model to rationalize our findings.
## 2 Experimental method
The droplet dispenser is set up so that the drop hangs from a needle and the substrate is lifted up gently touching the drop. After contact, the drop spreads immediately. The process is recorded in side view by a high-speed camera (FASTCAM Mini AX 200, Photron). The test section is illuminated by an LED lamp (SCHOTT KL 2500) along with a diffuser sheet to have homogeneous light in the background (Fig. 1).
Water (MicroPure UV/UF, Thermo Scientific Co.), glycerin (Sigma Aldrich co. 99%), and mixtures thereof as well as water-polyethylene oxide (PEO, Sigma Aldrich co.) solutions of various molar masses and concentrations are used as Newtonian and viscoelastic operating fluids, respectively. For aqueous PEO solution, from now on, the weight concentrations are shown as % and the molar mass of polymers are mentioned as \(k\) which represent \(10^{3}\frac{g}{mol}\). The sample names and viscosities at zero shear rate (zero shear viscosity) for each liquid are given in table 1. The surface tension \(\sigma\) of all samples is in the range of \(63\,\mathrm{mN}\,\mathrm{m}^{-1}\leqslant\sigma\leqslant 72\,\mathrm{mN}\, \mathrm{m}^{-1}\). To measure the flow curves, a commercial rheometer (MRC 502, Anton-Paar GmbH) is used. For all measurements a cone-plate geometry is used with diameter of \(50\) (\(mm\)) and cone angle of \(1^{\circ}\) and the gap of \(100\mu m\) (CP50-1). In the rheological experiments the temperature of the sample was kept constant at \(20(1)\,^{\circ}\)C. The rheological properties of each sample are given in the supplementary information. Two types of surface coatings are used to study the effect of contact angle. Cleaned glass substrates as hydrophilic (contact angle of water drop around \(15^{\circ}\)) and silanised glass substrates as hydrophobic substrates (contact angle of water drop around \(90^{\circ}\)) are used. Details of substrate preparation are given in the supplementary information. When preparing polymer solutions, it is crucial to
wait long enough for the polymer to dissolve homogeneously in the solution. This is illustrated by our rheology experiments. For high molar masses, we measured changes in the flow curves within the first month after preparing the sample (see supplementary information).
## 3 Spreading of viscoelastic and viscous Newtonian drops
### Hydrophilic substrates
In Fig. 2a), we plot the radius of the wetted area (\(r\)) normalized to the initial drop radius (\(R\)), the initial spreading of viscous Newtonian (water-glycerol mixture) and viscoelastic drops on a hydrophilic substrate are plotted, (Fig. 2a). One of the experimental challenges of plotting the spreading radius over time is to determine the time of the first contact. Three options have been suggested to overcome this problem, the simplest of which is to add a bottom view camera and capture the process from bottom (Eddi _et al._ (2013)). Plotting the contact line velocity against the dimensionless spreading radius (\(r/R\)) is another possible approach (Hartmann _et al._ (2021)). The advantage of this method is that it does not need
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Sample & \multicolumn{1}{c}{Molar mass (\(10^{3}\frac{g}{mol}\))} & \(\eta_{0}\) (mPa \(\cdot\) s) & Sample & \(\eta_{0}\) (mPa \(\cdot\) s) \\ Water + PEO (\(2\%,300k\)) & 300 & \(35\pm 0.5\) & Water + Glycerin (0\%) & \(0.93\pm 0.01\) \\ Water + PEO (\(3\%,300k\)) & 300 & \(101\pm 0.5\) & Water + Glycerin (72\%) & \(35\pm 0.5\) \\ Water + PEO (\(4\%,300k\)) & 300 & \(254\pm 0.5\) & Water + Glycerin (85\%) & \(98\pm 0.5\) \\ Water + PEO (\(2\%,600k\)) & 600 & \(103\pm 0.5\) & Water + Glycerin (91.5\%) & \(293\pm 0.5\) \\ Water + PEO (\(3\%,600k\)) & 600 & \(537\pm 0.5\) & Water + Glycerin (93.5\%) & \(389\pm 0.5\) \\ Water + PEO (\(4\%,600k\)) & 600 & \(1324\pm 0.5\) & Water + Glycerin (100\%) & \(1078\pm 0.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Composition of operating fluids and the zero-shear viscosity \(\eta_{0}\) (at 23 \({}^{\circ}\)C). The rheological properties of each sample are given in the supplementary information.
Figure 1: a) Sketch of the drop spreading setup, with high speed camera (1), light source and diffuser sheet (4), the drop and needle (2) and the adjustable solid substrate (3). Different stages of spreading are illustrated over time, (I) before contact between substrate and the drop (II) the substrate is gently coming up and the drop touches the substrate (the initial point of contact). (III) After contact and spreading of drop on the substrate with contact line velocity of \(u_{cl}\). b) The spreading dynamics of a milimetric drop over hydrophilic substrate for different times, the spreading radius \(r\) is shown.
the definition of zero time (see supplementary information). The third option is to define a fitting parameter as \(t_{0}\), this parameter can pop up in the fitting function \(r=B(t-t_{0})^{\alpha}\). In all our experiments, this parameter is of the order of a few frames \(t_{0}\sim 0.0001s\). In all plots the first four data points are omitted to be sure that the definition of the first contact time has no influence on the fit. Consequently, we fitted the drop spreading results from the fifth data point to the polymer relaxation time scale (\(\tau_{ve}\)).
The viscous Newtonian drop spreads like expected from Eq. (1) proportional to square root of time, \(r\sim\sqrt{t}\) however, aqueous PEO solutions spread with different spreading exponents (\(r=B\ t^{\alpha}\)). To illustrate this difference two samples with same zero-shear viscosities, here "Water + PEO (3%, 300k)" and "Water + Glycerin (85%)" (\(\eta_{0}\approx 100\) mPa s), same initial drop size (\(D\approx 4\) mm), density, surface tension (\(\sigma\approx 65\) mN m\({}^{-1}\)) have clearly different spreading exponents \(\alpha\) (Fig. 2a). Based on the present models, these drops should spread in a same manner, the major difference between two samples is the viscoelasticity of second drop. For the same prefactor in the power law, a smaller spreading exponent (\(\alpha\)) (for \(t\leqslant 0.01\) s) results in higher spreading rates.
To illustrate the effect of viscoelasticity, we use a typical flow curve of PEO solutions. The viscosity at low shear rates remains constant. Above a certain critical shear rate, the viscosity decays with increasing the shear rate. Such a behavior can be described by the cross-fluid model, Eq. (2). In contrast, water-glycerin mixtures show no evidence of shear thinning in our data, Fig. 2b. However, this may occur at even higher shear rates (Dontula _et al._ (1999)). The shear rate at which the viscosity differs from its zero shear value defines an internal relaxation time of the polymer solution, \(\tau_{ve}\) in Eq. (2).
Figure 2: a) Radius of wetted area (\(r\)) normalized by initial radius of drop (\(R\)) as a function of time, for water and PEO (3%, 300k and 3%, 600k) and water and glycerin (85%) on hydrophilic substrates. b) Flow curve for water and PEO (3%, 300k) solution and mixture of water and glycerin (85%). c) and d) Spreading exponent (\(\alpha\)) and spreading prefactor as a function of zero-shear viscosity \(\eta_{0}\) for all viscous Newtonian and viscoelastic liquids mentioned in table 1.
Our hypothesis is that the spreading dynamics depend on the rheological properties of operating fluid. The velocity of the contact line in this early regime of drop spreading is in the order of (m s\({}^{-1}\)). At 1 um from the contact line, the shear rate is estimated to be in the order of \(10^{6}\) s\({}^{-1}\). Most of the viscous dissipation occurs near the contact line (Bodziony _et al._ (2023)). At these high shear rates, the viscosity of polymer solutions decreases significantly. On the other hand, the viscosity of a Newtonian liquid remains more or less constant. This means that in the Newtonian case, the effective viscosity near the contact line is higher. Consequently, the dissipation is higher in this case, leading to a lower contact line velocity, which is in good agreement with our experimental results. This argument relies on the steady state viscosity of the liquid, i.e., when the polymer chains adapted to the shear rate and contribute to the dissipation in the polymer solution. For shorter times, the polymers cannot fully contribute to the dissipation in the polymer solution; the solution is not yet viscoelastic Costanzo _et al._ (2016); Vereroudakis _et al._ (2023).
To test our hypothesis, we measure the spreading exponent and the spreading prefactor over a wide range of zero-shear viscosities and rheological properties for viscous Newtonian and viscoelastic drops spreading on hydrophilic substrates, (Fig. 2c and d). The initial observations are verified by this systematic variation of the material parameters. For viscous Newtonian fluids, the spreading exponent is slightly decreasing with increasing zero-shear viscosity, \(\alpha\approx 0.5\) on the hydrophilic substrates. In contrast, for viscoelastic fluids, the spreading exponent is a strongly decreasing function of the zero-shear viscosity, (Fig. 2c). This difference is an indication of the dependence of the spreading exponent (\(\alpha\)) on the rheological properties. The trend of the spreading prefactor seems to depend only on the zero-shear viscosity but not on the rheological exponent (Fig. 2d). The spreading prefactor (for Newtonian and viscoelastic cases) roughly follows \(B\sim\eta_{0}{}^{-0.5}\) Fig. 2d), which was also observed previously (Carlson _et al._ (2011); Eddi _et al._ (2013)).
### Effect of substrate's wettability
Repeating the spreading experiments for Newtonian and viscoelastic liquids on hydrophobic substrates (i.e. \(\theta_{0}\simeq 90^{\circ}\)) reveals a number of important observations, Fig. 3 a. (i) On average, the spreading exponent (\(\alpha\)) decreases as the contact angle increases. (ii) The spreading exponent remains almost independent of zero-shear viscosity for Newtonian drops on hydrophobic substrates. (iii) Increasing the zero shear viscosity of viscoelastic liquids (i.e. increasing the concentration and/or molar mass of the polymer) reduces the exponent. The difference between the spreading of Newtonian and viscoelastic drops shows the same trend regardless of the hydrophobicity of the substrate. To summarize our experimental results, the spreading exponent is dependent on the viscoelasticity of drop and the prefactor is a function of viscosity. All of the developed models up to now cannot predict the effect of viscoelasticity, in the next section we present a simplistic model to predict this behavior.
## 4 Modeling
### Inviscid case
For low viscosity drops, increasing the hydrophobicity of the substrate (by suitable surface modification of the substrate), results in a decreasing spreading rate and exponent (Bird _et al._ (2008); Chen _et al._ (2013); Du _et al._ (2021)). This was explained in terms of a simple energy balance. This balance assumes that no energy is dissipated. In this approximation, the kinetic energy (left hand side of Eq. 4.1) is balanced by a combination of free surface energy and
wetting energy (right hand side of Eq. 4.1, Bird _et al._ (2008)).
\[\int_{V}\frac{1}{2}\rho u^{2}\mathrm{d}V=\sigma\left[A(0)-A(t)+\pi r(t)^{2}\cos( \theta_{0})\right]\,. \tag{4.1}\]
Here \(u\) is the velocity field inside the drop, \(\rho\) is the liquid density, \(A(t)\) and \(A(0)\) are the surface area of the liquid-vapor interface during the spreading and at the time zero and \(\theta_{0}\) is the contact angle at which the drop spreading would stop, i.e., the static advancing contact angle. Bird _et al._ (2008) solved the balance equation (by modeling the kinetic energy integral) and showed that the spreading is a function of the substrate's wettability (Eq. 4.2).
\[r(t)=c_{1}t^{\alpha}. \tag{4.2}\]
In this solution, the spreading exponent is \(\alpha=c_{2}\sqrt{F(\theta_{0})+\cos(\theta_{0})}\), where the unknown function \(F\) depends weakly on \(\theta_{0}\) (see Bird _et al._ (2008)). This means that as the contact angle increases, the spreading exponent decreases. Our experiments show the same behavior (Fig. 3 a).
### Including viscous dissipation
For viscous drops, viscous dissipation cannot be neglected. The Moffatt (1964) solution suggests that the dissipation near the contact line is in the order of \(\sim 2\pi\eta ru^{2}\). The dissipation rate is balanced by the rate of change of total energy. We add the dissipation term to the left hand side of the time derivative of Eq. 4.1. To solve the resulting equation (Eq. 4.3), we assume the viscous term to be small (\(\eta\rightarrow\epsilon\eta^{*}\)) and act as a perturbation term. From now on we drop all explicit mentioning of time as an argument.
\[2\pi\sigma\left\{\frac{1}{2}t\;[\dot{r}]^{2}+\frac{1}{2}t^{2}\;[\ddot{r}]-r\; \dot{r}\;[F(\theta_{0})+\cos(\theta_{0})]\right\}=-2\pi\epsilon\eta^{*}r\dot {r}^{2} \tag{4.3}\]
We rewrite Eq. 4.3 in dimensionless units (\(t=\tau_{ve}\;\dot{t}\), \(r=r^{*}\;\dot{r}\)). After simplification can be rewritten as Eq. 4.4. Here the elastocapillary number is the characteristic dimensionless number \(\mathrm{Ec}=\frac{\sigma\tau_{ve}}{\eta r}\). The elastocapillary number is the ratio between the capillary time scale and the polymer relaxation time scale. In our perturbation approach, it is convenient to have \(\mathrm{Ec}\) on the right side. We actually use the inverse of \(\mathrm{Ec}\), as \(\mathrm{Ec}^{*}=\frac{\eta^{*}r^{*}}{\sigma\tau_{ve}}=\frac{1}{\mathrm{Ec}}\). Taking typical values for viscosity of the liquids (\(\eta^{*}\to 100\,\mathrm{mPa\,s}\)), the length scale as initial drop radius (\(r^{*}\to 2\,\mathrm{mm}\)) and the time scale as the highest relaxation time of polymer solution (\(\tau_{ve}\to 22\,\mathrm{ms}\)), we get \(\mathrm{Ec}^{*}\approx 0.12\).
\[\left\{\frac{1}{2}t\;\dot{r}^{2}+\frac{1}{2}t^{2}\;\ddot{r}-r\;\dot{r}\;[F( \theta_{0})+\cos(\theta_{0})]\right\}=-\epsilon\;\mathrm{Ec}^{*}r\;\dot{r}^{2} \tag{4.4}\]
To address the viscoelastic case, we express the viscosity in Eq. 4.3 as \(\eta\rightarrow\frac{\eta_{0}}{\dot{\gamma}^{m}}\rightarrow\epsilon\frac{\eta _{0}^{*}}{\dot{\gamma}^{m}}\). Note that \(\frac{\eta_{0}^{*}}{\dot{\gamma}^{m}}\) has the dimensions of a viscosity. Since mainly the high shear region close to the contact line contributes to the viscous dissipation, only the high shear-rate viscosity is considered here. By estimating the shear rate as a function of the contact line velocity as \(\dot{\gamma}\simeq\frac{u_{c}t}{\dot{r}}\) (\(d^{*}\) is the distance to the contact line), the viscous dissipation can be written as \(\simeq\eta^{*}ru^{2-m}\). With this assumption, the force balance equation (Eq. 4.3) can be rewritten:
\[\left\{\frac{1}{2}t\;\dot{r}^{2}+\frac{1}{2}t^{2}\;\ddot{r}-r\;\dot{r}\;[F( \theta_{0})+\cos(\theta_{0})]\right\}=-\epsilon\;\mathrm{Ec}_{0}^{*}r\;\dot{r} ^{2-m} \tag{4.5}\]
We use Mathematica (Wolfram Alpha Co. Version 10) to solve Eq. 4.4 and 4.5 numerically,
see SI for details. We start exploring the effect of Newtonian and viscoelastic viscosity from the inviscid case discussed in (Bird _et al._ (2008)), for example we consider the case that the drop spreading follows: \(r(t)=0.02t^{0.5}\),, in our dimensionless units. The obtained solution is not exactly a power law, but close to it. We fitted the numerical results by a simple power law (\(B^{{}^{\prime}}t^{{}^{\prime}}\)), where \(B^{{}^{\prime}}\) and \(\alpha^{{}^{\prime}}\) are the effective prefactor and exponent, respectively (see SI). The theoretical exponents are plotted in Fig. 3 b as a function of \(\epsilon\) and \(m\). We consider the elastocapillary numbers are equal in both cases, since in experimental part we compare the drops with same zero shear viscosity. These results show that in viscoelastic case, the exponent decreases stronger than in the Newtonian case, by increasing the \(m\) value (i.e. increasing the viscoelasticity of samples). This agrees with the experimental observation, (Fig. 3 a). We should mention that the theoretical exponent just maps very first data points of experimental results (\(\max(\eta)\to 10\,\mathrm{mPa\,s}\)), see SI.
In summary, this simple perturbation analysis confirms the major tendencies observed in the experiments: i) Adding viscoelasticity to the system in terms of a shear-rate dependent viscosity, the effective exponent (\(\alpha^{{}^{\prime}}\)) decreases. ii) Increasing the viscous dissipation (i.e., the perturbation term), the prefactor decreases. This simple proposed model shows the key features of the experimental tendencies.
## 5 Conclusion
The early stage spreading of Newtonian and viscoelastic fluids on hydrophilic and hydrophobic substrates has been studied. Generally speaking, viscoelastic drops spread faster compared to the Newtonian case with the same physical properties (zero shear rate viscosity and surface tension). This difference can be justified by the fact that near the contact line, the shear rate is extremely high. This leads to decreasing the effective viscosity which is not the case for Newtonian liquids. To be able to observe the viscoelastic effect, the experimental time scale should be in order of internal relaxation time of the used polymer solution or longer. These experimental observations can be supported by a simple perturbation model. The results also confirm the dependency of the spreading exponent to the wettability of the substrate.
Figure 3: a) Experimental spreading exponent (\(\alpha\)) as a function of the zero-shear viscosity \(\eta_{0}\) for viscous Newtonian and viscoelastic liquids on hydrophobic and hydrophilic substrates. b)The theoretically predicted effective exponent(\(\alpha^{{}^{\prime}}\)) versus the different effective viscosity \(\epsilon\mathrm{Ec}^{*}\) values, for viscous Newtonian and viscoelastic fluids (\(m=0.25\) and \(m=0.5\)).
## 6 Funding
This study was funded by the Deutsche Forschungsgemeinschaft Project No. 265191195-SFB 1194, "Interaction between Transport and Wetting Processes" and the Deutsche Forschungsgemeinschaft (DFG) Project No. 422852551, within the priority program SPP 2171.
|
2308.03069 | On ideals in quantales -- I | Taking a ring-theoretic perspective as our motivation, the main aim of this
series is to establish a comprehensive theory of ideals in commutative
quantales with an identity element. This particular article focuses on an
examination of several key properties related to ideals in quantale, including
prime, semiprime, radical, primary, irreducible, and strongly irreducible
ideals. Furthermore, we investigate the primary decomposition problem for
quantale ideals. In conclusion, we present a set of future directions for
further exploration, serving as a natural continuation of this article. | Amartya Goswami | 2023-08-06T09:40:32Z | http://arxiv.org/abs/2308.03069v1 | # On Ideals in Quantales, I
###### Abstract
Taking a ring-theoretic perspective as our motivation, the main aim of this series is to establish a comprehensive theory of ideals in commutative quantales with an identity element. This particular article focuses on an examination of several key properties related to ideals in quantales, including prime, semiprime, radical, primary, irreducible, and strongly irreducible ideals. Furthermore, we investigate the primary decomposition problem for quantale ideals. In conclusion, we present a set of future directions for further exploration, serving as a natural continuation of this article.
Key words and phrases:Quantale; Multiplicative lattice; Semiprime ideal; Primary decomposition; Strongly irreducible ideal. 2020 Mathematics Subject Classification: 06F07; 13A15; 06B23
###### Contents
* 1 Introduction
* 2.1 Operations on ideals
* 2.2 Prime and semiprime ideals.
* 2.3 Primary ideals and primary decompositions
* 2.4 Irreducible and strongly irreducible ideals
* 2.5 Proof of Theorem 1.1
* 3 Proof of Theorem 1.2
* 4 Proof of Theorem 1.3
* 5 Proof of Theorem 1.4
* 6 Proof of Theorem 1.5
* 7 Proof of Theorem 1.6
* 8 Proof of Theorem 1.7
* 9 Proof of Theorem 1.8
* 10 Proof of Theorem 1.9
* 11 Proof of Theorem 1.10
* 12 Proof of Theorem 1.111
## 1 Introduction
As pointed out in [36], a quantale can be regarded as a special case within various algebraic and categorical perspectives. Here are a few examples:
* complete multiplicative lattices,
* complete residuated lattices,
* semirings with infinitary sums,
* thin closed monoidal categories,
* monoids in complete semilattices.
For the purpose of developing an ideal theory of quantales, we will adopt the first approach mentioned above. Ideal theory in an algebraic structure provides a powerful framework for understanding the structural properties and behaviour of ideals within the algebraic system. It focuses on the investigation of properties and characteristics of ideals in algebraic structures, such as their generation, factorization, and intersection. It explores the interplay between ideals and other fundamental algebraic concepts, such as modules, prime ideals, and quotient structures. One of the primary goals of ideal theory is to establish connections between ideals and the overall structure of the algebraic system in which they reside.
Although abstract ideal theory has been extensively studied (see, e.g., [1, 3, 4, 11, 12, 44, 45]) for various classes of lattices by introducing different types of elements (replacing various types of ideals), there is a notable absence of detailed study on semiprime, primary, strongly irreducible, _etc._, types
of elements in the context of multiplicative lattices 1. It is worth noting that in a lattice without a multiplication operation, the notions of prime and strongly irreducible elements coincide. However, in the context of rings, these two types of ideals are distinct from each other.
Footnote 1: however, definitions of semiprime and primary ideals have been mentioned in [46, Definition 2.6] (see also [37, Definition 2.2]) in order to in order to introduce rough semiprime and rough primary ideals in a quantale. Also, a definition of prime ideal and completely prime ideal has been introduced in [10, Definition 3 and Definition 4] to show when they coincide.
Surprisingly, despite the wealth of research on ideal theory, we have not come across a well-defined notion of an ideal in the context of quantales. Motivated by this observation, the primary objective of this series of papers is to explore the extent to which the ideal theory of rings can be extended to quantales. In many instances, the reader find that the proofs of various properties for different classes of ideals in rings can be adapted almost identically for ideals in quantales. However, we shall present complete proofs for all results unless they are trivial. This approach serves two main purposes:
Firstly, providing comprehensive proofs contribute to the understanding and advancement of this field of study. By presenting detailed arguments, we aim to enhance the readers' comprehension and facilitate further developments in this area.
Secondly, we intend to showcase the reliance on multiplicative ideal theory principles by explicitly demonstrating the application of these principles in quantale analysis. Through the presentation of proofs, we aim to emphasize the significance of ideal-theoretic aspects in the study of quantales.
Here is an explanation of the terminology and notation used in this paper. The symbol \(\mathds{N}\) represents the set of natural numbers, _i.e._, \(1,2,3,\ldots\). When we have a set \(X\) and \(S\) is a subset of \(X\), we denote the complement of \(S\) with respect to \(X\) as \(X\neg S=\{x\in X\mid x\notin S\}\). The empty set is denoted by \(\emptyset\), while the top and bottom elements of a lattice \((\mathcal{L},\preccurlyeq,\vee,\wedge)\) are represented as \(\top\) and \(\bot\), respectively. For any element \(x\in\mathcal{L}\) with \(x\neq\bot\), the notation \(nx\) signifies the repeated join (supremum) of \(x\) with itself \(n\) times, _i.e._, \(x\vee\cdots\lor x\). Throughout the paper, the default ordering relation for the poset of subsets of a set be the set inclusion relation, denoted by \(\preccurlyeq\).
The paper is organized as follows. In SS2.1, we provide an introduction to the concept of ideals in quantales and explore the construction of new ideals using operations between them. We examine the general properties of ideals and investigate their behavior under quantale homomorphisms. Additionally, we briefly discuss some key results pertaining to maximal ideals in quantales.
In SS2.2, we study prime and semiprime ideals in quantales. We examine the notion of radicals of ideals and establish an equivalence between semiprime ideals and radical ideals in a quantale. Throughout this analysis, the significance of multiplicatively closed subsets becomes apparent as they play a crucial role in the development of the theory.
Moving forward, SS2.3 focuses on primary ideals in quantales and primary decompositions. We explore the properties of primary ideals and investigate their decompositions. This section sheds light on the fundamental aspects of primary ideals within the context of quantales.
Lastly, in SS2.4, we delve into the discussion of irreducible and strongly irreducible ideals in quantales. We present a representation theorem, stating that any ideal in a Noetherian quantale can be expressed as an intersection of a finite number of irreducible ideals. Furthermore, we partially characterize arithmetic quantales by examining their strongly irreducible ideals.
## 2 Ideals
### Operations on ideals
Quantales can be seen as a generalization of rings, where the underlying additive abelian groups are replaced by sup-lattices and multiplication part as a semigroup. Since in our
investigation, we aim to extend the ideal theory associated with commutative rings with identity, our focus shall be on a specific type of quantales that align with this goal. For a more comprehensive exploration of the algebraic aspects of quantales in general, we recommend [41].
**Definition 2.1.1**.: An _unital commutative quantale_ is a complete lattice \((\mathcal{Q},\preccurlyeq,\bot,\top)\) endowed with an operation \(\&\), satisfying the following axioms:
1. \(x\&(y\&z)=(x\&y)\&z\),
2. \(x\&y=y\&x\),
3. \(x\&(\bigvee_{\lambda\in\Lambda}y_{\lambda})=\bigvee_{\lambda\in\Lambda}(x\&y _{\lambda})\),
4. \(x\&\top=x\),
for all \(x\), \(y\), \(y_{\lambda}\), \(z\in\mathcal{Q}\), and for all \(\lambda\in\Lambda\), where \(\Lambda\) is an index set.
Throughout this paper, unless otherwise stated, by a "quantale," we shall always refer to a commutative quantale with the top element \(\top\) as the identity with respect to \(\&\). We shall also use the notation \(x^{n}\) to denote \(x\&\cdots\&x\) (repeated \(n\) times).
**Remark 2.1.1**.: The aforementioned Definition 2.1.1 was first introduced in [45] (also see [11]), where the authors referred to it as a _multiplicative lattice_.
Unital commutative quantales offer numerous examples that illustrate their versatility and applicability. Below, we provide a few of them.
1. The quantale of non-negative real numbers with the operation of maximum (supremum) and the identity element \(0\). This quantale is generally denoted as \(([0,\infty],\max,0)\).
2. The quantale \((\mathcal{P}(X),\cup,X)\) of non-empty subsets of a set \(X\), ordered by inclusion and equipped with the operation of set union as the multiplication and the full set \(X\) as the identity element.
3. The quantale \((\mathcal{C}(X),\vee,0)\) of continuous real-valued functions on a topological space \(X\), ordered pointwise and equipped with the operation of pointwise maximum (supremum) as the multiplication and the constant function \(0\) as the identity element.
4. The quantale \((\mathcal{I}(R),\cap,0)\) of ideals in a commutative unital ring \(R\), ordered by inclusion and equipped with the operation of ideal intersection as the multiplication and the zero ideal \(0\) as the identity element.
In the following lemma, we compile a number of elementary results on quantales that be utilized in sequel.
**Lemma 2.1.2**.: _Let \(\mathcal{Q}\) be a quantale. Then the following hold._
1. \(x\&y\preccurlyeq x\wedge y\)_, for all_ \(x,y\in\mathcal{Q}\)_._
2. _If_ \(x\preccurlyeq y\)_, then_ \(x\&xz\preccurlyeq y\&z\)_, for all_ \(z\in\mathcal{Q}\)_._
3. _If_ \(x\preccurlyeq y\) _and_ \(u\preccurlyeq v\)_, then_ \(x\&u\preccurlyeq y\&v\)_, for all_ \(x,\,y,\,u,\,v\in\mathcal{Q}\)_._
4. _If_ \(n\in\mathbb{N}\) _and if_ \(x,\,y\in\mathcal{Q}\)_, then_ (2.1) \[(x\lor y)^{n}=\bigvee_{k=0}^{n}\binom{n}{k}a^{n-k}\&b^{k},\qquad\text{where, }\ \binom{n}{k}=\frac{n!}{k!(n-k)!}.\]
Proof.: (1) Notice that \(x=x\&\top=x\&(y\vee\top)=(x\&y)\lor x\), and this implies \(x\&y\preccurlyeq x\). Similarly, we have \(x\&y\preccurlyeq y\), and hence \(x\&y\preccurlyeq x\wedge y\).
(2) Since \(x\preccurlyeq y\), we have \(y\&z=(x\lor y)\&z=(x\&z)\vee(y\&z)\), which implies that \(x\&z\preccurlyeq y\&z\).
(3) Since \(x\preccurlyeq y\) and \(u\preccurlyeq v\), by (2), we have \(x\&u\preccurlyeq y\&u\) and \(y\&u\preccurlyeq y\&v\). From these, we obtain the claim.
(4) For \(n=1\), the formula (2.1) is trivially true. Suppose the formula is true for \(n=k\). Using the conditions of Definition 2.1.1, it is a routine exercise to check that the formula is also true for \(n=k+1\). Hence by induction we have the desired claim.
We shall now present the definition of the central concept in this paper, which pertains to the notion of ideals in a quantale.
**Definition 2.1.2**.: A nonempty subset \(I\) in \(\mathcal{Q}\) is called an _ideal_, if for all \(x\), \(y\), \(l\in\mathcal{Q}\),
1. \(x\), \(y\in I\) implies that \(x\lor y\in I\), and
2. \(x\in I\) and \(l\preccurlyeq x\) implies that \(l\in I\).
We shall denote the set of all ideals in \(\mathcal{Q}\) by \(\mathcal{I}_{\mathcal{Q}}\), and by \(0\), the zero ideal in \(\mathcal{Q}\).
**Remark 2.1.3**.: One may immediately notice that this definition does not involve an any axiom related to the operation \(\oplus\). If \(l\preccurlyeq x\), where \(l\in\mathcal{Q}\) and \(x\in I\), then according to Lemma 2.1.2(1), we can deduce that \(l\&x\in I\). This implies that in the context of rings, the property of an ideal mentioned above is automatically satisfied in a quantale, whereas it needs to be explicitly defined as an axiom. In simpler terms, the definitions of an ideal in a complete lattice and in a quantale are identical.
Note that a definition of an ideal in a (commutative) quantale without identity requires the following additional axiom:
\[\text{If }\,l\preccurlyeq x\text{, where }l\in\mathcal{Q}\text{ and }x\in I\text{, and }l\&x\in I\text{,} \tag{2.2}\]
(see [46, Definition 2.2] and [10, Definition 2(2)]). This is because we have obtained (2.2) using the fact that \(x\&y\preccurlyeq x\wedge y\) holds for all \(x,y\in\mathcal{Q}\), which we proved assuming the existence of an identity element in \(\mathcal{Q}\) (see Lemma 2.1.2(1)).
Based on Definition 2.1.2, we shall now introduce a set of operations on ideals in a quantale. These operations play a fundamental role in the development of our theory.
**Definition 2.1.3**.: Let \(\mathcal{Q}\) be a quantale.
1. The _meet_\(\bigwedge\limits_{\lambda\in\Lambda}I_{\lambda}\) of ideals \(\{I\}_{\lambda\in\Lambda}\) in \(\mathcal{Q}\) is given by their intersection.
2. The _join_ of ideals \(\{I\}_{\lambda\in\Lambda}\) of \(\mathcal{Q}\) is defined by \[\bigvee\limits_{\lambda\in\Lambda}I_{\lambda}=\left\{l\in\mathcal{Q}\mid l \preccurlyeq\bigvee\limits_{\text{finite}}x_{\lambda}\text{ for }x_{\lambda}\in I_{\lambda}\right\}.\]
3. The _product_ of two ideals \(I\) and \(J\) in \(\mathcal{Q}\) is defined as \[I\&J=\left\{l\in\mathcal{Q}\mid l\preccurlyeq\bigvee\limits_{\text{finite}}\{x \&y\mid x\in I,y\in J\}\right\}.\]
4. The _residual_ of an ideal \(I\) by an ideal \(J\) is defined by \[(I:J)=\{x\in\mathcal{Q}\mid x\&J\preccurlyeq I\}=\{x\in\mathcal{Q}\mid x\&j \in I\text{, for all }j\in J\}.\]
Through the operations defined above, we can establish that the resulting entities are indeed ideals. This be demonstrated in the following lemma.
**Lemma 2.1.4**.: _Let \(\mathcal{Q}\) be a quantale. If \(I\) and \(J\) are ideals in \(\mathcal{Q}\), then so are_ (a)_\(I\wedge J\),_ (b)_\(I\lor J\),_ (c)_\(I\&J\), and_ (d)_\((I:J)\)._
Proof.: (a) Since \(I\) and \(J\) are ideals in \(\mathcal{Q}\), if \(x\), \(y\in I\wedge J\), then \(x\lor y\in I\wedge J\). Moreover, if \(z\in I\wedge J\) then for any \(x\leqslant z\), \(x\in I\wedge J\). Thus \(I\wedge J\) is an ideal.
(b) To show \(I\lor J\) is an ideal, let \(l,l^{\prime}\in I\lor J\). This implies that \(l\preccurlyeq x\lor y\) and \(l^{\prime}\preccurlyeq x^{\prime}\lor y^{\prime}\) for some \(x\), \(x^{\prime}\in I\) and \(y\), \(y^{\prime}\in J\). Therefore,
\[l\lor l^{\prime}\preccurlyeq(x\lor y)\lor(x^{\prime}\lor y^{\prime})=(x\lor x ^{\prime})\lor(y\lor y^{\prime})\in I\lor J.\]
Also, if \(l\in I\lor J\) and \(l^{\prime}\preccurlyeq l\), then \(l^{\prime}\preccurlyeq l\preccurlyeq x\lor y\) for some \(x\in I\) and \(y\in J\). This implies that \(l^{\prime}\in I\lor J\).
(c) The proof of \(I\&J\) is an ideal is similar to (b).
(d) Finally, to show \((I:J)\) is an ideal, let \(l\), \(l^{\prime}\in(I:J)\). Then \(l\&J\preccurlyeq I\) and \(l^{\prime}\&J\preccurlyeq I\). This by Definition 2.1.1(3) implies that
\[(l\lor l^{\prime})\&J=(l\&J)\lor(l^{\prime}\&J)\preccurlyeq I.\]
Suppose \(l\in(I:J)\) and \(l^{\prime}\preccurlyeq l\). Then by Lemma 2.1.2(2), we have \(l^{\prime}\&j\preccurlyeq l\&j\) for all \(j\in J\). Hence \(l^{\prime}\&J\preccurlyeq l\&J\preccurlyeq I\). In other words, \(l^{\prime}\in(I:J)\) as required.
**Definition 2.1.4**.: _If \(S\) is a nonempty subset in a \(\mathcal{Q}\), then the ideal generated by \(S\) is defined by_
\[\langle S\rangle=\left\{x\in\mathcal{Q}\mid x\preccurlyeq\bigvee_{i=1}^{n}a_{ i},\,\text{for some}\;n\in\mathbb{N}\;\text{and}\;a_{i}\in\mathcal{Q}\&S\right\}.\]
In particular, if \(S=\{s\}\), then the ideal \(\langle\{s\}\rangle\) is called principal. We shall write \(\langle s\rangle\) for \(\langle\{s\}\rangle\). The following lemma is going to be useful in the sequel.
**Lemma 2.1.5**.: _Let \(\mathcal{Q}\) be a quantale. If \(S\) and \(T\) are nonempty subsets of a quantale \(\mathcal{Q}\), then \(\langle S\wedge T\rangle=\langle S\rangle\wedge\langle T\rangle\)._
Proof.: Since \(S\wedge T\preccurlyeq S\) and \(S\wedge T\preccurlyeq T\), we immediately have \(\langle S\wedge T\rangle\preccurlyeq\langle S\rangle\wedge\langle T\rangle\). For teh converse, let \(x\in\langle S\rangle\wedge\langle T\rangle\). Then \(x\preccurlyeq\bigvee_{i=1}^{n}a_{i}\) and \(x\preccurlyeq\bigvee_{j=1}^{m}b_{i}\), for some \(a_{i}\in\mathcal{Q}\&S\) and \(b_{j}\in\mathcal{Q}\&T\), where \(1\leqslant i\leqslant n\) and \(1\leqslant j\leqslant m\). This implies that \(x\preccurlyeq\left(\bigvee_{i=1}^{n}a_{i}\right)\wedge\left(\bigvee_{j=1}^{m}b_ {i}\right),\)_i.e._, \(x\in\langle S\wedge T\rangle\).
The algebraic manipulation of ideals in a quantale closely resembles that of a commutative ring. We can draw analogies between the following ideal-theoretic relations and their counterparts in the elementwise setting in a multiplicative lattice (see [45, 11]).
**Proposition 2.1.6**.: _Let \(I\), \(\{I_{\lambda}\}_{\lambda\in\Lambda}\), \(J\), \(\{J_{\lambda}\}_{\lambda\in\Lambda}\), and \(K\) be ideals in a quantale \(\mathcal{Q}\). Then the following hold._
1. \(I\&(J\&K)=(I\&J)\&K\)_._
2. \(I\&J=J\&I\)_._
3. \(\mathcal{Q}\&I=I\)_._
4. \(0\&I=0\)
_._
5. \(I\&\left(\bigvee_{\lambda\in\Lambda}J_{\lambda}\right)=\bigvee_{\lambda\in\Lambda }(I\&J_{\lambda})\)_._
6. \(I\&J\preccurlyeq I\wedge J\)_._
7. \(I\&(J\wedge K)\preccurlyeq(I\&J)\wedge(I\&K)\)_._
8. \((I\lor J)\&(J\lor K)\preccurlyeq(I\&J)\lor K\)_._
9. _If_ \(I\lor K=J\lor K=\mathcal{Q}\)_, then_ \(((I\&J)\lor K)=\mathcal{Q}\)_._
10. _If_ \(I\lor K=\mathcal{Q}\)_, then_ \((I\wedge J)\lor K=J\lor K\)_._
11. \((I:J)\&J\preccurlyeq I\)_._
12. \(I\preccurlyeq(I:J)\)_._
13. \(J\preccurlyeq I\) _if and only if_ \((I:J)=\mathcal{Q}\)_._
14. \((I:\mathcal{Q})=I\)_._
15. \(I\preccurlyeq((I\&J):J)\)_._
16. \(\left(\bigwedge_{\lambda\in\Lambda}I_{\lambda}:J\right)=\bigwedge_{\lambda\in \Lambda}(I_{\lambda}:J)\)_._
17. \(\bigwedge_{\lambda\in\Lambda}(I:J_{\lambda})\leqslant\left(I:\bigvee_{\lambda \in\Lambda}J_{\lambda}\right)\)_._
18. \(((I:J):K)=(I:(J\&K))\)_._
19. \((I:J)=(I:(I\lor J))\)_._
20. \((I:J)=((I\wedge J):J)\)_._
Proof.: (1) Follows from Definition 2.1.1(1).
(2) Follows from Definition 2.1.1(2).
(3) From Remark 2.1.3, it follows that \(\mathcal{Q}\&I\preccurlyeq I\). Indeed, \(x\in\mathcal{Q}\), \(i\in I\) implies \(x\&i\preccurlyeq x\wedge i\preccurlyeq i\in I\). By Definition 2.1.1(4), \(i=\mathbb{T}\&i\preccurlyeq Q\&I\) for all \(i\in I\), proving that \(I\preccurlyeq\mathcal{Q}\&I\).
(4) Let \(i\in I\). Then \(\bot\&i\preccurlyeq\bot\wedge i=\bot\preccurlyeq i\) implies that \(\bot\&i\in I\), and hence, \(0\&I\preccurlyeq I\). Since \(\bot\preccurlyeq x\) for all \(x\in\mathcal{Q}\), in particular, \(\bot\preccurlyeq\bot\&i\) for all \(i\in I\), showing that \(0\preccurlyeq 0\&I\).
(5) Follows from Definition 2.1.1(3).
(6) Follows from Remark 2.1.3.
(7) Since \(J\wedge K\preccurlyeq J\) and \(J\wedge K\preccurlyeq K\), by Lemma 2.1.2(2), we obtain \(I\&(J\wedge K)\preccurlyeq(I\&J)\wedge(I\&K)\).
(8) For all \(i\in I\), \(j\in J\), and \(k\in K\), applying Definition 2.1.1(3), we have
\[(i\lor k)\&(j\lor k)=(i\&j)\vee(i\lor j\lor k)\&k\in(I\&J)\lor K.\]
(9) From (8), it follows that \(\mathcal{Q}\preccurlyeq((I\&J)\lor K)\), whereas the other inclusion is trivial.
(10) Since \(I\wedge J\preccurlyeq J\), we have \((I\wedge J)\lor K\preccurlyeq J\lor K\). For the other half of the inclusion, we have
\[J\lor K =(I\lor K)\&(J\lor K)\] \[=I\&(J\lor K)\lor K\&(J\lor K)\] \[\preccurlyeq I\&(J\lor K)\lor K\] \[=(I\&J)\vee((I\&K)\lor K)\] \[=(I\&J)\lor K\] \[\preccurlyeq(I\wedge J)\lor K.\]
(11) If \(x\in(I:J)\&J\), then for some finite set \(\{1,\ldots,n\}\), we have \(x\prec\bigvee\limits_{i=1}^{n}l_{i}\&j_{i}\in J\), where \(l_{i}\in\mathcal{Q}\) and \(j_{i}\in J\).
(12) If \(i\in I\), then by Lemma 2.1.2(2), \(i\&J\preccurlyeq I\&J\), and since \(I\) is an ideal, by Remark 2.1.3, \(i\&J\preccurlyeq I\).
(13) If \(J\preccurlyeq I\) then by applying (3),we have \(\mathcal{Q}\&J=J\preccurlyeq I\). Hence \((I:J)=\mathcal{Q}\). Conversely, if \((I:J)=\mathcal{Q}\), then by Definition 2.1.1(4), \(\top\&j=j\in I\) for all \(j\in J\).
(14) Follows from (3).
(15) Follows by Lemma 2.1.2(2).
(16) The desired identity follows from the following chain of equivalent statements:
\[l\in\left(\bigwedge_{\lambda\in\Lambda}I_{\lambda}:J\right)\Leftrightarrow l \&J\prec\bigwedge_{\lambda\in\Lambda}I_{\lambda}\Leftrightarrow l\&J \preccurlyeq I_{\lambda},\forall\lambda\in\Lambda\Leftrightarrow l\in \bigwedge_{\lambda\in\Lambda}(I_{\lambda}:J).\]
(17) Note that
\[l\in\bigwedge_{\lambda\in\Lambda}(I:J_{\lambda})\Rightarrow l\&J_{\lambda} \subseteq I,\forall\lambda\in\Lambda\Rightarrow\bigvee_{\lambda\in\Lambda}l \&J_{\lambda}\subseteq I\Rightarrow l\&\left(\bigvee_{\lambda\in\Lambda}J_{ \lambda}\right)\subseteq I\Rightarrow l\in(I:\bigvee_{\lambda\in\Lambda}J_{ \lambda}).\]
(18) Observe that \(l\in((I:J):K)\Leftrightarrow l\&K\preccurlyeq(I:J)\Leftrightarrow(l\&K)\&J \preccurlyeq I\Leftrightarrow l\&((J\&K))\subseteq I\Leftrightarrow l\in(I:J \&K)\).
(19) Note that
\[x\in(I:J)\Rightarrow x\&J\preccurlyeq I\Rightarrow x\&I\lor x\&J\preccurlyeq x \&I\lor I\Rightarrow x\&(I\lor J)\preccurlyeq I\Rightarrow x\in(I:(I\lor J )),\]
and for the other half of the inclusion,
\[x\in(I:(I\lor J))\Rightarrow x\&(I\lor J)\preccurlyeq I\Rightarrow x\&I\lor x \&J\preccurlyeq I\Rightarrow x\&J\preccurlyeq I.\]
(20) Finally, we have \(x\in(I:J)\Leftrightarrow x\&J\preccurlyeq I\Leftrightarrow x\&J\preccurlyeq I \wedge J\Leftrightarrow x\in((I\wedge J):J)\).
The well-known fact is that the set of ideals in a commutative ring with identity forms a quantale, following the definition provided in Definition 2.1.1. Now, we shall proceed to demonstrate in the following theorem that the set of ideals in a quantale possesses an analogous structure.
Theorem 2.1.7.: _If \(\mathcal{Q}\) is a quantale, then \((\mathcal{I}_{\mathcal{Q}},\preccurlyeq,\vee,\wedge,\bot,\top,\&)\) is a quantale._
Proof.: By (1) and (2) of Definition 2.1.3, it follows that arbitrary joins and arbitrary meets exist in the poset \((\mathcal{I}_{\mathcal{Q}},\preccurlyeq)\), where \(\preccurlyeq\) is the usual subset inclusion relation. It is easy to see that the bottom element \(\bot\) and the top element \(\top\) of the poset \((\mathcal{I}_{\mathcal{Q}},\preccurlyeq)\) are respectively \(0\) (the zero ideal) and \(\mathcal{Q}\). Hence \((\mathcal{I}_{\mathcal{Q}},\preccurlyeq)\) is a complete lattice. Taking the multiplication operation between ideals in \(\mathcal{Q}\) as in Definition 2.1.3(3), it follows respectively from (1), (2), (5), and (3) of Proposition 2.1.6 that the operation \(\&\) satisfies Axioms (1), (2), (3), and (4) of Definition 2.1.1.
Similar to rings, the concept of an annihilator of a subset in a quantale is also established as a special case of the residual operation.
Definition 2.1.5.: If \(S\) is a nonempty subset in a quantale \(\mathcal{Q}\), then the _annihilator of \(S\)_ is defined by
\[\mathcal{A}_{\mathcal{Q}}(S)=\{x\in\mathcal{Q}\,|\,x\&s=\bot,\,\text{for all}\,s\in S\}.\]
In the following lemma, we discuss a few elementary properties of annihilators.
Lemma 2.1.8.: _Let \(S\) and \(T\) be subsets in a quantale \(\mathcal{Q}\). Then the following hold._
1. \(S\preccurlyeq T\) _implies that_ \(\mathcal{A}_{\mathcal{Q}}(T)\preccurlyeq\mathcal{A}_{\mathcal{Q}}(S)\)
2. \(S\preccurlyeq\mathcal{A}_{\mathcal{Q}}(\mathcal{A}_{\mathcal{Q}}(S))\).
3. \(\mathcal{A}_{\mathcal{Q}}(S)=\mathcal{A}_{\mathcal{Q}}(\mathcal{A}_{\mathcal{Q} }(\mathcal{A}_{\mathcal{Q}}(S)))\).
Proof.: -- (1) If \(x\in\mathcal{A}_{\mathcal{Q}}(T)\), then \(x\&ct=\bot\) for all \(t\in T\). Since \(S\preccurlyeq T\), this implies that \(x\&s=\bot\) for all \(s\in S\). Hence \(\mathcal{A}_{\mathcal{Q}}(T)\preccurlyeq\mathcal{A}_{\mathcal{Q}}(S)\).
(2) If \(s\in S\), then \(x\&s=s\&x=\bot\) for all \(x\in\mathcal{A}_{\mathcal{Q}}(S)\), and hence \(s\in\mathcal{A}_{\mathcal{Q}}(\mathcal{A}_{\mathcal{Q}}(S))\).
(3) Since by (2), \(S\preccurlyeq\mathcal{A}_{\mathcal{Q}}(\mathcal{A}_{\mathcal{Q}}(S))\), by (1), it follows that \(\mathcal{A}_{\mathcal{Q}}(\mathcal{A}_{\mathcal{Q}}(\mathcal{A}_{\mathcal{Q} }(S)))\preccurlyeq\mathcal{A}_{\mathcal{Q}}(S)\). By (2), it follows that \(\mathcal{A}_{\mathcal{Q}}(S)\preccurlyeq\mathcal{A}_{\mathcal{Q}}(\mathcal{A}_ {\mathcal{Q}}(\mathcal{A}_{\mathcal{Q}}(S)))\).
**Definition 2.1.6**.: Let \(\mathcal{Q}\) be a quantale.
1. An ideal \(I\) of \(\mathcal{Q}\) is called _proper_ if \(I\neq\mathcal{Q}\), and we shall denote the set of all proper ideals in \(\mathcal{Q}\) by \(\mathcal{I}_{\mathcal{Q}}^{+}\).
2. A proper ideal \(M\) of \(\mathcal{Q}\) is called _maximal_ if there is no other proper ideal \(I\) of \(\mathcal{Q}\) properly containing \(M\). A quantale \(\mathcal{Q}\) with exactly one maximal ideal is called _local_.
3. An ideal \(I\) of \(\mathcal{Q}\) is called _minimal_ if \(I\neq 0\) and \(I\) properly contains no other nonzero ideals in \(\mathcal{Q}\).
The subsequent result ensures that the set of maximal ideals in a quantale is not empty.
**Theorem 2.1.9**.: _Every quantale \(\mathcal{Q}\) with \(0\neq 1\) has a maximal ideal._
Proof.: Since \(0\in\mathcal{I}_{\mathcal{Q}}^{+}\), \(\operatorname{Prp}(\mathcal{Q})\neq\emptyset\). Consider a chain \(\{I_{\lambda}\}_{\lambda\in\Lambda}\) of ideals in \(\mathcal{I}_{\mathcal{Q}}^{+}\). By Definition 2.1.3(1), it follows that \(\bigvee\limits_{\lambda\in\Lambda}I_{\lambda}\in\mathcal{I}_{\mathcal{Q}}^{+}\) and \(I_{\lambda}\preccurlyeq\bigvee\limits_{\lambda\in\Lambda}I_{\lambda}\) for all \(\lambda\in\Lambda\). Hence, by Zorn's lemma, \(\operatorname{Prp}(\mathcal{Q})\) has a maximal element, which is our desired maximal ideal in \(\mathcal{Q}\).
**Corollary 2.1.10**.: _Every proper ideal in a quantale \(\mathcal{Q}\) is contained in a maximal ideal in \(\mathcal{Q}\)._
The following proposition presents a condition that is sufficient for a quantale to be local.
**Proposition 2.1.11**.: _Let \(\mathcal{Q}\) be a quantale and \(M\neq\mathcal{Q}\) an ideal in \(\mathcal{Q}\) such that every \(x\in\mathcal{Q}\neg M\) is a unit in \(\mathcal{Q}\). Then \(\mathcal{Q}\) is local and \(M\) is its maximal ideal._
Proof.: Let \(I\) be an ideal in \(\mathcal{Q}\) such that \(M\subsetneq I\preccurlyeq\mathcal{Q}\). By hypothesis, every element \(x\in I\neg M\) is a unit and since \(M\neq\mathcal{Q}\), we must have \(\top=x\&y\in I\) for some \(y\in\mathcal{Q}\). Hence \(I=\mathcal{Q}\), showing that \(M\) is maximal. Since every element of \(\mathcal{Q}\neg M\) is a unit, if \(M^{\prime}\) be another maximal ideal in \(\mathcal{Q}\), then we must have
\[(\mathcal{Q}\neg M)\wedge M^{\prime}=\emptyset,\]
which implies that \(M^{\prime}\preccurlyeq M\). But \(M^{\prime}\) is a maximal ideal. Therefore, \(M^{\prime}=M\).
Our next objective is to examine the properties of ideals and their products, meet, and residual operations under quantale homomorphisms. These properties serve as the lattice-theoretic counterparts to their respective ring-theoretic versions (see [6, Proposition 1.17 and Exercise 1.18]). To this end, recall that a map \(\phi\colon\mathcal{Q}\to\mathcal{Q}^{\prime}\) from \(\mathcal{Q}\) to \(\mathcal{Q}^{\prime}\) is called a _quantale homomorphism_ if
1. \(x\preccurlyeq x^{\prime}\) implies that \(\phi(x)\preccurlyeq\phi(x^{\prime})\);
2. \(\phi(x\lor x^{\prime})=\phi(l)\vee\phi(l^{\prime})\);
3. \(\phi(x\wedge x^{\prime})=\phi(x)\wedge\phi(x^{\prime})\);
4. \(\phi(x\&x^{\prime})=\phi(x)\&\phi(x^{\prime})\),
for all \(x\), \(x^{\prime}\in\mathcal{Q}\). Suppose that \(\phi\colon\mathcal{Q}\to\mathcal{Q}^{\prime}\) is a quantale homomorphism. If \(J\) is an ideal in \(\mathcal{Q}^{\prime}\), then the _contraction of_\(J\), denoted by \(J^{c}\), is defined by \(\phi^{-1}(J).\) If \(I\) is an ideal in \(\mathcal{Q}\), then the _extension of_\(I\), denoted by \(I^{e}\), is defined by \(\langle\phi(I)\rangle\).
Theorem 2.1.12.: _Let \(\phi\colon\mathcal{Q}\to\mathcal{Q}^{\prime}\) be a quantale homomorphism. For \(I,\)\(I_{1}\), \(I_{2}\in\mathcal{I}_{\mathcal{Q}}\), and \(J,\)\(J_{1}\), \(J_{2}\in\mathcal{I}(\mathcal{Q}^{\prime})\), the following hold_.
1. \(J^{c}\) _is an ideal in_ \(\mathcal{Q}\)_._
2. \(I^{e}\) _is an ideal in_ \(\mathcal{Q}^{\prime}\)_._
3. (a)__\(I\preccurlyeq I^{ec}\)_._ (b)__\(J^{ce}\preccurlyeq J\)_._ (c)__\(J^{c}=J^{cec}\)_._ (d)__\(I^{e}=I^{cece}\)_._
4. _There is a bijection between the sets_ \(\{I\mid I^{ec}=I\}\) _and_ \(\{J\mid J^{ce}=J\}\)_._
5. (a)__\((I_{1}\wedge I_{2})^{e}\preccurlyeq I_{1}^{e}\wedge I_{2}^{e}\)_. (b)__\((I_{1}\&I_{2})^{e}=I_{1}^{e}\&I_{2}^{e}\)_. (c)__\((I_{1}:I_{2})^{e}\preccurlyeq(I_{1}^{e}:I_{2}^{e})\)_._
6. (a)__\((J_{1}\wedge J_{2})^{c}=J_{1}^{c}\wedge J_{2}^{c}\)_. (b)__\(J_{1}^{c}\&J_{2}^{c}\preccurlyeq(J_{1}\&J_{2})^{c}\)_._ (c)__\((J_{1}:J_{2})^{c}\preccurlyeq(J_{1}^{c}:J_{2}^{c})\)_._
Proof.: -- (1) Let \(x,x^{\prime}\in J^{c}\). Then \(\phi(x)\), \(\phi(x^{\prime})\in J\). Since \(J\) is an ideal, we have \(\phi(x\lor x^{\prime})=\phi(x)\vee\phi(x^{\prime})\in J\), implying that \(x\lor x^{\prime}\in J^{c}\). If \(x^{\prime}\preccurlyeq x\) and \(x\in J^{c}\), then \(\phi(x^{\prime})\preccurlyeq\phi(x)\in J\), proving that \(x^{\prime}\in J^{c}\), as required.
(2) Follows from Definition 2.1.4.
(3)-(4) Follows by usual set-theoretic arguments.
5(a) Observe that
\[(I_{1}\wedge I_{2})^{e}=\langle\phi(I_{1}\wedge I_{2})\rangle\preccurlyeq \langle\phi(I_{1})\wedge\phi(I_{2})\rangle=\langle\phi(I_{1})\rangle\wedge \langle\phi(I_{2})\rangle=I_{1}^{e}\wedge I_{2}^{e},\]
where the second equality follows from Lemma 2.1.5.
5(b) First, we show that \(\langle\phi(I_{1}\&I_{2})\rangle=\langle\phi(I_{1})\rangle\&\langle\phi(I_{2})\rangle.\) Since \(I_{1}\&I_{2}\preccurlyeq I_{1}\) and \(I_{1}\&I_{2}\preccurlyeq I_{2}\), we immediately obtain that \(\langle\phi(I_{1}\&I_{2})\rangle\preccurlyeq\langle\phi(I_{1})\rangle\&\langle \phi(I_{2})\rangle.\) For the other inclusion, let \(x\in\langle\phi(I_{1})\rangle\&\langle\phi(I_{2})\rangle\). This implies \(x=x_{1}\&x_{2}\) for some \(x_{1}\), \(x_{2}\in\mathcal{Q}\) such that \(x_{1}\preccurlyeq\bigvee\limits_{i=1}^{n}a_{i}\) and \(x_{2}\preccurlyeq\bigvee\limits_{j=1}^{m}b_{j}\), where \(a_{i}\in\mathcal{Q}\&\phi(I_{1})\) and \(b_{j}\in\mathcal{Q}\&\phi(I_{2})\) for all \(1\leqslant i\leqslant n\) and \(1\leqslant j\leqslant m\). So,
\[x=x_{1}\&x_{2}\preccurlyeq\left(\bigvee\limits_{i=1}^{n}a_{i}\right)\&\left( \bigvee\limits_{j=1}^{m}b_{j}\right)=\bigvee\limits_{i=1}^{n}\bigvee\limits_ {j=1}^{m}a_{i}\&b_{j},\]
where
\[a_{i}\&b_{j}\in\mathcal{Q}\&\phi(I_{1})\&\mathcal{Q}\&\phi(I_{2})=\mathcal{Q} \mathcal{Q}\&\phi(I_{1})\&\phi(I_{2})=\mathcal{Q}\&\phi(I_{1})\&\phi(I_{2}).\]
Therefore, \(x\in\langle\phi(I_{1}\&I_{2})\rangle.\) Now, \((I_{1}\&I_{2})^{e}=\langle\phi(I_{1}\&I_{2})\rangle=\langle\phi(I_{1})\rangle \&\langle\phi(I_{2})\rangle=I_{1}^{e}\&I_{2}^{e}\).
5(c) By Proposition 2.1.6(11) and 5(b), we obtain
\[(I_{1}:I_{2})^{e}\&I_{2}^{e}=((I_{1}:I_{2})\&I_{2})^{e}\preccurlyeq I_{1}^{e}.\]
6. The proofs are similar to (5).
### Prime and semiprime ideals
In this section, our objective is to present the concept of prime and semiprime ideals in the context of a quantale. We shall demonstrate that their definitions based on elements are equivalent to the definitions rooted in ideals. Additionally, we shall explore the topic of radical ideals and delve into their some basic properties.
Definition 2.2.1.: Let \(\mathcal{Q}\) be a quantale.
1. A proper ideal \(P\) of \(\mathcal{Q}\) is called _prime_ if \(x\&y\in P\) implies \(x\in P\) or \(y\in P\) for all \(x\), \(y\in\mathcal{Q}\). By \(\operatorname{Spec}_{\mathcal{Q}}\), we denote the set of all prime ideals in \(\mathcal{Q}\). An ideal \(P\) of \(\mathcal{Q}\) is called _minimal prime_ if \(P\) is both a minimal ideal and a prime ideal.
2. A proper ideal \(P\) of \(\mathcal{Q}\) is called _semiprime_ if \(x^{2}\in P\) implies that \(x\in P\) for all \(x\in\mathcal{Q}\).
3. Two ideals \(I\) and \(J\) of \(\mathcal{Q}\) is said to be _coprime_ if \(I\lor J=\mathcal{Q}\).
4. \(\mathcal{Q}\) is called _Noetherian_ if every ascending chain of ideals in \(\mathcal{Q}\) is eventually stationary.
Proposition 2.2.1.: _Let \(\mathcal{Q}\) be quantale._
1. _An ideal_ \(P\in\operatorname{Spec}_{\mathcal{Q}}\) _if and only if_ \(I\&J\preccurlyeq P\) _implies_ \(I\preccurlyeq P\) _or_ \(J\preccurlyeq P,\) _for all_ \(I,J\in\mathcal{I}_{\mathcal{Q}}\)_._
2. _Any prime ideal_ \(P\) _of_ \(\mathcal{Q}\) _contains a prime ideal_ \(P^{\prime}\) _such that_ \(S\preccurlyeq P^{\prime}\)_._
3. _If_ \(I\) _is a proper ideal in a Noetherian quantale_ \(\mathcal{Q}\)_, then_ \(\mathcal{Q}\) _has only a finite number of minimal prime ideals over_ \(I\)_._
4. _An ideal_ \(I\) _of_ \(\mathcal{Q}\) _is semiprime if and only if_ \(J\&J\preccurlyeq I\) _implies_ \(J\preccurlyeq I,\) _for all_ \(J\in\mathcal{I}_{\mathcal{Q}}\)_._
5. _Let_ \(\mathcal{Q}\) _be a quantale. If_ \(I\) _and_ \(J\) _are coprime ideals in_ \(\mathcal{Q}\)_, then_ \(I\wedge J=IJ.\)__
Proof.: (1) Suppose \(P\) is a prime ideal, and \(I\&J\preccurlyeq P\) with \(J\neq P\). This implies the existence of an element \(j\in J\) such that \(j\notin P\). For any \(i\in I\), we then have \(i\&j\in P\) and based on the assumption, this implies \(i\in P\). Since \(i\) was chosen arbitrarily from \(I\), we conclude that \(I\preccurlyeq P\), as required. Conversely, suppose \(I\&J\preccurlyeq P\) implies either \(I\preccurlyeq P\) or \(J\preccurlyeq P\). Now consider \(x\&y\in P\) for some \(x\), \(y\in\mathcal{Q}\). Let \(a\&b\in\langle x\rangle\&\langle y\rangle\), where \(a\in\langle x\rangle\) and \(b\in\langle y\rangle\). This implies that
\[a\&b\prec\left(\bigvee_{i=1}^{n}x_{i}\right)\&\left(\bigvee_{j=1}^{m}y_{j} \right)=\bigvee_{i=1}^{n}\left(\bigvee_{j=1}^{m}(x_{i}\&y_{j})\right),\]
where \(x_{i}\&y_{j}\in(x\&y)\mathcal{Q}\), for all \(i=1,\ldots,n\) and \(j=1,\ldots,m.\) Thus, \(x_{i}\&y_{j}\in P\), and hence \(a\&b\in P\), implying that \(\langle x\rangle\&\langle y\rangle\preccurlyeq P.\) By the assumption, this implies either \(\langle x\rangle\preccurlyeq P\) or \(\langle y\rangle\preccurlyeq P\); in other words, either \(x\in P\) or \(y\in P\).
(2) Suppose \(\Omega=\{P^{\prime}\in\operatorname{Spec}_{\mathcal{Q}}\mid S\preccurlyeq P^{ \prime}\preccurlyeq P\}.\) Since \(P\in\Omega\), the set \(\Omega\) is nonempty. Consider a subset \(\{P^{\prime}_{\lambda}\}_{\lambda\in\Lambda}\) of decreasing chain of prime ideals of \(\Omega\). Then by Zorn's lemma, there exists a minimal element \(\bigwedge\limits_{\lambda\in\Lambda}P^{\prime}_{\lambda}\) of that chain, and this minimal element is our desired prime ideal.
(3) First we show that every ideal containing \(I\) contains a finite product of prime ideals each containing \(I\). Suppose that this is not the case, and let \(S\) be the set of ideals containing \(I\) which do not contain a finite product of prime ideals each containing \(I\). By hypothesis, \(S\) is not empty. Let \(C\) be a chain in \(S\). As \(\mathcal{Q}\) is Noetherian, \(C\) has a maximum element. From Zorn's lemma, \(S\) has a maximal element \(M\). As \(\mathcal{Q}\) has a prime ideal containing \(I\), \(M\neq\mathcal{Q}\). Also, \(M\) is not prime. There exist \(a\), \(b\in\mathcal{Q}\) such that \(a\&b\in M\) and \(a\notin M\), \(b\notin M\). Setting
\[A=\langle M,a\rangle,\quad B=\langle M,b\rangle,\]
we obtain \(A\&B\preccurlyeq M\), with \(A\) and \(B\) strictly included in \(M\). Since \(M\) is maximal, \(A\) and \(B\) both contain a finite product of prime ideals each containing \(I\), hence so does \(M\), which contradicts the fact that \(M\in S\). It follows that every ideal containing I contains a finite product of prime ideals each containing \(I\). We now apply this result to the ideal \(I\): there exist prime ideals \(P_{1},\ldots,P_{n}\), each containing \(I\), whose product is contained in \(I\). We claim that any minimal prime \(P\) over \(I\) is among the \(P_{i}\). Indeed,
\[P_{1}\&\cdots\&P_{n}\preccurlyeq I\preccurlyeq P.\]
We deduce that \(P_{i}\preccurlyeq P\), for some \(i\). However, \(P\) is minimal, so \(P_{i}=P\) and it follows that there is only a finite number of minimal prime ideals over \(I\).
(4) Suppose \(I\) is a semiprime ideal in \(\mathcal{Q}\). Let \(j\in J\). Then \(j^{2}\in J\&J\preccurlyeq I\). This implies that \(j^{2}\in I\) and since \(I\) is semiprime, \(j\in I\), _i.e._, \(J\preccurlyeq I\). For the converse, we first show that \(\langle x\rangle\&\langle x\rangle\preccurlyeq\langle x^{2}\rangle\), for any \(x\in\mathcal{Q}\). Suppose \(l\in\langle x\rangle\&\langle x\rangle\). In particular, this implies that \(l\preccurlyeq x^{2}\), and hence \(l\in\langle x^{2}\rangle\). Now, let \(x^{2}\in I\). Then
\[\langle x\rangle\&\langle x\rangle\preccurlyeq\langle x^{2}\rangle\preccurlyeq I.\]
By assumption, this means \(x\in\langle x\rangle\preccurlyeq I.\) Hence, \(I\) is a semiprime ideal.
(5) Notice that
\[(I\lor J)\&(I\wedge J)=I\&(I\wedge J)\lor J(I\wedge J)\preccurlyeq(I\wedge(I \&J))\vee((I\&J)\wedge J)\preccurlyeq I\&J,\]
where the equality is obtained by Definition 2.1.1(3), and the first inclusion follows from Proposition 2.1.6(7). If \(I\lor J=\mathcal{Q}\), then by Proposition 2.1.6(3), \(\mathcal{Q}(I\wedge J)=I\wedge J\preccurlyeq I\&J\).
The following proposition is adapted from the realm of rings (see [5, Lemma 3.19]).
Proposition 2.2.2 (Prime avoidance lemma).: _Let \(I\) be a subset in a quantale \(\mathcal{Q}\) that is stable under join and the operation \(\&\). Let \(P_{1},\ldots,P_{n}\) be ideals in \(\mathcal{Q}\) such that \(P_{3},\ldots,P_{n}\in\operatorname{Spec}_{\mathcal{Q}}\). If \(I\neq P_{j}\) for all \(j\), then there is an \(x\in I\) such that \(x\notin P_{j}\) for all \(j\)._
Proof.: We prove the claim by induction. If \(n=1\), the claim is trivially true. Suppose \(n\geqslant 2\) and suppose for every \(i\), there exists an \(x_{i}\in I\) such that \(x_{i}\notin P_{j}\) for all \(j\neq i\). Assume that \(x_{i}\in P_{i}\) for all \(i\). If \(n=2\), then \(x_{1}\lor x_{2}\notin P_{j}\) for \(j=1\), \(2\). Indeed, \(x_{2}\preccurlyeq x_{1}\lor x_{2}\in P_{1}\) implies \(x_{2}\in P_{1}\), a contradiction, and similarly, \(x_{1}\preccurlyeq x_{1}\lor x_{2}\in P_{2}\) implies \(x_{1}\in P_{2}\), a contradiction. For \(n\geqslant 3\),
\[(x_{1}\&\cdots\&x_{n-1})\lor x_{n}\notin P_{j}.\]
For the case, \(n=j\), we have \(x_{n}\in P_{n}\) and since \(P_{n}\) is a prime ideal, \(x_{k}\in P\) for some \(k\in\{1,\ldots,n-1\}\), a contradiction. For, \(j<n\), \(x_{n}\in P_{j}\), again leads to a contradiction.
Our following aim is to explore the radicals of ideals and their associations with prime and semiprime ideals.
Definition 2.2.2.: The _radical_ of an ideal \(I\) in a quantale \(\mathcal{Q}\) is defined as follows:
\[\mathcal{R}(I)=\{x\in\mathcal{Q}\mid x^{n}\in I,\;\text{for some}\;n\in \mathbb{N}\}.\]
An ideal \(I\) is said to be _radical ideal_ if \(\mathcal{R}(I)=I\).
The following proposition gives an equivalent definition of radical of an ideal and it extends Proposition 1.14 of [6].
Proposition 2.2.3.: _If \(I\) is an ideal in a quantale \(\mathcal{Q}\), then \(\mathcal{R}(I)=\bigwedge\limits_{P}\{P\in\operatorname{Spec}_{\mathcal{Q}} \mid I\preccurlyeq P\}\)._
Proof.: Let \(l\in\mathcal{R}(I)\). Then there exists \(n\in\mathbb{N}\) such that \(l^{n}\in I\), and also \(l^{n}\in P\), for all \(P\in\operatorname{Spec}_{\mathcal{Q}}\) with \(I\preccurlyeq P\). This implies that \(l\in P\) for all such \(P\in\operatorname{Spec}_{\mathcal{Q}}\). Hence \(l\in\bigwedge\limits_{P}\left\{P\in\operatorname{Spec}_{\mathcal{Q}}\mid I \preccurlyeq P\right\}.\) Conversely, assume that \(l\notin\mathcal{R}(I)\), for some \(l\in\mathcal{Q}\). Consider the set
\[\Omega=\{J\in\mathcal{I}_{\mathcal{Q}}\mid I\preccurlyeq J\;\text{and}\;l^{n} \notin J,\forall n\in\mathbb{N}\}.\]
It is easy to see that \(\Omega\) is nonempty and by Zorn's lemma \((\Omega,\leqslant)\) has a maximal element, say \(P\) such that \(I\preccurlyeq P\). It suffices to show that \(P\) is a prime ideal. Suppose \(x\), \(y\notin P\) for some \(x\), \(y\in\mathcal{Q}\). This implies \(\langle x,P\rangle\notin\Omega\)\(\langle x,P\rangle\notin\Omega\), which followingly implies \(\langle x\&y,P\rangle\notin\Omega\). Hence \(x\&y\notin P\).
In the next lemma, we compile some elementary properties of the radical of an ideal in a quantale.
**Lemma 2.2.4**.: _For any ideals \(I\), \(J\), \(\{I_{\lambda}\}_{\lambda\in\Lambda}\) in a quantale \(\mathcal{Q}\), the following hold._
1. \(\mathcal{R}(I)\) _is an ideal containing_ \(I\)_._
2. _If_ \(I\preccurlyeq J\)_, then_ \(\mathcal{R}(I)\preccurlyeq\mathcal{R}(J)\)_._
3. \(\mathcal{R}(\mathcal{R}(I))=\mathcal{R}(I)\)_._
4. \(\mathcal{R}(I)=\mathcal{R}(I\&\cdots\&I)\)__(repeated \(n\)-times)_._
5. \(\mathcal{R}(I\wedge J)=\mathcal{R}(I)\wedge\mathcal{R}(J)=\mathcal{R}(I\&J)\)_._
6. \(\bigvee\limits_{\lambda\in\Lambda}\mathcal{R}(I_{\lambda})\preccurlyeq \mathcal{R}\left(\bigvee\limits_{\lambda\in\Lambda}I_{\lambda}\right).\)__
7. \(\mathcal{R}(I)=\mathcal{Q}\) _if and only if_ \(I=\mathcal{Q}\)_._
8. \(\mathcal{R}(I\lor J)=\mathcal{R}(\mathcal{R}(I)\vee\mathcal{R}(J))\)_._
Proof.: (1) By taking \(n=1\), it follows that \(I\preccurlyeq\mathcal{R}(I).\) To show \(\mathcal{R}(I)\) is an ideal, let \(x\in\mathcal{R}(I)\) and let \(y\preccurlyeq x\), for some \(y\in\mathcal{Q}\). Then \(x^{n}\in I\) for some \(n\in\mathbb{N}\). By Lemma 2.1.2(2), this implies \(y^{n}\preccurlyeq x^{n}\), and since \(I\) is an ideal, we obtain that \(y^{n}\in I\). Hence \(y\in\mathcal{R}(I)\), showing that the condition (2) of Definition 2.1.2 holds. To check condition (1) of Definition 2.1.2, let \(x\), \(y\in\mathcal{R}(I)\). Then \(x^{n}\), \(y^{m}\in\mathcal{R}(I)\) for some \(n\), \(m\in\mathbb{N}\). It suffices to show that \((x\lor y)^{m+n}\in I.\) Applying formula (1), we obtain
\[(x\lor y)^{m+n}=\bigvee\limits_{k=0}^{m+n}\binom{m+n}{k}x^{m+n-k}\&y^{k}.\]
Since \(0\leqslant k\leqslant m+n\), we have two possibilities: either \(n\leqslant i\), in which case, \(x^{i}\in I\) (since \(x^{n}\in I\)); or \(m\leqslant n+m-i\), in which case \(y^{n+m-i}\in I\) (since \(y^{m}\in I\)). This proves that \((x\lor y)^{m+n}\in I\).
(2) Follows from Proposition 2.2.3.
(3) Applying 2. on \(I\preccurlyeq\mathcal{R}(I)\) (which follows from (1)) gives \(\mathcal{R}(I)\preccurlyeq\mathcal{R}(\mathcal{R}(I))\). Conversely, suppose that \(x\in\mathcal{R}(\mathcal{R}(I))\). This implies \(x^{n}\in\mathcal{R}(I)\) for some \(n\in\mathbb{N}\), which further implies that \((x^{n})^{m}\in I\), for some \(m\in\mathbb{N}\). However, \((x^{n})^{m}=x^{nm}\in I\). Hence \(x\in\mathcal{R}(I)\).
(4) Since \(I\&\cdots\&I\preccurlyeq I\), by (2), we have \(\mathcal{R}(I\&\cdots\&I)\preccurlyeq\mathcal{R}(I).\) If \(x\in\mathcal{R}(I)\), then \(x^{m}\in I\), for some \(m\in\mathbb{N}\). But then \(x^{mn}=(x^{m})^{n}\in I\&\cdots\&I\), implying that \(x\in\mathcal{R}(I\&\cdots\&I)\).
(5) For the first equality, \(\mathcal{R}(I\wedge J)\preccurlyeq\mathcal{R}(I)\wedge\mathcal{R}(J)\) follows from (2). If \(x\in\mathcal{R}(I)\wedge\mathcal{R}(J)\), then \(x^{n}\in\mathcal{R}(I)\) and \(x^{m}\in\mathcal{R}(J)\) for some \(n\), \(m\in\mathbb{N}\). Let \(k=\max\{n,m\}\). Then \(x^{k}\in I\wedge J\), and hence \(x\in\mathcal{R}(I\wedge J)\). For the second equality, note that
\[(I\wedge J)\&(I\wedge J)\preccurlyeq I\&J\preccurlyeq I\wedge J.\]
Therefore, by (2) and (4), we obtain
\[\mathcal{R}((I\wedge J)\&(I\wedge J))=\mathcal{R}(I\wedge J)\preccurlyeq \mathcal{R}(I\&J)\preccurlyeq\mathcal{R}(I\wedge J).\]
(6) Follows from (2).
(7) Follows from Proposition 2.2.3.
(8) By (2) and (3), it follows that
\[\mathcal{R}(\mathcal{R}(I)\vee\mathcal{R}(J))\preccurlyeq\mathcal{R}(\mathcal{ R}(I\lor J))=\mathcal{R}(I\lor J).\]
Since by (1), \(I\preccurlyeq\mathcal{R}(I)\) and \(J\preccurlyeq\mathcal{R}(J)\), we obtain \(I\lor J\preccurlyeq\mathcal{R}(I)\vee\mathcal{R}(J)\), and applying (2), we get the desired inclusion.
Expanding upon the previously introduced definition of the radical of an ideal in a quantale, we shall now introduce two additional types of radicals specific to quantales: nilradicals and Jacobson radicals. Additionally, we shall explore the relationships that exist among these distinct types of radicals.
**Definition 2.2.3**.: Let \(\mathcal{Q}\) be a quantale.
1. An element \(x\) of \(\mathcal{Q}\) is called _nilpotent_ if \(x^{n}=\bot\) for some positive integer \(n\). The set \(\mathcal{N}(\mathcal{Q})\) of all nilpotent elements of a quantale \(\mathcal{Q}\) is called the _nilradical_ of \(\mathcal{Q}\). A quantale \(\mathcal{Q}\) is called _reduced_ if \(\mathcal{N}(\mathcal{Q})=0\)
2. The _Jacobson radical_\(\mathcal{J}(\mathcal{Q})\) of a quantale \(\mathcal{Q}\) is defined as the intersection of all maximal ideals in \(\mathcal{Q}\).
3. A _zero-divisor_ of \(\mathcal{Q}\) is an element \(x\) of \(\mathcal{Q}\) for which there exists an element \(y\) (\(\neq\bot\)) in \(\mathcal{Q}\) such that \(x\&y=\bot\). If \(\bot\neq\top\) in \(\mathcal{Q}\) and if \(\mathcal{Q}\) does not have any nonzero zero-divisor, then \(\mathcal{Q}\) is called a _quantale domain_ (QD). Note that a quantale \(\mathcal{Q}\) is a QD if and only if \(0\) is a prime ideal.
**Proposition 2.2.5**.: _Let \(\mathcal{Q}\) be a quantale. Then the following hold._
1. \(\mathcal{N}(\mathcal{Q})\in\mathcal{I}_{\mathcal{Q}}\) _and it is the intersection of prime ideals in_ \(\mathcal{Q}\)_. Moreover,_ \(\mathcal{N}(\mathcal{Q})\preccurlyeq\mathcal{J}(\mathcal{Q})\)_._
2. \(\mathcal{Q}\) _is reduced and has only one minimal prime ideal if and only if_ \(\mathcal{Q}\) _is a QD._
Proof.: (1) First we show \(\mathcal{N}(\mathcal{Q})\) is an ideal in \(\mathcal{Q}\). Suppose \(x\in\mathcal{N}(\mathcal{Q})\) and \(y\preccurlyeq x\). Then \(x^{n}=0\) for some \(n\in\mathbb{N}\) and by Lemma 2.1.2(3), \(y^{n}\preccurlyeq x^{n}=0\), implying that \(y^{n}=0\), and hence, \(y\in\mathcal{N}(\mathcal{Q})\). Now, let \(x\), \(y\in\mathcal{N}(\mathcal{Q})\). This implies \(x^{n}=0\) and \(y^{m}=0\) for some \(n\), \(m\in\mathbb{N}\). Observe that \((x\lor y)^{m+n-1}\) is the join of integer multiplies of elements \(x^{r}\&y^{s}\) (see proof of Lemma 2.2.4(1)), where \(r+s=m+n-1\). Since we cannot have both \(r<m\) and \(s<n\), each of these terms vanishes, and hence
\[(x\lor y)^{m+n-1}=0.\]
The fact that \(\mathcal{N}(\mathcal{Q})\) is the intersection of prime ideals in \(\mathcal{Q}\) follows from Proposition 2.2.3. Suppose \(x\in\mathcal{N}(\mathcal{Q})\). Then there exists \(n\geqslant 1\) such that \(x^{n}=\bot\). Let \(M\) be a maximal ideal in \(\mathcal{Q}\). Then \(x^{n}\in M\) and \(M\) is prime, \(x\in M\), proving that \(\mathcal{N}(\mathcal{Q})\preccurlyeq\mathcal{J}(\mathcal{Q})\).
(2) Suppose \(\mathcal{Q}\) is reduced. Then \(\mathcal{N}(\mathcal{Q})=0\). By Proposition 2.2.3, this implies
\[0=\bigwedge_{P\in\operatorname{Spec}_{\mathcal{Q}}}P.\]
Then by Lemma 2.2.1(2) and hypothesis, there exists a prime ideal \(P\) such that \(P=0\). Hence \(\mathcal{Q}\) is a QD. The converse follows from the fact that \(\mathcal{Q}\) is a QD if and only if \(0\) is a prime ideal in \(\mathcal{Q}\).
Our next objective is to establish the equivalence between semiprime ideals and radical ideals within a quantale. This equivalence is well-known in the realm of (noncommutative) rings. However, in the noncommutative scenario, the concepts of \(m\)-systems and \(n\)-systems in rings are necessary. On the other hand, when dealing with ideals in quantale (or commutative rings), the presence of multiplicatively closed subsets alone suffices. To demonstrate this equivalence, we shall first proceed by presenting a series of lemmas.
Lemma 2.2.6.: _An ideal \(P\) in a quantale \(\mathcal{Q}\) is prime if and only if the complement of \(P\) in \(\mathcal{Q}\) forms a multiplicatively closed subset of \(\mathcal{Q}\). Here, a subset \(S\) of \(\mathcal{Q}\) is considered multiplicatively closed if it satisfies two conditions: \(\top\in S\), and for any \(x\in S\) and \(y\in S\), implies \(x\&y\in S\)._
Proof.: Suppose that \(P\) is a prime ideal in \(\mathcal{Q}\) and \(x\), \(y\in\mathcal{Q}\neg P\). Then by Lemma 2.2.1(1), \(x\&y\notin P\), and hence \(x\&y\in\mathcal{Q}\neg P\). Conversely, suppose \(x\), \(y\notin P\). This implies that \(x\), \(y\in\mathcal{Q}\neg P\). Since \(\mathcal{Q}\neg P\) is multiplicatively closed, \(x\&y\in\mathcal{Q}\neg P\), and hence \(x\&y\notin P\), proving that \(P\) is a prime ideal.
Lemma 2.2.7.: _Let \(S\) be a multiplicatively closed subset in a quantale \(\mathcal{Q}\). Suppose \(P\in\mathcal{I}_{\mathcal{Q}}\) such that it is maximal with respect to the property: \(P\cap S=\emptyset\). Then \(P\in\operatorname{Spec}_{\mathcal{Q}}\)._
Proof.: Suppose \(x\notin P\) and \(y\notin P\) for some \(x\), \(y\in\mathcal{Q}\). By the property on \(P\), this implies that the ideals \(\langle x,P\rangle\) and \(\langle y,P\rangle\) both intersect with \(S\). Hence, there exist \(l\), \(l^{\prime}\in\mathcal{Q}\), \(p\), \(p^{\prime}\in P\), and \(s\), \(s^{\prime}\in S\) such that \((x\&l)\vee(p\&s)\in S\) and \((y\&l^{\prime})\vee(p^{\prime}\&s^{\prime})\in S\). Since \(S\) is a multiplicatively closed set, we must have \(((x\&l)\vee(p\&s))\&((y\&l^{\prime})\vee(p^{\prime}\&s^{\prime}))\in S\). On the other hand,
\[((x\&l)\vee(p\&s))((y\&l^{\prime})\vee(p^{\prime}\&s^{\prime}))\] \[=(x\&y\&l\&l^{\prime})\vee(p^{\prime}\&(x\&l\&s^{\prime}))\vee(p \&(y\&l^{\prime}\&s))\vee(p\&p^{\prime}\&(s\&s^{\prime}))\in\langle x\&y,P\rangle,\]
which implies \(x\&y\notin P\). Hence \(P\) is a prime ideal.
Lemma 2.2.8.: _Let \(\mathcal{Q}\) be a quantale. For every \(I\in\mathcal{I}_{\mathcal{Q}}\), \(\mathcal{R}(I)\) is equal to the set_
\[T=\{l\in\mathcal{Q}\mid\text{every multiplicatively closed subset containing $l$ intersects $I$}\}.\]
Proof.: Suppose that \(r\in T\) and \(P\in\operatorname{Spec}_{\mathcal{Q}}\) such that \(I\preccurlyeq P\). Then by Lemma 2.2.6, \(\mathcal{Q}\neg P\) is a multiplicatively closed subset of \(\mathcal{Q}\) and \(l\notin\mathcal{Q}\neg P\). Hence \(l\in P\). Conversely, let \(l\notin T\). This implies that there exists a multiplicatively closed subset \(S\) of \(\mathcal{Q}\) such that \(l\in S\) and \(S\wedge I=\emptyset\). By Zorn's lemma, there exists an ideal \(P\) containing \(I\) and maximal with respect to the property that \(P\wedge S=\emptyset\). By Lemma 2.2.7, \(P\) is a prime ideal with \(l\notin P\).
Lemma 2.2.9.: _Suppose \(I\) is a semiprime ideal in a quantale \(\mathcal{Q}\) and suppose \(x\in\mathcal{Q}\neg I\). Then there exists a multiplicatively closed subset \(S\) of \(\mathcal{Q}\) such that \(x\in S\preccurlyeq Q\neg I\)._
Proof.: Define the elements of \(S=\{x_{1},x_{2},\ldots,x_{n},\ldots\}\) inductively as follows: \(x_{1}:=x\); \(x_{2}:=x_{1}\&x_{1}\); \(\ldots\); \(x_{n}:=x_{n-1}\&x_{n-1}\); \(\ldots\). Obviously \(x\in S\) and it is also easy to see that \(x_{i}\), \(x_{j}\in S\) implies that \(x_{i}\&x_{j}\in S\).
Theorem 2.2.10.: _For any ideal \(I\) in a quantale \(\mathcal{Q}\), the following are equivalent._
1. \(I\) _is semiprime._
2. \(I\) _is an intersection of prime ideals in_ \(\mathcal{Q}\)_._
3. \(I\) _is a radical ideal._
Proof.: From Proposition 2.2.3, it follows that (3)\(\Rightarrow\)(2). Since the intersection of semiprime ideals is a semiprime ideal, (2)\(\Rightarrow\)(1) follows. What remains is to show that (1)\(\Rightarrow\)(3) and for that, it is sufficient to show \(\mathcal{R}(I)\preccurlyeq I\). Suppose that \(x\notin I\). Then \(x\in\mathcal{Q}\neg I\) and by Lemma 2.2.9, there exists a multiplicatively closed subset \(S\) of \(\mathcal{Q}\) such that \(x\in S\preccurlyeq Q\neg I\). But \(S\wedge I=\emptyset\) and hence by Lemma 2.2.8, \(x\notin\mathcal{R}(I)\).
Corollary 2.2.11.: _If \(I\in\mathcal{I}_{\mathcal{Q}}\), then \(\mathcal{R}(I)\) is the smallest semiprime ideal in \(\mathcal{Q}\) containing \(I\)._
Next we wish to consider saturation of a subset of a quantale and relate with prime ideals.
**Definition 2.2.4**.: Let \(\mathcal{Q}\) be a quantale and \(S\) be a multiplicatively closed subset of \(\mathcal{Q}\). We say \(S\) is _saturated_ if for \(x\), \(y\in\mathcal{Q}\) and \(x\&y\in S\) implies that \(x\), \(y\in S\). The _saturation_ of \(S\) is defined as
\[\overline{S}=\{x\in\mathcal{Q}\mid\text{there exists }y\in\mathcal{Q}\text{ such that }x\&y\in S\}.\]
The following proposition shows that \(\overline{S}\) behaves as a closure operation. Moreover, it is related with prime ideals as expected, and extends Exercise 7 (from Chapter 3) of [6].
**Proposition 2.2.12**.: _Let \(\mathcal{Q}\) be a quantale and \(S\) be a multiplicatively closed subset of \(\mathcal{Q}\). Then the following hold._
1. \(\overline{S}\) _is the smallest saturated multiplicatively closed subset in_ \(\mathcal{Q}\) _containing_ \(S\)_._
2. \(S\) _is stauarted if and only if_ \(\mathcal{Q}\neg S\) _is a join of prime ideals._
Proof.: (1) If \(s\in S\), then \(\top\&s=s\in\overline{S}\), and hence \(S\preccurlyeq\overline{S}.\) Suppose \(x\), \(y\in\overline{S}\). Then there exist \(y\), \(y^{\prime}\in\mathcal{Q}\) such that \(x\&x^{\prime}\in S\) and \(y\&y^{\prime}\in S\). Since \(S\) is a multiplicatively closed set,
\[(x\&x^{\prime})\&(y\&y^{\prime})=(x\&y)\&(x^{\prime}\&y^{\prime})\in S,\]
which implies that \(x\&y\in\overline{S}.\) Therefore, \(\overline{S}\) is a multiplicatively closed subset of \(\mathcal{Q}\). To show \(\overline{S}\) is saturated, let \(x\&y\in\overline{S}\) for some \(x\), \(y\in\mathcal{Q}\). Then there exists \(z\in\mathcal{Q}\) such that \(x\&(y\&z)=(x\&y)\&z\in S\). This implies \(x\in\overline{S}\). Similarly, \(y\in\overline{S}.\) Finally, let \(T\) be a sataretd multiplicatively closed subset of \(\mathcal{Q}\) containing \(S\). Suppose \(s\in\overline{S}.\) Then there exits \(y\in\mathcal{Q}\) such that \(x\&y\in S\preccurlyeq T\). Since \(T\) is saturated, \(x\in T\).
(2) Suppose \(\mathcal{Q}\neg S=\bigvee\limits_{\lambda\in\Lambda}\{P_{\lambda}\mid P_{ \lambda}\in\operatorname{Spec}_{\mathcal{Q}}\}\). Let \(x\&y\in S\) for some \(x\), \(y\in\mathcal{Q}\) and if possible, let \(x\notin S\). This implies \(x\preccurlyeq p_{\lambda}\) for some \(p_{\lambda}\in P_{\lambda}\). Since \(P_{\lambda}\) is an ideal and \(x\&y\preccurlyeq x\wedge y\preccurlyeq x\preccurlyeq p_{\lambda}\in P_{\lambda}\), we must have \(x\&y\in P_{\lambda}\). This implies \(x\&y\notin S\), a contradiction. This proves that \(S\) is saturated. Conversely, let \(S\) be a saturated subset of \(\mathcal{Q}\). Let \(a\in\mathcal{Q}\neg S.\) It suffices to show that \(a\in P_{\lambda}\) for some \(P_{\lambda}\in\operatorname{Spec}_{\mathcal{Q}}\) such that \(P_{\lambda}\wedge S=\emptyset.\) Suppose
\[\Omega=\{I\in\mathcal{I}_{\mathcal{Q}}\mid a\in I\text{ and }I\wedge S=\emptyset\}.\]
Observe that \(\langle a\rangle\wedge S=\emptyset\). Indeed, \(x_{i}\&a\preccurlyeq\bigvee\limits_{i=1}^{n}x_{i}\&a\) for \(x_{i}\in\mathcal{Q}\), implies \(x_{i}\&a\in S\) and since \(S\) is saturated, this implies \(a\in S\), contradiction. Hence \(\langle a\rangle\in\Omega\), and thus \(\Omega\neq\emptyset.\) By Zorn's lemma, \(\Omega\) has a maximal element, say \(P\). We claim that \(P\) is a prime ideal and proof of this is already described in Proposition 2.2.3.
### Primary ideals and primary decompositions
The decomposition of an ideal into primary ideals holds a significant place in ideal theory of rings and is considered a traditional cornerstone. The objective of this section is to establish several classical uniqueness theorems for ideals for quantales, and hence extend the corresponding results of [6].
**Definition 2.3.1**.: A proper ideal \(P\) in a quantale \(\mathcal{Q}\) is called _primary_ if for \(x\), \(y\in\mathcal{Q}\) and \(x\&y\in I\) imply that \(x\in I\) or \(y^{n}\in I\) for some \(n\in\mathbb{N}\). If \(P^{\prime}\) is a primary ideal in \(\mathcal{Q}\) and \(\mathcal{R}(P^{\prime})=P\), then we say \(P^{\prime}\) is \(P\)_-primary_.
**Proposition 2.3.1**.: _Let \(\mathcal{Q}\) be a quantale. Then the following hold._
1. _Every prime ideal in_ \(\mathcal{Q}\) _is primary._
_._
2. _Let_ \(P\) _be a primary ideal in_ \(\mathcal{Q}\)_. Then_ \(\mathcal{R}(P)\) _is the smallest prime ideal in_ \(\mathcal{Q}\) _containing_ \(P\)_._
Proof.: (1) Suppose \(P\) is a prime ideal in \(\mathcal{Q}\) and \(x\&y\in P\) for some \(x\), \(y\in\mathcal{Q}\). Then \(x\in P\) or \(y=y^{1}\in P\),
(2) By Proposition 2.2.3, it suffices to show that \(\mathcal{R}(P)\) is a prime ideal. Suppose \(x\&y\in\mathcal{R}(P)\) for some \(x\), \(y\in\mathcal{Q}\). Then \((x\&y)^{n}\in P\) for some \(n\in\mathbb{N}\). This implies that \(x^{n}\in P\) or \(y^{mn}\in P\) for some \(m\in\mathbb{N}\). Hence, \(x\in\mathcal{R}(P)\) or \(y\in\mathcal{R}(P)\).
When considering a finite intersection of primary ideals in a quantale, with the condition that all the ideals involved are \(P\)-primary for a given prime ideal \(P\), the resulting intersection is indeed primary. To establish this claim, we first require a preliminary result.
**Lemma 2.3.2**.: _If \(I_{1},\ldots,I_{n}\) are ideals in a quantale \(\mathcal{Q}\) and \(I=\bigwedge\limits_{i=1}^{n}I_{i},\) then \(\mathcal{R}(I)=\bigwedge\limits_{i=1}^{n}\mathcal{R}(I_{i})\)._
Proof.: Suppose \(x\in\mathcal{R}(I)\). Then \(x^{n}\in I\) for some \(n\in\mathbb{N}\). This implies that \(x^{n}\in I_{i}\) for all \(i\in\{1,\ldots,n\}\), and hence \(x\in\mathcal{R}(I_{i})\) for all \(i\in\{1,\ldots,n\}\), _i.e._, \(x\in\bigwedge\limits_{i=1}^{n}\mathcal{R}(I_{i}).\) Now, to obtain the other inclusion, let \(z\in\bigwedge\limits_{i=1}^{n}\mathcal{R}(I_{i}).\) This implies that \(z^{m_{i}}\in\mathcal{R}(I_{i}),\) where \(m_{i}\in\mathbb{N}\), for all \(i\in\{1,\ldots,n\}.\) Choose \(m=\max\limits_{1\prec i\prec n}\{m_{i}\}.\) Then \(z^{m}\in I_{i}\) for all \(i\in\{1,\ldots,n\}\), and hence \(z\in\mathcal{R}(I)\).
**Theorem 2.3.3**.: _If \(P\) is a prime ideal in a quantale \(\mathcal{Q}\) and if \(P_{1},\ldots,P_{n}\) are \(P\)-primary ideals in \(\mathcal{Q}\), then \(P^{\prime}=\bigwedge\limits_{i=1}^{n}P_{i}\) is also a \(P\)-primary ideal in \(\mathcal{Q}\)._
Proof.: By Lemma 2.2.4(5) and Lemma 2.3.2, we have
\[\mathcal{R}(P^{\prime})=\mathcal{R}\left(\bigwedge\limits_{i=1}^{n}P_{i}\right) =\bigwedge\limits_{i=1}^{n}\mathcal{R}(P_{i})=\bigwedge\limits_{i=1}^{n}P=P.\]
What remains for us to prove is that \(P^{\prime}\) is a primary ideal. Let \(x\&y\in P^{\prime}\). Then \(x\&y\in P_{i}\) for all \(1\prec i\prec n\). If \(x\notin P_{j}\) for some \(j\), then \(y^{m}\in P_{j}\) for some \(m\in\mathbb{N}\), because \(P_{j}\) is a primary ideal. It follows that \(y\in\mathcal{R}(P_{j})=P\). Since \(\mathcal{R}(P^{\prime})=P\), there exists \(n\in\mathbb{N}\) such that \(y^{n}\in P^{\prime}\).
**Proposition 2.3.4**.: _Suppose \(P\) is a prime ideal in a quantale \(\mathcal{Q}\), \(P^{\prime}\) is a \(P\)-primary ideal, and \(x\) is an element in \(\mathcal{Q}\). Then the following hold._
1. _If_ \(x\in P^{\prime}\) _then_ \((P^{\prime}:x)=\mathcal{Q}\)_._
2. _If_ \(x\notin P^{\prime}\)_, then_ \((P^{\prime}:x)\) _is a_ \(P\)_-primary ideal._
3. _If_ \(x\notin P\)_, then_ \((P^{\prime}:x)=P^{\prime}\)_._
Proof.: (1) If \(x\in P^{\prime}\), then \(x\&1=x\in P^{\prime}\), and hence \(1\in(P^{\prime}:x)\), implying that \((P^{\prime}:x)=\mathcal{Q}\).
(2) Let \(x\notin P^{\prime}\). If \(y\in(P^{\prime}:x)\), then \(x\&y\in P^{\prime}\). Since \(P^{\prime}\) is a primary ideal and \(x\notin P^{\prime}\), there exists \(n\in\mathbb{N}\) such that \(y^{n}\in P^{\prime}\). Hence \(y\in\mathcal{R}(P^{\prime})=P\). Therefore, we have \(P^{\prime}\prec(P^{\prime}:x)\preccurlyeq P\), implying that
\[P=\mathcal{R}(P^{\prime})\preccurlyeq\mathcal{R}(P^{\prime}:x)\preccurlyeq \mathcal{R}(P)=P.\]
Hence \(\mathcal{R}(P^{\prime}:x)=P\). It remains for us to show that \((P^{\prime}:x)\) is a primary ideal. Since \(x\notin P^{\prime}\), we have \(\top\notin(P^{\prime}:x)\), so \((P^{\prime}:x)\neq\mathcal{Q}\), _i.e._ is a proper ideal in \(\mathcal{Q}\). Let \(a\&b\in(P^{\prime}:x)\). If \(b^{m}\notin P^{\prime}\) for all \(m\in\mathbb{N}\), then \(b\notin\mathcal{R}(P^{\prime}:x)=P\). However, \(a\&b\&x\in P^{\prime}\) implies that \(a\&x\in P^{\prime}\) or \(b^{k}\in P^{\prime}\) for some \(k\in\mathbb{N}\), because \(P^{\prime}\) is a primary ideal. If \(b^{k}\in P^{\prime}\), then \(b\in\mathcal{R}(P^{\prime})=P\), a contradiction. Therefore, \(a\&x\in P^{\prime}\), implying that \(a\in(P^{\prime}:x)\)whence \((P^{\prime}:x)\) is a primary ideal.
(3) Let \(x\notin P\). If \(y\notin P^{\prime}\) and \(xky\in P^{\prime}\), then \(x^{n}\in P^{\prime}\) for some \(n\in\mathds{N}\), because \(P^{\prime}\) is a primary ideal. Therefore, \(x\in\mathcal{R}(P^{\prime})=P\), a contradiction. Hence \(x\&y\notin P^{\prime}\). This implies that \(y\notin(P^{\prime}:x)\). By Proposition 2.1.6(12), \(P^{\prime}\preccurlyeq(P:x)\), and this we have \(P^{\prime}=(P^{\prime}:x)\).
**Definition 2.3.2**.: A _primary decomposition_ of an ideal \(I\) in a quantale \(\mathcal{Q}\) is an expression:
\[I=\bigwedge_{i=1}^{n}P_{i}, \tag{2.3}\]
where \(P_{i}\) are primary ideals in \(\mathcal{Q}\). The expression (2.3) is said to be _minimal_ if
1. \(\mathcal{R}(P_{1}),\ldots,\mathcal{R}(P_{n})\) are distinct.
2. \(\bigwedge\limits_{j\neq i}P_{j}\nsubseteq P_{i}\), for all \(i\).
If an ideal \(I\) in \(\mathcal{Q}\) has a primary decomposition, then we say that \(I\) is _decomposible_.
**Proposition 2.3.5**.: _A primary decomposition may be replaced by a minimal primary decomposition._
Proof.: Consider a primary decomposition \(I=\bigwedge\limits_{i=1}^{n}P_{i}\), where \(P_{i}\) are primary ideal in \(\mathcal{Q}\). If
\[\mathcal{R}(P_{i_{1}})=\cdots=\mathcal{R}(P_{i_{k}})=P,\]
then by Theorem 2.3.3, \(P^{\prime}=\bigwedge\limits_{j=1}^{k}P_{i_{j}}\) is a \(P\)-primary ideal. Therefore, we can replace \(P_{i_{1}},\ldots,P_{i_{k}}\) by \(P^{\prime}\), and continue the process to guarantee that the condition (1) of Definition 2.3.2 holds. If condition (2) of Definition 2.3.2 does not hold, then we can eliminate ideals until it does hold, without changing the overall intersection.
While primary decompositions, or minimal primary decompositions, may not be unique, they possess certain uniqueness properties, which are outlined in the following theorem.
**Theorem 2.3.6**.: _Let \(I\) be a decomposible ideal in a quantale and \(I=\bigwedge\limits_{i=1}^{n}P_{i}\), a minimal primary decomposition. Let \(P^{\prime}_{i}=\mathcal{R}(P_{i})\), for \(i=1,\ldots,n\). Then the set \(\{P^{\prime}_{1},\ldots,P^{\prime}_{n}\}\) is composed of the prime ideals \(P\) of \(\mathcal{Q}\) such that \(P=\mathcal{R}((I:x)),\) for some \(x\in\mathcal{Q}\)._
Proof.: Suppose \(x\in\mathcal{Q}\). By Proposition 2.1.6(16) and Lemma 2.3.2, we have
\[\mathcal{R}((I:x))=\mathcal{R}\left(\left(\bigwedge\limits_{i=1}^{n}P_{i}:x \right)\right)=\mathcal{R}\left(\bigwedge\limits_{i=1}^{n}(P_{i}:x)\right)= \bigwedge\limits_{i=1}^{n}\mathcal{R}((P_{i}:x)).\]
From (1) and (2) of Proposition 2.3.4, we obtain
\[\mathcal{R}((I:x))=\bigwedge\limits_{i=1}^{n}\mathcal{R}((P_{i}:x))=\bigwedge \limits_{i,x\notin P_{i}}P^{\prime}_{i}.\]
If the intersection of a finite set of ideals is a prime ideal, then the intersection is equal to one of the ideals; thus, if \(\mathcal{R}(I:x)\) is a prime ideal, then
\[\mathcal{R}((I:x))\preccurlyeq\{P^{\prime}_{i}\mid x\notin P_{i}\}\preccurlyeq \{P^{\prime}_{1},\ldots,P^{\prime}_{n}\}.\]
For the converse, let \(i\in\{1,\ldots,n\}\). Because the primary decomposition is minimal, for each \(i\), there exists \(x_{i}\in\left(\bigwedge\limits_{j\neq i}P_{j}\right)\neg P_{i}\). If \(y\in(P_{i}:x_{i})\), then \(y\&x_{i}\in P_{i}\). Therefore,
\[y\&x_{i}\in P_{i}\wedge\left(\bigwedge\limits_{j\neq i}P_{j}\right)=I,\]
which implies that \(y\in(I:x_{i})\). Hence
\[(P_{i}:x_{i})\preccurlyeq(I:x_{i})\preccurlyeq(P_{i}^{\prime}:x_{i}),\]
where the second inclusion follows from the fact that \(I\preccurlyeq P_{i}\). Therefore, \((P_{i}:x_{i})=(I:x_{i})\). By Proposition 2.3.4(2), we have
\[\mathcal{R}((I:x_{i}))=\mathcal{R}((P_{i}:x_{i}))=P_{i}^{\prime}.\]
From this it follows that \(P_{i}^{\prime}\) form the set of those ideals \(\mathcal{R}((I:x))\) which are prime ideals in \(\mathcal{Q}\).
Suppose \(I\) is a decomposable ideal and \(I=\bigwedge\limits_{i=1}^{n}P_{i}\), a minimal primary decomposition. Let us denote \(\mathcal{R}(P_{i})=P_{i}^{\prime}\). We say that the prime ideals \(P_{1}^{\prime},\ldots,P_{n}^{\prime}\)_belong to_\(I\). The minimal elements of the set \(S=\{P_{1}^{\prime},\ldots,P_{n}^{\prime}\}\) with respect to inclusion are said to be _isolated_ prime ideals belonging to \(I\), and the others are called _embedded_ prime ideals. Define
\[I^{\mathbb{TSpec}_{\mathcal{Q}}}=\{P\in\operatorname{Spec}_{\mathcal{Q}}|P \supseteq I\}.\]
We shall show that the minimal elements of the set \(S\) are the minimal elements of the set \(I^{\mathbb{TSpec}_{\mathcal{Q}}}\). To obtain that, we need the following result.
**Proposition 2.3.7**.: _Let \(I\) be a decomposable ideal in a quantale \(\mathcal{Q}\), \(I=\bigwedge\limits_{i=1}^{n}P_{i}\) a minimal primary decomposition, with \(S\) the set of prime ideals belonging to \(I\). If \(P\in I^{\mathbb{TSpec}_{\mathcal{Q}}}\), then \(P\) contains an isolated prime ideal \(P_{j}^{\prime}\)._
Proof.: Set \(P_{i}^{\prime}=\mathcal{R}(P_{i})\), for \(i\in\{1,\ldots,n\}\). Applying Lemma 2.3.2, we obtain
\[\bigwedge\limits_{i=1}^{n}P_{i}^{\prime}=\bigwedge\limits_{i=1}^{n}\mathcal{R} (P_{i})=\mathcal{R}(I)\preccurlyeq\mathcal{R}(P)=P.\]
However, if a prime ideal contains an intersection of ideals, then at least one of the ideals in the intersection is contained in the prime ideal, therefore \(P_{j}^{\prime}\preccurlyeq P\), for some \(j\). The result now follows.
**Corollary 2.3.8**.: _The isolated prime ideals are the minimal elements in \(I^{\mathbb{TSpec}_{\mathcal{Q}}}\)._
Proof.: In Proposition 2.3.7, we have observed that \(S\preccurlyeq I^{\mathbb{TSpec}_{\mathcal{Q}}}\), so a fortiori if \(P_{j}^{\prime}\) is an isolated prime in \(S\), then \(P_{j}\in I^{\mathbb{TSpec}_{\mathcal{Q}}}\). Suppose now that \(P\in I^{\mathbb{TSpec}_{\mathcal{Q}}}\), with \(P\preccurlyeq P_{j}^{\prime}\). From Proposition 2.3.7, it follows that there exists an isolated prime ideal \(P_{k}^{\prime}\preccurlyeq P\), so \(P_{k}^{\prime}\preccurlyeq P_{j}^{\prime}\). Since \(P_{j}^{\prime}\) is isolated, \(P_{k}^{\prime}=P_{j}^{\prime}\), hence \(P=P_{j}^{\prime}\), and it follows that \(P_{j}^{\prime}\) is minimal in \(I^{\mathbb{TSpec}_{\mathcal{Q}}}\).
Although the primary ideals in different minimal decompositions of an ideal are not necessarily the same, the primary ideals whose radicals are isolated are the same. We aim now to establish this.
Lemma 2.3.9.: _Let \(I\) be a decomposable ideal in a quantale \(\mathcal{Q}\) and \(P_{j}\) an ideal in a minimal decomposition of \(I\) such that \(\mathcal{R}(P_{j})\) is an isolated prime ideal. Then \(P_{j}\) is composed of the elements \(a\in\mathcal{Q}\) for which there exists \(b\notin\mathcal{R}(P_{j})\) with \(a\&b\in I\)._
Proof.: Suppose \(I=\bigwedge\limits_{i=1}^{n}P_{i}\) is a minimal decomposition, with \(\mathcal{R}(P_{j})\) isolated. We claim that \(P_{i}\nsubseteq\mathcal{R}(P_{j})\), for \(i\neq j\). If possible, let \(P_{i}\preccurlyeq\mathcal{R}(P_{j})\) and let \(x\in\mathcal{R}(P_{i})\); then \(x^{n}\in P_{i}\), for some \(n\in\mathds{N}\), which implies that there exists \(m\in\mathds{N}\) such that \(x^{nm}\in P_{j}\), because \(P_{i}\preccurlyeq\mathcal{R}(P_{j})\), hence \(x\in\mathcal{R}(P_{j})\). It follows that \(\mathcal{R}(P_{i})\preccurlyeq\mathcal{R}(P_{j})\), which is impossible, because \(\mathcal{R}(P_{j})\) is isolated. Thus \(P_{i}\nlesseq\mathcal{R}(P_{j})\), as claimed.
Now let \(a\in P_{j}\). From what we have just seen, for \(i\neq j\), there exists \(b_{i}\in P_{i}\
In Theorem 2.2.10, it was noted that a radical ideal could be presented as the intersection of prime ideals that contain it. Now, we aim to illustrate in the next proposition that any proper ideal can be expressed in a comparable fashion, albeit utilizing irreducible ideals instead. This result extends Proposition 7.35 of [22]. However, prior to advancing further, we must establish a lemma.
**Lemma 2.4.2**.: _Consider a quantale \(\mathcal{Q}\). Assume that \(\bot\neq x\in\mathcal{Q}\) and \(I\) is a proper ideal in \(\mathcal{Q}\) such that \(x\notin I\). In this case, there exists an irreducible ideal \(J\) of \(\mathcal{Q}\) satisfying the conditions \(I\preccurlyeq J\) and \(x\notin J.\)_
Proof.: Let \(\{J_{\lambda}\}_{\lambda\in\Lambda}\) be a chain of ideals in \(\mathcal{Q}\) such that \(I\preccurlyeq J_{\lambda}\) and \(x\notin J_{\lambda}\) for all \(\lambda\in\Lambda\). Then \(I\preccurlyeq\bigvee_{\lambda\in\Lambda}J_{\lambda}\) and \(x\notin J_{\lambda}\), for all \(\lambda\in\Lambda\). By Zorn's lemma, there exists a maximal element \(J\) of this chain. Suppose that \(J=J_{1}\wedge J_{2}\). By the maximality condition of \(J\), we must have \(x\in J_{1}\) and \(x\in J_{2}\), and hence \(x\in J_{1}\wedge J_{2}=J,\) a contradiction. Therefore, \(J\) is the required irreducible ideal.
**Proposition 2.4.3**.: _If \(I\) is a proper ideal in a quantale \(\mathcal{Q}\), then \(I=\bigwedge\limits_{I\preccurlyeq J}\{J\mid J\in\operatorname{Irr}(\mathcal{Q})\}\)._
Proof.: By Lemma 2.4.2, there exists an irreducible ideal \(J\) of \(\mathcal{Q}\) such that \(I\preccurlyeq J\). Let
\[J^{\prime}=\bigwedge\limits_{I\preccurlyeq J}\{J\mid J\in\operatorname{Irr}( \mathcal{Q})\}.\]
Then \(I\preccurlyeq J^{\prime}\). We claim that \(J^{\prime}=I.\) If \(J^{\prime}\neq I\), then there exists an \(x\in J^{\prime}\neg I\), and by Lemma 2.4.2, there exists an irreducible ideal \(J^{\prime\prime}\) such that \(x\notin J^{\prime\prime}\) and \(I\preccurlyeq J^{\prime\prime}\), a contradiction.
In the context of a Noetherian ring, it is widely recognized that radicals can be expressed as a finite intersection of prime ideals. Expanding upon this idea, the following proposition extends this result to include any ideal by utilizing irreducible ideals. This result also extends Proposition 7.3 of [40]. We say a quantale _Noetherian_ if every ascending chain of ideals eventually becomes stationary.
**Proposition 2.4.4**.: _If \(\mathcal{Q}\) is a Noetherian quantale, then every ideal in \(\mathcal{Q}\) can be represented as the intersection of a finite number of irreducible ideals in \(\mathcal{Q}\)._
Proof.: Suppose that
\[\mathcal{F}=\left\{J\in\mathcal{R}(\mathcal{Q})\mid J\neq\bigwedge\limits_{j= 1}^{n}I_{j},\,I_{j}\in\operatorname{Irr}(\mathcal{Q})\right\}.\]
It is sufficient to show that \(\mathcal{F}=\emptyset\). Since \(\mathcal{Q}\) is Noetherian, \(\mathcal{F}\) has a maximal element, say \(I\). Since \(I\in\mathcal{F}\), it is not a finite intersection of irreducible ideals in \(\mathcal{Q}\). This implies that \(I\) is not irreducible. Hence, there are ideals \(J\) and \(K\) such that \(I\preccurlyeq J\), \(I\preccurlyeq K\), \(I\neq J\), \(I\neq K\), and \(I=J\wedge K.\) Since \(I\) is a maximal element of \(\mathcal{F}\), we must have \(J\), \(K\notin\mathcal{F}.\) Therefore, \(J\) and \(K\) are a finite intersection of irreducible ideals in \(\mathcal{Q}\), which followingly implies that \(I\) is also a finite intersection of irreducible ideals in \(\mathcal{Q}\), a contradiction.
The following proposition establishes the relationships between prime, semiprime, and strongly irreducible ideals, and it extends Proposition 7.36 of [22] and Theorem 2.1(i) of [2].
**Proposition 2.4.5**.: _If \(P\) is a strongly irreducible ideal in \(\mathcal{Q}\), then \(P\) is prime if and only if \(P\) is radical._
Proof.: Notice that if \(P\) is prime, then obviously \(P\) is a radical ideal. For the converse, suppose that \(P\) is a radical ideal and \(I\&J\preccurlyeq P\), for some \(I\), \(J\in\mathcal{I}_{\mathcal{Q}}\). Then
\[I\wedge J\preccurlyeq\mathcal{R}(I\wedge J)=\mathcal{R}(I\&J)\preccurlyeq \mathcal{R}(P)=P,\]
where the first inclusion and the first equality respectively follow from Lemma 2.2.4(1) and Lemma 2.2.4(5). Moreover, the second inclusion and the last equality respectively follow from Lemma 2.2.4(2) and the fact that \(P\) is a prime ideal. Since \(P\) is strongly irreducible, either \(I\preccurlyeq P\) or \(J\preccurlyeq P\).
In the context of commutative rings, it has been established (see [2, Theorem 2.1(ii)]) that every proper ideal is contained in a minimal strongly irreducible ideal. Remarkably, the same principle applies to strongly irreducible ideals in quantales.
**Proposition 2.4.6**.: _Every proper ideal in a quantale is contained in a minimal strongly irreducible ideal._
Proof.: Let \(I\) be a proper ideal in a quantale \(\mathcal{Q}\) and let
\[\mathcal{E}=\{J\,|\,I\preccurlyeq J,\,J\in\operatorname{Irr}^{+}(\mathcal{Q} )\}.\]
Note that every maximal ideal in \(\mathcal{Q}\) is prime and by Lemma 2.1.6(6), every prime ideal is strongly irreducible. Also, observe that by Theorem 2.1.9, every proper ideal is contained in a maximal ideal. Therefore, the set \(\mathcal{E}\) is nonempty. By Zorn's lemma, \(\mathcal{E}\) has a minimal element, which is our desired minimal strongly irreducible ideal.
The following result demonstrates the scenario in which all ideals in a quantale are strongly irreducible, and its proof is evident. This result extends Lemma 3.5 of [2].
**Proposition 2.4.7**.: _Every ideal in a quantale \(\mathcal{Q}\) is strongly irreducible if and only if \(\mathcal{I}_{\mathcal{Q}}\) is totally ordered._
We bring this subsection to a close with a theorem pertaining to arithmetic quantales, wherein the notions of irreducibility and strongly irreducibility align. The proof of this theorem is identical to that of [32, Theorem 3 and Theorem 7]
**Theorem 2.4.8**.: _In a arithmetic quantale \(\mathcal{Q}\), an ideal in \(\mathcal{Q}\) is irreducible if and only if it is strongly irreducible. Conversely, if an irreducible ideal in a quantale \(\mathcal{Q}\) is strongly irreducible, then \(\mathcal{Q}\) is arithmetic._
**Corollary 2.4.9**.: _In an arithmetic quantale, any ideal is the intersection of all strongly irreducible ideals containing it._
**Concluding Remarks 2.4.10**.: Building upon the framework of the "ideal theory of quantales" that we have begun to develop in this paper, we shall now provide a brief outline of our future work. In [24], our focus be on examining the properties of different ideals in relation to the quotient of a quantale. We shall also introduce the concept of localization for a quantale. Additionally, by considering modules over quantales, we shall extend some of the significant results in commutative algebra.
On the other hand, in [25], our objective be to establish lower topologies on distinguished classes of ideals in a quantale, and this extend the work done in [13] and [16] for rings. We shall investigate the topological properties of "quantale spaces," including quasi-compactness, separation properties, connectedness, continuity, and spectral spaces (as defined in [30]). Specifically, we shall delve into the detailed
examination of quantale spaces associated with prime ideals, minimal prime ideals, maximal ideals, and strongly irreducible ideals in a quantale. These investigations respectively extend the work carried out in [27], [31], [42], and [2] for rings. Furthermore, we shall demonstrate that the quantale space of proper ideals is spectral, thereby generalizing the corresponding result presented in [23] for rings.
While our focus in this paper was on a commutative quantale with identity \(1\), it is worth noting that the developed theory can be further extended to a complete lattice \((\mathsf{L},\preceq,\bot,\top)\) equipped with an operation denoted as \(\odot\) that retains Axioms (1), (2), and (4) from Definition 2.1.1, as well as the condition:
\[x\odot(y\lor z)=(x\odot y)\vee(x\odot z),\]
for all \(x\), \(y\), and \(z\) in \(\mathsf{L}\).
The attentive reader may have observed that we have thus far assumed commutativity in our quantales. However, by relaxing the commutativity axiom, we naturally arrive at the concept of primitive ideals (as described in [34]) in quantales. In the forthcoming paper [26], our objective be to explore Jacobson's structure theory for quantales. Specifically, we shall formulate and prove the Jacobson-Cheveley Density Theorem for quantales. In line with [33], we shall also examine the Jacobson topology on the primitive ideals in a quantale.
|
2310.17256 | fairret: a Framework for Differentiable Fairness Regularization Terms | Current fairness toolkits in machine learning only admit a limited range of
fairness definitions and have seen little integration with automatic
differentiation libraries, despite the central role these libraries play in
modern machine learning pipelines.
We introduce a framework of fairness regularization terms (fairrets) which
quantify bias as modular, flexible objectives that are easily integrated in
automatic differentiation pipelines. By employing a general definition of
fairness in terms of linear-fractional statistics, a wide class of fairrets can
be computed efficiently. Experiments show the behavior of their gradients and
their utility in enforcing fairness with minimal loss of predictive power
compared to baselines. Our contribution includes a PyTorch implementation of
the fairret framework. | Maarten Buyl, MaryBeth Defrance, Tijl De Bie | 2023-10-26T09:13:15Z | http://arxiv.org/abs/2310.17256v2 | # Fairret: a Framework for Differentiable Fairness Regularization Terms
###### Abstract
Current tools for machine learning fairness only admit a limited range of fairness definitions and have seen little integration with automatic differentiation libraries, despite the central role these libraries play in modern machine learning pipelines.
We introduce a framework of fairness regularization terms (fairrets) which quantify bias as modular objectives that are easily integrated in automatic differentiation pipelines. By employing a general definition of fairness in terms of linear-fractional statistics, a wide class of fairrets can be computed efficiently. Experiments show the behavior of their gradients and their utility in enforcing fairness with minimal loss of predictive power compared to baselines. Our contribution includes a PyTorch implementation of the fairret framework.
## 1 Introduction
Many machine learning _fairness_ methods aim to enforce mathematical formalizations of non-discrimination principles (Mehrabi et al., 2021), often by requiring statistics to be equal between groups (Agarwal et al., 2018). For example, we may require that men and women receive positive decisions at equal rates in binary classification (Dwork et al., 2012). The main interest in fairness tools is to meet such constraints without destroying the accuracy of the ML model.
A large class of these fairness tools utilizes _regularization terms_, i.e. quantifications of unfairness that can be added to the existing error term of an unfair ML model (Kamishima et al., 2012; Berk et al., 2017; Zafar et al., 2019; Padala and Gujar, 2021; Padh et al., 2021; Buyl and De Bie, 2022). The modularity of such loss terms appears to align well with the paradigm of automatic differentiation libraries like PyTorch (Paszke et al., 2019), which have become the bedrock of modern machine learning pipelines. However, the practical use of this modularity has seen little interest thus far.
ContributionsHence, we formalize a modular framework of fairness regularization terms (fairrets) and unify recent advances in fairness tools. A fairret quantifies a model's unfairness as a single value that is minimized like any other objective through automatic differentiation.
In this paper, we implement two types of fairrets: fairrets that directly penalize the _violation_ of fairness constraints and fairrets that minimize the distance between a model and its _projection_ onto the set of fair models. These fairrets make use of linear-fractional statistics (Celis et al., 2019), which support a wider range of fairness notions than the exclusively linear statistics typically considered in literature (Zafar et al., 2019; Agarwal et al., 2018; Alghamdi et al., 2020). Moreover, our framework generalizes to the simultaneous handling of multiple sensitive traits and (a weaker form of) fairness with respect to continuous sensitive variables.
We visualize the gradients of the proposed fairrets and evaluate their empirical performance in enforcing fairness notions compared to baselines. We infer that fairness notions with linear-fractional statistics are far harder to achieve than those with linear statistics, though the latter are far more popularly studied in literature.
Related WorkFairness tools are classified as _preprocessing_, _inprocessing_ or _postprocessing_(Mehrabi et al., 2021). fairrets perform inprocessing, as they are minimized during training.
The most straightforward and popular approach to fairness regularization terms is to directly penalize the violation of the fairness constraint (Zemel et al., 2013; Padala and Gujar, 2021; Wick et al., 2019), which we formalize as a fairret. We also take inspiration from postprocessing methods that project classifiers onto the set of fair classifiers (Alghamdi et al., 2020; Wei et al., 2020) by penalizing the cost of this projection (Buyl and De Bie, 2021).
Our framework makes extensive use of the observation by Celis et al. (2019) that many fairness definitions can be expressed as a parity between linear-fractional statistics. They propose a meta-algorithm to generically find optimal classifiers that satisfy this constraint. Instead, we employ a simpler (yet sufficiently expressive) linear-fractional form and propose a novel algorithm to use them in the construction of linear constraints such that a meta-algorithm is not necessary.
Popular fairness toolkits such as Fairlearn (Bird et al., 2020) and AIF360 (Bellamy et al., 2018) expect the underlying model in the form of _scikit-learn Estimators1_ that can be retrained at-will in fairness meta-algorithms. Instead, our proposed fairrets act as a loss term that can simply be added _within_ a training step. The aforementioned toolkits have some integration with automatic differentiation libraries in adversarial fairness approaches (Zhang et al., 2018), yet these still require full control over the training process and lack generality in the fairness notions they can enforce.
Footnote 1: [https://scikit-learn.org/1.3/developers/develop.html](https://scikit-learn.org/1.3/developers/develop.html) describes these _Estimators_
Two PyTorch-specific projects with similar goals as our paper are FairTorch (Masashi, 2020) and the Fair Fairness Benchmark (FFB) (Han et al., 2023). However, neither present a formal framework and both only support a limited range of fairness definitions.
## 2 Fairness in Binary Classification
In fair binary classification, we are provided with random variables \((\mathbf{X},\mathbf{S},Y)\) with \(\mathbf{X}\in\mathbb{R}^{d_{x}}\) the \(d_{x}\)-dimensional feature vector of an individual, \(\mathbf{S}\in\mathbb{R}^{d_{s}}\) their \(d_{s}\)-dimensional _sensitive_ feature vector and \(Y\in\{0,1\}\) the binary output label. Any expectations in the rest of the paper are taken over the joint distribution of these random variables \((\mathbf{X},\mathbf{S},Y)\).
The goal is to learn a classifier \(f\) such that its predictions \(f(\mathbf{X})\) match \(Y\) while avoiding discrimination with respect to \(\mathbf{S}\). In this section, we will assume \(f\) directly provides binary decisions, i.e. \(f:\mathbb{R}^{d_{s}}\rightarrow\{0,1\}\), as this is expected in traditional formalizations of fairness. However, since such 'hard' classifiers are not differentiable, we will instead be learning probabilistic classifiers in Sec. 3.
Further note that our definition of sensitive features \(\mathbf{S}\) as real-valued and \(d_{s}\)-dimensional vectors is a generalization of typical fairness definitions which assume a categorical (or binary) domain for sensitive features (Verma and Rubin, 2018). We will one-hot encode such categorical traits, e.g. by encoding 'white' or 'non-white' as the vectors \(\mathbf{S}=(1,0)^{\top}\) and \(\mathbf{S}=(0,1)^{\top}\) respectively. Our generalization allows us to take multiple non-exclusive sensitive traits into account by mapping them to different values \(S_{k}\) in the same vector \(\mathbf{S}\) for \(k\in[d_{s}]=\{0,...,d_{s}-1\}\). Additionally, by letting \(S_{k}\in\mathbb{R}\), we allow soft specifications of identity rather than requiring hard discretization.
### Partition Fairness
Though we will allow any feature vector \(\mathbf{S}\in\mathbb{R}^{d_{s}}\) in our framework, popular fairness definitions require every person to belong to exactly one demographic group. We call this _partition fairness_.
**Definition 1**.: _In **partition fairness**, \(\mathbf{S}\) is a one-hot encoding, i.e. \(S_{k}\in\{0,1\}\) and \(\sum_{k\in[d_{s}]}S_{k}=1\)._
**Example 1**.: _A straightforward, popular definition in partition fairness is Demographic Parity (DP), also known as statistical parity (Dwork et al., 2012; Verma and Rubin, 2018). It enforces_
\[\forall k\in[d_{s}]:P(f(\mathbf{X})=1\mid S_{k}=1)=P(f(\mathbf{X})=1)\]
_which states that all groups ought to get positive predictions at the same rate (i.e. the overall rate)._
_Let \(\gamma(k;f)\triangleq\frac{\mathbb{E}[S_{k}f(\mathbf{X})]}{\mathbb{E}[S_{k}]}\). It is easily shown that \(\gamma(k;f)=P(f(\mathbf{X})=1\mid S_{k}=1)\). Thus also_
\[P(f(\mathbf{X})=1\mid S_{k}=1)=P(f(\mathbf{X})=1)\iff\gamma(k;f)=\mathbb{E}[ f(\mathbf{X})].\]
In Example 1, fairness is formalized by requiring a statistic \(\gamma\) to be equal across groups. This principle can be generalized to a wide class of parity-based fairness notions. In particular, we consider those expressed through _linear-fractional_ statistics (Celis et al., 2019).
**Definition 2**.: _A **linear-fractional** statistic \(\gamma\) computes values \(\gamma(k;f)\in\mathbb{R}\) for sensitive variable \(S_{k}\) and classifier \(f:\mathbb{R}^{d_{x}}\to\{0,1\}\). We assume \(\gamma\) is differentiable with respect to \(f\). It takes the form_
\[\gamma(k;f)=\frac{\mathbb{E}[S_{k}(\alpha_{0}(\mathbf{X},Y)+f(\mathbf{X}) \beta_{0}(\mathbf{X},Y))]}{\mathbb{E}[S_{k}(\alpha_{1}(\mathbf{X},Y)+f(\mathbf{ X})\beta_{1}(\mathbf{X},Y))]}\]
_with \(\alpha_{0}\), \(\alpha_{1}\), \(\beta_{0}\), and \(\beta_{1}\) all functions that do not depend on \(\mathbf{S}\) or \(f\). Let \(\Gamma\) denote all such statistics. Also, let denote the overall statistic value without conditioning on \(\mathbf{S}\)._
**Definition 3**.: _A **fairness notion** is expressed through a statistic \(\gamma\in\Gamma\). The set \(\mathcal{F}_{\gamma}\) of classifiers that adhere to the fairness notion is defined as_
\[\mathcal{F}_{\gamma}\triangleq\left\{f:\mathbb{R}^{d_{x}}\to\{0,1\}\mid \forall k\in[d_{s}]:\gamma(k;f)=\overline{\gamma}(f)\right\}\]
_i.e. the statistic \(\gamma(k;f)\) for each \(S_{k}\) equals the overall statistic \(\overline{\gamma}(f)\)._
Indeed, the DP fairness notion in Example 1 is expressed as a fairness notion as defined in Def. 3 with linear-fractional statistics as defined in Def. 2. The same holds for the following notions.
**Example 2**.: _Equalized Opportunity (EO) (Hardt et al., 2016) only computes DP for actual positives \(Y=1\). Its statistic \(\gamma\) is thus the recall \(P(f(\mathbf{X})=1\mid Y=1,S_{k}=1)\), i.e. \(\gamma(k;f)=\frac{\mathbb{E}[S_{k}f(\mathbf{X})Y]}{\mathbb{E}[S_{k}Y]}\)._
**Example 3**.: _Predictive Parity (PP) (Chouldechova, 2017), which compares the precision statistic \(P(Y=1\mid f(\mathbf{X})=1,S_{k}=1)\), i.e. \(\gamma(k;f)=\frac{\mathbb{E}[S_{k}f(\mathbf{X})]}{\mathbb{E}[S_{k}f(\mathbf{X })]}\)._
**Example 4**.: _Treatment Equality (TE) (Berk et al., 2021) balances the ratios of false negatives over false positives, i.e. \(\gamma(k;f)=\frac{\mathbb{E}[S_{k}(1-f(\mathbf{X})Y]]}{\mathbb{E}[S_{k}f( \mathbf{X})(1-Y)]}\). Unlike the other notions, its \(\gamma\) is not a probability._
Table 1 summarizes the \(\alpha\) and \(\beta\) functions of several fairness notions (Verma and Rubin, 2018) with linear-fractional statistics. Their derivations are found in Appendix A.1.
**Definition 4**.: _A linear-fractional statistic \(\gamma\in\Gamma\) is **linear** when \(\beta_{1}(\mathbf{X},Y)\equiv 0\)._
_Let \(\Gamma_{\mathrm{L}}\subset\Gamma\) denote the set of all linear statistics._
Fairness notions with linear statistics \(\gamma\in\Gamma_{\mathrm{L}}\) are thus identified in Table 1 by checking the column for \(\beta_{1}\). Such notions are especially useful because the fairness constraint in Def. 3 is easily written as a linear constraint over classifier \(f\). In turn, this makes the set of fair classifiers \(\mathcal{F}_{\gamma}\) a convex set, which leads to convex optimization problems (Boyd and Vandenberghe, 2004). Thus, a constrained optimization of \(f\) can be efficiently performed if \(f\) is itself convex (Zafar et al., 2019).
However, fairness notions with linear-fractional statistics \(\gamma\in\Gamma\setminus\Gamma_{\mathrm{L}}\) do not directly lead to linear constraints in Def. 3. To facilitate optimization, we therefore propose to narrow the set of fair classifiers \(\mathcal{F}_{\gamma}\) to the subset where the statistics are all equal in a particular value \(c\).
**Definition 5**.: _Fix a \(c\in\mathbb{R}\). A **c-fixed fairness notion** is expressed through a linear-fractional statistic \(\gamma\in\Gamma\) such that the set \(\mathcal{F}_{\gamma}(c)\) of classifiers \(f\) that adhere to the fairness notion is defined as_
\[\mathcal{F}_{\gamma}(c)\triangleq\left\{f:\mathbb{R}^{d_{x}}\to\{0,1\}\mid \forall k\in[d_{s}]:\gamma(k;f)=c\right\}.\]
**Proposition 1**.: _With \(\gamma\in\Gamma\), the \(c\)-fixed fairness notion \(\mathcal{F}_{\gamma}(c)\) enforces **linear** constraints:_
\[\gamma(k;f)=c\iff\mathbb{E}[S_{k}(\alpha(\mathbf{X},Y,c)+f(\mathbf{X})\beta( \mathbf{X},Y,c))]=0\]
_where \(\alpha(\mathbf{X},Y,c)=\alpha_{0}(\mathbf{X},Y)-c\alpha_{1}(\mathbf{X},Y)\) and \(\beta(\mathbf{X},Y,c)=\beta_{0}(\mathbf{X},Y)-c\beta_{1}(\mathbf{X},Y)\)._
\begin{table}
\begin{tabular}{l c c c c} Fairness Definition & \(\alpha_{0}\) & \(\beta_{0}\) & \(\alpha_{1}\) & \(\beta_{1}\) \\ \hline Demographic Parity (Dwork et al., 2012) & 0 & 1 & 1 & 0 \\ Conditional Demographic Parity (Wachter et al., 2020) & 0 & \(\zeta(\mathbf{X})\) & \(\zeta(\mathbf{X})\) & 0 \\ Equal Opportunity (Hardt et al., 2016) & 0 & Y & Y & 0 \\ False Positive Parity (Hardt et al., 2016) & 0 & 1 - Y & 1 - Y & 0 \\ Predictive Parity (Chouldechova, 2017) & 0 & Y & 0 & 1 \\ False Omission Parity & Y & -Y & 1 & -1 \\ Accuracy Equality (Berk et al., 2021) & 1 - Y & 2Y - 1 & 1 & 0 \\ Treatment Equality (Berk et al., 2021) & Y & -Y & 0 & 1 - Y \\ \end{tabular}
\end{table}
Table 1: Fairness definitions and their \(\alpha\) and \(\beta\) functions. Conditional Demographic Parity encompasses many notions with an arbitrary function \(\zeta\) conditioned on the input \(\mathbf{X}\).
Using Prop. 1, we can still obtain linear constraints for fairness notions \(\mathcal{F}_{\gamma}\) with linear-fractional statistics \(\gamma\in\Gamma\setminus\Gamma_{\mathrm{L}}\) by considering their \(c\)-fixed variant \(\mathcal{F}_{\gamma}(c)\) instead. This sacrifices a degree of freedom because statistics \(\gamma(k;f)\) are no longer allowed to be equal for any overall statistic \(\overline{\gamma}(f)\), they must now do so for the specific case where \(\overline{\gamma}(f)=c\). However, there are \(c\) values that still lead to interesting sets \(\mathcal{F}_{\gamma}(c)\). In the fairrets we propose, we take an unfair classifier \(h\) and fix \(c=\overline{\gamma}(h)\) to construct the set of all _fair_ classifiers \(\mathcal{F}_{\gamma}(\overline{\gamma}(h))\) that would result from a fair redistribution of scores in \(h\) over the sensitive groups.
Though Prop. 1 is inspired by Celis et al. (2019), our use of this result vastly differs. Instead of fixing the statistics to a single value \(c\), they set many pairs of upper and lower bounds for each group's statistics, giving rise to as many optimization programs. They then propose a meta-algorithm that searches the best classifier over each of these programs. A meta-algorithm is not necessary in our framework, as we will allow \(c\) to evolve during training. While we have no formal convergence guarantees for this approach, empirical results show it works well in practice.
### Beyond Partition Fairness
Having firmly rooted our definitions in partition fairness (Def. 1), we now abandon its assumptions. First, we allow \(S_{k}\in\mathbb{R}\). Second, we extend to multiple sensitive features with \(\sum_{k}S_{k}\in\mathbb{R}\).
#### 2.2.1 Continuous Sensitive Values
Admitting continuous values for someone's sensitive trait, i.e. \(S_{k}\in\mathbb{R}\) allows us to take naturally continuous features, such as age, into account. Also, it provides an opportunity for an imprecise specification of demographic group membership.
For instance, instead of exactly knowing the gender of an individual, we may only have a probability available, e.g. because it is noisily predicted by a third-party classifier, or to protect the individual's privacy. By allowing \(S_{k}\in(0,1)\), the attribute \(S_{k}\) could then express 'woman-ness' instead of a binary 'woman' or 'not woman'. Thus, we also allow individuals to themselves quantify how strongly they identify with a group, rather than requiring a binary membership.
Our notation already generalizes to non-binary \(S_{k}\) values; they can simply be filled in for linear-fractional statistics \(\gamma\in\Gamma\) as defined in Def. 2. Fairness as formalized in Def. 3 can then still be enforced through \(\gamma(k;f)=\overline{\gamma}(f)\).
**Remark 1**.: _Partition fairness constraints are derived from the ideal that a set of distinct groups are treated equally, as measured through a statistic. This does not directly apply for a non-binary \(S_{k}\). For example, if there is only one, continuous sensitive variable \((S_{0})=\mathbf{S}\) such as the age of an individual, then we cannot compare \(\gamma(0;f)\) to another group's statistics. Instead, \(\gamma(0;f)\) must be compared to a value independent of \(\mathbf{S}\)._
_Enforcing \(\gamma(k;f)=\overline{\gamma}(f)\) is then a sensible choice, as it satisfies key properties one can expect from a fairness measure. First, the constraint is met when \(S_{k}\equiv s\), i.e. when \(S_{k}\) is a deterministic constant. Second, it holds if \(S_{k}\) has no linear influence on the nominator and denominator of \(\gamma\), i.e._
\[\mathrm{cov}(S_{k},\alpha_{0}(\mathbf{X},Y)+f(\mathbf{X})\beta_{0}(\mathbf{X },Y))=\mathrm{cov}(S_{k},\alpha_{1}(\mathbf{X},Y)+f(\mathbf{X})\beta_{1}( \mathbf{X},Y))=0\implies\gamma(k;f)=\overline{\gamma}(f)\]
_For a full derivation of this result, we refer to Appendix B.1._
#### 2.2.2 Multiple Axes of Discrimination
By allowing \(\sum_{k}S_{k}\in\mathbb{R}\), we support that \(\mathbf{S}\) contains information about people from several sensitive traits, e.g. gender, ethnicity, and religion. Because these each form a possible axis of discrimination, we can'sum' these sources of discrimination by combining the constraints.
For example, if pairs of sensitive features \((S_{0},S_{1})\) and \((S_{2},S_{3})\) each partition the dataset, then fairness requires both \(\gamma(0;f)=\gamma(1;f)=\overline{\gamma}(f)\) and \(\gamma(2;f)=\gamma(3;f)=\overline{\gamma}(f)\). Combined, these constraints make up the fairness definition in Def. 3. The use of one-hot notations for sensitive values thus already allows us to combine axes of discrimination for categorical sensitive traits.
**Remark 2**.: _An important limitation is that we only view fairness separately per axis of discrimination. Outside the partition fairness setting, this means that some intersections of sensitive groups, e.g. 'black woman', will not be represented in the constraints that enforce fairness with respect to 'black' and 'woman' separately (Kearns et al., 2018). A toy example is given in Appendix B.2._
## 3 Fairness Regularization Terms
The popular approach to modern machine learning is to construct pipelines consisting of modular, parameterized components that are differentiable from the objective to the input. We therefore use _probabilistic_ classifier models
\(h:\mathbb{R}^{d_{s}}\to(0,1)\) from now on, where decisions are sampled from a Bernoulli distribution with parameter \(h(\mathbf{X})\). Let \(\mathcal{H}\) denote the hypothesis class of these models.
**Remark 3**.: _Fairness statistics \(\gamma(k;h)\) over the output of a probabilistic classifier \(h\) only approximately verify their respective fairness notions, as these were only defined for hard classifiers with a binary output (Lohaus et al., 2020). In Appendix B.3, we discuss the impact of this approximation and how its fidelity can be traded-off with the quality of the gradient of \(\gamma(k;h)\) with respect to \(h\)._
In binary classification, we minimize a loss \(\mathcal{L}_{Y}(h)\) over the probabilistic classifier \(h\) given output labels \(Y\), e.g. the cross-entropy. In _fair_ binary classification we additionally pursue \(h\in\mathcal{F}_{\gamma}\):
\[\min_{h\in\mathcal{F}_{\gamma}}\mathcal{L}_{Y}(h) \tag{1}\]
For linear-fractional statistics, the constraint is linear when considering the \(c\)-fixed variant of \(\mathcal{F}_{\gamma}\) (using Prop. 1). However, for non-convex models \(h\), the constrained optimization of \(h\) will remain non-convex as well. In the general case, we thus relax \(h\in\mathcal{F}_{\gamma}\) and instead incur a cost to \(h\not\in\mathcal{F}_{\gamma}\).
**Definition 6**.: _A **fairness regularization term** (fairret) \(R_{\gamma}(h):\mathcal{H}\to\mathbb{R}_{\geq 0}\) quantifies the unfairness of the model \(h\in\mathcal{H}\) with respect to the fairness notion defined through statistic \(\gamma\)._
_A fairret is **strict** if it holds that \(h\in\mathcal{F}_{\gamma}\iff R_{\gamma}(h)=0\)._
The objective in Eq. (1) is then relaxed as
\[\min_{h}\mathcal{L}_{Y}(h)+\lambda R_{\gamma}(h). \tag{2}\]
with \(\lambda\) a hyperparameter. The objective in Eq. (2) is equivalent to Eq. (1) for \(\lambda\to\infty\) if \(R_{\gamma}\) is strict.
**Remark 4**.: _We call \(R_{\gamma}\) a regularization term, yet its purpose is not to reduce model complexity or improve generalization performance, in contrast to traditional regularization in machine learning (Kukacka et al., 2017). Instead, we aim to limit the hypothesis class of \(h\) to the set of fair classifiers._
In what follows, we introduce two archetypes of fairrets: _violation_ and _projection_. We visualize \(\nabla_{h}R_{\gamma}\) for each fairret in Fig. 1 with \(\gamma\) the positive rate statistic (thereby enforcing DP).
Figure 1: The model \(h\) was trained on the _ACSIncome_ dataset without fairret (i.e. \(\lambda=0\)) and ends up with disparate positive rates \(\gamma(0;h)>\overline{\gamma}(h)>\gamma(1;h)\) for the one-hot encoded sensitive variables \((S_{0},S_{1})\). These should be brought closer to the overall positive rate \(\overline{\gamma}(f)\). We show probability scores \(h\) and the gradients3 of several fairrets \(R_{\gamma}\) with respect to \(h\). The gradients are normalized by dividing them by their maximum absolute value per fairret and per group. They are positive for samples with \(S_{0}=1\), implying their scores should decrease, and vice versa for \(S_{1}=1\).
### Violation FAIRRETs
To quantify \(h\notin\mathcal{F}_{\gamma}\), we can start from the _violation_\(\mathbf{v}(h)\) of the constraint that defines \(\mathcal{F}_{\gamma}\):
\[\mathbf{v}_{k}(h)=\left|\frac{\gamma(k;h)}{\overline{\gamma}(h)}-1\right| \tag{3}\]
with \(\mathbf{v}:\mathcal{H}\rightarrow\mathbb{R}^{d_{s}}\) a vector-valued function with components \(\mathbf{v}_{k}\). Clearly, \(\mathbf{v}(h)=\mathbf{0}\iff h\in\mathcal{F}_{\gamma}\).
Note that \(\mathbf{v}(h)\) is normalized2 by \(\overline{\gamma}(h)\) such that a classifier cannot minimize \(\mathbf{v}(h)\) by uniformly downscaling its statistics \(\gamma\) without reducing relative differences between groups (Celis et al., 2019).
Footnote 2: In cases where \(\overline{\gamma}(h)=0\), we can simply use \(\mathbf{v}_{k}(h)=\left|\gamma(k;h)\right|\) instead. We assume \(h(\mathbf{X})\in(0,1)\), so this only occurs in degenerate cases for the notions in Table 1 (like when all \(Y=0\) for Equal Opportunity).
**Definition 7**.: _We define the **Norm** fairret as \(R_{\gamma}(h)\triangleq\left\|\mathbf{v}(h)\right\|\), with \(\left\|\cdot\right\|\) a norm over \(\mathbb{R}^{d_{s}}\)._
Many variants of the Norm fairret have been proposed, e.g. by Zemel et al. (2013), Padala and Gujar (2021), Wick et al. (2019) and Chuang and Mroueh (2020). However, fairness evaluation metrics often only consider the maximal violation. Hence, we propose the SmoothMax variant.
**Definition 8**.: _We define the **SmoothMax** fairret as \(R_{\gamma}(h)\triangleq\log\sum_{k\in[d_{s}]}\exp(\mathbf{v}_{k}(h))-\log d_{s}\)_
Because the SmoothMax performs the log-sum-exp operation over the violation, it can be considered a smooth approximation of the maximum. We subtract \(\log d_{s}\) to ensure the fairret is strict.
Generally, violation fairrets can be characterized as functions of the violation \(\mathbf{v}(h)\). This lends them interpretability, but it also means that the gradient3\(\nabla_{h}R_{\gamma}\) decomposes as
Footnote 3: There is some abuse of notation here. When taking the gradient or Jacobian with respect to \(h\), we take it with respect to the vector of \(n\) outputs of \(h\) for a set of \(n\) input features sampled from the distribution over \(\mathbf{X}\).
\[\nabla_{h}R_{\gamma}=\left(\frac{\partial\mathbf{v}}{\partial h}\right)^{ \top}\nabla_{\mathbf{v}}R_{\gamma} \tag{4}\]
with \(\frac{\partial\mathbf{v}}{\partial h}\) the Jacobian3 of \(\mathbf{v}(h)\). The gradients of violation fairrets \(R_{\gamma}\) thus only differ in the \(\nabla_{\mathbf{v}}R_{\gamma}\) gradient. Hence, the Norm fairret is excluded from Fig. 1 because its gradients equal those of SmoothMax after normalization. Figure 1 also suggests that violation fairrets convey little information on how each individual \(h(\mathbf{X})\) score should be modified. Instead, they merely direct scores to uniformly increase or decrease within each group.
Footnote 3: There is some abuse of notation here. When taking the gradient or Jacobian with respect to \(h\), we take it with respect to the vector of \(n\) outputs of \(h\) for a set of \(n\) input features sampled from the distribution over \(\mathbf{X}\).
### Projection FAIRRETs
Recent postprocessing approaches to fairness redistribute all individual probability scores of a model \(h(\mathbf{X})\) to a fair scores vector with a minimal loss in predictive power. For example, Alghamdi et al. (2020) project the scores onto the fair set \(\mathcal{F}_{\gamma}\) as a postprocessing step. Yet, the cost of this projection can be seen as a quantification of unfairness that may be minimized as a fairret during training.
Given a statistical divergence or distance \(D\), we can generally define such a _projection_ fairret as
\[R_{\gamma}(h)\triangleq\min_{f\in\mathcal{F}_{\gamma}(\overline{\gamma}(h))} \mathbb{B}[D(f(\mathbf{X})\parallel h(\mathbf{X}))]. \tag{5}\]
Importantly, we do not project \(h\) onto the general fair set \(\mathcal{F}_{\gamma}\), but on the \(c\)-fixed subset \(\mathcal{F}_{\gamma}(c)\) with \(c=\overline{\gamma}(h)\). The \(c\)-fixing is done such that the projection only requires linear constraints for linear-fractional statistics (see Prop. 1). Equation (5) is then a convex optimization problem if we limit ourselves to a \(D\) that is convex with respect to \(f\), which is the case for all projections discussed here. In particular, we \(c\)-fix to the overall statistic \(\overline{\gamma}(h)\) of \(h\) because this ensures \(h\) can always be projected onto itself if it is already fair, as then \(h\in\mathcal{F}_{\gamma}(\overline{\gamma}(h))\).
**Definition 9**.: _The \(D_{\mathrm{KL}}\)**-projection** uses the binary Kullback-Leibler divergence_
\[D_{\mathrm{KL}}(f(\mathbf{X})\parallel h(\mathbf{X}))\triangleq f(\mathbf{X}) \log\frac{f(\mathbf{X})}{h(\mathbf{X})}+(1-f(\mathbf{X}))\log\frac{1-f( \mathbf{X})}{1-h(\mathbf{X})}.\]
The \(D_{\mathrm{KL}}\)-divergence is both a Csiszar divergence and a Bregman divergence (Amari, 2009). Also, the cross-entropy error minimized in \(\mathcal{L}_{Y}(h)\) equals \(D_{\mathrm{KL}}(Y\parallel h(\mathbf{X}))\) up to a constant. The minimization of Eq. (2) thus comes down to simultaneously minimizing \(D_{\mathrm{KL}}\) between \(h(\mathbf{X})\) and the data \(Y\), and between \(h(\mathbf{X})\) and the closest \(f\in\mathcal{F}_{\gamma}(\overline{\gamma}(h))\)(Buyl and De Bie, 2021).
**Definition 10**.: _The \(D_{\mathrm{JS}}\)**-projection** uses the binary Jensen-Shannon divergence._
\[D_{\mathrm{JS}}(f(\mathbf{X})\parallel h(\mathbf{X}))\triangleq\frac{1}{2}D_{ \mathrm{KL}}(f(\mathbf{X})\parallel m(\mathbf{X}))+\frac{1}{2}D_{\mathrm{KL}}(h (\mathbf{X})\parallel m(\mathbf{X}))\]
_with \(m(\mathbf{X})=\frac{1}{2}f(\mathbf{X})+\frac{1}{2}h(\mathbf{X})\)._
Just like \(D_{\mathrm{KL}}\), the \(D_{\mathrm{JS}}\)-divergence is a Csiszar divergence. However, the \(D_{\mathrm{JS}}\)-divergence is symmetric with respect to its arguments \(f\) and \(h\), which is not the case for the \(D_{\mathrm{KL}}\)-divergence.
**Definition 11**.: _The \(D_{\mathrm{SED}}\)**-projection** uses the squared Euclidean distance between the two points \((1-f(\mathbf{X}),f(\mathbf{X}))\) and \((1-h(\mathbf{X}),h(\mathbf{X}))\):_
\[D_{\mathrm{SED}}(f(\mathbf{X})\parallel h(\mathbf{X}))\triangleq 2(f(\mathbf{X })-h(\mathbf{X}))^{2}.\]
\(D_{\mathrm{SED}}\) is a Bregman divergence between the Bernoulli distributions with parameters \(f(\mathbf{X})\) and \(h(\mathbf{X})\).
In practice, we evaluate projection fairrets\(R_{\gamma}(h)\) in two steps.
\[\text{(i)}\quad f^{*}=\operatorname*{arg\,min}_{f\in\mathcal{F}_{ \gamma}(\overline{\gamma}(h))}\mathbb{E}[D(f(\mathbf{X})\parallel h(\mathbf{ X}))]\] \[\text{(ii)}\quad R_{\gamma}(h)=\mathbb{E}[D(f^{*}(\mathbf{X}) \parallel h(\mathbf{X}))]\]
While keeping \(h\) fixed, step (i) computes the overall statistic \(\overline{\gamma}(h)\) and then finds the projection \(f^{*}\) through constrained optimization. Subsequently, step (ii) keeps \(f^{*}\) fixed and computes \(\mathbb{E}[D(f^{*}(\mathbf{X})\parallel h(\mathbf{X}))]\) as a function of \(h\), which we use to compute the gradient with respect to \(h\). This gradient differs from the actual gradient of the optimization as a function of \(h\) in Eq. (5), because the latter would require us to treat \(f^{*}\) as a function of \(h\). However, by treating \(f^{*}\) as fixed instead (without backpropagating through it), we significantly simplify the fairret's implementation. The optimization in step (i) can then be solved generically using specialized libraries such as cvgy (Agrawal et al., 2018; Diamond and Boyd, 2016). In our experiments, we find that only 10 optimization steps is enough to get a reasonable approximation of the solution. We refer to Appendix C.2 for a discussion of this approximation and Appendix C.1 for a visualization of each projection \(f^{*}\).
Figure 1 shows that the gradients of the projection fairrets increase with higher values of \(h\). We hypothesize this occurs when \(\gamma(k;h)>\overline{\gamma}(h)\) because \(\gamma(k;h)\) is more easily decreased by reducing higher \(h\) values than lower ones. Conversely, when \(\gamma(k;h)<\overline{\gamma}(h)\), there is more to gain from increasing lower \(h\) values than higher ones. The sharp bend of the gradients of the \(D_{\mathrm{SED}}\)-projection is explained in Appendix C.1 through an analysis of the projected distributions.
### Analysis
**Proposition 2**.: _All fairrets presented in this paper (i.e. Def. 7, 8, 9, 10 and 11) are strict._
Hence, all proposed fairrets can indeed be properly regarded as quantifications of unfairness. Proofs are provided in Appendix A.3.
Moreover, they are differentiable with respect to \(h\). Violation fairrets owe this to the differentiability of \(\gamma\) and projection fairrets to the differentiability of \(D\). Hence, fairrets are easily implemented with an automatic differentiation library like PyTorch. Their computational overhead is unaffected by the complexity of the parameters \(\boldsymbol{\theta}\) of the model \(h\), as the gradients \(\nabla_{\boldsymbol{\theta}}\mathcal{L}_{Y}=\left(\frac{\partial h}{\partial \boldsymbol{\theta}}\right)^{\top}\nabla_{h}\mathcal{L}_{Y}\) and \(\nabla_{\boldsymbol{\theta}}R_{\gamma}=\left(\frac{\partial h}{\partial \boldsymbol{\theta}}\right)^{\top}\nabla_{h}R_{\gamma}\) of both loss functions in Eq. (2) share the computation of the Jacobian \(\frac{\partial h}{\partial\boldsymbol{\theta}}\).
It is common to minimize \(\mathcal{L}_{Y}\) using mini-batches; the same batches can be used to minimize \(R_{\gamma}\). Indeed, this is done in our experiments. Yet, though this makes fairrets more scalable, insufficient batch sizes will lead to poor approximations of the statistics \(\gamma\). Clearly, the mean violation \(\mathbf{v}(h)\) in Eq. (3) computed over mini-batches is not an unbiased estimate of the actual violation computed over all data. We thus report the mean SmoothMax loss for increasing batch sizes in Appendix C.3.
## 4 Experiments
### Setup
Experiments were conducted on the _Bank_(Moro et al., 2014), _CreditCard_(Yeh and hui Lien, 2009), _LawSchool4_, and _ACSIncome_(Ding et al., 2021) datasets. Each has multiple sensitive features, including some continuous. The classifier \(h\) was a fully connected neural net with hidden layers of sizes [256, 128, 32] followed by a sigmoid and did not take sensitive features \(\mathbf{S}\) as input. We trained with all fairrets discussed in Sec. 3 but only report results of Norm, \(D_{\text{IS}}\)-projection and \(D_{\text{SED}}\)-projection in Appendix C.4 to avoid clutter here. The remaining fairrets, SmoothMax and \(D_{\text{KL}}\)-projection, were representative for their archetype. These are compared against three baselines implemented in the Fair Fairness Benchmark (FFB) by Han et al. (2023), as their implementation provides these baselines as loss terms in idiomatic PyTorch. They are _PRemover_(Kamishima et al., 2012), _HSIC_(Perez-Suay et al., 2017), and _AdvDebias_(Adel et al., 2019) (where the reverse of the adversary's loss is the fairness loss term). In contrast to the fairret
Figure 2: Mean test set results with confidence ellipse for the standard error. Each marker is a separate combination of dataset, fairret, fairret strength, and statistic. Results in the lower right are optimal. Failed runs (with an AUROC far worse than the rest) are omitted.
implementations, they only accept a single, categorical sensitive attribute. Each fairret and FFB fairness loss was added to the cross-entropy loss according to Eq. (2) in a separate training run for a range of strengths \(\lambda>0\).
We measured fairness over the four statistics \(\gamma\) in Table 1 that relate to Demographic Parity (DP), Equal Opportunity (EO), Predictive Parity (PP), and Treatment Equality (TE) respectively. Violation of each fairness notion is computed as \(\max_{k}\,\mathbf{v}_{k}(h)\) (see Eq. (3)). Each fairret was minimized with respect to each \(\gamma\) in a separate training run (and only the optimized violation is reported). The three FFB baselines only consider one fairness notion, which is to maximize independence between the model's output and the sensitive attributes. Their violation is reported for each statistic \(\gamma\).
In summary, there was an experiment run for each dataset, fairness method, fairness strength \(\lambda\), and statistic \(\gamma\) (except for the FFB baselines). Finally, we also use the _Unfair_ baseline with \(\lambda=0\). Each of these combinations was repeated across 10 random seeds with each different train/test splits.
Appendix D provides further details on the experiment setup, i.e. the datasets, hyperparameters, the baselines implementation, the computation of the confidence ellipses and runtimes.
### Results
Test set results are visualized in Fig. 2; train set results are found in Appendix C.5 (and display the same trends). We separately discuss the notions with linear and with linear-fractional statistics.
**For DP and EO, which have _linear_ statistics**, both the SmoothMax and \(D_{\text{KL}}\)-projection fairrets are effectively used to minimize the fairness violation with respect to multiple sensitive attributes while minimally suffering a loss in AUROC scores, though the projection fairret clearly performs better than the violation-based SmoothMax fairret. As expected, the FFB baselines perform worse than the methods implemented in our fairret framework, since they cannot be configured to optimize the same general range of fairness definitions. Also, their implementation only minimizes bias with respect to a single sensitive attribute, and so they are oblivious to some of the components in \(\mathbf{S}\) that the violation in Fig. 2 measures. We report their violations on this single attribute in Appendix C.6, though the fairrets still outperform them there as well.
**For PP and TE, which have _linear-fractional statistics_**, all methods appear to struggle far more. SmoothMax is most consistent and never makes the fairness violation worse, yet the \(D_{\text{KL}}\)-projection in most cases makes both the fairness violation and the AUROC worse. The same occurs for the FFB baselines. To some extent, this can be attributed to overfitting, as SmoothMax leads to a significantly more consistent reduction of the train set fairness violation than the test set (see Appendix C.5). Still, non-linear fairness notions are clearly harder to optimize, which aligns with the results of Celis et al. (2019). Though Barocas et al. (2019) conclude that sufficiency (a notion related to PP) 'often comes for free', further work is needed to better understand how such notions can be consistently achieved.
## 5 Conclusion
The fairret framework allows for a wide range of fairness definitions and tools by comparing linear-fractional statistics for each sensitive feature. We implement several fairrets and show how they are easily integrated in existing machine learning pipelines utilizing automatic differentiation.
Empirically, violation fairrets like SmoothMax consistently lead to trade-offs between fairness and AUROC, though the more involved projection fairrets like the \(D_{\text{KL}}\)-projection clearly outperform them on fairness definitions with linear statistics. However, all methods struggle with fairness notions that have linear-fractional statistics like PP and TE, which have mostly been ignored in prior work. This signals a lucrative direction for future research.
## Ethics Statement
The fairet framework was made as a technical tool to help unveil and address a mathematical formalization of fairness in machine learning systems. However, such tools should never be considered a sufficient solution to truly achieve fairness in real-world decision processes (Buyl and De Bie, 2023), e.g. because the social, human component of fairness is completely outside the control of this framework (Selbst et al., 2019). There is a significant risk that technologies such as ours may anyway be abused to suggest discriminatory bias has been'removed' from a decision process without actually addressing underlying injustices (Hoffmann, 2019).
## Acknowledgments
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) (ERC Grant Agreement no. 615517), and under the European Union's Horizon 2020 research and innovation programme (ERC Grant Agreement no. 963924), from the Special Research Fund (BOF) of Ghent University (BOF20/IBEF/117), from the Flemish Government under the "Onderzoeksprogramm Artificiele Intelligentie (AI) Vlaanderen" programme, and from the FWO (project no. GOF9816N, 3G042220). MB is supported by a doctoral scholarship from the Special Research Fund (BOF) of Ghent University (reference number: BOF20/DOC/144).
|
2308.02377 | Sowing 'Seeds of Doubt': Cottage Industries of Election and Medical
Misinformation in Brazil and the United States | We conducted ethnographic research with 31 misinformation creators and
consumers in Brazil and the US before, during, and after a major election to
understand the consumption and production of election and medical
misinformation. This study contributes to research on misinformation ecosystems
by focusing on poorly understood small players, or "micro-influencers", who
create misinformation in peer-to-peer networks. We detail four key tactics that
micro-influencers use. First, they typically disseminate "gray area" content
rather than expert-falsified claims, using subtle aesthetic and rhetorical
tactics to evade moderation. Second, they post in small, closed groups where
members feel safe and predisposed to trust content. Third, they explicitly
target misinformation consumers' emotional and social needs. Finally, they post
a high volume of short, repetitive content to plant seeds of doubt and build
trust in influencers as unofficial experts. We discuss the implications these
micro-influencers have for misinformation interventions and platforms' efforts
to moderate misinformation. | Amelia Hassoun, Gabrielle Borenstein, Beth Goldberg, Jacob McAuliffe, Katy Osborn | 2023-08-04T15:18:28Z | http://arxiv.org/abs/2308.02377v2 | Sowing 'Seeds of Doubt': Cottage Industries of Election and Medical Misinformation in Brazil and the United States
### Abstract
_We conducted ethnographic research with 31 misinformation creators and consumers in Brazil and the US before, during, and after a major election to understand the consumption and production of election and medical misinformation. This study contributes to research on misinformation ecosystems by focusing on poorly understood small players, or'micro-influencers,' who create misinformation in peer-to-peer networks. We detail four key tactics that micro-influencers use. First, they typically disseminate misleading 'gray area' content rather than falsifiable claims, using subtle aesthetic and rhetorical tactics to evade moderation. Second, they post in small, closed groups where members feel safe and predisposed to trust content. Third, they explicitly target misinformation consumers' emotional and social needs. Finally, they post a high volume of short, repetitive content to plant'seeds of doubt' and build trust in influencers as unofficial experts. We discuss the implications these micro-influencers have for misinformation interventions and platforms' efforts to moderate misinformation._
## Introduction
Understanding why and how people consume, trust, and disseminate misinformation is crucial for developing effective strategies to combat its influence. This paper presents the results of ethnographic research with 31 misinformation creators and consumers in Brazil and the United States, before, during, and after the 2022 Brazil presidential election and US midterm elections. We examined the following questions:
_RQ1: How and why do people encounter, trust, and amplify misinformation?_
_RQ2: What tactics and signifiers of trust do misinformation creators use to influence others and amplify misinformation?_
_RQ3: How are consumers' trust heuristics and creators' tactics informed by the affordances and dynamics of online platforms?_
Existing literature often defines disinformation as explicit and deliberate fabrication (Bennett and Livingston, 2018; Damstra et al., 2021; Dan et al., 2021; Freelon and Wells, 2020), spread primarily by mega-influencers (Center for Countering Digital Hate, 2021). In this paper, we first argue for expanding disinformation studies' focus beyond political elites and popular superspreaders to'micro-influencers' (<100K followers) who produce misinformation within what we term a 'cottage industry' of relatable, trusted peer-to-peer networks. By investigating these lesser-studied actors and locations of misinformation activity, our research shows how diffuse grassroots creation, sharing and engagement contribute to misinformation's spread and influence.
Second, researchers employ a definitional divide between intentional disinformation and unintentional misinformation (Hameleers, 2022). In contrast, we found participants predominantly consuming, amplifying and creating more subtly misleading 'gray area' content (Krause et al., 2022)--with multiple or ambiguous intentions. We argue for the overlapping study of mis- and disinformation (e.g., Kapantai et al., 2020; Anderson, 2021), moving beyond typologies to analyze the effects of this content on humans (Wardle, 2023). In this paper, we use'misinformation' to refer to this misleading information, regardless of (often ambiguous) creator intentionality.
Third, we detail the underlying social and emotional motivations behind misinformation sharing, responding to researchers' calls for a'more comprehensive picture of the emotional nature of misinformation' (Pasquetto et al., 2020: 5; Kim and Chen, 2022). We find misinformation creation and consumption that fulfilled unmet emotional needs--for example, a desire for recognition--aided its spread and influence, while participants often found the content's veracity secondary or unimportant.
Finally, we found that repetitive exposure to similar misinformative messages (often in short-form content like memes or tweets) increased participants' misinformation belief and engagement. This finding contrasts with the prevalent epiphanic'red pill' metaphor and related cultural imaginaries that individuals come to trust in unorthodox or extreme views through singular, watershed moments (Stern, 2019; Madison, 2021). We show how misinformation creators strategically focus on content quantity across platforms, planting many small'seeds of doubt' to foster an engaged community.
We next situate our study in existing misinformation literature, detail our research methods and key findings, and discuss implications for future misinformation research and interventions.
#### Sourcing misinformation: from big to small influencers
Ethnographic explorations of actors producing misinformation since social media's rise in the 2010s emphasize the key role of accessible, democratized tools for crafting misleading content (Polleri, 2022; Woolley, 2023). This technical toolkit is available to the average\(-\)not necessarily technological savvy\(-\)individual, from the production of GIFs and gossip to livestreams. As it becomes increasingly easy to create and share misinformation, misinformation (and its difference from disinformation) has become increasingly subtle (Guess, 2020). Paris and Donovan (2019) use the term 'cheap fakes' to distinguish easy-to-produce, but harder to verify or debunk, forms of audiovisual manipulation from wholly fabricated 'deep fakes'. Our study analyzes the emergent effects of this democratized misinformation content creation and amplification.
Recent scholarship exploring how this democratization of production has led to the proliferation of misinformation typically diagnoses misinformation's severity based upon its reach, using metrics of impressions, views, or shares. Network modeling approaches which examine the nature and temporality of misinformation dispersion often trace misinformation to public figures and influencers with large online followings (Nogara et al., 2022; Allcott et al., 2019; Allen et al., 2020). This may be because these prominent, public accounts are easy to identify and collect data on. Such was the case in the 2021 'Disinformation Dozen' report, which found that 65% of COVID-19 misinformation on mainstream social media sites originated from 12 public accounts (Center for Countering Digital Hate, 2021).
Less attention has been paid to how smaller accounts contribute to misinformation creation and spread. Marketers and propagandists have popularized the use of micro-influencers (1,000-100,000 followers, sometimes called 'nano-influencers') by leveraging their localized influence and ability to build a loyal audience with higher levels of trust and engagement (Maheshwari, 2018; Conde and Casais, 2023).
Within misinformation studies, we build on prior ethnographic research examining micro-influencers paid to promote propaganda (Ong and Cabanes, 2018; Woolley, 2022). Firms are 'particularly fond of leveraging "nano-influencers" who have followings of fewer than ten-thousand users each\(\ldots\)nano-influencers have a more localized, relationally potent effect' (Woolley, 2022:119). Given the advantages micro-influencers have in building trust within closed communities, we argue that the effects of micro-influencers on misinformation belief and amplification may be understated in a literature focused on popular influencers and institutions.
We propose that micro-influencers influence misinformation spread in part because falsehoods shared by'regular people' increase others' susceptibility to misinformation (Anspach, 2017). People typically perceive sentiments shared by relatable individuals, rather than celebrities or sponsored influencers, as more credible and authentic because they seem less biased by profit motives (Hassoun et al., 2023). Their content format also feels familiar, mirroring the unpolished, personal content that people encounter in social media ecosystems (Anspach, 2017).
Micro-influencers leverage this elevated level of self-disclosure, perceived authenticity, and familiarity to build 'parasocial relationships' that deeply engage their audience (Harff et al., 2022; Stehr et al., 2015). Another structural reason for their relatively high engagement is that they typically operate in trusted, closed networks that mirror word-of-mouth communication. Their "atoms" of propaganda...rocket through the information ecosystem at high speed powered by trusted peer-to-peer networks' (Wardle, 2017). Burgeoning private messaging services have increased the speed and relatability of misinformation spread by influencers in seemingly intimate groups (Rossini et al., 2020). Despite the proliferation of research on misinformation influencers across platforms, this scholarship has largely discounted the network effects of micro-influencers specifically. Our research suggests that this increase of micro-influencers using closed, peer-to-peer networks may meaningfully impact misinformation belief and sharing.
### Misinformation: from explicit fabrication to 'gray area' content
A series of 2020 Reuters Institute studies found that the majority of online COVID-19 medical misinformation they sampled was not purely fabricated (Brennen et al., 2021; Simon et al., 2020). Rather, creators reconfigured content based on existing--often true--medical information throughContextualization or other editorial interventions. Further, this reconfigured content had higher engagement than the wholly fabricated content. Most misinformation our participants produced and shared falls under this 'gray area' category. Our creator participants spread this hard-to-moderate content in direct response to the threat of deplatforming or demonetization, an instance of what boyd (2017) calls creators 'evolving along with the information landscape.' We show how this content requires different misinformation classification and moderation approaches, because its creators employ tactics like rhetorical questions and implied correlations instead of falsifiable, intentional fabrications.
## Methodology
We conducted the first phase of a two-year study in Fall 2022, employing ethnographic methods to longitudinally analyze why and how people consume, amplify, and create medical and election misinformation. Being in the field before, during, and after Brazil presidential elections and US midterm elections allowed us to study the dynamic relationship between political misinformation, belief, and action at a time of high-frequency engagement and stakes--leading up to the storming of Brazilian Congress on January 8th, 2023. Existing misinformation research is highly US-centric, with South America being the least studied (3.8% of studies) region in the world and "only 7.6% of studies analyzing US data along with those from other countries" (Seo and Faris, 2021:1166).
We chose to study both creators and consumers of misinformation because existing research focuses primarily on the drivers of and context surrounding misinformation _consumption_. Studying creators allowed us to understand what motivates individuals to move from passive consumption to active amplification and creation, and to interrogate trust heuristics and feedback loops between creators and consumers.
### Participants & Sites
We conducted semi-structured interviews with 31 participants aged 18-67 who regularly created, amplified, and/or consumed misinformation (Table 1). We also collected data from ethnographic participant-observation and by attending 3 misinformation-spreading events. Researchers were from each country and fluent in English (US) and/or Portuguese (Brazil). Participants spanned education levels and political affiliations, though most were right-leaning (n=21).
\begin{table}
\begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline \multicolumn{1}{|p{42.7pt}|}{} & \multicolumn{1}{p{42.7pt}|}{**Misinformation Type**} \\ \hline
**Site** & **Total N** & **Men** (self-ID) & **Women** (self-ID) & **Urban** & **Rural** & **Medical** (only) & **Political** (only) & **Both** \\ \hline
**Brazil** & 16 & 9 & 7 & 12 & 4 & 2 & 4 & 10 \\ \hline
**US** & 15 & 6 & 9 & 10 & 5 & 5 & 4 & 6 \\ \hline
**TOTAL** & 31 & 15 & 16 & 22 & 9 & 7 & 8 & 16 \\ \hline \end{tabular}
\end{table}
Table 1: Participant Information
Following ethnographic methodology, we selected misinformation-spreading events based on where participants visited in their daily lives. In Brazil we visited smaller events, such as the Sunday church gathering that served as a key misinformation source for its community. In the US, multiple study participants invited us to ReAwaken America, a 5,000+ person conference series which National Public Radio describes as 'part QAnon expo and part political rally' (Hagen, 2022).
### Recruiting & Incentives
We used online channels to identify and engage potential research participants. We first identified relevant public social media groups, subreddits, and large chat app channels; disclosed ourselves as researchers and built rapport with potential participants; used direct messaging to provide study information and a recruitment screener; then set up a screening call.
### Research Methods
We conducted three-part, 6-8 hour online and offline ethnographic interviews with each participant. We chose this methodology to deeply understand how participants encounter, share, and create misinformation within their everyday lives and to develop a broader understanding of how on- and offline misinformation ecosystems connect.
### Remote Semi-Structured Interview + Observation
We began with a 3-hour Zoom session in which we sought to understand participants' online lives and behaviors using a combination of semi-structured interview, remote screen-sharing, and observation as they led us through their online ecosystems. The prompts used to investigate respondents' beliefs and behaviors were informed by prior ethnographic studies and were intentionally open-ended and non-judgmental. Researchers mirrored respondents' mental models and language, letting participants guide conversation.
### In-Person Semi-Structured Interview + Observation
We followed up in-person for a 3-hour session in participants' homes and everyday social ecologies to understand the relationships between their online and offline information ecosystems.
### Semi-Structured Interviews with Secondary Participants
During in-person sessions, we asked participants to introduce us to 1-2 important people in their lives. We had 1-hour conversations with a partner, close friend, or community member who helped us contextualize our core participants' beliefs and online behaviors within their social ecologies.
### Analysis Methods
The analysis process followed the basic structure of grounded theory (Chamaz, 2006): transforming our raw data into concepts, then clusters of concepts, and then theoretical coding of clusters.
Analysis began in the field, with each researcher consensually capturing images, video, and notes during interviews and site visits and writing detailed field notes immediately after. We then coded field notes, with multiple researchers performing open coding on each other's data (Saldana, 2021). Researchers then conducted clustering and thematic analysis together.
### Research Ethics
Our study underwent ethics and privacy review to ensure that participants provided informed consent and were safeguarded from undue risk. Human subjects research experts reviewed and approved our study and data management plan. Participants were told that the study was on trust in media and information. Participants received $100/hour (US) or 250 reais/hour (Brazil) and knew they would even if they withdrew. We have omitted personally identifying information from this paper. Pseudonyms used differ from those used in data collection.
### Limitations
We chose to ethnographically study 31 participants in-depth to give in-situ observational detail and phenomenological qualitative explanation to a phenomenon (misinformation belief and spread) most commonly studied using lab-based experimental or computational methods (Seo and Faris, 2021). Follow-up surveys on a representative sample would help explore whether our findings are more widely applicable. Self-reported data carries significant limitations, including self-censoring, recall challenges, and social desirability biases. We sought to analyze gaps between what participants say and do by cross-referencing semi-structured interview data with digital artifacts (e.g., search and message histories) and screen-sharing observation while participants navigated their digital ecosystems.
## Findings
In this section, we detail four key findings from our ethnographic engagement with creators and consumers (Table 2).
### 'Gray Area' Misleading Content
Creators use gray area content to make misinformation hard to recognize and avoid moderation or removal by platforms. We found four common forms of gray area content: decontextualized data, personal testimonials, pseudo-scientific jargon, and provocative questions.
### Data without Context
Creators posted decontextualized content that suggested associations without making explicit claims. Tina (52, US) connects images of 5G towers and children with cancer, urging viewers to draw their own conclusions: 'I wanted the videos to stay up on YouTube, so I just let the images speak for themselves with some background music.' Brandon (41, US) shared a poorly-labeled graph of 'excess deaths' (taken out of context of disease-related deaths) and COVID-19 vaccinations (Figure 1). While the graph uses real government data, he suggests correlation and conspiracy obliquely: 'You can't ignore these numbers. The data speaks for itself--just look at the graph.'
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Tactic** & **Action** & **Rationale** \\ \hline ‘Gray Area’ Misleading Content & Imply (mis)information rather than asserting it & Avoid detection and moderation \\ \hline Micro Influencing & Post in small, closed groups & Create a sense of intimacy to gain deeply committed, trusting followers \\ \hline Emotional Targeting & Speak to and satisfy unmet social and emotional needs & Create dependence on misinformation community \\ \hline Quantity over Quality & Plant small ‘seeds of doubt’ repeatedly across platforms & Avert consumers’ fact-checking impulses to subtly build belief over time \\ \hline \end{tabular}
\end{table}
Table 2: Key Findings
Creators also took data out of context to manufacture a sense of urgency or gravitas. Dona [62, BR] described a video shared with her: 'This video was after the election, this woman found a box full of ballot paper on the street...I thought it was very serious. I don't know if it's true or not. Let's leave this to God, he knows if it's true.' This urgency made fact-checking a secondary concern for consumers, as a fearful mentality of 'just in case it's true' overrode veracity concerns.
### Personal Testimonials
Creators used personal testimonials that were emotionally compelling but difficult to refute (Figure 2). Tina (52, US) narrativized her health struggles to sell anti-5G products. Nadia (34, US) shared her mother's illness story to undergird conspiratorial medical claims:
My mother has a rare condition called neurofibromatosis. She was put on narcotics...I was
Figure 1: Brandon (41, US) overlays data on excess deaths during COVID and vaccine doses to imply causation.
Figure 2: Brandon (41, US) found the personal testimonials in this COVID misinformation ‘documentary’ convincing.
watching her bedridden and no longer coherent. I went through a phase where I was grieving her while she was physically still alive, but then I started learning about plants and natural medicine...she was pretty much my guinea pig. We had nothing to lose...And eventually it helped her pain...She got off all her narcotics completely. I said: No plants over pills. I'm never looking back if my mom can do it. And so that was my awakening, and I just ran with it from there...There's a way outside of the pharmaceutical industry.
A 'Frontline COVID-19 Critical Care Alliance' group followed by participants promotes ivermectin treatment and anti-vaccine conspiracies. By sharing testimonials (Figure 3), they plant seeds of doubt about treatment while avoiding moderation.
### Pseudo-scientific Jargon
Creators often used technical, pseudo-scientific jargon to portray expertise and credibility, stylistically mirroring scientific experts' use of terminology and evidence. They flooded consumers with data and/or unfamiliar words to make their claims seem supported by trustworthy evidence. Alan (27, US) remarked how 'technical lingo' makes content seem credible. He showed videos linking'seismic activity' to world upheaval (Figure 4), saying that the creator's technical knowledge suggested it was true: 'he's [the creator] done all the research for me. He even got called out by the USGS. It means he's onto something.'
Figure 3: Personal testimonial from a wife whose husband died while hospitalized for COVID.
#### Strategic Questions
Creators asked provocative questions to challenge the status quo and evade moderation. Nadia (34, US) tries to inculcate medical skepticism without stating specific facts by encouraging viewers to draw misleading parallels between the COVID-19 vaccine and groupthink (Figure 5).
The jab is the real life scenario your mother warned you about. "If your friends all jumped off a bridge, would you?"... Now we know who would jump.
Tina (52, US) encourages followers to doubt vaccines (Figure 6) and 5G: 'healthy individuals losing their lives. Is it the vaccine? Well, interesting that they've all taken it.'
Figure 4: For Alan (27, US), the pseudo-scientific jargon in this video makes it feel like it comes from a credible ‘expert.’
Creators build trust by using'search queries with an ideological dialect or bias' to superficially validate their claims (Tripodi, 2022). Ted (60, US) searched 'World Economic Forum 2030 Agenda' when prompted by an Epoch Times article: 'If they [creators] tell you to look it up for yourself, and they tell you where you can go to do it, I start to believe that. And I do go there and look it up. Like the World Economic stuff...they told you can go on Google and find it yourself.' When Ted searched using creator-provided terms, the 'people also ask' function listed other conspiratorial queries (e.g., 'Who controls the world economy') (Figure 7). These questions, asked by other people, fueled his community-driven suspicions about WOF.
Other creator tactics to avoid moderation included using satire, opinion, and neologizing. Murilo (43, BR) strategically used coded language: "You can't say "Alexandre de Moraes", so I'll go and say
Figure 6: Tina (52, US) uses provocative questions comparing vaccines to car seats to raise stakes, highlight risk, and discourage people from getting vaccinated.
Figure 7: Ted (60, US)’s ‘People also Ask’ suggestions.
* "Xandex Xandovski" when he does something bad. Then I don't fall into the algorithm.'
### Micro-Influencing
Our participants' information ecosystems were primarily shaped by micro-influencers speaking to small, often closed, online groups, rather than mega-influencers. Participants often encountered similar narratives across platforms, reinforcing perceived veracity through repetition. This fragmented ecosystem makes tracking the spread of misinformation and moderation more difficult. Because consumers encounter misinformation content in spaces where they feel safe, it often subtly blends into their feeds through the evasive tactics detailed in the previous section.
#### Small, Local, Closed Groups
Those who shared misinformation with participants were rarely strangers. When participants did consume content from strangers, it appeared in trusted spaces using participants' vernacular. Many first encountered and consumed misinformation from friends and family via direct messages. Conceicao's [67, 68] son sent her videos: 'He recommended this YouTube channel. He said Mom, look at Luciano, he is also from Caxias.' I started following him and I liked it. I watch it every week.' These personalized recommendations drew participants to new (mis)information sources and closed communities.
Participants demonstrated a clear preference for consuming information from local communities (e.g., referrals from people they knew, local chapters of larger organizations). Dave [58, 58] joined his Concerned Doctors local chapter (Figure 8) because they offered hyper-local news and opportunities: 'They send out a weekly newsletter with the latest on what they are finding and they share in-person events where I've made a lot of friends.'
Paulo (_64_, BR) relies on a church WhatsApp group for information:
We created this group ["Faith, Politics, and Economy"] so we don't have to share information individually. These people are friends for a long-time. The majority are friends from the church, so I know them in real life...there are only about 10 people in this group.
Early misinformation encounters also came from unofficial 'experts' (e.g., influencers, podcasters) that participants trusted. Lorenzo (34, BR) first learned about election misinformation through cryptocurrency Facebook groups and YouTube channels. These groups felt like safe spaces to learn new ideas, feel prepared for the future, and find community. He joined in 2015 when inflation rose in Brazil and he sought protection. He slowly acquired conspiracy beliefs: 'I followed the YouTube channels to learn more about finance and bitcoin but ended up joining news channels that led me to conspiracy channels.'
Creators sometimes repurposed closed groups to introduce members to misinformation in a safe space. Susanna (43, BR)'s Telegram group 'Doctors for Life' was renamed to 'Politics and Health' and now spreads election misinformation as 'Geopolitics SOS Army.'
Consumers can become creators in these trusted circles. Susanna (43, BR) led 60+ WhatsApp groups in the 2018 election: 'People know me...On WhatsApp, when anything happens in politics, I get many messages saying 'Is it true?', 'Did you see this?' Now, I am a reference.'
Figure 8: Local Concerned Doctors chapter spreading misinformation via testimonial and ‘experts.’
### Personal Relatability
Because of this fragmented, recommendation-based information circulation, grassroots personal relatability played a significant role in creators' success. Nadia (34, US) shared her '6 figure formula' for building a loyal online following:
You need your healing story...you will show your story time and time again, it makes you at real person and relatable. That's where you build your credibility. My success all comes down to my story. Hearing about my experiences--how I hit rock bottom and healed myself, helped heal others naturally made people want to learn more from me. It's how I grew my presence online. I never used to use social media before but it's easy once you figure out how to harness your story.
Tina (52, US) similarly attributes her success to relatability: 'My brand's success comes from my story. It makes people trust me--I've been through it and I'm better now.'
### Belief and Incentives
Creators both asserted belief in the misinformation they spread and had clear incentives to amplify it. Tina (52, US) believes she has the solution to 5G and makes six figures selling it. Murilio (43, BR) believes Brazil's election was rigged and uses it to gain fame. Brandon (41, US) believes the COVID vaccine kills and spreads the word to build his Christian influencer brand.
Amplifier participants believed they had never shared false information. Ted (60, US) told us: 'Everything is something that I have checked out for myself, or I know to be true. I would never post misinformation intentionally. I don't think it's ethical.' Allison (47, BR) asserted: 'It's [5G] not conspiracy theory, it's reality theory.' Fernanda (48, BR) actively reads about Russian disinformation tactics (Figure 9) and thinks Bolsonaro's political opponents use them--but that she never has.
### Emotional Targeting
When participants encountered misinformation, they tended to engage and share when it satisfied key emotional needs: countering loneliness, feeling valued as an expert, alleviating fears, and venting frustrations. Creators recognized and capitalized upon these needs.
#### 4.1.1 Loneliness
Misinformation sharing offered participants real and imagined communities that abated feelings of loneliness. Brandon (41, US) felt physically and emotionally isolated when COVID shutdowns began. Online and in-person anti-vax events provided a sense of community and belonging: 'When Pandemic lockdowns started, we were so isolated in Texas. I said to Delilah [his wife], we gotta find some like-minded people to engage with. When I learned about ReAwaken America, I knew we needed to be a part of it.'
Livestreaming enabled viewers to quickly build online communities, finding emotional validation in real time. Participants attended livestream watch parties or coordinated viewings to communally react to events (Figure 10). Creators often held these at strange hours to build a special sense of community and avoid moderation. Fernanda (48, BR)'s favorite livestream promotes election fraud: 'Our livestreaming is more like a chat. It's not very professional or formal. The people talk as if it were a WhatsApp conversation. They participate a lot.' Restricted access to pages and events also gave participants a sense of community and special importance.
Figure 9: Fernanda (48, BR) showed us a book on disinformation to demonstrate that she knows it when she sees it.
#### Feeling Valued as an Expert
Misinformation sharing provided participants an antidote to obscurity. Many participants felt undervalued for their abilities, unappreciated for their intelligence, and/or lacked validation from work. Online misinformation communities provided positions of power and responsibility where they previously felt they had none. Susanna (43, BR) appreciated that an election misinformation community recognized her intelligence and experience: 'I'm not rich, I have some dignity from my work. But when I joined the community [around election misinformation] I started to have access to people I wouldn't have imagined. Like big businessmen, talking as equals, talking about politics.'
Femanda (48, BR) has law and business degrees but practices neither. She found recognition for her credentials through the election misinformation community, which gave her a sense of purpose: 'I was one of the people who used to comment...one day they called me to do [a livestream]. I never planned it. In life there are moments you have to make a choice. Either you keep quiet, carry on with your life and let the world fall apart, or do something.' Alan (27, US) felt validated when people responded to and re-shared his Reddit and Telegram posts: 'It [the re-posting] tells me I'm going in the right direction. It's something I can use to see how much people engage with it, how important it is for other people, how current it is and how important it is too.'
#### Agency over Fear
Creators marketed fear and sold solutions that gave participants a sense of control in uncertain times. Tina (52, US) originally felt sick and'suspected it was from radiation. But no doctor had any answers, and that's the scarieset part.' She found reassuring answers in 5G conspiracy: 'It all just
Figure 10: Dave (58, US) co-watches election coverage live on Rumble using a link he found on Mike Lindell’s Truth Social page.
keeps getting worse, too-look at how many towers there are! I knew I had to figure out something to do.' We saw at ReAwaken America that Tina now makes a living selling this same fear and solution: 'I touch and interact with at least 2000 people at each event, reading their radiation levels and showing them how bad 5G has gotten...I make people look around and see how much radiation is around them.'
At Reawaken America, we met many similar vendors. Brandon and Delliah (41, US) sold billboard advertisements to'spread the word' and vitamins to protect against perceived threats: government-produced vaccines, aging, and 5G. Brandon said: 'If I don't, it's like blood on my hands. You have to do something even if you're scared. You need to speak up, speak out.' Delliah used similar fearomongering, urgency-generating language: 'The fact is they're coming after kids. People are dying from these shots and now they're mandating children get vaccinated.'
_Judgment-Free Venting_
Finally, many participants liked misinformation communities because they provided judgment-free zones for venting frustrations (Figure 11). Most communities were anonymous or highly valued discretion. Reddit provided Kara (34, US) with an honest and safe space in contrast to 'normie' platforms where: 'I don't feel like I can fully share my experiences and thoughts...and it's not even about censorship and posts removed, I feel this strong sense of judgment...You should be able to say things without automatically being labeled anti-vaxox with three x's.' Ted (60, US) constantly reshared posts on CloutHub and MeWe to vent: 'This affords you an avenue of relief, even if it
Figure 11: Creators like Murilo (43, BR) made ‘satirical’ political videos to provide consumers with emotional outlets and misinformation beliefs while also evading moderation, as humor makes creator intention hard to determine.
doesn't do anything, it does something for you. Otherwise I would just keep it all bottled up. I would explode.'
These emotionally-driven reasons for misinformation sharing highlight the potential for interventions that meet these emotional needs through alternative means, but also demonstrate the deep hold misinformation communities can have. Online gardening communities initially pulled Allison (47, BR) away from misinformation communities, as they ameliorated his loneliness and feelings of obscurity. But the election, and the attendant fear and frustration it generated, drove him back to misinformation communities that addressed those emotions.
Emotionally unsettling events catalyzed more active engagement with trusted misinformation spaces. After a significant event\(-\)from elections to COVID\(-\)Ted (60, US) turns to the 98 pages he follows on CloutHub or to TruthSocial: 'As soon as something happens, someone is talking about it there' (Figure 12). Following the election, Fernanda (48, BR) returned to Jovem Pan and Bolsonaro's pages, as well as familiar podcasts and livestreams where she previously found election misinformation. These spaces became her go-to sources: 'Everyone's here. I find it easier to find things around here, you see? Bolsonaro's page is the first place that I look.'
Emotionally unsettling health concerns also drove participants to become'miracle cure' misinformation consumers and shares. Nadia (34, US) explained: 'I was desperate for another way. That's when I started researching.'
Figure 12: Ted (55, US) shares an emotionally-charged post connecting myriad misinformation and conspiracy theories.
### Quantity over Quality
Misinformation creators employ the tactic of repetition, planting many'seeds of doubt' in consumers' trusted spaces over time. Nadia (34, US) recognized that to gain new followers, (mis)information quantity was more important than its quality. She posts frequently on multiple platforms (Instagram, TikTok, Twitter, Facebook, Telegram, Bitchute, Rumble). After her posts were removed or downranked, she learned to optimize her content to evade each platform's misinformation policies: 'When the rhetoric around COVID intensified I started realizing some platforms are going to be easier to get the word out than others...I stopped talking about the vaccine and COVID. I just started planting seeds about the forms of corruption.'
This also enabled her to keep gaining followers on other platforms when she encountered removals: 'When I initially had my account disabled I was freaking out, this is my business account! But then I made a TikTok account and I jumped onto Twitter. Whatever, I will hop on everywhere to get the word out.' Creators like Nadia recognized and capitalized on word-of-mouth sharing: 'I have countless messages of people being like: "thank you for your content. I shared your video with my family who is now doing this" and that's what keeps me going.' Her tactics have gained her a wealth of followers on Instagram (67K+), TikTok (44K+), Twitter (14K+), YouTube (2K+) and Telegram (5K+).
These accumulative seeds of doubt worked on participants. Allison (57, BR) could not remember how he came to believe in 5G: there was no singular, red-pill moment, just an accumulation of many moments of doubt over time. After repeatedly encountering videos, posts and articles, the conspiracy became real. As in Allison's experience, participants granted limited thought to misinformation seeds of doubt in initial encounters: 'I heard about an experiment in which birds died after landing on a 5G wire. Is this true or not? I don't know.' Over time doubt increased: 'I heard it is bad for your health. I read a study and decided not to read any more about it. It is bad for your brain. It is really scary!'
Short posts on Instagram, TikTok, and WhatsApp grabbed viewers' attention and could plant many seeds of doubt within a short timeframe. Long-form lists with links to additional information inundated viewers with shareable data. At ReAwaken America, presenters shared long docs containing questions and misinformation 'evidence' using SMS, Google Doc & Drive, and Dropbox (Figure 13).
Repeated algorithmic recommendations also drove participants to misinformation during their routine online activity in trusted spaces. Alan (27, US) used suggested videos to discover new music, but the prompts led him to misinformation spaces: 'My YouTube used to give me music channels. Now it is all politics and what is going on in the world. The algorithms start to dominate my sidebar.' Ted (60, US) received podcast suggestions through algorithmic recommendations in the streaming app he already frequented: 'The podcasts start popping up, everywhere. And you're like, man, I never even knew this was going on! It's really waking people up.'
In the next section, we discuss these findings and their implications for future misinformation research and interventions.
## Discussion
Our findings highlight the importance of focusing on the effects of repetitive, 'gray area' misleading content prevalent for our participants and prolific in today's information ecosystems. Methods that sequentially analyze the harmfulness of standalone pieces of content miss the cumulative effects of misinformation sharing. Participants found the accumulation of many, often short online messages reinforcing the same point persuasive, regardless of the content's production quality (or veracity). In fact, creators openly stated that their posts' quality was less consequential for their success than frequency, authenticity, and customization for each platform.
Figure 13: ReAwaken America presenters shared long Google Docs of ‘evidence’ with audiences.
### Micro-Influencers as Unofficial Experts
Misinformation creation's barrier to entry is low; anyone with a smartphone can record and post short videos. Alongside this proliferation of low-quality content, there has been a parallel shift in perceptions of credibility. Polls find declining trust in institutionalized authority and elite expertise, with a corollary loss of trust in heuristics of reliable information like national and international institutional approval (Gallup and Knight Foundation 2020). Many people instead use signals of familiarity as a trust heuristic, like our participants who prioritized local influencers and groups. Given legitimacy need not be conferred by traditional markers of institutional authority, creators can confer credibility through personal testimony, entrepreneur, and deep engagement with community members. This enables the decentralization of misinformation creation to smaller creators in what we term a 'cottage industry of misinformation.'
We observed a complementary trend emerging from declining institutional trust: the rise of the 'unofficial expert'. Unofficial experts project possessing hidden knowledge about conspiracy theories or information not shared by mainstream institutions (Jigsaw, 2022). Sharing such information positions these 'experts' as truth tellers defying traditional authorities and institutional information sources. This is distinct from the 'fake expert' phenomenon, whereby individuals stylistically imitate authoritative information sources like news outlets to spread unverified, misleading information (Cook et al., 2017). 'Unofficial experts' need not repurpose the trust heuristics of mainstream media like formalwear or high production value videos; their credibility comes from being ordinary people with seemingly extraordinary knowledge (Hassoun et al., 2023). Our participants were drawn to and modeled such unofficial experts.
### Misinformation meets emotional needs
By serving as unofficial experts, creators met their own and others' emotional needs. Creators found validation from gaining followers and engagement, or becoming a sought-after source of information for a community, mitigating the feelings of obscurity, isolation, and fear that originally drove many to conspiracies. In line with recent scholarship, we find that searching for truth is often not the primary purpose of (mis)information consumption (Duque and Peres-Neto, 2022; Hassoun et al., 2023; Zimdars et al., 2023). Emotional needs--like desires for belonging, recognition, and control--create strong motivations for consuming and sharing misinformation.
Research shows that fear and a lack of control increase susceptibility to misinformation, and our participants described feeling both during elections and health crises (Weeks, 2015). This aligns with literature which finds emotional needs compound in contexts of uncertainty (e.g., pandemics) and events that prompt communal identification (e.g., elections) (Albertson and Gadarian, 2015). During the COVID-19 pandemic, sharing (mis)information in reaction to fear provided a sense of purpose, control, and community in extended social isolation (Freiling et al., 2023).
We found additional emotions affecting misinformation beliefs: obscurity, loneliness, and frustration. We found these emotional states heightened not only by the pandemic but also its aftermath. They were also fueled by political volatility stemming from elections and political disorder. Across both medical and election domains, we find that the impact of such events on participant emotions--and the desire for human connection and community they inculcate\(-\)drives consumption, sharing, and creation of misinformation. When information sharing can facilitate connection, it can spur both consumption and sharing of misinformation (Frieling et al., 2022).
We analyzed the consumption and creation of misinformation that meets\(-\)and manipulates\(-\)these needs. Based on our findings, attention to these emotional needs and the way misinformation meets them is more important than analyzing how 'truly' individuals believe false information and constructing counterarguments. As a result, we suggest that meeting these emotional needs in alternative ways could be more effective than debunking false information.
**What makes misinformation belief'real? The role of everyday consumption practices**
Guess and Lyons (2020) and Baptista (2022) argue that misinformation production implies an element of intent. Our findings, however, suggest that creators' underlying motivations are rarely singular nor explicitly known by creators. Analyses of the intentionality behind misinformation production must be contextualized in the (often conflicting) emotional, financial, and social needs expressed through online content creation, sharing, and consumption.
Our consumers-turned-creators evidence the complexity of intention and belief. None genuinely believed they shared misinformation, but many openly articulated that they were driven in part by financial motivation\(-\)and articulated that this business acumen helped spread the word. The language and practice of virtuous religious proselytizing and business amplification were overlapping and mutually reinforcing, a phenomenon we believe merits further study.
Misinformation researchers have aimed to classify belief ontology, systematically analyzing the content and composition of belief systems like QAnon or miracle cures (Kapantai et al., 2020; Wardle, 2023). Drawing on anthropological work situating belief as created and sustained through community-based practice, we argue for shifting research questions away from _what_ and _why_ people believe towards _how_ individuals acquire and sustain belief (Hassoun et al., 2023; Luhrmann, 2020; Deeb 2011). We found misinformation belief built and sustained through community-based practices of sharing, rather than belief always ontologically preceding sharing (Ren et al., 2023; Fountain, forthcoming). Further, emotional resonance and identity congruence (Molina, 2023) were powerful trust heuristics for consumers, highlighting the challenge of affecting misinformation beliefs through directly debunking facts or 'neutral' traditional institutional sources of authority.
### Moderation Challenges
Lastly, this work has implications for platforms' efforts to combat harmful misinformation. Platforms rely on content moderation to reduce or remove misinformation they deem harmful. We observed that many creators anticipate removals or algorithmic efforts to reduce visibility of their posts, preempting these moderation actions by migrating to less moderated spaces like Telegram or CloutHub. However, there was rarely a clean break from the old to the new; participants preferred to maintain a presence on as many platforms as possible, including a mix of moderated and less moderated platforms.
One implication of this cross-platform presence and proliferation of 'gray area' content is that moderation alone is insufficient to address misinformation. A complementary approach to removing or reducing misinformation is building resilience to it by teaching people to spot and resist common building blocks and manipulation techniques. A growing body of research into behavioral and cognitive interventions, spanning boosts, nudges, and techno-cognition (such as adding friction to technical processes), shows promising results in proactively reducing the spread of misinformation (Kozyreva et al., 2020), and further research is needed to analyze how these interventions apply to misinformation consumers and creators.
## Conclusion
We used ethnographic methods to investigate how and why people encounter, trust, and amplify election and health misinformation. Participants were more likely to engage with misinformation from creators and channels with less than 100K followers, who used grassroots sources of authority to establish legitimacy. Content creators employed subtle aesthetic and rhetorical techniques to blend their content into the everyday media consumed by participants in their safe spaces, targeting participants' emotional and social needs. They used repetition to build trust in misinformation through repeated exposure, rather than through single'red pill' events.
Given the dominance of misleading 'gray area' content in participants' online ecosystems, we recommend further research to identify and counter its effects. The impact of this misleading content on beliefs and behaviors is poorly understood at scale. It would be valuable to integrate anthropological approaches like ours with psychometric evaluations to better understand how conspiracies affect emotions and belief formation in context. Finally, the introduction of new technological tools for online content production, namely generative artificial intelligence, is observably affecting misinformation consumption and production. Interdisciplinary research is needed to understand how these emerging technologies affect creators' tactics and consumers' practices when encountering misinformation online.
|
2307.03834 | Elementary groups in $\PSL(3,\C)$ | In this paper, we give a classification of the subgroups of $\textrm{PSL}(3,
\mathbb{C})$ that act on $\mathbb{P}_{\mathbb{C}}^2$ in such a way that their
Kulkarni limit set has finitely many lines in general position lines. These are
the elementary groups. | Waldemar Barrera, Angel Cano, Juan Pablo Navarrete, José Seade | 2023-07-07T21:03:49Z | http://arxiv.org/abs/2307.03834v1 | # Elementary groups in \(\mathrm{PSL}(3,\mathbb{C})\)
###### Abstract.
In this paper, we give a classification of the subgroups of \(\mathrm{PSL}(3,\mathbb{C})\) that act on \(\mathbb{P}^{2}_{\mathbb{C}}\) in such a way that their Kulkarni limit set has finitely many lines in general position lines. These are the elementary groups.
Key words and phrases: Kleinian groups, projective complex plane, discrete groups, limit set 2010 Mathematics Subject Classification: Primary: 37F30, 32Q45; Secondary 37F45, 22E40
## Introduction
Kleinian groups are discrete subgroups of \(\mathrm{PSL}(2,\mathbb{C})\) acting on \(\mathbb{S}^{2}\cong\mathbb{P}^{1}_{\mathbb{C}}\) in such a way that their limit set is not all of \(\mathbb{S}^{2}\). These are classified into elementary and non-elementary groups. The elementary groups are those whose limit set consists of \(0,1\) or \(2\) points, and they are classified, see for instance (see [20]).
In this work we look at complex Kleinian groups, that is, discrete subgroups of \(\mathrm{PSL}(3,\mathbb{C})\) acting properly and discontinuously on some open subset of \(\mathbb{P}^{2}_{\mathbb{C}}\). In [11] it is proved that the Kulkarni limit set \(\Lambda_{\mathrm{Kul}}\) (see Definition 1.1) of every infinite complex Kleinian group contains at least one complex projective line. In fact we known from [1] that under some generic hypothesis on the group, the Kulkarni limit set is a union of complex projective lines in general position, _i.e._ no three of them intersect. By definition, the exception are the elementary groups. These have finitely many lines in general position in their limit set, and there are two types of such groups: those of the first kind have a finite number of lines in their limit set, and the groups of the second kind have infinitely many lines in their limit set, but only finitely many in general position.
As an example of elementary groups of the first kind, take the cyclic group generated by a \(3\times 3\) diagonal matrix with non-zero eigenvalues of different norms; this has two projective lines as limit set in \(\mathbb{P}^{2}_{\mathbb{C}}\). On the other hand (see [9]), take a \(\mathbb{C}\)-Fuchsian group in \(\mathrm{PSL}(3,\mathbb{C})\). Then its limit set in \(\mathbb{P}^{2}_{\mathbb{C}}\) consists of a cone of projective lines with base a circle (see [9] ; it has infinitely many lines, but all of them pass through the vertex of the cone, so there are only two in general position. We remark that if we take an \(\mathbb{R}\)-Fuchsian group in \(\mathrm{PSL}(3,\mathbb{C})\), then it is non-elementary and its limit set actually has infinitely many lines in general position, see [9]
In this work we study and describe all the elementary groups in \(\mathrm{PSL}(3,\mathbb{C})\).
An interesting class of elementary groups are the discrete purely parabolic groups. One finds that the limit set of such a group consists of either one line, or a cone of lines over the circle, or the whole of \(\mathbb{P}^{2}_{\mathbb{C}}\). We describe these in Section 2.
We know (see Theorem 1.5 below) that if \(\Gamma\subset\mathrm{PSL}(3,\mathbb{C})\) is an infinite discrete subgroup, then the number of complex projective lines in its limit set \(\Lambda_{\mathrm{Kul}}(\Gamma)\) is
## 1. Introduction
### Pseudo-projective transformations
Let \(\mathbb{P}^{2}_{\mathbb{C}}:=(\mathbb{C}^{3}\setminus\{0\})/\mathbb{C}^{*}\) be the complex projective plane. Let \(\mathbb{P}^{2}_{\mathbb{C}}:=(\mathbb{C}^{3}\setminus\{0\})/\mathbb{C}^{*}\) be the complex projective plane.
projectivization to \(\mathbb{P}^{2}_{\mathbb{C}}\), taking into account that \(Ker(S):=\varnothing\) whenever \(Ker(s)=\{(0,0,0)\}\). We refer to [10] for more details about this subject.
### The limit set
There are two types of limit sets relevant for this work. These are the Kulkarni limit set \(\Lambda_{\text{Kul}}\) and the Myrberg limit set \(\Lambda_{\text{Myr}}\). Let us define these. Let \(\Gamma\subset\text{PSL}(n+1,\mathbb{C})\) be a discrete subgroup.
**Definition 1.1**.: The _Kulkarni limit set_ of \(\Gamma\) is:
\[\Lambda(\Gamma)=L_{0}(\Gamma)\cup L_{1}(\Gamma)\cup L_{2}(\Gamma),\]
where:
* \(L_{0}(\Gamma)\) is the closure of the points in \(\mathbb{P}^{2}_{\mathbb{C}}\) with infinite isotropy group.
* \(L_{1}(\Gamma)\) is the closure of the set of accumulation points of the \(\Gamma\)-orbits of points in \(\mathbb{P}^{2}_{\mathbb{C}}\setminus L_{0}(\Gamma)\).
* \(L_{2}(\Gamma)\) is the closure of the union of accumulation points of \(\{\gamma(K):\gamma\in\Gamma\}\), where \(K\) runs over all the compact sets in \(\mathbb{P}^{2}_{\mathbb{C}}\setminus(L_{0}(\Gamma)\cup L_{1}(\Gamma))\).
This is a closed \(\Gamma\)-invariant set. Its complement
\[\Omega_{\text{Kul}}(\Gamma)=\mathbb{P}^{n}_{\mathbb{C}}\setminus\Lambda( \Gamma),\]
is the _Kulkarni discontinuity region_ of \(\Gamma\). We know from [18] that the \(\Gamma\)-action on \(\Omega_{\text{Kul}}(\Gamma)\) is properly discontinuous, and we further know from [1] that \(\Omega_{\text{Kul}}(\Gamma)\) contains the equicontinuity region \(\text{Eq}(\Gamma)\). However, as we will see later, \(\Omega_{\text{Kul}}(\Gamma)\) generically is the largest open subset of \(\mathbb{P}^{2}_{\mathbb{C}}\) where the group acts properly and discontinuously, but this is not always so.
Recall that \(\text{Eq}(\Gamma)\) is the set of points in \(\mathbb{P}^{2}_{\mathbb{C}}\) for which there is an open neighborhood where the family of transformations defined by \(\Gamma\) is a normal family.
Now, given \(\Gamma\subset\text{PSL}(n+1,\mathbb{C})\), a discrete subgroup, we let \(\Gamma^{\prime}\) be the set of pseudo-projective maps of \(\mathbb{P}^{2}_{\mathbb{C}}\) which are limits of sequences in \(\Gamma\). That is:
\[\Gamma^{\prime}:=\{S\in\text{SP}(3,\mathbb{C})\,|\,S\text{ is an accumulation point of }\Gamma\}\;.\]
The following notion is due to Myrberg [21] (cf. [1, Definition 3.3]):
**Definition 1.2**.: The _Myrberg limit set_ of \(\Gamma\) is:
\[\Lambda_{\text{Myr}}(\Gamma):=\cup_{S\in\Gamma^{\prime}}Ker(S)\;,\]
where \(Ker(S)\) is the kernel of the pseudoprojective transformation \(S\) defined above.
Notice that if a pseudoprojective map \(S\) is not in \(\text{PSL}(3,\mathbb{C})\), then its kernel is either a line or a point. The following useful notion is introduced in [4]:
**Definition 1.3**.: We say that a line \(\ell\subset\mathbb{P}^{2}_{\mathbb{C}}\) is an effective line for \(\Gamma\) if there exists a pseudoprojective transformation \(S\in\Gamma^{\prime}\) with kernel \(\ell\).
The Myrberg limit set of \(\Gamma\) contains all the effective lines, and it is not hard to see that, by [22], it contains the Kulkarni limit set. It is immediate from [21] that \(\Gamma\) acts properly and discontinuously on the complement of \(\Lambda_{\text{Myr}}(\Gamma)\). Furthermore, one has the following theorem.
**Theorem 1.4**.: _The Myrberg limit set is the complement of the equicontinuity region: \(\text{Eq}(\Gamma)=\mathbb{P}^{2}_{\mathbb{C}}\setminus\Lambda_{\text{Myr}}( \Gamma)\,.\) Also, the set \(\Lambda_{\text{Myr}}(\Gamma)\) equals the union of all effective lines of \(\Gamma\) except when it is the disjoint union of one line and one point. Moreover, \(\Lambda_{\text{Kul}}(\Gamma)\subset\Lambda_{\text{Myr}}(\Gamma)\)._
The first statement is a special case of [10, Lemma 4.1] (see also [4, Theorem 3.4]). The second statement is Corollary 4.5 in [4].
Throughout this work, when we say limit set we refer to the Kulkarni limit set, otherwise we will say it explicitly.
### Classification of the elements in \(\mathrm{PSL}(3,\mathbb{C})\)
Recall that the elements of \(\mathrm{PSL}(2,\mathbb{C})\) are classified as elliptic, parabolic or loxodromic: \(g\) is elliptic if regarded as a Mobius transformation in \(S^{2}\cong\mathbb{P}^{1}_{\mathbb{C}}\), up to conjugation it is a rotation; parabolic elements are translations up to conjugation and loxodromic elements are multiplication by a complex number of norm \(\neq 1\). Equivalently, given \(g\in\mathrm{PSL}(2,\mathbb{C})\) and a lift \(\tilde{g}\) to \(\mathrm{SL}(2,\mathbb{C})\), \(g\) is elliptic if \(\tilde{g}\) is diagonalizable with unitary eigenvalues, it is parabolic if \(\tilde{g}\) is not diagonalizable and it has unitary eigenvalues, and \(g\) is loxodromic otherwise. The classification given in these terms extends to all dimensions (see [22, 8]).
The _elliptic_ elements in \(\mathrm{PSL}(3,\mathbb{C})\) are those elements \(\gamma\) that have a lift to \(\mathrm{SL}(3,\mathbb{C})\) whose Jordan canonical form is
\[\left(\begin{array}{ccc}e^{i\theta_{1}}&0&0\\ 0&e^{i\theta_{2}}&0\\ 0&0&e^{i\theta_{3}}\end{array}\right).\]
The limit set for (the cyclic group generated by) \(\gamma\) elliptic is either empty or all of \(\mathbb{P}^{2}_{\mathbb{C}}\), according to whether the order of \(\gamma\) is finite or infinite. The subgroups of \(\mathrm{PSL}(3,\mathbb{C})\) containing an elliptic element of infinite order cannot be discrete.
The _parabolic_ elements in \(\mathrm{PSL}(3,\mathbb{C})\) are the elements \(\gamma\) with limit set \(\Lambda_{\mathrm{Kul}}(\gamma)\) equal to one single complex line. If \(\gamma\) is parabolic then it has a lift to \(\mathrm{SL}(3,\mathbb{C})\) with Jordan canonical form one of the following matrices:
\[\left(\begin{array}{ccc}1&1&0\\ 0&1&0\\ 0&0&1\end{array}\right),\left(\begin{array}{ccc}1&1&0\\ 0&1&1\\ 0&0&1\end{array}\right),\left(\begin{array}{ccc}e^{2\pi it}&1&0\\ 0&e^{2\pi it}&0\\ 0&0&e^{-4\pi it}\end{array}\right)\,,\,e^{2\pi it}\neq 1\;.\]
In the first case \(\Lambda_{\mathrm{Kul}}(\gamma)\) is the complex line consisting of all the fixed points of \(\gamma\), in the second case \(\Lambda_{\mathrm{Kul}}(\gamma)\) is the unique \(\gamma\)-invariant complex line. In the last case \(\Lambda_{\mathrm{Kul}}(\gamma)\) is the complex line determined by the two fixed points of \(\gamma\).
There are four kinds of _loxodromic_ elements in \(\mathrm{PSL}(3,\mathbb{C})\):
* The _complex homotheties_ are the elements that have a lift to \(\mathrm{SL}(3,\mathbb{C})\) with Jordan canonical form: \[\left(\begin{array}{ccc}\lambda&0&0\\ 0&\lambda&0\\ 0&0&\lambda^{-2}\end{array}\right),\quad|\lambda|\neq 1.\] The limit set \(\Lambda_{\mathrm{Kul}}(\gamma)\) is the set of fixed points of \(\gamma\), consisting of a complex line and a point.
* The _screws_ are those elements \(\gamma\in\mathrm{PSL}(3,\mathbb{C})\) that have a lift to \(\mathrm{SL}(3,\mathbb{C})\) whose Jordan canonical form is \[\left(\begin{array}{ccc}\lambda&0&0\\ 0&\mu&0\\ 0&0&(\lambda\mu)^{-1}\end{array}\right),\quad\lambda\neq\mu,\,|\lambda|=|\mu |\neq 1.\] The limit set consists of a complex line \(l\), on which \(\gamma\) acts as an elliptic transformation of \(\mathrm{PSL}(2,\mathbb{C})\), and the fixed point of \(\gamma\) not lying in \(l\).
* The _loxoparabolic_ elements \(\gamma\in\operatorname{PSL}(3,\mathbb{C})\) have a lift to \(\operatorname{SL}(3,\mathbb{C})\) whose Jordan canonical form is \[\left(\begin{array}{ccc}\lambda&1&0\\ 0&\lambda&0\\ 0&0&\lambda^{-2}\end{array}\right),\quad|\lambda|\neq 1.\] The limit set \(\Lambda_{\operatorname{Kul}}(\gamma)\) consists of two \(\gamma\)-invariant complex lines. The element \(\gamma\) acts on one of these complex lines as a parabolic element of \(\operatorname{PSL}(2,\mathbb{C})\) and on the other as a loxodromic element of \(\operatorname{PSL}(2,\mathbb{C})\).
* The _strongly loxodromic_ elements \(\gamma\in\operatorname{PSL}(3,\mathbb{C})\) have a lift to \(\operatorname{SL}(3,\mathbb{C})\) whose Jordan canonical form is \[\left(\begin{array}{ccc}\lambda_{1}&0&0\\ 0&\lambda_{2}&0\\ 0&0&\lambda_{3}\end{array}\right),\quad|\lambda_{1}|<|\lambda_{2}|<|\lambda_{ 3}|.\] This kind of transformation has three fixed points, one of them is attracting, another is repelling and the other point is a saddle. The limit set is the union of the complex line determined by the attracting and saddle points and the complex line determined by the saddle and repelling points.
### Elementary groups in \(\operatorname{PSL}(3,\mathbb{C})\)
In the previous section we have examples of elements in \(\operatorname{PSL}(3,\mathbb{C})\) where the limit set consists of one line, or one line and one point, or two lines. We have the following theorem from [4].
**Theorem 1.5**.: _Let \(\Gamma\subset\operatorname{PSL}(3,\mathbb{C})\) be an infinite discrete subgroup. Then:_
* _The number of complex projective lines in_ \(\Lambda_{\operatorname{Kul}}(\Gamma)\) _is equal to_ \(1,2,3\) _or_ \(\infty\)_._
* _The number of lines_ \(\Lambda_{\operatorname{Kul}}(\Gamma)\) _in general position can be_ \(1,2,3,4\) _or_ \(\infty\)_._
* _If the number of lines in_ \(\Lambda_{\operatorname{Kul}}(\Gamma)\) _is exactly 3, then these lines are in general position._
* _If there are infinitely many lines in_ \(\Lambda_{\operatorname{Kul}}(\Gamma)\)_, then the effective lines form a perfect set in_ \(\Lambda_{\operatorname{Kul}}(\Gamma)\)_._
* _There can be at most one isolated point in_ \(\Lambda_{\operatorname{Kul}}(\Gamma)\)_, and in that case this limit set is the disjoint union of that point and one line._
Statements (i), (ii) and (iv) are theorems 1.1 and 1.2 in [4]. Statement (iii) is a corollary of [4, Proposition 5.4]. Statement (v) is not in the literature so we now give a short proof of it.
Recall that a discrete subgroup of \(\operatorname{PSL}(2,\mathbb{C})\) is elementary if its limit set has finite cardinality. We remark that in the case of \(\operatorname{PSL}(3,\mathbb{C})\) there are groups with infinitely many lines in their limit set \(\Lambda_{\operatorname{Kul}}\), but only two of them in general position. In view of the previous theorem, this leads to the following definition:
**Definition 1.6**.: A discrete subgroup of \(\operatorname{PSL}(3,\mathbb{C})\) is elementary of the first kind if its limit set \(\Lambda_{\operatorname{Kul}}\) has finitely many lines. The group is elementary of the second kind if \(\Lambda_{\operatorname{Kul}}\) has finitely many lines in general position.
So every elementary group of the first kind also is of the second kind, but not conversely. In the following sections we describe the classification of elementary groups.
### The control group
We refer to [8] for a discussion about this section. Consider \(\Gamma\subset\operatorname{PSL}(3,\mathbb{C})\) a (discrete or not) subgroup which acts on \(\mathbb{P}^{2}_{\mathbb{C}}\) with a point \(p\) which is fixed by all of \(\Gamma\). Choose an arbitrary line \(\ell\) in \(\mathbb{P}^{2}_{\mathbb{C}}\setminus\{p\}\), and notice we have a canonical projection:
\[\pi=\pi_{p,\ell}:\mathbb{P}^{2}_{\mathbb{C}}\setminus\{p\}\longrightarrow\ell\,,\]
given by \(\pi(x)=\overleftrightarrow{x,p}\cap\ell\). It is clear that this map is holomorphic and it allows us to define a group homomorphism:
\[\Pi=\Pi_{p,\ell}:\Gamma\longrightarrow Bihol(\ell)\cong\operatorname{PSL}(2, \mathbb{C})\,,\]
by \(\Pi(g)(x)=\pi(g(x))\). If we choose another line, say \(\ell^{\prime}\), one gets similarly a projection \(\pi^{\prime}=\pi_{p,\ell^{\prime}}:\mathbb{P}^{2}_{\mathbb{C}}\setminus\{p\} \rightarrow\ell^{\prime}\,,\) and a group homomorphism \(\Pi^{\prime}=\Pi_{p,\ell^{\prime}}:\Gamma\rightarrow\operatorname{PSL}(2, \mathbb{C})\). It is an exercise to see that \(\Pi\) and \(\Pi^{\prime}\) are equivalent in the sense that there is a biholomorphism \(h:\ell\rightarrow\ell^{\prime}\) inducing an automorphism \(H\) of \(\operatorname{PSL}(2,\mathbb{C})\) such that \(H\circ\Pi=\Pi^{\prime}\). As before, the line \(\ell\) is called _the horizon_.
This leads to the following definition:
**Definition 1.7**.: Let \(\Gamma\subset\operatorname{PSL}(3,\mathbb{C})\) be a discrete group as above. We call \(\Pi=\Pi_{p,\ell}\) the control morphism (or map) and its image \(\Pi(\Gamma)\subset\operatorname{PSL}(2,\mathbb{C})\), is the _control group_. These are well-defined and independent of \(\ell\) up to an automorphism of \(\operatorname{PSL}(2,\mathbb{C})\).
The control map and the control group allow us to get information about the dynamics of \(\Gamma\) by looking at a subgroup of \(\operatorname{PSL}(2,\mathbb{C})\), which is far easier to handle. The prize we pay is that the control group in \(\operatorname{PSL}(2,\mathbb{C})\) may not be discrete.
## 2. Purely parabolic groups
We now follow [5] and look at the discrete subgroups in \(\operatorname{PSL}(3,\mathbb{C})\) that, besides the identity, have only parabolic elements. These are called purely parabolic and there are five families of such groups; three of them split into various subfamilies according to their limit set (and their control group, see [8]). All of these are elementary.
The simplest purely parabolic groups are cyclic, generated by a parabolic element. As described above, there are three types of such elements in \(\operatorname{PSL}(3,\mathbb{C})\), described by the Jordan normal form of their lifts to \(\operatorname{SL}(3,\mathbb{C})\). Each of these belongs to a different type of the families we describe below. The first type generates torus groups (see definitions below), the second generates Abelian Kodaira groups and the ellipto-parabolic elements generate elliptic groups.
* Elliptic groups. These are the only purely parabolic groups that are not conjugate to subgroups of the Heisenberg group \(\operatorname{Heis}(3,\mathbb{C})\) and they are subgroups of fundamental groups of elliptic surfaces. These have limit set a single line. Up to conjugation these groups are of the form: \[\operatorname{Ell}(W,\mu)=\left\{\left[\begin{array}{ll}\mu(w)&\mu(w)w&0\\ 0&\mu(w)&0\\ 0&0&\mu(w)^{-2}\end{array}\right]:w\in W\right\},\] where \(W\subset\mathbb{C}\) is an additive discrete subgroup and \(\mu:W\rightarrow\mathbb{S}^{1}\) is a group morphism.
2. Torus groups. These are subgroups of fundamental groups of complex tori. These are of the form: \[\mathcal{T}(\mathfrak{L})=\left\{\left[\begin{array}{ccc}1&0&a\\ 0&1&b\\ 0&0&1\end{array}\right]:(a,b)\in\mathfrak{L}\right\}\;,\] where \(\mathfrak{L}\) is an additive discrete subgroup of \(\mathbb{C}^{2}\). These groups also have a single line as limit set, so they are elementary of the first kind.
3. Dual torus groups, \[\mathcal{T}^{*}(\mathfrak{L})=\left\{\left[\begin{array}{ccc}1&a&b\\ 0&1&0\\ 0&0&1\end{array}\right]:(a,b)\in\mathfrak{L}\right\}\;.\] These split into three types: the first have Kulkarni limit set a complex projective line, so these are elementary of the first kind. The second type have limit set a cone of projective lines over a circle, so these are of the second kind. The third type have all \(\mathbb{P}^{2}_{\mathbb{C}}\) as limit set and they are non-elementary.
4. Inoue groups and their extensions. Inoue groups are proper subgroups of fundamental groups of Inoue surfaces. To define these, let \(\mathfrak{L}\subset\mathbb{C}^{2}\) be an additive discrete subgroup and consider a dual torus group. \[\mathcal{I}=\mathcal{I}(u,v):=\left\langle\begin{bmatrix}1&u&v\\ 0&1&0\\ 0&0&1\end{bmatrix}\;,\;(u,v)\in\mathfrak{L}\right\rangle\;.\] Inoue groups are obtained by taking a generator \(\mathcal{I}\) as above and adding to it a generator of the form \[\gamma_{1}=\gamma_{1}(x,y,z):=\begin{bmatrix}1&x+z&y\\ 0&1&z\\ 0&0&1\end{bmatrix}\;x,y,z\in\mathbb{C}\;.\] The limit set is a cone of lines over a circle, so these are elementary of the second kind. Then one has the extended Inoue group, which are purely parabolic as well, with limit set all of \(\mathbb{P}^{2}_{\mathbb{C}}\), so they are non-elementary and do not fall within the scope of this article. We refer to [5] for details.
5. Finally one has the Kodaira groups \(\mathrm{Kod}_{0}\), which are Abelian, and their extensions. A Kodaira group is a discrete group in \(\mathrm{PSL}(3,\mathbb{C})\) such that each element \(h\) in the group can be written as: \[\left[\begin{array}{ccc}1&a&b\\ 0&1&a\\ 0&0&1\end{array}\right]\;.\] One can show that these are extensions of dual torus groups. Their limit set is a line, so they are elementary of the first kind. There are five types of extensions \(\widetilde{\mathcal{K}_{i}}\), \(i=1,\cdots,5\), which are purely parabolic and discrete. These are all non-Abelian and they split into five types according to their limit set and the control group. The first type \(\widetilde{\mathcal{K}_{1}}\) have limit set a projective line, so they are elementary of the first kind; the second type \(\widetilde{\mathcal{K}_{2}}\) have limit set a cone of projective lines over the circle,
so they are of the second kind, while the remaining three types have limit set all of \(\mathbb{P}^{2}_{\mathbb{C}}\) and they are non-elementary. We refer to [5] for details.
## 3. Solvable groups
An essential step towards understanding elementary groups in \(\mathrm{PSL}(3,\mathbb{C})\) is studying the dynamics of solvable groups in \(\mathrm{PSL}(3,\mathbb{C})\). Unlike the classical case of \(\mathrm{PSL}(2,\mathbb{C})\), now there is great richness.
**Definition 3.1**.: A group \(H\) is called virtually solvable if it contains a finite index subgroups \(G\) and subgroups \(G_{i}\), \(i=0,...,k\), such that:
* \(e=G_{0}\subset G_{1}\subset\ldots\subset G_{k}=G\), and
* \(G_{j-1}\) is normal in \(G_{j}\), and \(G_{j}/G_{j-1}\) is an abelian group, for \(j=1,\ldots,k\).
For instance, for a subgroup of \(\mathrm{PSL}(2,\mathbb{C})\), virtually solvable is equivalent to saying that the group is elementary, _i.e._ its limits set has finite cardinality, see[11] and [20], or in an equivalent way (via Tit's dichotomy) a Mobius subgroup is non-virtually solvable if it contains either a non-Ciclyc Schottky group or a purely elliptic free group whose rank is at least two, see [7, 26] for a general discussion or [16, 20] for standard arguments in the one-dimensional case.
One has the following theorem from [27]:
**Theorem 3.2**.: _For subgroups of \(\mathrm{PSL}(3,\mathbb{C})\), solvable is equivalent to having a finite index subgroup which is conjugate to group of upper triangulable matrices._
So the following are examples of solvable groups:
* All purely parabolic groups.
* Hyperbolic Toral groups (we discuss these later, in section 7).
* Fundamental groups of Inoue surfaces, see [2, 5, 11].
So, after our previous section 2, to have the complete picture of solvable groups we must look at solvable groups with loxodromic elements. This is done in [27, 28]. In those articles the author determined the corresponding limit set and the representations of solvable groups in \(\mathrm{PSL}(3,\mathbb{C})\) containing loxodromic element. The main results can be summarized as follows. These summarize Theorems 1.1 to 1.4 in [28] and theorems 1 and 2 in [27]:
**Theorem 3.3**.: _The solvable groups in \(\mathrm{PSL}(3,\mathbb{C})\) can be of three main types:_
* _Commutative._
* _Cyclic extensions of torus groups by a strongly loxodromic element._
* _Dual torus groups extensions._
**Theorem 3.4**.: _If a solvable group \(\Gamma\) is commutative, then its limit set consists of either two lines, three lines in general position, or it is a line and a point, and \(\Gamma\) can be either triangular or a fake Hopf group:_
* _Diagonal groups:_ \[\{diag(\alpha^{n},\alpha^{m},1):n,m\in\mathbb{Z}\}\] _here_ \(\alpha,\beta\in\mathbb{C}^{*}\)_,_ \(|\alpha|\neq 1\) _or_ \(|\beta|\neq 1\)_._
* _Fake Hopf:_ \[\left\{\begin{bmatrix}\mu(w)&w\mu(w)&0\\ 0&\mu(w)&0\\ 0&0&\mu(w)^{-2}\end{bmatrix}\right\}\]
_where_ \(W\subset\mathbb{C}\) _is a discrete group and_ \(\mu:W\to\mathbb{C}^{*}\) _is a group morphism, satisfying some technical conditions, see_ _[_28_]__._
**Theorem 3.5**.: _If the group is an extension of either a torus group or a dual torus group, then its limit set can be a single projective line, or two lines, or else it has infinitely many lines but either it is a cone of lines over a circle or it has four lines in general position. And one has:_
* _If the group_ \(\Gamma\) _is a cyclic extension of a torus group, then it is a semi-direct product of a torus group and the cyclic group generated by a strongly loxodromic element._
* _If the group is an extension of a dual torus groups, then it can be:_
* _An extension by a loxodromic element (of any type); or_
* _An extension by two loxo-parabolic elements, and in this case the group is a semi-direct product of the dual torus group and the group generated by the two loxodromic elements._
In the sequel we describe all groups with limit set as stated in this theorem.
## 4. Groups with limit set exactly one line
In the previous section we described several purely parabolic groups with limit set a single line. Here we follow [3] and describe all the discrete groups in \(\mathrm{PSL}(3,\mathbb{C})\) with limit set a line. In particular, every such group is virtually nilpotent.
Let \(\Gamma\) be complex Kleinian with limit set a line \(\ell\). Then \(\Gamma\) acts properly and discontinuously on the complement of \(\ell\). From the above described classification of the elements in \(\mathrm{PSL}(3,\mathbb{C})\) we known that the elements in \(\Gamma\) are all either elliptic, parabolic or loxoparabolic. If the group contains a loxoparabolic element, then one finds that \(\Gamma\) is conjugate to a group such that every element can be represented by a matrix of the form:
\[\left(\begin{array}{ccc}a&0&v\\ 0&d&0\\ 0&0&1\end{array}\right)\,,\]
where \(\ell\) becomes the line \(\overleftarrow{e_{1},e_{2}}\). Moreover, \(\Gamma\) acts as an Euclidean group on the line \(\overleftarrow{e_{1},e_{3}}\). By well-known facts on Euclidean groups (see [20]), \(a\) is a root of unity of order \(1,2,3,4\) or \(6\), and the rank of \(\Gamma\) considered as acting on \(\overleftarrow{e_{1},e_{3}}\) is equal to \(1\) or \(2\). One can show that \(a\) cannot be a root of unity of order \(2,3,4\) nor \(6\), otherwise \(\Gamma\) would contain a complex homothety. Finally, the rank of \(\Gamma\) considered as acting on \(\overleftarrow{e_{1},e_{3}}\) is not equal to \(1\), otherwise, the Kulkarni limit set of \(\Gamma\) would be equal to two lines.
On the other hand, if the group does not contain any loxoparabolic elements, then there are two cases. If it acts on \(\ell\) without parabolic elements then \(\Gamma\) can be considered as a discrete group of Euclidean isometries of \(\mathbb{R}^{4}\). If some element acts on \(\ell\) as parabolic element then the group can be identified with a group of triangular matrices. Then one can rule out the existence of irrational ellipto-parabolic elements and show that there exists a unipotent subgroup of finite index.
In this way we arrive to the following [3, Theorem 1.1]:
**Theorem 4.1**.: _Let \(\Gamma\) be a subgroup of \(\mathrm{PSL}(3,\mathbb{C}\)) such that its Kulkarni limit consists of precisely one complex projective line \(\ell\). Then:_
1. _If_ \(\Gamma\) _does not contain loxoparabolic elements nor an element which acts as a parabolic element on the line_ \(\ell\)_, then_ \(\Gamma\) _is a group of isometries of_ \(\mathbb{C}^{2}\) _and it contains a free abelian normal subgroup of finite index and rank_ \(\leq 4\)_._
2. _If_ \(\Gamma\) _does not contain loxoparabolic elements but it does contain an element which acts as a parabolic element on_ \(\ell\)_, then_ \(\Gamma\) _does not contain any irrational ellipto-parabolic elements and it is a finite extension of a unipotent subgroup that consists of unipotent parabolic maps. Hence it is a finite extension of a group of the form_ \(\mathbb{Z},\mathbb{Z}^{2},\mathbb{Z}^{3},\mathbb{Z}^{4}\)_,_ \(\Delta_{k}\) _or_ \(G_{k}\)_, where_ \[\Delta_{k}=\left\langle A,B,C,D:\,C,D\text{ are central and }[A,B]=C^{k}\right\rangle,k\in\mathbb{N}\;,\] \[G_{k}=\left\langle A,B,C:\,C\text{ is central and }[A,B]=C^{k}\right\rangle,k\in\mathbb{N}\;,\]
3. _If_ \(\Gamma\) _does contain a loxoparabolic element, then_ \(\Gamma\) _is isomorphic to the group_ \(\mathbb{Z}\oplus\mathbb{Z}\oplus\mathbb{Z}_{n_{0}}\)_, where_ \(n_{0}\in\mathbb{N}\) _is arbitrary. The_ \(\mathbb{Z}_{n_{0}}\) _summand is a group of complex reflections, while_ \(\mathbb{Z}\oplus\mathbb{Z}\) _is generated by a loxoparabolic element and another element which can be loxoparabolic or parabolic._
## 5. Groups with limit set two lines
We must distinguish two cases:
a) Groups that have exactly two lines in their limit set. These are elementary of the first kind; and
b) Groups with more than two lines in the limit set, but only two in general position. In this case, we know from Theorem 1.5 that the total number of lines actually is infinite. These are elementary of the second kind.
For instance, cyclic groups generated by a strongly loxodromic element have exactly two lines in their limit set. On the other hand, for instance, the dual torus groups of the second type, described before, have limit set a cone of projective lines over a circle. In this case, there are infinitely many lines in the limit set, but only two in general position.
Let us describe these elementary groups. As in [4], for every open \(\Gamma\)-invariant set \(U\) we denote by \(\lambda(U)\) the maximum number of complex projective lines contained in \(\mathbb{P}^{2}_{\mathbb{C}}\setminus U\), and by \(\mu(U)\) the maximum number of such lines in general position. If \(U\) is the Kulkarni discontinuity region \(\Omega_{\operatorname{Kul}}(\Gamma)\), then its complement \(\mathbb{P}^{2}_{\mathbb{C}}\setminus U\) is the limit set \(\Lambda_{\operatorname{Kul}}=\Lambda_{\operatorname{Kul}}(\Gamma)\).
**Theorem 5.1**.: _Let \(\Gamma\subset\operatorname{PSL}(3,\mathbb{C})\) be a group satisfying \(\mu(\Omega_{\operatorname{Kul}}(\Gamma))=2\), then either the limit set is the union of two lines or it is a pencil of lines over a nowhere dense perfect set. Moreover, one of the following facts occurs:_
1. _if_ \(\lambda(\Omega_{\operatorname{Kul}}(\Gamma))=2\)_, then_ \(\Gamma\) _is virtually solvable and contains a loxodromic element, see_ _[_27, 28_]_ _for a precise description._
2. _if_ \(\lambda(\Omega_{\operatorname{Kul}}(\Gamma))=\infty\)_, then_ \(\Gamma\) _either is virtually purely parabolic or contains a loxodromic element, and:_ 1. _If the group is virtually purely parabolic then it contains a finite index subgroup conjugate to one of the following groups: a dual torus group of type II, a Kleinian Inoue group, or an extended Kodaira group_ \(\mathcal{K}_{2}\)_._ 2. _If the group contains a loxodromic element, then either_ \(\Gamma\) _is virtually conjugate to a solvable group (described in_ _[_28_]__), or it is a weakly controllable group whose control group is non-elementary._
Proof.: Since \(\mu(\Omega_{\operatorname{Kul}}(\Gamma))=2\), we have that \(\Gamma\) has a global fixed point \(p\), the point where those two lines intersect, and \(\lambda(\Omega_{\operatorname{Kul}}(\Gamma))\) is either \(2\) or \(\infty\) by Theorem 1.1 [4]. In fact, will see in the next section that if \(\lambda(\Omega_{\operatorname{Kul}}(\Gamma))=3\), then also \(\mu(\Omega_{\operatorname{Kul}}(\Gamma))=3\). And we know from Theorem 1.5 that if \(\lambda(\Omega_{\operatorname{Kul}}(\Gamma))>3\), then this number actually is \(\infty\).
Also, if \(\lambda(\Omega_{\operatorname{Kul}}(\Gamma))=2\), then, we claim, that \(\Gamma\) is virtually solvable. To see this notice that the group is weakly semi-controllable. Choose a line \(\ell\) as the "horizon". Then \(\Lambda_{\operatorname{Kul}}\) meets \(\ell\) in two points which are invariant under the control group \(\Pi(\Gamma)\). Hence \(\Pi(\Gamma)\) is virtually diagonalizable and therefore \(\Gamma\) itself is virtually diagonalizable. Hence it is virtually solvable. Then, by Theorem 0.1 in [5], we deduce that \(\Gamma\) contains a loxodromic element. Then, by [27, 28], its limit set consists of exactly two lines.
In case \(\lambda(\Omega_{\operatorname{Kul}}(\Gamma))=\infty\), we have that \(\Gamma\) is either virtually purely parabolic or it contains a loxodromic element. In the first case by part (b) item (2) of Theorem 0.1 in [5] we have that \(\Gamma\) contains a finite index subgroup conjugate to one of the following groups: a dual torus group of type II, a Kleinian Inoue group, or an extended Kodaira group \(\mathcal{K}_{2}\), thus, in this case, the limit set is a cone of lines over a Euclidean circle, see section 2 of [5]. Now the proof splits into the following cases:
**Case 1.** The group \(\Gamma\) is virtually solvable. In this case by Theorems 1.1, 1.2, 1.3, 1.4 in [28], the limit set of \(\Gamma\) is either two lines or a cone of lines over an Euclidean circle.
**Case 2.** The group \(\Gamma\) is non-virtually solvable. In this case it is clear that the control group \(\Pi_{p,\ell}\Gamma\) is non-elementary, here \(\ell\) is any line not containing \(p\), and
\[\Lambda_{\operatorname{Kul}}(\Gamma)=\bigcup_{z\in\Lambda(\Pi(\Gamma))} \overbrace{p,z}^{\perp}.\]
That \(\Pi_{p,\ell}\Gamma\) is non-elementary follows from the fact that, essentially by Borel's fixed point theory, a subgroup of \(\operatorname{PSL}(2,\mathbb{C})\) is elementary if and only if it is virtually solvable (see [11]).
## 6. Groups with limit set three lines
We now describe the subgroups of \(\operatorname{PSL}(3,\mathbb{C})\) with three lines in their limit set. As in the previous section, we denote by \(\lambda=\lambda(\Omega_{\operatorname{Kul}}(\Gamma))\) the maximum number of complex projective lines contained in \(\Lambda_{\operatorname{Kul}}\), and by \(\mu=\mu(\Omega_{\operatorname{Kul}}(\Gamma))\) the maximum number of such lines in general position. Then there are groups with \(\mu=\lambda=3\), and there are groups with \(\lambda=\infty\) but \(\mu=3\). In the case \(\lambda=\infty\) the groups in question are "suspensions" of non-elementary Kleinian groups in \(\operatorname{PSL}(2,\mathbb{C})\), see Chapter 5 in [8]:
**Theorem 6.1**.: _Let \(\Gamma\subset\operatorname{PSL}(3,\mathbb{C})\) be a group for which \(\mu=3\). Then \(\lambda\) can be either \(3\) or \(\infty\) and:_
* _If_ \(\lambda=3\)_, then these lines are in general position and_ \(\Gamma\) _contains a finite index subgroup conjugate to:_ \[G_{0}=\left\{diag[\alpha^{n},\beta^{m},1]:n,m\in\mathbb{Z}\right\}\;,\] _where_ \(\alpha,\beta\in\mathbb{C}^{*}\) _are non-unitary complex numbers._
_._
2. _If_ \(\lambda=\infty\)_, then the limit set is a cone of lines plus a non-concurrent line and_ \(\Gamma\) _contains a finite index subgroup conjugate to:_ \[H_{\Sigma,\rho,\alpha}=\left\{\begin{bmatrix}\alpha^{n}\rho([g])&0\\ 0&g\end{bmatrix}:n\in\mathbb{Z},g\in\operatorname{SL}(2,\mathbb{C}),\ [g]\in\Sigma \right\}\;,\] _where_ \(\alpha\in\mathbb{C}^{*}\) _is a non-unitary complex number,_ \(\Sigma\) _is a non-elementary discrete group in_ \(\operatorname{PSL}(2,\mathbb{C})\) _with non-empty discontinuity region, and_ \(\rho\) _is a function_ \(\Sigma\to\mathbb{C}^{*}\)_. In this case_ \[\Lambda_{\operatorname{Kul}}(H_{\Sigma,\rho,\alpha})=\overbrace{[e_{2}],[e_{3} ]}^{\star}\cup\bigcup_{p\in\Lambda(\Sigma)}\overbrace{[e_{1}],p}^{\star}.\]
Proof.: We know from statement (iii) in Theorem 1.5 that if \(\lambda=3\) then \(\mu=3\). That is, if the group has exactly three lines in its limit set, then these lines are in general position.
Now we assume \(\mu=3\) and we choose three distinct lines in general position contained in \(\Lambda_{\operatorname{Kul}}(\Gamma)\), say \(\ell_{1},\ell_{2},\ell_{3}\). We define
\[P=\left\{p_{ij}=\ell_{i}\cap\ell_{j}:1\leq i<j\leq 3\right\},\]
then any line contained in \(\Lambda_{\operatorname{Kul}}(\Gamma)\) must intersect \(P\).
We will say \(p\in P\) is a vertex whenever there are infinite lines in \(\Lambda_{\operatorname{Kul}}(\Gamma)\) passing through it. If the set of vertices is empty, then by Proposition 5.6 in [4] the set \(P\) is a \(\Gamma\)-invariant set. Then \(H=Isot(P,\Gamma)\) is a finite index subgroup of \(\Gamma\). Moreover, \(\Lambda_{\operatorname{Kul}}(H)=\Lambda_{\operatorname{Kul}}(\Gamma)\) and \(H\) is conjugate to a group \(G_{0}\) where every element has a diagonal lift. So by Theorem 1.1 in [28] we conclude \(G_{0}=\{diag[\alpha^{n},\beta^{m},1]:n,m\in\mathbb{Z}\}\) for some \(\alpha,\beta\in\mathbb{C}^{*}\) non-unitary complex numbers.
If the set of vertices is non-empty then by the proof of Proposition 6.5 in [4] we must have that this set is \(\Gamma\)-invariant. Observe that in case the number of vertices is at least two, we must have four lines in general position (compare with Proposition 6.5 in [4]) which is not possible, therefore we have exactly one vertex, say \(v\), which is \(\Gamma\)-invariant. Let \(\ell\in\{\ell_{1},\ell_{2},\ell_{3}\}\) be a line not containing \(v\). Since \(\mu(\Omega_{\operatorname{Kul}}(\Gamma))=3\), we conclude that \(\ell\) is \(\Gamma\)-invariant. After conjugating with a projective transformation, if necessary, let us assume that \(v=[e_{1}]\) and \(\ell=\overbrace{[e_{2}],[e_{3}]}^{\star}\), as in Section 1.5 take \(\pi=\pi_{v,\ell}\) and \(\Pi=\Pi_{v,\ell}\), now we claim:
**Claim 1.** If \(\Pi(\Gamma)\) is non-virtually solvable, then \(\Pi(\Gamma)\) is discrete. On the contrary let us assume that \(\Pi(\Gamma)\) is non-discrete, thus by Theorem 1 and Proposition 12 in [16] the principal component of \(\overline{\Pi(\Gamma)}\) is \(\operatorname{PSL}(2,\mathbb{C})\) or conjugate to \(\operatorname{SO}(3)\) or \(\operatorname{PSL}(2,\mathbb{R})\). In the first two cases, the orbit of any line passing through \(v\) is dense in \(\ell\), and this is not possible. In the latter case, let \((g_{n})\subset\Pi(G)\) be a sequence such that \(\lim_{n\to\infty}g_{n}=Id\) and \(h_{n}=[g_{1},g_{n}]\neq Id\) for \(n\geq 2\). If \(G_{m}\in\Gamma\) satisfies \(\Pi(G_{m})=g_{m}\), then a straightforward computation shows \(\lim_{n\to\infty}[G_{1},G_{n}]=Id\), which is a contradiction.
**Claim 2.** The group \(\Gamma\) cannot be virtually solvable. On the contrary, let us assume that \(\Gamma\) is solvable, then by Theorem 0.1 in [5] the group \(\Gamma\) contains a loxodromic element. By Theorems 1.1, 1.2, 1.3 and 1.4 in [28], there is \(W\subset\mathbb{C}\) a discrete additive subgroup with \(rank(W)\leq 2\) and \(\alpha,\beta\in\mathbb{C}^{*}\) such that \(\alpha\neq 1\) and \(\alpha\beta^{2}\notin\mathbb{R}\) and \(\Gamma\) is conjugate to:
\[\left\{\begin{bmatrix}1&0&0\\ 0&1&w\\ 0&0&1\end{bmatrix}:w\in W\right\}\rtimes\left\{\begin{bmatrix}\alpha^{2n}\beta^ {n}&0&0\\ 0&\alpha^{n}\beta^{2n}&0\\ 0&0&1\end{bmatrix}:n\in\mathbb{Z}\right\}.\]
Now a simple computation shows \(W\) is non-discrete, which is a contradiction.
By the previous claims, we conclude that \(\Pi(\Gamma)\) is a discrete non-elementary with a non-empty ordinary region. Now the rest of the proof is trivial.
## 7. Groups with limit set four lines in general position lines
In this section we provide an algebraic characterization of the subgroups of \(\operatorname{PSL}(3,\mathbb{C})\) with exactly four lines lines in general position in their limit set \(\Lambda_{\operatorname{Kul}}\). Of course that by Theorem 1.5 this implies that \(\Lambda_{\operatorname{Kul}}\) actually has infinitely many lines, but only four in general position. The basic reference is [2].
We recall first that an element \(A\in\operatorname{SL}(2,\mathbb{Z})\) is a hyperbolic toral automorphism if none of its eigenvalues has norm \(1\) (see [17]).
**Definition 7.1**.: A subgroup \(\Gamma\) of \(\operatorname{PSL}(3,\mathbb{C})\) is a _hyperbolic toral group_ if it is conjugate to a group of the form:
\[\Gamma_{A}\,=\left\{\left(\begin{array}{cc}A^{k}&b\\ 0&1\end{array}\right)\ \Big{|}\ b=\left(\begin{array}{c}b_{1}\\ b_{2}\end{array}\right)\,,\ b_{1},b_{2},k\in\mathbb{Z}\right\}\]
where \(A\in\operatorname{SL}(2,\mathbb{Z})\) is a hyperbolic toral automorphism.
One has:
**Theorem 7.2**.: _Let \(\Gamma\subset\operatorname{PSL}(3,\mathbb{C})\) be a discrete group. The maximum number of complex lines in general position contained in its Kulkarni limit set is equal to four if, and only if, \(\Gamma\) contains a hyperbolic toral group whose index is at most 8. [In this case the limit set of the subgroup coincides with that of \(\Gamma\), and the discontinuity region has four components each one diffeomorphic to \(\mathbb{H}\times\mathbb{H}\), where \(\mathbb{H}\) denotes the half open plane in \(\mathbb{C}\).]_
To prove the theorem above, the first step is showing that a toral group \(\Gamma_{A}\) as above is the subgroup of \(\operatorname{PSL}(3,\mathbb{Z})\) generated by the unipotent parabolic elements \(P_{1},P_{2}\) and the strongly loxodromic element \(L\) given by:
\[P_{1}=\left(\begin{array}{ccc}1&0&0\\ 0&1&1\\ 0&0&1\end{array}\right)\;,\;P_{2}=\left(\begin{array}{ccc}1&0&1\\ 0&1&0\\ 0&0&1\end{array}\right)\;,\;L=\left(\begin{array}{cc}A&0\\ 0&1\end{array}\right)\;.\]
Therefore \(\Gamma_{A}\) is discrete. The dynamical properties of the hyperbolic toral group are used to show that \(\Lambda_{\operatorname{Kul}}(L)\) is contained in Kulkarni limit set of \(\Gamma_{A}\). Also, a straightforward computation shows that \(\Lambda_{\operatorname{Kul}}(P_{1})\) is contained in \(\Lambda_{\operatorname{Kul}}(\Gamma_{A})\). Thus,
\[\Lambda_{\operatorname{Kul}}(\Gamma_{A})=\overline{\bigcup_{\gamma\in\Gamma_ {A}}\Lambda_{\operatorname{Kul}}(\gamma)},\]
because both sets in this equation contain three lines in general position (see Theorem 1.2 in [1]).
Now, it is possible to compute the sets \(\Lambda_{\operatorname{Kul}}(\gamma)\) for every \(\gamma\in\Gamma_{A}\). One finds that \(\Lambda_{\operatorname{Kul}}(\Gamma_{A})\) can be described as a union of two pencils of lines over two distinct circles, with vertices two distinct points, which share the line determined by the two vertices. Therefore, the maximum number of lines in general position contained
in \(\Lambda_{\operatorname{Kul}}(\Gamma_{A})\) is equal to four. Moreover, this description shows that the Kulkarni discontinuity region of \(\Gamma_{A}\) is a disjoint union of four copies of \(\mathbb{H}\times\mathbb{H}\), where \(\mathbb{H}\) denotes the upper half plane in \(\mathbb{C}\).
Conversely, if we assume that the maximum number of lines in general position contained in the Kulkarni limit set of a group \(\Gamma\) is equal to four, then one shows that there are two distinguished points (called vertices) contained in infinitely many lines in this limit set. The subgroup which stabilizes these two vertices is denoted by \(\Gamma_{0}\) and it has index at most two in \(\Gamma\). Furthermore, it can be proved that \(\Omega_{\operatorname{Kul}}(\Gamma_{0})\) has four components, each one diffeomorphic to a copy of \(\mathbb{H}\times\mathbb{H}\). Finally, the subgroup of \(\Gamma_{0}\) stabilizing each of these components has finite index (at most four) in \(\Gamma_{0}\), and it can be shown that it is a hyperbolic toral group.
A description of the quotient spaces \(\Omega_{\operatorname{Kul}}(\Gamma_{A})/\Gamma_{A}\) is given by the following theorem (see [6].)
**Theorem 7.3**.: _If \(\Gamma_{A}\) is a hyperbolic toral group then_
1. _The group_ \(\Gamma_{A}\) _is isomorphic to a lattice of the group Sol (see Definition_ 7.4_)._
2. _The quotient space_ \(\Omega_{\operatorname{Kul}}(\Gamma_{A})/\Gamma_{A}\) _is a disjoint union of four copies of_ \[(\operatorname{Sol}/\Gamma_{A})\times\mathbb{R}.\]
**Definition 7.4**.: The group \(\operatorname{Sol}\) is \(\mathbb{R}^{3}\) equipped with the operation
\[\left(\left(\begin{array}{c}x_{1}\\ y_{1}\end{array}\right),t_{1}\right)\left(\left(\begin{array}{c}x_{2}\\ y_{2}\end{array}\right),t_{2}\right)=\left(\left(\left(\begin{array}{c}x_{1} +e^{t_{1}}x_{2}\\ y_{1}+e^{-t_{1}}y_{2}\end{array}\right),t_{1}+t_{2}\right)\]
The proof of 7.3 is done by giving an explicit smooth foliation of \(\mathbb{H}\times\mathbb{H}\) where the leaves are diffeomorphic to \(\operatorname{Sol}\), and by showing that \(\mathbb{H}\times\mathbb{H}\) is diffeomorphic to \(\operatorname{Sol}\times\mathbb{R}\) by a \(\Gamma_{A}\)-equivariant diffeomorphism.
It can be shown that \(\Gamma_{A}\) is isomorphic to \(\mathbb{Z}^{2}\rtimes_{A}\mathbb{Z}\), and this group is determined by the conjugacy class in \(\operatorname{GL}(2,\mathbb{Z})\) of \(A\) (see [23] pages 96,97). It follows from Theorem 7.2 that there is a countable number of nonisomorphic Complex Kleinian groups such that the maximum number of lines in general position contained in its Kulkarni limit set is equal to four.
## 8. Groups with limit set having an isolated point
Let us suppose now that \(\Gamma\subset\operatorname{PSL}(3,\mathbb{C})\) is a group with at least an isolated point in its limit set. Then the results explained in this article show that \(\Gamma\) must have only one line in its limit set, for otherwise \(\Lambda_{\operatorname{Kul}}\) is a union of lines.
Let \(\ell\) be the unique line in \(\Lambda_{\operatorname{Kul}}(\Gamma)\) and let \(p\in\Lambda_{\operatorname{Kul}}(\Gamma)\) be an isolated point. Then \(\ell\) is an invariant set and we look at the action of \(\Gamma\) on \(\ell\) to deduce from it consequences on the global dynamics of \(\Gamma\).
We note that this is somehow similar to the controllable groups, where we have a global fix point and a line \(L\) that we call the horizon, which is not invariant but we get an action on it. Then we know a lot about the action of the group on \(\mathbb{P}_{\mathbb{C}}^{2}\) by looking at the induced action on \(L\). In the case we now envisage, we have the invariant line \(\ell\) and the action on it says a lot about the dynamics of the group on \(\mathbb{P}_{\mathbb{C}}^{2}\). There are two possibilities as the action of \(\Gamma\) on \(\ell\) can be either:
1. Virtually Solvable.
2. Non- Virtually solvable.
The idea of the proof consists in showing first that the non-virtually solvable case cannot happen, so the action must be virtually solvable. This uses the Tits Alternative.
In the solvable case, the dynamics implies that on the invariant line \(\ell\) we must have either one or two fixed points. Then one shows that in the first case, when there is only one fixed point, one actually has an invariant flag and up to conjugation the elements are all of the form
\[\begin{bmatrix}\alpha&0&0\\ 0&\beta&\delta\\ 0&0&\gamma\end{bmatrix}\.\]
In the second case we find that, up to conjugation, all the elements are diagonal, and the group consists of lodrodromic elements that are screws (see Section 1.3), so the limit set of each element consists of a point and a line. Then we show that the limit set of the whole group is a point and a line.
The above considerations lead to the following:
**Theorem 8.1**.: _Let \(\Gamma\subset\mathrm{PSL}(3,\mathbb{C})\) be a group with at least an isolated point in its limit set. Then the limit set consists of exactly one line and one point, and \(\Gamma\) is conjugate to either:_
1. \[G_{0}=\left\{\mathrm{diag}\left[\alpha^{n},e^{2\pi i\theta},e^{-2\pi i\theta} \right]:n,m\in\mathbb{Z}\right\}\] _where_ \(\theta\in\mathbb{R}\) _and_ \(\alpha\in\mathbb{C}^{*}\) _is non-unitary complex number._
2. _or_ \[H_{0}=\left\{\begin{bmatrix}\mu(w)^{-2}&0&0\\ 0&\mu(w)&w\mu(w)\\ 0&0&\mu(w)\end{bmatrix}:w\in W\right\}\] _where_ \(W\subset\mathbb{C}\) _is a discrete additive group and_ \(\mu:(\mathbb{C},+)\to(\mathbb{C}^{*},\cdot)\) _is a group morphism satisfying that for every sequence of distinct elements_ \((w_{n})\subset W\)_, the limit_ \(\lim_{n\to\infty}\mu(w_{n})\) _is either_ \(0\) _or_ \(\infty\)_._
Proof.: By Theorem 1.2 in [2] we have \(\mu(\Omega_{\mathrm{Kul}}(\Gamma))\leq 2\), and from Theorem 5.1 we conclude \(\mu(\Omega_{\mathrm{Kul}}(\Gamma))=1\). On the other hand, by Theorem 0.1 in [5], the group \(\Gamma\) contains a Loxodromic element. Let \(\ell\) be the unique line in \(\Lambda_{\mathrm{Kul}}(\Gamma)\), \(p\in\Lambda_{\mathrm{Kul}}(\Gamma)\) be an isolated point and \(\gamma\in\Gamma\) a loxodromic element. We must consider the following cases:
**Case 1.** The action of \(\Gamma\) restricted to \(\ell\) is given by a virtually solvable group. In this case the group \(\Gamma\) is virtually solvable and the theorem follows from Claim 2 and Theorems 1,1, 1.2, 1.3 and 1.4 in [28], see also section 1 above.
**Case 2.** The action of \(\Gamma\) restricted to \(\ell\) is given by a non- virtually solvable group. Under this assumption we claim:
**Claim**. There are \(g_{1},g_{2}\in\Gamma\) such that the group generated by \(\{\gamma_{1}|_{\ell},\gamma_{2}|_{\ell}\}\) is a rank two free group which is either purely loxodromic or purely elliptic. If the action of \(\Gamma\) on \(\ell\) is given by a discrete group \(H\), then it is well know that \(H\) contains a rank two Shottky group, see Chapter X in [20], these will do the job. If the action on \(\ell\) is given by a non discrete group \(G\), thus by Theorem 1 and Proposition 12 in [16] the principal component of \(\overline{G}\) is \(\mathrm{PSL}(2,\mathbb{C})\), or conjugate to \(\mathrm{SO}(3)\) or \(\mathrm{PSL}(2,\mathbb{R})\). In any such case we deduce that \(G\) contains a non-virtualy solvable finitelly generated subgroup, as a consequence of Tit's alternative we deduce that \(G\) contains a
rank two free subgroup which is either purely loxodromic or purely elliptic. Which proves the claim.
Now let \(g_{1},g_{2}\) be the elements given by the previous claim. After conjugating with a projective transformation we can assume that \(\ell=\overbrace{[e_{2}],[e_{3}]}\) so \(h=[g_{1},g_{2}]\) is a loxodromic element and
\[h=\begin{bmatrix}1&0&0\\ c&d&e\\ f&g&h\end{bmatrix}\]
where \(dh-eg=1\) and \((d+h)^{2}\in\mathbb{C}-\{4\}\). Thus \(h\) is either an elliptic element of infinite order, which cannot happen by the discreteness of \(\Gamma\), or a strongly loxodromic element whose attracting and repelling points are contained in \(\ell\). We conclude that the attracting and repelling lines of \(h\) are distinct from \(\ell\). On the other hand, from the \(\lambda\)-Lemma (see Lemma 7.3.6 in [8]) we know that the attracting and repelling line of \(h\) are both contained in \(\Lambda_{\operatorname{Kul}}(\Gamma)\), which is a contradiction.
|
2301.09729 | Long-term stable Electromyography classification using Canonical
Correlation Analysis | Discrimination of hand gestures based on the decoding of surface
electromyography (sEMG) signals is a well-establish approach for controlling
prosthetic devices and for Human-Machine Interfaces (HMI). However, despite the
promising results achieved by this approach in well-controlled experimental
conditions, its deployment in long-term real-world application scenarios is
still hindered by several challenges. One of the most critical challenges is
maintaining high EMG data classification performance across multiple days
without retraining the decoding system. The drop in performance is mostly due
to the high EMG variability caused by electrodes shift, muscle artifacts,
fatigue, user adaptation, or skin-electrode interfacing issues. Here we propose
a novel statistical method based on canonical correlation analysis (CCA) that
stabilizes EMG classification performance across multiple days for long-term
control of prosthetic devices. We show how CCA can dramatically decrease the
performance drop of standard classifiers observed across days, by maximizing
the correlation among multiple-day acquisition data sets. Our results show how
the performance of a classifier trained on EMG data acquired only of the first
day of the experiment maintains 90% relative accuracy across multiple days,
compensating for the EMG data variability that occurs over long-term periods,
using the CCA transformation on data obtained from a small number of gestures.
This approach eliminates the need for large data sets and multiple or periodic
training sessions, which currently hamper the usability of conventional pattern
recognition based approaches | Elisa Donati, Simone Benatti, Enea Ceolini, Giacomo Indiveri | 2023-01-23T21:45:00Z | http://arxiv.org/abs/2301.09729v1 | # Long-term stable Electromyography classification using Canonical Correlation Analysis
###### Abstract
Discrimination of hand gestures based on the decoding of surface electromyography (sEMG) signals is a well-establish approach for controlling prosthetic devices and for Human-Machine Interfaces (HMI). However, despite the promising results achieved by this approach in well-controlled experimental conditions, its deployment in long-term real-world application scenarios is still hindered by several challenges. One of the most critical challenges is maintaining high EMG data classification performance across multiple days without retraining the decoding system. The drop in performance is mostly due to the high EMG variability caused by electrodes shift, muscle artifacts, fatigue, user adaptation, or skin-electrode interfacing issues. Here we propose a novel statistical method based on canonical correlation analysis (CCA) that stabilizes EMG classification performance across multiple days for long-term control of prosthetic devices. We show how CCA can dramatically decrease the performance drop of standard classifiers observed across days, by maximizing the correlation among multiple-day acquisition data sets. Our results show how the performance of a classifier trained on EMG data acquired only of the first day of the experiment maintains 90% relative accuracy across multiple days, compensating for the EMG data variability that occurs over long-term periods, using the CCA transformation on data obtained from a small number of gestures. This approach eliminates the need for large data sets and multiple or periodic training sessions, which currently hamper the usability of conventional pattern recognition based approaches.
## I Introduction
Analysis of Electromyography (EMG) signals is the standard technique to decode the electrical activity of skeletal muscles during their contraction. EMG based hand gesture recognition represents a well-established approach for Human Machine Interaction (HMI) in several domains, ranging from robotic control to augmented reality, from personalized medicine to rehabilitation [1, 2, 3, 4]. Conventional approaches are based on mapping EMG signal patterns acquired during the muscular contractions, from superficial non-invasive methods, onto a discrete set of gestures. Despite the promising results of many recent basic and applied research efforts, one common and critical challenges that still remains is their robustness over long periods of time. The reason is that EMG signals are highly variable over time, due to many factors such as electrode shifts, muscle fatigue, or skin-electrode interface issues. Typically, within the same experimental session, the EMG patterns are repeatable and stable. However, for long-term applications, the donning/doffing of the sensor interface, that may occur during electrodes re-positioning, dramatically hampers the accuracy of gesture recognition algorithms [5, 6]. Performance drops can reach 30% figures, making these approaches unsuitable for long-term reliable use [6].
Some solutions have been proposed to overcome this limitation, mostly relying on data set augmentation, including more features [7], all expected displacement locations [8], multi-session training [9], and Transfer Learning [10].
However, these approaches require long-term training sessions that can be frustrating for the users and consume a considerable amount of power, resulting only in a gain of 10% of accuracy. Another solution explored to reduce the sensitivity of shift is to use electrodes with larger size, but it has been found that these electrodes perform worse, in terms of gesture classification accuracy, than electrodes with smaller size [5].
A promising approach relies on the use of Deep Learning (DL) techniques, that have been shown to be able to learn autonomously the input data representation without having to use handcrafted feature extraction. Hence, these methods can in principle learn input features without being affected by high levels of variability in the data and enable more robust discrimination of the hand gestures. In recent years, DL approaches have been successfully deployed for long-term biomedical applications [11, 12, 13] reaching state-of-the-art accuracy in classification tasks. However, they require very large training data sets. As they are based on complex architectures with very large numbers of hidden layers and parameters, they also require high memory footprint and power consumption which is not always suitable for real-time applications with embedded or portable systems.
In this work, we tackle the high variability present in a multi-days data set by leveraging a dimensionality reduction technique that maximizes the signal correlation among days. The technique used is the Canonical Correlation Analysis (CCA), a statistical method that connects two sets of variables by finding a linear combination that maximally correlates them in the mapping space [14]. A similar approach, based on EMG analysis was already introduced in [8] where two different shift values were investigated to understand the degree of correlation with the normal data; and in [15] where CCA were used to maximize correlation across different users' data. Here, we propose to use CCA
to generalise from one day to an other on the same subject using an EMG dataset, collected across 10 days where the electrodes' displacement -or other sources of temporal variability- is not known. We propose a framework where we first train a classifier on the data from the first day, and then, we calculate the CCA transformation that compensate the variability on a small number of gesture from the next day. This compensation allows using the original classifier in the next days without significant loss of performance, without the need of retraining.
## II Materials and Methods
The data set used in this study was acquired in multiple sessions across multiple days set by using a custom EMG wearable platform.
### _Long-term EMG data acquisition_
The sensor interface comprises an array of 16 passive gel-based EMG electrodes, placed in a ring configuration around the forearm. 8 EMG channels are acquired at 24-bit resolution in fully-differential configuration by a commercial Analog Front End (AFE) (ADS1298) specially designed for biopotential acquisition. The sampling rate of the ADC ranges from 1 to \(32kHz\) and in this application, to minimize the input-referred noise, it is maintained at \(4kHz\). To remove the Power Line interference (PLI), we applied a 10-tap Notch filter centered on \(50Hz\) frequency. Furthermore, to remove DC wondering and high-frequency noise, we applied a 15-tap Band-Pass filter (\(2Hz\)-\(1kHz\)). The EMG data set has been collected over 20 sessions on three able-bodied subjects (all male, average age of 29 \(\pm\) 3 years) without neurological disorders [13]. Data are acquired for 10 days and each day includes two sessions, one in the morning and one in the afternoon. Each session includes 8 hand gestures (palm, fist, index, pinky, hand supination and pronation, ILY sign and thumb up) repeated 8 times with a contraction times of approximately \(3s\) with \(3s\) rest between two contractions.
### _Multi-day data alignment through CCA_
To provide a quantitative measure of the correlation between two time series, such as the EMG data, we utilized the CCA. The CCA is deployed for two main aims: i) data reduction, meaning use a small number of linear combinations to explain covariation between two variables; ii) data interpretation that find important features for explaining the covariation between variables. In this work we used CCA to find the linear transformation that makes the acquisitions \(\mathbf{D}_{d}\) from day \(d\) maximally correlated to \(\mathbf{D}_{x}\) of the reference day \(x\), for example in our case the day of the training.
After the acquisition and segmentation of the EMG signals, we extracted features by computing the Root Mean Square (RMS) for each channel, a time-domain transformation generally used in EMG processing [16]. The RMS is represented as amplitude relating to a gestural force and muscular contraction and in the implementation proposed is calculated across a window of length \(300\)\(ms\) with a sliding window of \(100\)\(ms\). Then, for each day, we built a matrix with the same dimension \(n\times T\), where \(n\) is the number of channels (here \(n\) = 8), and \(T\) the number of samples from all the concatenated trials of a given day, this includes all of the repetitions for each gesture.
We indicate with \(\mathbf{D}_{1}\) the matrix assembled from the trials recorded in the day of the training (referring day). We used CCA to align the recording in \(\mathbf{D}_{1}\) with those of all the other days using the following approach. Given two matrices \(\mathbf{X}\in\mathbb{R}^{n\times T}\) and \(\mathbf{Y}\in\mathbb{R}^{n\times T}\) calculated separately for two days, we want to find a linear transformation that makes the corresponding dynamics maximally correlated. In other words we want to find \(\mathbf{A}=[\mathbf{a}_{1},...,\mathbf{a}_{m}]\) and \(\mathbf{B}=[\mathbf{b}_{1},...,\mathbf{b}_{m}]\) with \(\mathbf{a}_{i}\) and \(\mathbf{b}_{i}\in\mathbb{R}^{n}\) so that for \(i\in\{1,\ldots,m\}\) we have
\[\mathbf{a}_{i},\mathbf{b}_{i}=\operatorname*{arg\,max}_{\mathbf{a}\in\mathbb{ R}^{n},\mathbf{b}\in\mathbb{R}^{n}}\,Cor(\mathbf{a}^{T}\mathbf{X},\mathbf{b}^{T} \mathbf{Y})\]
The correlation term can be expanded to reveal the dependence of the maximization problem to the covariances of the data:
\[\mathbf{a}_{i},\mathbf{b}_{i}=\operatorname*{arg\,max}_{\begin{subarray}{c} \mathbf{a}^{T}\mathbf{C}_{xx}\mathbf{a}=1\\ \mathbf{b}^{T}\mathbf{C}_{yy}\mathbf{b}=1\end{subarray}}\mathbf{a}^{T}\mathbf{ C}_{xy}\mathbf{b}\]
where \(\mathbf{C}_{xy}=\mathbf{X}\mathbf{Y}^{T}\), \(\mathbf{C}_{xx}=\mathbf{X}\mathbf{X}^{T}\), \(\mathbf{C}_{yy}=\mathbf{Y}\mathbf{Y}^{T}\).
We can than define a auxiliary variable \(\mathbf{\Omega}\) and rewrite the maximization problem
\[\mathbf{\Omega}=\mathbf{C}_{xx}^{-1/2}\mathbf{C}_{xy}\mathbf{C}_{yy}^{-1/2}\]
\[\mathbf{c}=\mathbf{C}_{xx}^{-1/2}\mathbf{a}\quad\mathbf{d}=\mathbf{C}_{yy}^{- 1/2}\mathbf{b}\]
\[\mathbf{c}_{i},\mathbf{d}_{i}=\operatorname*{arg\,max}_{\begin{subarray}{c} \mathbf{c}\in\mathbb{R}^{n},\mathbf{d}\in\mathbb{R}^{n}\\ \|\mathbf{c}\|^{2}=\|\mathbf{d}\|^{2}=1\end{subarray}}\mathbf{c}^{T}\mathbf{ \Omega}\mathbf{d}\]
and
\[\mathbf{a}_{i}=\mathbf{C}_{xx}^{-1/2}\mathbf{c}_{i}\quad\mathbf{b}_{i}=\mathbf{ C}_{yy}^{-1/2}\mathbf{d}_{i}\]
Finally we can show that:
\[SVD(\mathbf{\Omega})=[\mathbf{c}_{1},...,\mathbf{c}_{m}]\times\mathbf{\Sigma} \times[\mathbf{d},...,\mathbf{d}_{m}]\]
where \(\mathbf{\Sigma}\) contains the canonical correlations.
### _Day to day compensation_
The principal aim of this framework is to leverage a gesture recognition system that uses EMG data measured from the same armband over long periods, across multiple days, but without having to retrain the system during everyday use, after the initial training phase on \(\mathbf{D}_{1}\). To maximize the system's robustness to unknown electrode shifts, a linear transformation is estimated via CCA so that this shift can be compensated. This method ensures that the features extracted from the EMG signals on days following the referring day can be mapped back to the space of the features used for training the system in the first place. To speed up the process of finding this mapping the transformation is calculated by considering only two repetitions of each gesture. This can be considered as a calibration phase that the end-user can do periodically during normal use of the system.
The full framework is described as follows: all the data collected on the first day \(\mathbf{D}_{1}\) are used to train (\(\sim 6000\) samples for training and \(\sim 1500\) sample for testing) a Support Vector Machine (SVM) to classify the 8 gestures (8 class classification problem) present in the data set. During this phase, it is important to properly regularize the SVM classifier to ensure initial robustness to possible outliers. After training the classifier, the next step is properly adapting it to the data obtained in the following days \(\mathbf{D}_{x}\). First, two repetitions for each gesture are collected. Then, CCA is applied to find the optimal transformation between these samples and the corresponding ones obtained during the referring day. As described is Section II-B, this operation returns the mappings \(\mathbf{A}\) and \(\mathbf{B}\) for the data \(\mathbf{D}_{1}\) and \(\mathbf{D}_{x}\) respectively. To project \(\mathbf{D}_{x}\) back into the space of the features of \(\mathbf{D}_{1}\) the following approach is used
\[\hat{\mathbf{D}}_{x}=(\mathbf{A}^{T})^{\dagger}\mathbf{B}^{T}\mathbf{D}_{x} \tag{1}\]
where \(\dagger\) represents the pseudo-inverse operation (used to avoid numerical instability) and \(\hat{\mathbf{D}}_{x}\) is the projection of the data of the new use day in the space spanned by the features \(\mathbf{D}_{1}\). The same procedure is applied independently for each new day of use. It is worth noting that the proposed framework assumes that the electrodes shift can be compensated by a linear transformation which is a fair assumption that allows for translations and rotation of the electrodes but does not consider changes in the position of the electrodes with respect to each other.
## III Results and Discussions
Figure 1 shows the correlation between different days. The align system, the one with the CCA (cyan) is much more correlated across days than the unaligned one (magenta). We computed normalization on the correlation where the ratio of the across-days aligned with CCA has the upper-bound provided by the within-day CCA. The blue line indicates the % of correlation difference between the aligned and unaligned. This corresponds to an average gain in the correlation of about 45%.
Figure 2, shows the classification accuracy of the SVM classifier trained on a single day \(\mathbf{D}_{1}\). Without compensation, we can see that the classifier completely fails in classifying the gestures of the other days with an average accuracy across days of around 17% on a small number of gestures, i.e. two trials for each gesture. The green line represents the classification accuracy on the original full dataset (meaning using samples from all days to train), this results in an average of about 60%. When compensated with some calibration gestures, the classification reaches almost the same accuracy as if tested on the gestures of \(\mathbf{D}_{1}\). This can be seen by the 95% relative classification accuracy between the classification of \(\mathbf{D}_{1}\) and the classification of the compensated \(\hat{\mathbf{D}}_{x}\). This shows the effectiveness of calculating a compensating transformation only on several trials. Moreover, this represents a valid alternative to the methods that use retraining of the classifier. The CCA compensation can be calculated with a simple one-shot calibration procedure that only needs a limited amount of data, thus avoiding the need for time-consuming and data-hungry re-training that includes new data from the new day.
Figure 3 shows a low dimensional representation (obtained with t-SNE [17]) of two example gestures for \(\mathbf{D}_{1}\), \(\mathbf{D}_{x}\) and \(\hat{\mathbf{D}}_{x}\). In the left panel, we can see that these two example gestures are linearly separable (see separation hyper-plane in dashed line). In the central panel, we see how these two gestures for the \(\mathbf{D}_{x}\) are still linearly separable but they fall within one side of the classifier boundary, showing the reason for such poor classification accuracy in Figure 2. Notice that the only factor that distinguishes \(\mathbf{D}_{1}\) and \(\mathbf{D}_{2}\) seems to be a rotation, as assumed in the beginning. Indeed, when applying the compensation process we can see in the right panel that the gestures of \(\hat{\mathbf{D}}_{x}\) are now well aligned with the samples of \(\mathbf{D}_{1}\) and are thus correctly classified.
The overall accuracy across 10 days is around 95%. If we compare this results with state-of-the-art for regression on the same dataset [13] we can see that the accuracy is 95% vs 93% obtained with a one-shot calibration instead of an 11-layers (9 convolutional and 2 full) convolutional neural networks trained across the entire dataset. This results in a
Fig. 1: Correlation of the EMG across the 10 days dataset on three different subjects. Cyan trace represents the aligned correlation across days calculated using CCA. The magenta line is the correlation across days without alignment and the blue is the difference between the two correlations.
Fig. 2: Classification accuracy, relative to the accuracy of the classifier on \(\mathbf{D}_{1}\). In purple the classification accuracy for unaligned gestures (\(\mathbf{D}_{x}\)). In green the classification on the full dataset and in blue is classification accuracy for gestures aligned with CCA (\(\hat{\mathbf{D}}_{x}\))
comparable accuracy but with a significant less computational complexity.
The number of electrodes used in this study was limited to 8. To increase the classification accuracy, e.g., to discriminate between finer gestures, it is possible to use HD-EMG systems. The proposed CCA analysis can be applied to such high dimensional problems in combination with Principle Component Analysis (PCA) techniques. PCA can be performed on the original data to find the time-dependent activation of a specific population-wide activity pattern [18].
## IV Conclusions
In this work, we addressed the temporal variability, in particular, the electrode shift that affects the generalization of the hand gesture recognition based on superficial EMG. We show that the CCA effectively solves the problem of variability on recordings from the same subject across multi-days recordings and allows a stable and robust gesture classification. In our approach, we trained the classifier on the data from the first day and we compensate the shift in the following days by using the transformation matrix calculate using CCA after a calibration phase and not an additional learning. Results show a gain in accuracy in the aligned data of about 30%, which is even more significant given that we are using a limited amount of data to compensate for the alignment.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.